[ { "repo": "pytorch/audio", "number": 4165, "title": "Does TorchAudio include any RISC-V / RVV specific optimizations?", "body": "### \ud83d\ude80 The feature\n\nHi TorchAudio maintainers,\n\nI would like to ask whether TorchAudio currently contains any architecture-specific optimizations for RISC-V, especially for the RISC-V Vector Extension (RVV).\n\nSo far, I have checked the TorchAudio (audio-2.8.0) repository and observed that:\n- There are no RISC-V or RVV related source files or directories.\n- No RVV intrinsics (e.g. vsetvli, vle*, vfmul*) or `` usage is present.\n- No RISC-V\u2013specific conditional compilation or CMake logic is found.\n- TorchAudio code mainly relies on PyTorch tensor operations, with no explicit CPU kernel implementations inside TorchAudio itself.\n\nBased on this, my understanding is that:\n- TorchAudio does not include RISC-V / RVV specific optimizations.\n- Any RISC-V or RVV performance would come from PyTorch core (ATen / CPU backend) or compiler auto-vectorization, rather than TorchAudio.\n\nCould you please help confirm whether this understanding is correct?\nAdditionally, are there any plans or discussions to introduce RISC-V / RVV\u2013specific optimizations in TorchAudio in the future?\n\nThank you very much for your time and clarification.\n\n\n### Motivation, pitch\n\nI am currently evaluating TorchAudio on RISC-V platforms and investigating whether there are any existing architecture-specific optimizations, particularly related to the RISC-V Vector Extension (RVV).\n\nDuring my review of the TorchAudio (audio-2.8.0) source code, I did not find any RISC-V or RVV\u2013specific implementations, intrinsics, or conditional compilation logic. Since TorchAudio relies heavily on PyTorch for performance-critical computation, I would like to confirm whether this understanding is correct.\n\nThe motivation for this question is to better understand the current optimization scope of TorchAudio on RISC-V, and to determine whether any performance considerations or future work related to RISC-V / RVV should be expected at the TorchAudio level, or if such efforts are entirely handled within PyTorch core.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/audio/issues/4165", "state": "open", "labels": [], "created_at": "2026-01-06T07:24:55Z", "updated_at": "2026-01-06T07:24:55Z", "comments": 0, "user": "zhouying12" }, { "repo": "pytorch/pytorch", "number": 171687, "title": "gfx1151 (Strix Halo) \u2014 LLM decode is ~90% hipMemcpyWithStream in FP16 & 4-bit; kernels not compute-bound", "body": "[benchmark-results_preauth.log](https://github.com/user-attachments/files/24424966/benchmark-results_preauth.log)\n\n### \ud83d\udc1b Describe the bug\n\nSummary\n\nOn gfx1151 (Strix Halo / Ryzen AI MAX 395), autoregressive LLM inference is consistently dominated by hipMemcpyWithStream during decode in both:\nFP16 / BF16 (no quantization)\n4-bit bitsandbytes quantized models\n\neven though:\nGEMM throughput benchmarks are normal\nGPU kernels dispatch continuously\nthe model and KV cache are resident on device\nbehavior is reproducible across HuggingFace models and configs\n\nDuring decode, ~92\u201395% of time is spent in host/device memcpy and only a small fraction in kernels. 
Token throughput is ~1.4\u20131.6 tok/s on a 70B model, which is far below what available compute bandwidth suggests.\n\nThis looks similar to prior reports where HuggingFace decode is memcpy-bound rather than compute-bound.\n\nHardware\nAMD Ryzen AI MAX 395 (Strix Halo APU)\nArchitecture: gfx1151\nMemory: LPDDR5 UMA\nUMA / VRAM reservation: 96 GB (tests repeated at 64 GB and AUTO)\n\nSoftware\nUbuntu 25.04\nROCm 7.10 / 7.11 (behavior same across versions tested)\nPyTorch ROCm wheels\nHuggingFace Transformers\nBitsandbytes (only for 4-bit runs \u2014 issue still occurs without it)\n\nTest conditions (to rule out confounders)\nThe behavior reproduces under:\nFP16 / BF16 (no quantization)\n4-bit (bitsandbytes)\nmodel.eval()\nuse_cache=True\ngreedy decode\ndevice_map={\"\": 0}\nKV cache on device\n\nWe confirmed it is not caused by:\nGEMM kernel throughput\nSDPA / Flash / Math attention backend selection\nquantization behavior\nCPU fallback execution\nOOM / retry logic\ntokenizer staging\n\nThe issue appears tied specifically to decode-time tensor residency / paging.\n\nWhat is working (compute path)\nGEMM performance looks normal at both 96 GB and 64 GB UMA:\n\n=== GEMM Benchmark (bf16, 4096x4096) ===\nUMA 96G\nAvg: 0.007659 s ~17.94 TFLOP/s\n\nUMA 64G\nAvg: 0.007315 s ~18.79 TFLOP/s\n\nSo compute kernels are healthy and do not appear to be the bottleneck.\n\nWhat is failing (decode path)\nAcross all UMA modes (96G / 64G / AUTO\u224864G), decode profiling shows:\n~92\u201395% in hipMemcpyWithStream\nonly ~4\u20136% in hipLaunchKernel\n\nThis is consistent across:\nFP16 / BF16 and 4-bit\nshort and long prompts\nmultiple runs\n\nExample (96G, 4-bit decode):\nhipMemcpyWithStream 95.47%\nhipLaunchKernel 4.37%\nSelf CPU total: ~42.7s\n\nExample (96G, FP16 decode):\nhipMemcpyWithStream 92.80%\nhipLaunchKernel 6.09%\nSelf CPU total: ~37.7s\n\n64G and AUTO (~64G) produce almost identical profiles.\n\nThis suggests decode-time tensors / KV cache are being re-materialized in host / UMA memory and copied back to the GPU on each generation step instead of remaining resident \u2014 even in the non-quantized FP16 path.\n\nHSA / rocminfo excerpt (gfx1151 APU memory pools)\n(excerpt preserved \u2014 full output attached)\n\nMemory Properties: APU\nCoherent Host Access: FALSE\nPool 1/2: GLOBAL (coarse / extended fine)\nSize: 100663296 KB (~96GB)\nAllocatable: TRUE\n\n\n\n[repro_4bit_decode_profiler.py](https://github.com/user-attachments/files/24424918/repro_4bit_decode_profiler.py)\n[repro_gemm_baseline.py](https://github.com/user-attachments/files/24424919/repro_gemm_baseline.py)\n[repro_fp16_decode_profiler.py](https://github.com/user-attachments/files/24424917/repro_fp16_decode_profiler.py)\n\n[rocm-info_preauth.log](https://github.com/user-attachments/files/24424935/rocm-info_preauth.log)\n\n.\n\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.9.1+rocm7.11.0a20251216\nIs debug build: False\nCUDA used to build PyTorch: N/A\nROCM used to build PyTorch: 7.2.53150-676f9ed34d\n\nOS: Ubuntu 25.04 (x86_64)\nGCC version: (Ubuntu 14.2.0-19ubuntu2) 14.2.0\nClang version: Could not collect\nCMake version: version 3.31.6\nLibc version: glibc-2.41\n\nPython version: 3.12.12 | packaged by conda-forge | (main, Oct 22 2025, 23:25:55) [GCC 14.3.0] (64-bit runtime)\nPython platform: Linux-6.16.12-061612-generic-x86_64-with-glibc2.41\nIs CUDA available: True\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: \nGPU models and configuration: Radeon 8060S Graphics 
(gfx1151)\nNvidia driver version: Could not collect\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: 7.2.53150\nMIOpen runtime version: 3.5.1\nIs XNNPACK available: True\nCaching allocator config: N/A\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: AuthenticAMD\nModel name: AMD RYZEN AI MAX+ 395 w/ Radeon 8060S\nCPU family: 26\nModel: 112\nThread(s) per core: 2\nCore(s) per socket: 16\nSocket(s): ", "url": "https://github.com/pytorch/pytorch/issues/171687", "state": "open", "labels": [ "module: rocm", "triaged" ], "created_at": "2026-01-04T23:53:11Z", "updated_at": "2026-01-05T12:45:47Z", "comments": 0, "user": "BellaDoggie" }, { "repo": "pytorch/pytorch", "number": 171656, "title": "torch.distributed.pipelining fails on models having DynamicCache (esp. Llama)", "body": "### \ud83d\udc1b Describe the bug\n\ntorch.distributed.pipelining fails on model having DynamicCache.\n\nShould this work? It's pared down from the PiPPy Llama2 example from the documentation (https://docs.pytorch.org/docs/stable/distributed.pipelining.html#hugging-face-examples)\n\nOriginally I was trying to use Llama 3.1 but was having the same issue so I fell back to the example.\n\nIt looks like pipelining can't handle DynamicCache (and doesn't provide a fix). From what I read they're pretty common in Huggingface models. Is there an approach to making torch pipelining applicable?\n\n```\n[host:Pipeline] cat bug1.py\nimport os\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\nfrom torch.distributed.pipelining import SplitPoint, pipeline\n\nmodel_dir = \"NousResearch/Llama-2-7b-chat-hf\"\nwith torch.device('cpu') :\n llama = AutoModelForCausalLM.from_pretrained(model_dir)\nprint(llama)\ntokenizer = AutoTokenizer.from_pretrained(model_dir)\ntokenizer.pad_token = tokenizer.eos_token\nmb_prompts = (\n \"How do you\", \"I like to\",\n) # microbatch size = 2\n\nrank = 0\nworld_size = 4\n\n# Cut model by equal number of layers per rank\nlayers_per_rank = llama.config.num_hidden_layers // world_size\nprint(f\"layers_per_rank = {layers_per_rank}\")\nsplit_spec = {\n f\"model.layers.{i * layers_per_rank}\": SplitPoint.BEGINNING\n for i in range(1, world_size)\n}\n\n# Create a pipeline representation from the model\nmb_inputs = tokenizer(mb_prompts, return_tensors=\"pt\", padding=True)\npipe = pipeline(llama, mb_args=(mb_inputs[\"input_ids\"],))\n\nprint(\"Pipe:\\n\", pipe)\n```\n\n```\n[host:Pipeline] python bug1.py\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2/2 [01:49<00:00, 54.80s/it]\nLlamaForCausalLM(\n (model): LlamaModel(\n (embed_tokens): Embedding(32000, 4096, padding_idx=0)\n (layers): ModuleList(\n (0-31): 32 x LlamaDecoderLayer(\n (self_attn): LlamaAttention(\n (q_proj): Linear(in_features=4096, out_features=4096, bias=False)\n (k_proj): Linear(in_features=4096, out_features=4096, bias=False)\n (v_proj): Linear(in_features=4096, out_features=4096, bias=False)\n (o_proj): Linear(in_features=4096, out_features=4096, bias=False)\n )\n (mlp): LlamaMLP(\n (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)\n (up_proj): Linear(in_features=4096, out_features=11008, bias=False)\n (down_proj): 
Linear(in_features=11008, out_features=4096, bias=False)\n (act_fn): SiLUActivation()\n )\n (input_layernorm): LlamaRMSNorm((4096,), eps=1e-05)\n (post_attention_layernorm): LlamaRMSNorm((4096,), eps=1e-05)\n )\n )\n (norm): LlamaRMSNorm((4096,), eps=1e-05)\n (rotary_emb): LlamaRotaryEmbedding()\n )\n (lm_head): Linear(in_features=4096, out_features=32000, bias=False)\n)\nlayers_per_rank = 8\n/opt/AI/training-2.9.0/lib/python3.12/site-packages/torch/distributed/pipelining/_IR.py:1005: FutureWarning: `torch.export.export_for_training` is deprecated and will be removed in PyTorch 2.10. Please use `torch.export.export` instead, which is functionally equivalent.\n ep = torch.export.export_for_training(\n/opt/AI/training-2.9.0/lib/python3.12/site-packages/torch/_dynamo/output_graph.py:1711: UserWarning: While exporting, we found certain side effects happened in the model.forward. Here are the list of potential sources you can double check: ['']\n warnings.warn(\nTraceback (most recent call last):\n File \"/opt/AI/training-2.9.0/lib/python3.12/site-packages/torch/distributed/pipelining/_IR.py\", line 1005, in _trace_with_export\n ep = torch.export.export_for_training(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/AI/training-2.9.0/lib/python3.12/site-packages/typing_extensions.py\", line 3004, in wrapper\n return arg(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^\n File \"/opt/AI/training-2.9.0/lib/python3.12/site-packages/torch/export/__init__.py\", line 154, in export_for_training\n return _export_for_training(\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/AI/training-2.9.0/lib/python3.12/site-packages/torch/export/_trace.py\", line 1163, in wrapper\n raise e\n File \"/opt/AI/training-2.9.0/lib/python3.12/site-packages/torch/export/_trace.py\", line 1129, in wrapper\n ep = fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/AI/training-2.9.0/lib/python3.12/site-packages/torch/export/exported_program.py\", line 124, in wrapper\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/AI/training-2.9.0/lib/python3.12/site-packages/torch/export/_trace.py\", line 2071, in _export_for_training\n export_artifact = export_func(\n ^^^^^^^^^^^^\n File \"/opt/AI/training-2.9.0/lib/python3.12/site-packages/torch/export/_trace.py\", line 1415, in _strict_export\n gm_torch_level = _export_to_torch_ir(\n ^^^^^^^^^^^^^^^^^^^^\n File \"/opt/AI/training-2.9.0/lib/python3.12/site-packages/torch/export/_trace.py\", line 812, in _e", "url": "https://github.com/pytorch/pytorch/issues/171656", "state": "open", "labels": [ "oncall: distributed" ], "created_at": "2026-01-03T21:32:58Z", "updated_at": "2026-01-05T12:48:54Z", "comments": 2, "user": "hpcpony" }, { "repo": "pytorch/pytorch", "number": 171594, "title": "Can you tell me which kernel function be used?", "body": "I'm newer for pytorch source code, but I want copy some pytorch cuda kernel to my project. \n\nFor example, \"images data format nchw use torch.nn.functional.interpolate(..., antialias=False)\",\nthen I find the function torch._C._nn.upsample_bilinear2d(...) in functional.py to use.\n\nI find some kernel in https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cuda/UpSampleBilinear2d.cu\n\nis torch._C._nn.upsample_bilinear2d use kernel in this file? 
and which kernel use?", "url": "https://github.com/pytorch/pytorch/issues/171594", "state": "closed", "labels": [], "created_at": "2026-01-01T07:37:53Z", "updated_at": "2026-01-03T06:58:52Z", "comments": 2, "user": "lzcchl" }, { "repo": "pytorch/pytorch", "number": 171592, "title": "When does it make sense to compile DDP vs not?", "body": "Hello,\n\nI have been looking online, but have seen conflicting information.\n\nSay I can `fullgraph` compile a model with `max-autotune`:\n```python\n compiled_model = torch.compile(raw_model, fullgraph=True, mode=\"max-autotune\")\n ddp_model = DDP(\n compiled_model,\n device_ids=[local_rank],\n output_device=local_rank,\n bucket_cap_mb=100,\n )\n```\nDoes it make sense to do it this way?\n\nOr would it be better to turn off `fullgraph` and then compile the DDP model instead?\n\nThis is quite unclear to me what the correct set of steps is.\n\nThank you,\n\nEnrico", "url": "https://github.com/pytorch/pytorch/issues/171592", "state": "closed", "labels": [], "created_at": "2026-01-01T02:12:06Z", "updated_at": "2026-01-05T14:54:02Z", "comments": 1, "user": "conceptofmind" }, { "repo": "pytorch/executorch", "number": 16422, "title": "java linux cannot work , we need executorch java jar format package \uff0cplease support", "body": "### \ud83d\udc1b Describe the bug\n\njava linux cannot work ,\nI just can't figure it out. I've been communicating with you for a month now, so why can you still not compile a pure Java JAR that allows Java to use executors on Linux, macOS, and Windows? You insist on using JNI to bundle androidx.core in an AAR format, which is completely unusable in Java Maven and SBT projects. This is such a practical need. I've seen how many users in the issues are requesting you to provide an official JAR package format, but you always turn a blind eye. Why is that? Are you worried about something? Isn't it a good thing to expand to more platforms and users? As the project's management, can you really bear to do this? Users simply don't have the ability to package things with C++ or JavaCPP, so why make them do the packaging themselves? That is unreasonable in itself.\n\n### Versions\n\ndd\n\ncc @kirklandsign @cbilgin", "url": "https://github.com/pytorch/executorch/issues/16422", "state": "open", "labels": [ "module: android" ], "created_at": "2025-12-31T10:09:02Z", "updated_at": "2026-01-06T07:52:28Z", "comments": 2, "user": "mullerhai" }, { "repo": "pytorch/pytorch", "number": 171537, "title": "`torch.compile(dynamic=True)` + `torch.func` triggers internal assertion error.", "body": "### \ud83d\udc1b Describe the bug\n\nThis is a bug in pytorch 2.8, with `nvcc` version `release 12.9, V12.9.86` on Ubuntu linux. It repros on BOTH my `RTX 5060 TI 16GB` AND on CPU.\n\nThe specific error message is `RuntimeError('isIntList() INTERNAL ASSERT FAILED at \"/pytorch/aten/src/ATen/core/ivalue_inl.h\":1979, please report a bug to PyTorch. Expected IntList but got GenericList')`\n\nI spent hours trying to find a simple repro and can't. But whoever is assigned to investigate I can provide access to my (currently private) github repo so they can repro it themselves. 
The specific scenario seems to require:\n\n- Must be `torch.compile`d (does not repro when using eager mode)\n- Must use `torch.func` stack (does not repro with `torch.autograd`, though admittedly I cant test compiled with `autograd` due to pytorch limitations)\n- Must specifically be compiled with `dynamic=True` (the code succeeds with `dynamic=False`)\n\nAgain, the below is NOT a repro case, but an example usage. The relevant code for my use case is:\n\n```\n def functional_loss_step(\n params_dict: dict[str, torch.Tensor],\n buffers_dict: dict[str, torch.Tensor],\n pc: MyPytreeStructure,\n species: torch.Tensor,\n target_energy: torch.Tensor,\n target_forces: torch.Tensor,\n ) -> torch.Tensor:\n def compute_energy_functional(\n input_pc: MyPytreeStructure,\n ):\n result = torch.func.functional_call( # type: ignore[no-any-return]\n model,\n (params_dict, buffers_dict),\n (input_pc, species),\n )\n return result[1]\n\n per_batch_energies, vjp_fn = torch.func.vjp(compute_energy_functional, pc)\n\n # Compute second order derivitives.\n cotangents = torch.ones_like(per_batch_energies)\n (pc_grads,) = vjp_fn(cotangents)\n forces = -pc_grads.edges._positions\n\n predictions = LossData(per_batch_energies, forces)\n targets = LossData(target_energy, target_forces)\n\n return criterion(predictions, targets) # type: ignore[no-any-return]\n```\n\nWhere `MyPytreeStructure` is a custom object registered with pytree.\n\nPlease investigate - there is no alternative path to combining `torch.compile` with second order derivitives.\n\n### Error logs\n\n```\nTraceback (most recent call last):\n File \"/home/ryan/src/environment/examples/nequip/smoke_test.py\", line 65, in \n train_losses, val_losses = train_nequip(hyperparameters)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/ryan/src/environment/examples/nequip/main.py\", line 595, in train_nequip\n grads_dict, current_loss = calculate_loss_compiled(\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/ryan/anaconda3/envs/environment/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py\", line 736, in compile_wrapper\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/home/ryan/anaconda3/envs/environment/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 1495, in __call__\n return self._torchdynamo_orig_callable(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/ryan/anaconda3/envs/environment/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 629, in __call__\n return _compile(\n ^^^^^^^^^\n File \"/home/ryan/anaconda3/envs/environment/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 1111, in _compile\n guarded_code = compile_inner(code, one_graph, hooks, transform)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/ryan/anaconda3/envs/environment/lib/python3.12/site-packages/torch/_utils_internal.py\", line 97, in wrapper_function\n return function(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/ryan/anaconda3/envs/environment/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 793, in compile_inner\n return _compile_inner(code, one_graph, hooks, transform)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/ryan/anaconda3/envs/environment/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 832, in _compile_inner\n out_code = transform_code_object(code, transform)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/ryan/anaconda3/envs/environment/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py\", 
line 1424, in transform_code_object\n transformations(instructions, code_options)\n File \"/home/ryan/anaconda3/envs/environment/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 267, in _fn\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/home/ryan/anaconda3/envs/environment/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 753, in transform\n tracer.run()\n File \"/home/ryan/anaconda3/envs/environment/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py\", line 3497, in run\n super().run()\n File \"/home/ryan/ana", "url": "https://github.com/pytorch/pytorch/issues/171537", "state": "open", "labels": [ "oncall: pt2" ], "created_at": "2025-12-30T20:35:47Z", "updated_at": "2026-01-02T10:19:24Z", "comments": 0, "user": "rwkeane" }, { "repo": "pytorch/pytorch", "number": 171516, "title": "How to verify that default_decompositions successfully reduce operators to the Core ATen IR set?", "body": "Hi\uff5e\n\nIs there a way to test if all ops in `default_decompositions` can be fully decomposed into the Core ATen IR (~180 ops) using `ep.run_decompositions`, as specified in the Export IR documentation (https://docs.pytorch.org/docs/stable/export.html#export-ir-decompositions)?\n\n\ncc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4", "url": "https://github.com/pytorch/pytorch/issues/171516", "state": "open", "labels": [ "oncall: pt2", "oncall: export" ], "created_at": "2025-12-30T09:22:16Z", "updated_at": "2026-01-05T16:23:29Z", "user": "Tongkaio" }, { "repo": "pytorch/pytorch", "number": 171501, "title": "Several Windows-related GitHub Actions not running \u2014 are they intentionally disabled?", "body": "Hi PyTorch team,\nI noticed that several Windows-related GitHub Actions workflows have not run for quite some time. Could you please help confirm whether each of these workflows is intentionally not running, and if not, whether there are plans or timelines for re\u2011enabling them?\nThe workflows in question are:\n\n- https://github.com/pytorch/pytorch/actions/workflows/win-arm64-build-test.yml\n- https://github.com/pytorch/pytorch/actions/workflows/generated-windows-arm64-binary-libtorch-nightly.yml\n- https://github.com/pytorch/pytorch/actions/workflows/_win-arm64-build.yml\n- https://github.com/pytorch/pytorch/actions/workflows/generated-windows-binary-conda-nightly.yml\n- https://github.com/pytorch/pytorch/actions/workflows/generated-windows-binary-libtorch-nightly.yml\n- https://github.com/pytorch/pytorch/actions/workflows/_win-build.yml\n\nIn particular, the workflow https://github.com/pytorch/pytorch/actions/workflows/win-arm64-build-test.yml appears to have been manually disabled and was not re\u2011enabled even after a related fix was merged: https://github.com/pytorch/pytorch/actions/workflows/win-arm64-build-test.yml\n\nThanks in advance for your help!\n\ncc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @seemethere @malfet @pytorch/pytorch-dev-infra @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01 @nWEIdia", "url": "https://github.com/pytorch/pytorch/issues/171501", "state": "open", "labels": [ "module: windows", "module: ci", "triaged", "module: arm" ], "created_at": "2025-12-30T05:29:20Z", "updated_at": "2026-01-05T14:46:01Z", "comments": 2, "user": "vortex-captain" }, { "repo": "pytorch/executorch", "number": 16413, "title": "Batch Inference On 8255 device", "body": "Hi, I want to perform batch inference on the 8255 device now. 
\nI noticed there is a --num_iters parameter in qnn_llama_runner. Is this parameter for batch inference? Additionally, how can I use the KV cache, that is, load the model and system_prompt once and then perform multiple inferences. \nLooking forward to your reply.\n\ncc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin", "url": "https://github.com/pytorch/executorch/issues/16413", "state": "open", "labels": [ "partner: qualcomm", "module: qnn" ], "created_at": "2025-12-30T02:55:46Z", "updated_at": "2026-01-06T07:15:45Z", "comments": 6, "user": "imjking" }, { "repo": "pytorch/tutorials", "number": 3710, "title": "[DCP] Add DefaultStager example to distributed async checkpoint recipe", "body": "### \ud83d\ude80 Feature Request\n\n**Description**\nThe current `distributed_async_checkpoint_recipe` covers basic usage of `dcp.async_save` and Pinned Memory optimization. However, it does not cover the **fully asynchronous staging** capabilities introduced in PyTorch 2.9 via `DefaultStager`.\n\nEven with `async_save`, the Device-to-Host (D2H) copy (staging phase) typically happens on the main thread, which can block the training loop.\n\n**Proposal**\nI would like to update the tutorial to include a new section on **\"Fully Asynchronous Staging with DefaultStager\"**.\n\nThis update will demonstrate:\n1. How to use the `async_stager=DefaultStager()` argument.\n2. How to correctly synchronize staging to achieve full overlap between the D2H copy and the **Forward + Backward** pass of the next step.\n3. Timeline comparison between standard async save and stager-based async save.\n\nI have already prepared the content and code example.", "url": "https://github.com/pytorch/tutorials/issues/3710", "state": "open", "labels": [], "created_at": "2025-12-29T13:28:55Z", "updated_at": "2025-12-29T13:28:55Z", "comments": 0, "user": "niyunsheng" }, { "repo": "pytorch/pytorch", "number": 171392, "title": "[Bug] c10::SmallVector: getNewCapacity has unused TSize parameter \u2014 remove or use for overflow-safety?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIn [`c10/util/SmallVector.cpp`](https://github.com/pytorch/pytorch/blob/913ea815a4555747729eb2206266411782f29370/c10/util/SmallVector.cpp#L87C53-L87C58) we have:\n\n`template static size_t getNewCapacity(size_t MinSize, size_t TSize, size_t OldCapacity)`\n\nCurrently `TSize` is unused.\n\nWe can:\n1. Remove TSize from getNewCapacity (simplify signature), or\n2. Use TSize to clamp the maximum capacity (e.g. MaxSize = min(numeric_limits::max(), SIZE_MAX / TSize)) and make growth arithmetic overflow-safe.\n\nWhat is preferred? I can send a PR with the better option later.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @jbschlosser", "url": "https://github.com/pytorch/pytorch/issues/171392", "state": "open", "labels": [ "module: cpp", "triaged" ], "created_at": "2025-12-27T22:54:34Z", "updated_at": "2026-01-05T17:48:08Z", "comments": 4, "user": "yewentao256" }, { "repo": "pytorch/ao", "number": 3543, "title": "[MXLinear]Where is the operator call for implementing MXFP8 in NVD?", "body": "In the forward method of the MXLinear class, `mx_mm.apply` is called, although `MXTensor.to_mx` is also invoked. 
The following code implements the quantization processing of MXFP8\uff1a\nscale_e8m0_biased, data_lp = to_mx(data_hp, elem_dtype, block_size, scaling_mode, is_swizzled_scales)\n\nWhen examining the implementation of to_mx, I noticed that it does not call any CUDA-related low-precision operators; instead, it uses simulated low-precision implementations. What could be the reason for this? And where are the CUDA MXFP8 low-precision operators called? Thank you.", "url": "https://github.com/pytorch/ao/issues/3543", "state": "open", "labels": [], "created_at": "2025-12-25T09:58:57Z", "updated_at": "2025-12-26T07:21:30Z", "user": "LucaHW" }, { "repo": "pytorch/executorch", "number": 16392, "title": "Reasoning without using the think function", "body": "Hi, i want to use Qwen3_0.6B model in 8255 device, i exported pte model and run it on device successfully. Now i want to disable the \"think\" function to verify something, how can i achieve it ?\nI use the following command and get outputs.txt:\n./qnn_llama_runner_ndk27 --decoder_model_version qwen3 --tokenizer_path tokenizer.json --model_path hybrid_llama_qnn.pte --prompt \"who are you\" --seq_len 512 --eval_mode 1 --temperature 0.8 && cat outputs.txt\n\n\"Image\"\n\ncc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin", "url": "https://github.com/pytorch/executorch/issues/16392", "state": "closed", "labels": [ "partner: qualcomm", "module: qnn" ], "created_at": "2025-12-24T12:24:35Z", "updated_at": "2025-12-30T02:32:04Z", "comments": 2, "user": "imjking" }, { "repo": "pytorch/executorch", "number": 16391, "title": "Tokenizer fails on iOS (RE2 lookahead unsupported) \u2013 need regex_lookahead static lib or guidance", "body": "### \ud83d\udc1b Describe the bug\n\nSummary\niOS Flutter app using ExecuTorch LLM (Qwen3 0.6B) cannot load the tokenizer because RE2 does not support lookahead (?!\\S).\nSPM branch: swiftpm-1.1.0.20251223 (no visible regex_lookahead target/lib).\nLogs ask to link regex_lookahead, but SPM did not produce the static lib.\nEnvironment\nPlatform: iOS Simulator (iPhone 16 Pro), macOS, Xcode 15.\nExecuTorch via SwiftPM branch swiftpm-1.1.0.20251223.\nApp: Flutter, native plugin calling TextRunner.load(modelPath, tokenizerPath).\nModel: qwen3_0.6B_model.pte (~518MB).\nTokenizer: tokenizer (1).json (~11MB) containing lookahead.\nLogs (Xcode)\nE re2.cc:237 Error parsing ... invalid perl operator: (?!E tokenizers:regex.cpp:66 RE2 doesn't support lookahead patterns. Link with `regex_lookahead` to enable support.I tokenizers:hf_tokenizer.cpp:166 Could not parse pre_tokenizer: Error: 9\nWhat I\u2019ve tried\nPatched tokenizer to remove (?!\\S) \u2192 error disappears, but this is a workaround.\nSearched for libregex_lookahead*.a in DerivedData: not found (this SPM branch doesn\u2019t seem to include it).\nBackends force-loaded fine; only regex_lookahead is missing.\nQuestions / help needed\n1) Does the swiftpm-1.1.0.x branch ship a regex_lookahead target/static lib? 
If yes, how to enable it so SPM produces libregex_lookahead.a?\n2) If not, can you provide guidance or a prebuilt libregex_lookahead.a (simulator/device) for manual linking?\n3) Is there a \u201cclean\u201d tokenizer (no lookahead) recommended for Qwen3 0.6B in the ExecuTorch LLM samples?\nMore info\nI can share the 11MB tokenizer via a private link if needed.\n\n\n### Versions\nswiftpm-1.1.0.20251223\n", "url": "https://github.com/pytorch/executorch/issues/16391", "state": "open", "labels": [], "created_at": "2025-12-24T09:14:42Z", "updated_at": "2025-12-24T09:43:59Z", "comments": 0, "user": "quocanh0712" }, { "repo": "pytorch/pytorch", "number": 171204, "title": "Dynamo can't trace a code when we construct nn.Parameter in the forward.", "body": "### \ud83d\udc1b Describe the bug\n\n```python \nimport torch\nimport torch._dynamo\n\ntorch._dynamo.config.graph_break_on_nn_param_ctor = False\n\ndef fn(x):\n w = torch.nn.Parameter(torch.ones(4, 4))\n if w.grad is None:\n w.grad = torch.zeros_like(w)\n return w.grad + x\n\nx = torch.randn(4, 4)\ncompiled_fn = torch.compile(fn, backend='eager', fullgraph=True)\nresult = compiled_fn(x)\n```\n```\nUnsupported: Failed to trace builtin operator\n Explanation: Dynamo does not know how to trace builtin operator `add` with argument types ['', 'Tensor'] (has_kwargs False)\n Hint: Avoid calling builtin `add` with argument types ['', 'Tensor']. Consider using an equivalent alternative function/method to `add`.\n Hint: If you are attempting to call a logging function (e.g. `print`), you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.\n Hint: Please report an issue to PyTorch.\n\n Developer debug context: builtin add [, ] False\n\n For more details about this graph break, please visit: https://meta-pytorch.github.io/compile-graph-break-site/gb/gb0059.html\n\nfrom user code:\n File \"/tmp/ipykernel_616085/151731544.py\", line 10, in fn\n return w.grad + x\n\nSet TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS=\"+dynamo\"\n```\nI think this is due to the fact we are emitting generic GetAttr node for w.grad instead of proper type. \n\n### Versions\n\nmain\n\ncc @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/171204", "state": "open", "labels": [ "oncall: pt2" ], "created_at": "2025-12-23T19:41:48Z", "updated_at": "2026-01-05T14:52:45Z", "comments": 1, "user": "tugsbayasgalan" }, { "repo": "pytorch/executorch", "number": 16374, "title": "`strided_copy` operator in output graph when sample input has been transposed", "body": "### \ud83d\udc1b Describe the bug\n\nI occasionally read existing model calibration data from Numpy arrays that are in NHWC order when deploying with ExecuTorch. 
Whenever I do that and transpose the calibration data to NCHW, the output graph contains an `as_strided_copy` operator, even if I have previously called `.contiguous()` on the tensor.\n\nMinimal working example to reproduce the behavior:\n\n```python\n\"\"\"\nMinimal working example of ExecuTorch PTE export functionality.\n\"\"\"\n\nimport os\nimport torch\nimport torch.nn as nn\nfrom torch.export import export\nfrom executorch.exir import to_edge_transform_and_lower, EdgeCompileConfig, ExecutorchBackendConfig\nfrom executorch.extension.export_util.utils import save_pte_program\nfrom executorch.backends.arm.ethosu import EthosUCompileSpec, EthosUPartitioner\nfrom executorch.backends.arm.quantizer import EthosUQuantizer, get_symmetric_quantization_config\nfrom torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e\n\n# Constants\nOUTPUT_DIR = \"./output\"\nMODEL_NAME = \"test_model\"\nACCELERATOR_CONFIG = \"ethos-u55-128\"\nSYSTEM_CONFIG = \"Ethos_U55_High_End_Embedded\"\nMEMORY_MODE = \"Shared_Sram\"\n\n\nclass SimpleModel(nn.Module):\n \"\"\"Simple CNN model for testing.\"\"\"\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(1, 8, 3, padding=1)\n self.pool = nn.AdaptiveAvgPool2d((1, 1))\n self.fc = nn.Linear(8, 10)\n \n def forward(self, x):\n x = torch.relu(self.conv1(x))\n x = self.pool(x)\n x = x.view(x.size(0), -1)\n return self.fc(x)\n\n\ndef prepare_input_data(single_sample=False):\n \"\"\"Prepare dummy input data in channels-first format.\"\"\"\n dummy_data_nchw = torch.rand(100, 1, 49, 10) # Dummy data for example\n \n if single_sample:\n dummy_data_nchw = dummy_data_nchw[0:1]\n \n return (dummy_data_nchw,)\n\n\ndef quantize_model(model):\n \"\"\"Quantize model using EthosU quantizer.\"\"\"\n # Create compile spec\n compile_spec = EthosUCompileSpec(\n ACCELERATOR_CONFIG,\n system_config=SYSTEM_CONFIG,\n memory_mode=MEMORY_MODE,\n extra_flags=[\"--verbose-operators\", \"--verbose-cycle-estimate\"],\n )\n \n # Setup quantizer\n quantizer = EthosUQuantizer(compile_spec)\n operator_config = get_symmetric_quantization_config()\n quantizer.set_global(operator_config)\n \n model = torch.export.export(model, prepare_input_data(single_sample=True), strict=True).module()\n prepared_model = prepare_pt2e(model, quantizer)\n \n # Calibrate with dummy data\n calibration_inputs = prepare_input_data()[0]\n for x in calibration_inputs:\n prepared_model(x)\n \n # Convert to quantized model\n return convert_pt2e(prepared_model)\n\n\ndef lower_to_arm_backend(exported_program):\n \"\"\"Apply ARM backend transformations.\"\"\"\n compile_spec = EthosUCompileSpec(\n ACCELERATOR_CONFIG,\n system_config=SYSTEM_CONFIG,\n memory_mode=MEMORY_MODE,\n extra_flags=[\"--verbose-operators\", \"--verbose-cycle-estimate\"],\n )\n \n partitioner = EthosUPartitioner(compile_spec)\n \n edge_program_manager = to_edge_transform_and_lower(\n exported_program,\n partitioner=[partitioner],\n compile_config=EdgeCompileConfig(_check_ir_validity=False),\n )\n\n return edge_program_manager\n\n\ndef export_pte_example():\n \"\"\"Main export function - minimal working example.\"\"\"\n os.makedirs(OUTPUT_DIR, exist_ok=True)\n \n print(\"Creating model...\")\n model = SimpleModel()\n \n print(\"Preparing input data...\")\n tracing_inputs = prepare_input_data(single_sample=True)\n \n print(\"Quantizing model...\")\n quantized_model = quantize_model(model)\n \n print(\"Exporting quantized model...\")\n exported_program = export(quantized_model, tracing_inputs, strict=True)\n \n print(\"Lowering 
to ARM backend...\")\n edge_program = lower_to_arm_backend(exported_program)\n \n print(\"Creating ExecutorTorch program...\")\n exec_prog = edge_program.to_executorch(\n config=ExecutorchBackendConfig(extract_delegate_segments=False)\n )\n \n print(\"Saving PTE model...\")\n output_path = os.path.join(OUTPUT_DIR, f\"{MODEL_NAME}_quantized.pte\")\n save_pte_program(exec_prog, output_path)\n \n print(f\"Successfully exported model to: {output_path}\")\n return output_path\n\n\nif __name__ == \"__main__\":\n try:\n pte_path = export_pte_example()\n print(\"Export completed successfully!\")\n except Exception as e:\n print(f\"Export failed: {e}\")\n raise\n```\nI get an output graph that can entirely be delegated to U55.\n\nWhen I change `prepare_input_data` to this:\n```python\ndef prepare_input_data(single_sample=False):\n \"\"\"Prepare dummy input data in channels-first format.\"\"\"\n dummy_data_nhwc = torch.rand(100, 49, 10, 1)\n \n # Transpose from channels-last to channels-first\n axes = [0, -1] + list(range(1, len(dummy_data_nhwc.shape) - 1))\n dummy_data_nchw = dummy_data_nhwc.permute(axes).contiguous()", "url": "https://github.com/pytorch/executorch/issues/16374", "state": "open", "labels": [ "module: exir", "module: arm" ], "created_at": "2025-12-23T14:45:30Z", "updated_at": "2025-12-24T15:40:34Z", "comments": 1, "user": "etrommer" }, { "repo": "pytorch/pytorch", "number": 171158, "title": "`torch.func.grad` to allow some inplace ops", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nAt least for those under `@torch.no_grad` context manager\n\nCurrently only `torch.func.grad` allows to fullgraph-compile computing grads wrt inputs because of:\n- https://github.com/pytorch/pytorch/issues/170487\nso it's an important usecase\n\n\n`torch.autograd.grad` is fine with some leaf nodes being inplace updated if this happens under `@torch.no_grad`:\n- https://github.com/huggingface/transformers/issues/43010\n\nan example of this is model's forward modifying a static cache\n\nif we want to compute gradients wrt input embeds, it passes with `torch.autograd.grad` but breaks with `torch.func.grad`:\n```\ntorch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_method index_copy_(*(FakeTensor(..., device='cuda:0', size=(64, 4, 512, 64), dtype=torch.bfloat16), 2, GradTrackingTensor(lvl=1, value=\n FakeTensor(..., device='cuda:0', size=(145,), dtype=torch.int64)\n), GradTrackingTensor(lvl=1, value=\n FakeTensor(..., device='cuda:0', size=(64, 4, 145, 64), dtype=torch.bfloat16,\n grad_fn=)\n)), **{}): got RuntimeError('During a grad (vjp, jvp, grad, etc) transform, the function provided attempted to call in-place operation (aten::index_copy_) that would mutate a captured Tensor. This is not supported; please rewrite the function being transformed to explicitly accept the mutated Tensor(s) as inputs.')\n```\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @ezyang @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan @bobrenjc93 @Chillee @samdow @kshitij12345", "url": "https://github.com/pytorch/pytorch/issues/171158", "state": "open", "labels": [ "module: autograd", "triaged", "module: functorch" ], "created_at": "2025-12-23T04:16:33Z", "updated_at": "2026-01-05T17:20:24Z", "comments": 2, "user": "vadimkantorov" }, { "repo": "pytorch/pytorch", "number": 171080, "title": "NxN BlockMask / Cumulative Sequence Length", "body": "Hi,\nI tried to implement FlexAttention for large batch training. 
Each attention layer computes attention within a window. My tensor is a batch-packed tensor to handle variable sequence lengths. _The size of each batch sequence changes with the data (some batch samples are longer than other)_\n\nThis means that I not only need to mask the window, but also the batches. In FlashAttention2 this is simple, i just pass a cumulative sequence length tensor, e.g.\n\n```python\nbounds = torch.tensor([ 0, 1024, 2048, 3072, 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, 30720, 31744, 32768, 33792, 34816, 35840, 36864, 37888, 38912, 39936, 40960, 41984, 43008, 44032, 45056, 46080, 47104, 48128, 49152, ... ]) # Here window size = 1024\n```\n\n\n**The issue:**\nIm training with a very large batch size, so I have thousands of window/batch-sequence blocks.\n\nI tried various ways to calculate the BlockMask, but it always materializes a NxN matrix. Using compile is extremely slow (~20x slower than FA2), since I have to recalculate the BlockMask at each sample. Padding is not an option, as this is too inefficient.\n\n**My question**\nIs there a way to use a similar cumulative sequence length tensor, as in FA2, without materializing memory-prohibitive NxN masks in any part of flex attention?\n\ncc @chauhang @penguinwu @Chillee @drisspg @yanboliang @BoyuanFeng", "url": "https://github.com/pytorch/pytorch/issues/171080", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: flex attention" ], "created_at": "2025-12-22T09:58:22Z", "updated_at": "2025-12-22T17:50:15Z", "comments": 1, "user": "L-Reichardt" }, { "repo": "pytorch/pytorch", "number": 170926, "title": "Could we have a unified method on c10::Stream to access the underlying pointer that the c10::Stream wraps?", "body": "As title.\n\nAs I understand it, the device generic c10::Stream object is intended to wrap an underlying pointer to the stream object for the accelerator (e.g. `cudaStream_t` for CUDA, `hipStream_t` for ROCM `sycl::queue&` for XPU etc.). I see that there are methods like the following on `CUDAStream`/`XPUStream` that allow users to access the underlying pointer to the respective underlying object.\n\nhttps://github.com/pytorch/pytorch/blob/e782dc0a4e7e8a048de520bd45f1bfa969ed7e3a/c10/cuda/CUDAStream.h#L143-L144\n\nhttps://github.com/pytorch/pytorch/blob/e782dc0a4e7e8a048de520bd45f1bfa969ed7e3a/c10/xpu/XPUStream.h#L116-L117\n\nWould it make sense to have a unified method on the base c10::Stream that returns this to the user? e.g. perhaps `void** native_ptr()` that the user then casts to the type that they expect?\n\n\n\n## Use Case\n\nI've added a [device-generic ABI stable wrapper for c10::Stream](https://github.com/pytorch/pytorch/blob/main/torch/csrc/stable/accelerator.h#L45-L64) to torch/csrc/stable/accelerator.h . It is returned when the user uses the ABI stable variant of `getCurrentStream` that wraps `at::accelerator::getCurrentStream` https://github.com/pytorch/pytorch/blob/e782dc0a4e7e8a048de520bd45f1bfa969ed7e3a/torch/csrc/stable/accelerator.h#L66-L70\n\nhttps://github.com/pytorch/pytorch/blob/e782dc0a4e7e8a048de520bd45f1bfa969ed7e3a/torch/csrc/inductor/aoti_torch/shim_common.cpp#L1510-L1518\n\nI'm looking for a unified way to access the underlying pointer (e.g. so a user can pass it to a raw CUDA/HIP/XPU API but it seems like there is no unified method to access this. 
The only method that seems close is [`id()`](https://github.com/pytorch/pytorch/blob/e782dc0a4e7e8a048de520bd45f1bfa969ed7e3a/c10/xpu/XPUStream.h#L89-L93) which returns a StreamId which is not directly interpretable by the user (for example on CUDA some part of it might be an index into the internal pool of streams used by pytorch).\n\n\n\ncc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens @albanD @guangyey @EikanWang", "url": "https://github.com/pytorch/pytorch/issues/170926", "state": "open", "labels": [ "triaged", "module: PrivateUse1", "module: accelerator" ], "created_at": "2025-12-20T00:35:56Z", "updated_at": "2025-12-31T02:31:09Z", "comments": 3, "user": "mikaylagawarecki" }, { "repo": "pytorch/torchtitan", "number": 2168, "title": "Wrong commands in compiler_toolkit .md?", "body": "### Bug description\n\nThe commands in the readme page of https://github.com/pytorch/torchtitan/tree/main/torchtitan/experiments/compiler_toolkit are wrong? \nOnly the first flex_attention command has `--model.flavor=debugmodel_flex_attn`, the other three don't, and I don't see flex_attention ops in the graph modules if I don't specify the model.flavor.\n\n\n### Versions\n\nmain 1bd2548b14da014b1ec560830f8bdefb6ca568f4 ", "url": "https://github.com/pytorch/torchtitan/issues/2168", "state": "open", "labels": [], "created_at": "2025-12-19T23:25:19Z", "updated_at": "2025-12-19T23:31:37Z", "comments": 2, "user": "yushangdi" }, { "repo": "pytorch/pytorch", "number": 170867, "title": "Operator benchmark: option to measure GPU execution time only (less CPU noise)", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHello,\n\n[Operator benchmark](https://github.com/pytorch/pytorch/tree/main/benchmarks/operator_benchmark) currently measures time in a way that [could be prone to CPU noise](https://github.com/pytorch/pytorch/blob/eba9265a580c6dc3e928ef341c23cab96ccf8b07/benchmarks/operator_benchmark/benchmark_core.py#L350). Is it possible to only measure GPU execution time using `torch.cuda.Event`?\nIf this change is made, this benchmark can be used more robustly for detecting possible regressions across updates, since it would produce more repeatable results.\n\n### Alternatives\n\n* Make the time-measuring code leaner, primarily measuring time spent on the GPU using `torch.cuda.Event` (with appropriate synchronization)..\n* Currently, each operator has a separate file and its settings are [hardcoded in separate files](https://github.com/pytorch/pytorch/blob/999d94b5ede5f4ec111ba7dd144129e2c2725b03/benchmarks/operator_benchmark/pt/as_strided_test.py#L10). 
We could instead define all operators in a single file, similar to something like:\n```\nop_defs = {\n \"add\": {\n \"init\": lambda input_dict: {\n \"input1\": torch.rand(input_dict[\"shape\"], dtype=getattr(torch, input_dict[\"dtype\"]), device=\"cuda\"),\n \"input2\": torch.rand(input_dict[\"shape\"], dtype=getattr(torch, input_dict[\"dtype\"]), device=\"cuda\"),\n },\n \"func\": lambda input_dict: input_dict[\"input1\"] + input_dict[\"input2\"],\n },\n```\n\n### Additional context\n\n_No response_\n\ncc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise @mwootton", "url": "https://github.com/pytorch/pytorch/issues/170867", "state": "open", "labels": [ "oncall: profiler" ], "created_at": "2025-12-19T10:31:38Z", "updated_at": "2025-12-20T22:52:15Z", "comments": 0, "user": "apakbin" }, { "repo": "pytorch/pytorch", "number": 170750, "title": "CUDA: Tensor.index_select out-of-bounds index triggers device-side assert (Indexing.cu:1237) instead of a regular error", "body": "### \ud83d\udc1b Describe the bug\n\n### Bug description\nOn CUDA, calling `Tensor.index_select` with an out-of-bounds index triggers a device-side assert in `../aten/src/ATen/native/cuda/Indexing.cu:1237` (`indexSelectSmallIndex`), and then raises `RuntimeError: CUDA error: device-side assert triggered`.\n\nOn CPU, similar out-of-bounds indexing typically raises a regular Python exception (e.g. `IndexError` / \u201cindex out of range\u201d) without poisoning the CUDA context. On CUDA, the device-side assert is harsh and can cause subsequent CUDA ops to fail as well.\n\n### Minimal repro\n```python\nimport torch\n\n# out-of-bounds index on an empty tensor\nx = torch.empty((0,), device=\"cuda\")\nidx = torch.tensor([1], device=\"cuda\") # OOB\nx.index_select(0, idx)\n\n# Force sync so the error is reported at the correct line\ntorch.cuda.synchronize()\n\n```\n### How to run\nCUDA_LAUNCH_BLOCKING=1 TORCH_SHOW_CPP_STACKTRACES=1 \\ python mini_repro.py\nIf symbolization hangs:TORCH_DISABLE_ADDR2LINE=1 CUDA_LAUNCH_BLOCKING=1 TORCH_SHOW_CPP_STACKTRACES=1 python \n\n### Expected behavior\nRaise a normal, non-fatal Python exception for out-of-bounds indices (similar to CPU behavior).\nAvoid a device-side assert that poisons the CUDA context.\n\n### Actual behavior\n../aten/src/ATen/native/cuda/Indexing.cu:1237: indexSelectSmallIndex: block: [0,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\nTraceback (most recent call last):\n File \"/home/lhj/callChainBuild/output/targeted_mutation/validation/minimal_code/torch_Tensor_index_select_repro.py\", line 5, in \n x.index_select(0, idx)\n**RuntimeError: CUDA error: device-side assert triggered**\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n\n### Traceback\n```\n../aten/src/ATen/native/cuda/Indexing.cu:1237: indexSelectSmallIndex: block: [0,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\n[W Module.cpp:156] symbolizing C++ stack trace for exception; if this hangs, rerun with TORCH_DISABLE_ADDR2LINE=1...\n\nTraceback (most recent call last):\n File \"/home/lhj/callChainBuild/output/targeted_mutation/validation/minimal_code/torch_Tensor_index_select_repro.py\", line 5, in \n x.index_select(0, idx)\nRuntimeError: CUDA error: device-side assert triggered\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n\nException raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44 (most recent call first):\nC++ CapturedTraceback:\n#4 
c10::Error::Error(c10::SourceLocation, std::string) from ??:0\n#5 c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) from ??:0\n#6 c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) from ??:0\n#7 at::native::(anonymous namespace)::index_select_out_cuda_impl(at::Tensor&, at::Tensor const&, long, at::Tensor const&)::{lambda()#1}::operator()() const::{lambda()#2}::operator()() const from ??:0\n#8 void at::native::(anonymous namespace)::index_select_out_cuda_impl(at::Tensor&, at::Tensor const&, long, at::Tensor const&) from ??:0\n#9 at::native::index_select_out_cuda(at::Tensor const&, long, at::Tensor const&, at::Tensor&) from ??:0\n#10 at::native::index_select_cuda(at::Tensor const&, long, at::Tensor const&) from ??:0\n#11 at::(anonymous namespace)::(anonymous namespace)::wrapper_CUDA__index_select(at::Tensor const&, long, at::Tensor const&) from RegisterCUDA.cpp:0\n#12 c10::impl::wrap_kernel_functor_unboxed_, at::Tensor, c10::guts::typelist::typelist >, at::Tensor (at::Tensor const&, long, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) from RegisterCUDA.cpp:0 #13 at::_ops::index_select::redispatch(c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) from ??:0\n#14 torch::autograd::VariableType::(anonymous namespace)::index_select(c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) from VariableType_0.cpp:0\n#15 c10::impl::wrap_kernel_functor_unboxed_, at::Tensor, c10::guts::typelist::typelist >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) from VariableType_0.cpp:0 ", "url": "https://github.com/pytorch/pytorch/issues/170750", "state": "open", "labels": [ "module: cuda", "triaged" ], "created_at": "2025-12-18T06:48:01Z", "updated_at": "2025-12-20T23:31:57Z", "comments": 0, "user": "DeLightor" }, { "repo": "pytorch/pytorch", "number": 170635, "title": "Use cvt.rp.satfinite.ue8m0x2.f32 PTX instruction in Inductor codegen for mxfp8 quantization", "body": "## Summary\n\nFor MXFP8 quantization, NVIDIA recommends using the \"RCEIL\" rounding mode to convert a fp32 scale factor to the e8m0 format for MXFP8. On Blackwell/sm100, they support a PTX instruction to convert fp32 scales to the e8m0 format for MXFP8 using a single instruction, rather than several operations: `cvt.rp.satfinite.ue8m0x2.f32`\n\nIn torchao, for RCEIL rounding mode in MXFP8 quantization, we use this with inline PTX. Examples:\n- https://github.com/pytorch/ao/pull/3498\n- https://github.com/pytorch/ao/blob/85557135c93d3429320a4a360c0ee9cb49f84a00/torchao/csrc/cuda/mx_kernels/mxfp8_quantize.cuh#L211\n\nHowever, our [torch native to_mx() function does not yet support this](https://github.com/pytorch/ao/blob/b9e5780b56088daaf01d4fa3d4828efc4868cbed/torchao/prototype/mx_formats/mx_tensor.py#L106).\n\nWould it be possible for Inductor codegen to pattern match this and codegen using the PTX instruction above? Or is there an alternate approach we should consider? Thanks! 
\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @aakhundov @coconutruben @jataylo", "url": "https://github.com/pytorch/pytorch/issues/170635", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: inductor", "module: floatx (formerly float8)" ], "created_at": "2025-12-17T02:03:40Z", "updated_at": "2025-12-19T09:36:51Z", "comments": 0, "user": "danielvegamyhre" }, { "repo": "pytorch/pytorch", "number": 170604, "title": "CUDAGraph capturing of iterating the same function/module (outside and inside fullgraph)", "body": "### \ud83d\udc1b Describe the bug\n\nThe example from https://docs.pytorch.org/docs/stable/torch.compiler_cudagraph_trees.html#limitations throws an error as warned in the docs:\n\n```\nRuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File \".../bug.py\", line 7, in my_model\n y = torch.matmul(x, x). To prevent overwriting, clone the tensor outside of torch.compile() or call torch.compiler.cudagraph_mark_step_begin() before each model invocation.\n```\n\n```python\nimport torch\n\n@torch.compile(mode=\"reduce-overhead\")\ndef my_model(x):\n y = torch.matmul(x, x)\n return y\n\nx = torch.randn(10, 10, device=\"cuda\")\ny1 = my_model(x)\ny2 = my_model(x)\nprint(y1)\n```\n\nThe docs suggest that `torch.compiler.cudagraph_mark_step_begin()` can be used, but\n\n```python\ntorch.compiler.cudagraph_mark_step_begin()\ny1 = my_model(x)\ntorch.compiler.cudagraph_mark_step_begin()\ny2 = my_model(x)\n```\n\nproduces anyway:\n```\nRuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File \".../bug.py\", line 7, in my_model\n y = torch.matmul(x, x). To prevent overwriting, clone the tensor outside of torch.compile() or call torch.compiler.cudagraph_mark_step_begin() before each model invocation.\n```\n\n---\n\nAnd more importantly, how to do several invocations of the same model inside fullgraph-capture and make it work with CUDAGraph/reduce-overhead? I've tried placing the call `torch.compiler.cudagraph_mark_step_begin()` inside fullgraph'd region, but it throws with a forced graph break:\n```\ntorch._dynamo.exc.Unsupported: Attempted to call function marked as skipped\n Explanation: Dynamo developers have intentionally marked that the function `cudagraph_mark_step_begin` in file `.../.venv/lib/python3.12/site-packages/torch/compiler/__init__.py` should not be traced.\n Hint: Avoid calling the function `cudagraph_mark_step_begin`.\n Hint: Apply `@torch._dynamo.dont_skip_tracing` to the function `cudagraph_mark_step_begin` to force tracing into the function. 
More graph breaks may occur as a result of attempting to trace into the function.\n Hint: Please file an issue to PyTorch.\n\n Developer debug context: module: torch.compiler, qualname: cudagraph_mark_step_begin, skip reason: \n\n For more details about this graph break, please visit: https://meta-pytorch.github.io/compile-graph-break-site/gb/gb0007.html\n\n```\n\n### Versions\n\n2.9.1\n\ncc @ptrblck @msaroufim @eqy @jerryzh168 @tinglvv @nWEIdia @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng", "url": "https://github.com/pytorch/pytorch/issues/170604", "state": "open", "labels": [ "module: cuda", "triaged", "module: cuda graphs" ], "created_at": "2025-12-16T22:07:19Z", "updated_at": "2025-12-17T05:01:56Z", "comments": 0, "user": "vadimkantorov" }, { "repo": "pytorch/executorch", "number": 16271, "title": "Android: load model from assets", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIt is simple, there is no way to read model directly from assets. The assets are files bundled in the Android apps.\nThe assets are not handled the same way as regular files -- they can be accessed only through [assets manager](https://developer.android.com/reference/kotlin/android/content/res/AssetManager.html). \n\n\n### Alternatives\n\nAs a workaround, the model could be loaded from assets and stored into regular file which is then read by the ExecuTorch\n```kotlin\nval file = File(context.filesDir, modelName)\ncontext.assets.open(modelAssetsPath).use { inputStream ->\n\tFileOutputStream(file).use { outputStream ->\n\t\tinputStream.copyTo(outputStream)\n\t}\n}\n\n// here we can initialize the model\nval model = Module.load(file.absolutePath)\n```\n\n### Additional context\n\nThere is generally two way how it could look like:\n- initialization from `ByteArray`(Kotlin) / `byte[]` (Java) like in ONNX runtime\n- directly from the assets using asset path and assets manager as parameters like done in LiteRT/TFLite.\n\n### RFC (Optional)\n\n_No response_", "url": "https://github.com/pytorch/executorch/issues/16271", "state": "open", "labels": [], "created_at": "2025-12-16T03:40:03Z", "updated_at": "2025-12-17T21:10:12Z", "comments": 2, "user": "Bludator" }, { "repo": "pytorch/executorch", "number": 16265, "title": "viable/strict is advancing even if docker build failed", "body": "### \ud83d\udc1b Describe the bug\n\nCan we block viable/strict advancement when docker build failed?\n\n### Versions\n\nCI only", "url": "https://github.com/pytorch/executorch/issues/16265", "state": "closed", "labels": [], "created_at": "2025-12-15T20:42:25Z", "updated_at": "2025-12-17T22:56:47Z", "comments": 0, "user": "kirklandsign" }, { "repo": "pytorch/executorch", "number": 16263, "title": "Android Documentation - Improve Llama example", "body": "### \ud83d\udcda The doc issue\n\nFeedback from UnSloth on how to run Android llama example : https://docs.google.com/document/d/1GB3edTlBQfc4Ar0yiBTELKynhwa1hstwKhJxpq3ATVE/edit?tab=t.0\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/executorch/issues/16263", "state": "open", "labels": [ "android_ux" ], "created_at": "2025-12-15T19:32:41Z", "updated_at": "2025-12-15T19:32:41Z", "comments": 0, "user": "psiddh" }, { "repo": "pytorch/executorch", "number": 16260, "title": "Android UX: Prebuilt APKs for Android apps", "body": "Helps in overall E2E experience for the Devs, With least friction Android Devs can install and test prebuilt apk w/o having to setup a more cumbersome path of building from sources.\n\n- Llama Demo 
apk\n- dl3 demo apk", "url": "https://github.com/pytorch/executorch/issues/16260", "state": "open", "labels": [ "android_ux" ], "created_at": "2025-12-15T19:23:14Z", "updated_at": "2025-12-15T19:38:33Z", "comments": 0, "user": "psiddh" }, { "repo": "pytorch/torchtitan", "number": 2153, "title": "[Question] composable activation checkpoint", "body": "I'm looking for a way to apply activation checkpointing without using a module wrapper, and I found this: https://github.com/pytorch/pytorch/pull/87664/files.\nDoes this method work fine, or is it just demo code?", "url": "https://github.com/pytorch/torchtitan/issues/2153", "state": "open", "labels": [ "question" ], "created_at": "2025-12-15T13:54:08Z", "updated_at": "2025-12-16T22:25:59Z", "user": "Irvingwangjr" }, { "repo": "pytorch/pytorch", "number": 170426, "title": "argmax over multiple axes", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIs there any chance we are getting `argmax` to also work over multiple axes?\nI feel that the usage of [unravel_index](https://docs.pytorch.org/docs/stable/generated/torch.unravel_index.html) is so error prone that it would make sense to just have this be part of the library... and to be fair it is not so hard to implement\n\nprobably a duplicate of `torch.max().indices`... but that also does not support multiple axes\n\n\ncc @albanD", "url": "https://github.com/pytorch/pytorch/issues/170426", "state": "open", "labels": [ "triaged", "module: python frontend" ], "created_at": "2025-12-15T11:03:36Z", "updated_at": "2025-12-18T15:37:31Z", "comments": 0, "user": "AlbertoSinigaglia" }, { "repo": "pytorch/executorch", "number": 16244, "title": "How to let executorch export input output int8", "body": "Hi,\n\nI use https://github.com/pytorch/executorch/blob/main/examples/arm/ethos_u_minimal_example.ipynb to export the example model and run it on the FVP.\nThe output is 2.0 (float).\n\nBut I modified the code so that its output and input are int8, and now the output on the FVP shows 1 (char).\nI think that is wrong. How can I fix it?\n\n\"Image\"\n\n```\n\nfrom executorch.backends.arm.ethosu import EthosUPartitioner\nfrom executorch.exir import (\n EdgeCompileConfig,\n ExecutorchBackendConfig,\n to_edge_transform_and_lower,\n)\nfrom executorch.extension.export_util.utils import save_pte_program\nfrom executorch.exir.passes.quantize_io_pass import QuantizeInputs, QuantizeOutputs\n\n\n\n# Create partitioner from compile spec\npartitioner = EthosUPartitioner(compile_spec)\n\n# Lower the exported program to the Ethos-U backend\nedge_program_manager = to_edge_transform_and_lower(\n quantized_exported_program,\n partitioner=[partitioner],\n compile_config=EdgeCompileConfig(\n _check_ir_validity=False,\n ),\n )\nedge_program_manager.transform(passes=[QuantizeInputs(edge_program_manager, [0, 1]), QuantizeOutputs(edge_program_manager, [0])])\n# Convert edge program to executorch\nexecutorch_program_manager = edge_program_manager.to_executorch(\n config=ExecutorchBackendConfig(extract_delegate_segments=False)\n )\n\n_ = executorch_program_manager.exported_program().graph_module.print_readable()\n\n# Save pte file\nsave_pte_program(executorch_program_manager, \"ethos_u_minimal_example_test_inout_int8.pte\")\n```\n\nBy the way, following up on https://github.com/pytorch/executorch/issues/7590: how can the embedded application finally access the quantisation scale & zero point? 
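For reference, this is the int8 affine quantization convention I assume the IO passes follow; the `scale` / `zero_point` values below are made-up placeholders (the real ones would have to come from the quantize/dequantize nodes of the exported graph), so this is only a sketch of what the host side would do with them:

```python
# Assumed convention: q = round(x / scale) + zero_point, clamped to int8.
# scale / zero_point here are hypothetical, not read from the actual .pte.
scale, zero_point = 0.0157, 0

def quantize(x: float) -> int:
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q: int) -> float:
    return (q - zero_point) * scale

# If the runtime now returns raw int8 values, the application would apply
# dequantize() to recover the float result, e.g. dequantize(127) ~= 2.0.
print(quantize(2.0), dequantize(quantize(2.0)))
```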
\n\nThanks,\nKris\n\ncc @freddan80 @per @zingo @oscarandersson8218 @digantdesai", "url": "https://github.com/pytorch/executorch/issues/16244", "state": "open", "labels": [ "partner: arm" ], "created_at": "2025-12-15T06:45:32Z", "updated_at": "2025-12-24T01:36:24Z", "user": "kris-himax" }, { "repo": "pytorch/pytorch", "number": 170400, "title": "Clarify inverted boolean mask logic between nn.MultiHeadAttention and F.scaled_dot_product_attention", "body": "### \ud83d\udcda The doc issue\n\n### \ud83d\ude80 Motivation\n\nI am opening this issue to suggest a documentation improvement regarding a common \"gotcha\" when migrating between `nn.MultiHeadAttention` (MHA) and `F.scaled_dot_product_attention` (SDPA).\n\nMany users (including myself) have noticed that the boolean mask semantics are inverted between these two APIs, which can lead to silent bugs during migration.\n\n### \ud83d\udd0d The Inconsistency\n\n * **`nn.MultiHeadAttention` (`key_padding_mask`)**: `True` means **PADDING** (Ignore/Mask out).\n * **`F.scaled_dot_product_attention` (`attn_mask`)**: `True` means **KEEP** (Attend to).\n\nWhile this behavior is hinted at in the SDPA docstring's pseudo-code implementation:\n\n```python\nif attn_mask.dtype == torch.bool:\n attn_bias.masked_fill_(attn_mask.logical_not(), float(\"-inf\"))\n```\n\nThe use of `.logical_not()` confirms that SDPA expects `True` to be kept, whereas MHA expects `True` to be masked. This implicit difference is easy to overlook if one relies solely on parameter names or prior MHA experience.\n\n### \u2705 Verification\n\nI have verified this behavior with a minimal reproduction script on PyTorch 2.5.1, confirming that passing the identical boolean mask to both APIs results in opposite attention patterns (MHA ignores the `True` index, while SDPA attends to it).\n\nThanks for considering this clarification\\!\n\n### Suggest a potential alternative/fix\n\nTo improve Developer Experience (DX) and prevent confusion for users moving in either direction (MHA -\\> SDPA or SDPA -\\> MHA), I suggest adding a **Note** or **Warning** block in the documentation for `F.scaled_dot_product_attention`.\n\n**Example phrasing:**\n\n> **Note:** The boolean mask semantics for `attn_mask` here are the **inverse** of `nn.MultiHeadAttention.forward`'s `key_padding_mask`.\n>\n> * In `F.scaled_dot_product_attention`, `True` indicates values to **participate** in attention.\n> * In `nn.MultiHeadAttention`, `True` indicates values to be **masked out** (padding).\n>\n> If migrating from MHA, ensure you invert your boolean mask (e.g., using `~mask` or `mask.logical_not()`).\n\ncc @svekars @sekyondaMeta @AlannaBurke @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki", "url": "https://github.com/pytorch/pytorch/issues/170400", "state": "closed", "labels": [ "module: docs", "module: nn", "triaged", "module: sdpa" ], "created_at": "2025-12-14T13:07:19Z", "updated_at": "2025-12-23T20:44:24Z", "comments": 1, "user": "konodiodaaaaa1" }, { "repo": "pytorch/pytorch", "number": 170361, "title": "[Dynamo] Use VariableBuilder/SourcelessBuilder consistently", "body": "There are many places in Dynamo where we directly call a VariableTracker subclass' `create`/`__init__` from a different VariableTracker's, e.g. `call_function`, `var_getattr`. This was done in order to skip the overhead required to go through `VariableBuilder`/`SourcelessBuilder`.\n\nHowever, this has resulted in a number of soundness issues in the past (I can't find an example off the top of my head though). 
The reason is that when we directly construct a `VariableTracker`, we are assuming that the wrapped value is represented a certain way, which `VariableBuilder`/`SourcelessBuilder` may represent differently. The latter often has additional checks that result in greater specialization and slightly differing behavior.\n\nWe should:\n- Audit places where we manually construct `VariableTracker`s and make the construction go through `VariableBuilder`/`SourcelessBuilder` more conservatively\n- Reduce the overhead of `VariableBuilder` and `SourcelessBuilder` (esp. `VariableBuilder`, since it has a large if-statement)\n\n\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @amjames @Lucaskabela @jataylo", "url": "https://github.com/pytorch/pytorch/issues/170361", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: dynamo", "dynamo-variable-tracker" ], "created_at": "2025-12-13T01:26:10Z", "updated_at": "2025-12-13T02:18:29Z", "comments": 1, "user": "williamwen42" }, { "repo": "pytorch/pytorch", "number": 170320, "title": "Can't find 'action.yml', 'action.yaml' or 'Dockerfile' under '/home/ec2-user/actions-runner/_work/pytorch/pytorch/.github/actions/check-tpu'", "body": "> NOTE: Remember to label this issue with \"`ci: sev`\"\n> If you want autorevert to be disabled, keep the ci: disable-autorevert label\n\n \n\n> [!IMPORTANT]\n> Comment the following on your PR to rebase\n> ```\n> @pytorchbot rebase -b main\n> ```\n\n\n## Current Status\n*Status could be: preemptive, ongoing, mitigated, closed. Also tell people if they need to take action to fix it (i.e. rebase)*.\n\nWith the introduction of:\n\n* #170269 \n\nDevelopers may experience workflow failures related to a composite action named check-tpu.\n\n## Error looks like\n*Provide some way users can tell that this SEV is causing their issue.*\n\n```\nCan't find 'action.yml', 'action.yaml' or 'Dockerfile' under '/home/ec2-user/actions-runner/_work/pytorch/pytorch/.github/actions/check-tpu'\n```\n\n## Incident timeline (all times pacific)\n*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*\n\n## User impact\n*How does this affect users of PyTorch CI?*\n\nDevelopers should rebase their PRs past:\n\n* #170269 \n\n> [!IMPORTANT]\n> Comment the following on your PR to rebase\n> ```\n> @pytorchbot rebase -b main\n> ```\n\n## Root cause\n*What was the root cause of this issue?*\n\n* #170269 \n\n## Mitigation\n*How did we mitigate the issue?*\n\nDevelopers should rebase their PRs past:\n\n* #170269 \n\n## Prevention/followups\n*How do we prevent issues like this in the future?*\n\nWe should probably introduce a linter that prevents us from adding composite actions and referencing them in a workflow in the same PR.\n", "url": "https://github.com/pytorch/pytorch/issues/170320", "state": "closed", "labels": [ "ci: sev" ], "created_at": "2025-12-12T19:30:03Z", "updated_at": "2025-12-14T15:36:06Z", "comments": 1, "user": "seemethere" }, { "repo": "pytorch/pytorch", "number": 170302, "title": "DISABLED test_opaque_obj_training_ir_to_decomp_nonstrict (__main__.TrainingIRToRunDecompExportNonStrictTestExport)", "body": "Platforms: rocm, xpu\n\nThis test was disabled because it is failing on [main and 
PRs](https://hud.pytorch.org/failure?name=rocm-mi200%20%2F%20linux-jammy-rocm-py3.10%20%2F%20test%20(default%2C%201%2C%206%2C%20linux.rocm.gpu.2%2C%20unstable)&jobName=linux-jammy-rocm-py3.10%20%2F%20test%20(default%2C%201%2C%206%2C%20linux.rocm.gpu.2%2C%20unstable)&failureCaptures=RuntimeError%3A%20Type%20%27test_export.TestExport.test_opaque_obj.%3Clocals%3E.MyInput%27) (couldn't find a more targeted link to show just this test failure). Example of [MI200 failure](https://github.com/pytorch/pytorch/actions/runs/20158110270/job/57866162668) and [MI300 failure](https://github.com/pytorch/pytorch/actions/runs/20160197548/job/57872771424)\n\ncc @gujinghui @EikanWang @fengyuan14 @guangyey @jeffdaily @sunway513 @pruthvistony @ROCmSupport @jataylo @hongxiayang @naromero77amd @pragupta @jerrymannil @xinyazhang", "url": "https://github.com/pytorch/pytorch/issues/170302", "state": "open", "labels": [ "triaged", "skipped", "rocm-skipped-tests" ], "created_at": "2025-12-12T16:04:41Z", "updated_at": "2025-12-25T00:24:56Z", "comments": 2, "user": "jithunnair-amd" }, { "repo": "pytorch/pytorch", "number": 170293, "title": "[wheels] Missing CUDA wheels for pytorch<2.6.0", "body": "### \ud83d\udc1b Describe the bug\n\nFor older versions of pytorch<2.6.0, the CUDA wheels cannot be reached anymore.\n\nSystem: Windows-11-10.0.22631-SP0\nPython version: 3.13\nUsing pip 25.3\n\nExample of failing installation:\n\n` pip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu124 --isolated --verbose`\n\nOutput is mentioning pytorch 2.6.0:\n\n```\nLooking in indexes: https://download.pytorch.org/whl/cu124\nERROR: Could not find a version that satisfies the requirement torch==2.5.1 (from versions: 2.6.0+cu124)\nERROR: No matching distribution found for torch==2.5.1\n```\n\nReproducible with other pytorch versions and CUDA variants, when pytorch<2.6.0.\nExample of successful installation:\n\n` pip install torch==2.6.0 --index-url https://download.pytorch.org/whl/cu124 --isolated --verbose`\n\n\n### Versions\n\nPython version: 3.13.7 (main, Sep 18 2025, 19:43:45) [MSC v.1944 64 bit (AMD64)] (64-bit runtime)\nPython platform: Windows-11-10.0.22631-SP0\n", "url": "https://github.com/pytorch/pytorch/issues/170293", "state": "closed", "labels": [], "created_at": "2025-12-12T10:33:42Z", "updated_at": "2025-12-12T12:02:58Z", "comments": 1, "user": "guibruand" }, { "repo": "pytorch/data", "number": 1520, "title": "Are there any plans to optimize the fetcher_state in StatefulDataLoader?", "body": "Since `_IterableDatasetFetcher` has no state attribute: https://github.com/pytorch/pytorch/blob/v2.6.0/torch/utils/data/_utils/fetch.py#L19, and the current `fetcher_state:dataset_iter_state` is None: https://github.com/meta-pytorch/data/blob/v0.11.0/torchdata/stateful_dataloader/worker.py#L277, could this cause prefetched data to be discarded during resume?", "url": "https://github.com/meta-pytorch/data/issues/1520", "state": "open", "labels": [], "created_at": "2025-12-12T09:50:08Z", "updated_at": "2025-12-17T05:23:35Z", "comments": 5, "user": "howitry" }, { "repo": "pytorch/pytorch", "number": 170286, "title": "Can torch has a relaxed dependencies instead of strict dependencies on nvidia-cuda-runtime", "body": "### \ud83d\udc1b Describe the bug\n\nRight now, torch uses strict == pins for these packages (see\nhttps://github.com/pytorch/pytorch/blob/main/.github/scripts/generate_binary_build_matrix.py#L106C2-L123C7).\nIs there a specific reason these must be strict == requirements? 
Would it be possible to relax them to version ranges instead?\nFor example, in my setup:\ntorch==[10.0.dev](http://10.0.dev/) depends on nvidia-cuda-runtime==13.0.96\ntensorrt==10.14 depends on nvidia-cuda-runtime==13.0.88\nThis conflict causes uv to resolve to a much older torch version:\nhttps://github.com/pytorch/pytorch/issues/170286\n\nIf torch could declare a version range for nvidia-cuda-runtime instead of a strict pin, it would make dependency resolution much easier for downstream users who also depend on other CUDA-related packages.\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.10.0.dev20251210+cu130\nIs debug build: False\nCUDA used to build PyTorch: 13.0\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.3 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: 22.0.0 (++20251015042503+856555bfd843-1~exp1~20251015042630.2731)\nCMake version: version 4.2.0\nLibc version: glibc-2.39\n\nPython version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)\nPython platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: 13.1.80\nCUDA_MODULE_LOADING set to: \nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080 SUPER\nNvidia driver version: 580.95.05\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\nCaching allocator config: N/A\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: AuthenticAMD\nModel name: AMD Ryzen 9 7950X 16-Core Processor\nCPU family: 25\nModel: 97\nThread(s) per core: 2\nCore(s) per socket: 16\nSocket(s): 1\nStepping: 2\nFrequency boost: enabled\nCPU(s) scaling MHz: 79%\nCPU max MHz: 5883.0000\nCPU min MHz: 545.0000\nBogoMIPS: 8982.91\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze\nVirtualization: AMD-V\nL1d cache: 512 KiB (16 instances)\nL1i cache: 512 KiB (16 instances)\nL2 cache: 16 MiB (16 instances)\nL3 cache: 64 MiB (2 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-31\nVulnerability Gather data sampling: Not affected\nVulnerability Ghostwrite: Not affected\nVulnerability Indirect target selection: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not 
affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Mitigation; Safe RET\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: ", "url": "https://github.com/pytorch/pytorch/issues/170286", "state": "closed", "labels": [ "module: binaries", "triaged" ], "created_at": "2025-12-12T08:39:43Z", "updated_at": "2025-12-13T00:30:46Z", "comments": 3, "user": "lanluo-nvidia" }, { "repo": "pytorch/executorch", "number": 16217, "title": "make building stop at Built target portable_kernels", "body": "Hey, i want to export llama pte model and deploy it on SA8255 device, i refered to https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md and https://docs.pytorch.ac.cn/executorch/stable/llm/build-run-llama3-qualcomm-ai-engine-direct-backend.html, but when i Built llama runner binary for Android i got the error:\n[ 98%] Linking CXX static library libportable_kernels.a\n[ 98%] Built target portable_kernels\n[ 98%] Linking CXX static library liboptimized_portable_kernels.a\n[ 98%] Built target optimized_portable_kernels\ngmake: *** [Makefile:156: all] Error 2\n\nHow can I solve this? i paste the error.log and the build.sh file.\nAppreciate reply!\n\n[build.sh](https://github.com/user-attachments/files/24119047/build.sh)\n[error.log](https://github.com/user-attachments/files/24119048/error.log)\n\ncc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin", "url": "https://github.com/pytorch/executorch/issues/16217", "state": "open", "labels": [ "partner: qualcomm", "module: qnn" ], "created_at": "2025-12-12T03:24:46Z", "updated_at": "2025-12-21T00:59:11Z", "comments": 16, "user": "imjking" }, { "repo": "pytorch/pytorch", "number": 170183, "title": "[docs] Unable to `git clone` PyTorch wiki on Windows due to colon(`:`) in filename", "body": "### \ud83d\udcda The doc issue\n\n> Summary : `git checkout` fails when trying to clone the PyTorch wiki on Windows OS.\n\nWindows filesystems do not allow the use of colons (`:`) in filenames.\nHowever, the wiki currently contains a page titled: [PyTorch CI Metrics Dashboards: the HUD](https://github.com/pytorch/pytorch/wiki/PyTorch-CI-Metrics-Dashboards:-the-HUD)\n\nBecause this filename contains a colon, cloning the wiki repository on a Windows environment results in an error.\n\n- Error Message.\n```sh\n(base) PS D:\\Git_Repo\\Open_Source> git clone https://github.com/pytorch/pytorch.wiki.git\nCloning into 'pytorch.wiki'...\nremote: Enumerating objects: 3525, done.\nremote: Total 3525 (delta 0), reused 0 (delta 0), pack-reused 3525 (from 1)\nReceiving objects: 100% (3525/3525), 1.73 MiB | 4.51 MiB/s, done.\nResolving deltas: 100% (2173/2173), done.\nerror: invalid path 'PyTorch-CI-Metrics-Dashboards:-the-HUD.md'\nfatal: unable to checkout working tree\nwarning: Clone succeeded, but checkout failed. 
\nYou can inspect what was checked out with 'git status'\nand retry with 'git restore --source=HEAD :/'\n```\n\nThanks you.\n\n### Suggest a potential alternative/fix\n\nRename the wiki page to remove the colon or replace it with a hyphen.\n\ncc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex", "url": "https://github.com/pytorch/pytorch/issues/170183", "state": "open", "labels": [ "module: windows", "triaged", "module: infra" ], "created_at": "2025-12-11T13:00:52Z", "updated_at": "2025-12-15T18:07:48Z", "comments": 6, "user": "daehyun99" }, { "repo": "pytorch/pytorch", "number": 169970, "title": "Does torch._grouped.mm work with cudagraphs over multiple nodes?", "body": "### \ud83d\udc1b Describe the bug\n\ntorch._grouped_mm auses dynamic memory allocations via c10::cuda::CUDACachingAllocator::allocate() that appears to be incompatible with CUDA graph capture and replay. This causes \"CUDA error: an illegal memory access was encountered\" when these operations are captured in a CUDA graph and later replayed, particularly in multi-node distributed settings with NCCL.\nEnvironment\nPyTorch version: 2.9.0+ (with grouped_mm support)\nCUDA version: 12.8+\nGPU: H100/H200 (SM90/SM100)\nDistributed: Multi-node with NCCL, Tensor Parallelism\n\n```python\nimport torch\n# Setup\ndevice = torch.device(\"cuda\")\nmat_a = torch.randn(4, 128, 256, dtype=torch.bfloat16, device=device)\nmat_b = torch.randn(4, 256, 512, dtype=torch.bfloat16, device=device)\n# Warmup\nout = torch._grouped_mm(mat_a, mat_b)\n# Capture CUDA graph\ngraph = torch.cuda.CUDAGraph()\nwith torch.cuda.graph(graph):\n out = torch._grouped_mm(mat_a, mat_b)\n# Replay - may cause illegal memory access\ngraph.replay() # Works sometimes\ngraph.replay() # More likely to fail\n```\n\nIn multi-node distributed scenarios (e.g., vLLM with tensor parallelism across nodes), the failure rate is much higher and typically manifests on the first inference request after model deployment.\n\n### Versions\n\n```\nCollecting environment information...\nPyTorch version: 2.9.0a0+gitcdb6201\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.3 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: Could not collect\nCMake version: Could not collect\nLibc version: glibc-2.39\n\nPython version: 3.11.14 (tags/v3.11.14:cd1c3a63428, Oct 9 2025, 19:23:04) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-6.6.72+-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: 12.8.93\nCUDA_MODULE_LOADING set to: \nGPU models and configuration: \nGPU 0: NVIDIA H100 80GB HBM3\nGPU 1: NVIDIA H100 80GB HBM3\nGPU 2: NVIDIA H100 80GB HBM3\nGPU 3: NVIDIA H100 80GB HBM3\nGPU 4: NVIDIA H100 80GB HBM3\nGPU 5: NVIDIA H100 80GB HBM3\nGPU 6: NVIDIA H100 80GB HBM3\nGPU 7: NVIDIA H100 80GB HBM3\n\nNvidia driver version: 550.90.07\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.14.0\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\nCaching allocator config: N/A\n\nCPU:\nArchitecture: 
x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 208\nOn-line CPU(s) list: 0-207\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 52\nSocket(s): 2\nStepping: 8\nBogoMIPS: 5399.99\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 4.9 MiB (104 instances)\nL1i cache: 3.3 MiB (104 instances)\nL2 cache: 208 MiB (104 instances)\nL3 cache: 210 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-51,104-155\nNUMA node1 CPU(s): 52-103,156-207\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVu", "url": "https://github.com/pytorch/pytorch/issues/169970", "state": "open", "labels": [ "oncall: distributed", "module: cuda", "module: cuda graphs" ], "created_at": "2025-12-09T18:12:10Z", "updated_at": "2025-12-16T22:00:56Z", "comments": 3, "user": "ashahab" }, { "repo": "pytorch/pytorch", "number": 169954, "title": "How to prevent landing PRs on sparse tensors that should be rejected?", "body": "Recently, https://github.com/pytorch/pytorch/pull/169807 was submitted that added out-of-bounds checks for inputs of constructing a sparse COO tensor. Sounds reasonable, right? No, it is not right because the corresponding checks already exist but are disabled and the PR authors/reviewers are not aware of this. Fortunately, we (thanks @nikitaved!) discovered https://github.com/pytorch/pytorch/pull/169807 and were able to intervene: the PR is now closed without merge.\n\nAs a side note, the checks are disabled by default for performance reasons: checking sparse tensors inputs is an expensive operation as the tensor inputs (e.g. indices) must be verified element-wise, checking just the dtype and sizes of sparse tensor inputs is insufficient.\n\nThere exists other similar PRs (e.g. https://github.com/pytorch/pytorch/pull/163535) that \"fix\" issues due to users invalid inputs while the proper fix would have been educate users about [check_sparse_tensor_invariants](https://docs.pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html). 
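For illustration, a minimal sketch of how a user opts into the existing checks today (so no always-on validation in the constructor is needed), assuming the documented `check_invariants` keyword and context manager behave as described:

```python
import torch

i = torch.tensor([[0, 1, 100]])   # index 100 is out of bounds for size (3,)
v = torch.tensor([1.0, 2.0, 3.0])

# Fast default path: indices are not validated element-wise.
t = torch.sparse_coo_tensor(i, v, (3,))

# Opt-in validation per call ...
try:
    torch.sparse_coo_tensor(i, v, (3,), check_invariants=True)
except RuntimeError as e:
    print("caught:", e)

# ... or scoped, via the documented context manager.
with torch.sparse.check_sparse_tensor_invariants():
    try:
        torch.sparse_coo_tensor(i, v, (3,))
    except RuntimeError as e:
        print("caught:", e)
```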
Unfortunately, https://github.com/pytorch/pytorch/pull/163535 got landed while it should have been rejected for the same reasons as explained above leading to performance degradation.\n\nThis issue is raised to seek solutions to prevent landing sparse tensor related PRs that fix crashes due to invalid user inputs to sparse tensor constructors when the usage of `check_sparse_tensor_invariants` would be sufficient for revealing errors in the user inputs.\n\nHere is a list of ideas:\n1. Enable invariant checks by default when `torch.sparse_coo_tensor` (and similar to CSR constructors) is called from a user script but disable the checks when the constructor is called inside a torch function.\n2. Add a comment saying \"Do not implement checks that can be enabled by `check_sparse_tensor_invariants`\" to sparse tensor constructor implementations where one should want adding these checks. \n3. Require that landing sparse tensor related PRs must be approved by someone who is familiar with sparse tensor internals. Apparently, the code-owner idea in pytorch does quite not work what comes to updating sparse tensor related codes in torch.\n\nAny other idea?\n\n^ @amjames @malfet @janeyx99 @albanD @cpuhrsch \n\ncc @nikitaved @cpuhrsch @amjames @bhosmer @jcaip", "url": "https://github.com/pytorch/pytorch/issues/169954", "state": "closed", "labels": [ "triage review", "module: sparse" ], "created_at": "2025-12-09T14:27:46Z", "updated_at": "2025-12-17T04:25:50Z", "user": "pearu" }, { "repo": "pytorch/ao", "number": 3469, "title": "per tensor symmetric activation quantization", "body": "Is there a w8a8 QAT config that support the following describe? \nint8 per tensor symmetric activation quantization and int8 per channel weight symmetric quantization ", "url": "https://github.com/pytorch/ao/issues/3469", "state": "open", "labels": [], "created_at": "2025-12-09T11:12:02Z", "updated_at": "2025-12-12T21:23:43Z", "comments": 2, "user": "jivercx" }, { "repo": "pytorch/pytorch", "number": 169929, "title": "Python 3.14 \u2013 No CUDA/GPU Wheels Available (Only CPU Build Installed)", "body": "### \ud83d\udc1b Describe the bug\n\nHi PyTorch Team,\n\nI\u2019m using Python 3.14, and I noticed that the latest PyTorch versions install successfully, but only CPU builds are available:\n\npip install torch torchvision torchaudio\n\n**Result,**\ntorch.__version__ \u2192 2.9.1+cpu\ntorch.version.cuda \u2192 None\ntorch.cuda.is_available() \u2192 False\n\n**My system has a valid CUDA-enabled GPU**:\n\nGPU: NVIDIA GeForce RTX 3080\nDriver Version: 573.44\nCUDA Version (nvidia-smi): 12.8\nnvcc Version: 12.4\n\nHowever, no CUDA wheels exist for cp314 on the official index:\n\nhttps://download.pytorch.org/whl/cu121\nhttps://download.pytorch.org/whl/cu124\n\n**I also tried:**\n\npip install torch==2.3.0+cu121 --index-url https://download.pytorch.org/whl/cu121\n\n\n**but received:**\n\nNo matching distribution found for torch==2.3.0+cu121\n\n_**Could you please confirm:**_\n\n**Does PyTorch currently provide CUDA-enabled wheels for Python 3.14?\n\nIf not, is GPU support for Python 3.14 planned, and is there a timeline for release?\n\nAre there any nightly GPU wheels for Python 3.14 available for testing?**\n\n### Versions\n\n**System / Environment**\n\nOS: Windows 11 (64-bit)\nPython: 3.14.0\nPip: 25.3\nCUDA (nvidia-smi): 12.8\nCUDA (nvcc): 12.4\nGPU: NVIDIA GeForce RTX 3080 (16GB)\nNVIDIA Driver Version: 573.44\n\n**PyTorch Installation**\n\n**Command used:**\n\npip install torch torchvision torchaudio\n\n\n**Installed 
versions:**\n\ntorch: 2.9.1+cpu\ntorchvision: (CPU build)\ntorchaudio: (CPU build)\ntorch.cuda.is_available(): False\ntorch.version.cuda: None\n\ncc @seemethere @malfet @atalman @tinglvv @nWEIdia", "url": "https://github.com/pytorch/pytorch/issues/169929", "state": "open", "labels": [ "module: binaries", "triaged" ], "created_at": "2025-12-09T07:57:15Z", "updated_at": "2025-12-16T15:06:50Z", "comments": 9, "user": "ashikauk24-source" }, { "repo": "pytorch/pytorch", "number": 169893, "title": "Investigate which submodules in third_party/ can be omitted from stable header hiding", "body": "In https://github.com/pytorch/pytorch/pull/167496 we hide all headers except stable/headeronly/shim when TORCH_STABLE_ONLY/TORCH_TARGET_VERSION are defined\n\n@pearu raised that headers in third_party/ should be exposed\n\n> The TORCH_TARGET_VERSION post-processing modifies all header files (except few such as headeronly and stable headers) including the header files that are copied from third_party . I wonder what is the motivation for modifying the third party header files considering that these do not depend on ATen or torch headers?\nMy use case is pybind/pybind.h that is used to construct a simple extension module that has no torch dependency whatsoever and TORCH_TARGET_VERSION post-processing seems an overkill: it protects from something (unstable libtorch symbols) that never exists in this use case and it will unnecessarily restrict the usage of third-party tools such as pybind that are header-only libraries.\nSo, disabling TORCH_TARGET_VERSION post-processing for third-party tools that we know are header-only libraries, should be always safe.\n\nWe need to investigate which libraries in third_party can be safely exposed\n\ncc @janeyx99 @jbschlosser", "url": "https://github.com/pytorch/pytorch/issues/169893", "state": "open", "labels": [ "module: cpp-extensions", "module: cpp", "triaged" ], "created_at": "2025-12-08T22:45:21Z", "updated_at": "2025-12-30T21:08:11Z", "comments": 2, "user": "mikaylagawarecki" }, { "repo": "pytorch/pytorch", "number": 169870, "title": "Capturing ViewAndMutationMeta for training graphs for PyTorch 2.8", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n### Problem\nWe want to capture ViewAndMutationMeta for training graphs so that we can capture and propagate input/output aliasing information to later compilation phases.\n\n### Likely Approach\nFor training graphs, it seems that the best place to to do that would be to capture this information right before partitioning, i.e., before **`partition_fn`** is invoked. A _user-defined_, custom **`partition_fn`** allows us to intercept the compilation-state where ViewAndMutationMeta is accessible.\n\nThe default signature for **`partition_fn`** does not take an additional parameter ( for _fw_metadata_ which holds ViewAndMutationMeta). 
If that is allowed, we can intercept the AOTAutograd compilation just before partitioning and capture the ViewAndMutationMeta.\n\nThis is a non-invasive approach requiring us to not patch-up local Pytorch installations.\n\n### Request\nWe request that the callsite of the **`partition_fn`** in _jit_compile_runtime_wrappers.py_ allow passing of ``fw_metadata``.\n\nThanks.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @chauhang @penguinwu @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @gqchen @nikitaved @soulitzer @Varal7 @xmfan", "url": "https://github.com/pytorch/pytorch/issues/169870", "state": "open", "labels": [ "triaged", "module: viewing and reshaping", "oncall: pt2" ], "created_at": "2025-12-08T19:52:18Z", "updated_at": "2025-12-28T22:09:55Z", "comments": 1, "user": "pratnali" }, { "repo": "pytorch/pytorch", "number": 169854, "title": "CPython test cases under dynamo don't follow paradigm", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n### Problem\nCurrently, the test/dynamo folder prevents calling a test case with PYTORCH_TEST_WITH_DYNAMO, and additionally any tests under test/dynamo should have their main method run torch._dynamo.test_case.run_tests.\n\nThe Cpython test suite goes against those two assumptions and requires PYTORCH_TEST_WITH_DYNAMO=1. Additionally, it calls torch.testing._internal.common_utils.run_tests as part of its main method.\n\n### Proposed Solution\nThe cpython tests for 3.13 should be moved out from under the dynamo folder so that the test cases follow the expected paradigm, namely dynamo test cases should not be compiled as all tests under this folder (except cpython tests) may contain their own call to compile.\n\nThis is more of an RFC to garner feedback/thoughts on making the change. As more cpython versions get added to the test suite, the move will become more burdensome.\n\nI'll open a PR to provide an example of the changes.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @mruberry @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @amjames @Lucaskabela @jataylo", "url": "https://github.com/pytorch/pytorch/issues/169854", "state": "open", "labels": [ "module: tests", "triaged", "enhancement", "oncall: pt2", "module: dynamo" ], "created_at": "2025-12-08T17:52:30Z", "updated_at": "2025-12-12T14:35:03Z", "comments": 1, "user": "trichmo" }, { "repo": "pytorch/pytorch", "number": 169797, "title": "When augment_with_fx_traces=True but the user has misconfigured the FX config, raise an error", "body": "### \ud83d\udc1b Describe the bug\n\nIf you dump memory profile with augment_with_fx_traces=True but you don't set torch.fx.experimental._config.enrich_profiler_metadata (or better yet, you accidentally use the dead dynamo version of the config), you will just silently not get any augmentation. This is bad, we should say something if this occurs. I think probably the most appropriate place to give the info is in the memory viz itself; in particular, when we detect a legacy FX filename in the trace (e.g., `eval_with_key`) we should display some help text saying how to get augmented information. The memory profile should also say if augment_with_fx_traces was set so we correctly report if you need to pass that info or not. Also... 
maybe we should just default augment_with_fx_traces True, if there isn't a reason not to?\n\ncc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise @mwootton @EikanWang @jgong5 @wenzhe-nrv @yushangdi \n\n### Versions\n\nmain", "url": "https://github.com/pytorch/pytorch/issues/169797", "state": "open", "labels": [ "triaged", "oncall: profiler", "module: fx" ], "created_at": "2025-12-08T01:24:32Z", "updated_at": "2025-12-09T18:23:40Z", "comments": 0, "user": "ezyang" }, { "repo": "pytorch/tutorials", "number": 3687, "title": "Feedback about Tensors", "body": "There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/basics/tensorqs_tutorial.html\n\nHello, I just finished the tutorial on tensors, and I think it's really well written. However, I have a question. There are so many attributes and methods related to tensors that after reading the tutorial once, I can't remember them all; I only have a general impression. So I want to know, if my goal is to master PyTorch in depth, is it necessary for me to memorize these specific tensor operations?\n\ncc @albanD @jbschlosser", "url": "https://github.com/pytorch/tutorials/issues/3687", "state": "open", "labels": [ "question", "core" ], "created_at": "2025-12-07T21:07:52Z", "updated_at": "2025-12-08T17:00:50Z", "user": "NJX-njx" }, { "repo": "pytorch/executorch", "number": 16123, "title": "Is dynamic weight update / fine-tuning supported in QNN / XNNPACK backends?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI\u2019m working on a research project to fine-tune a model on Android devices. I am exploring using ExecuTorch + QNN or XNNPACK backend for inference acceleration, but need to ensure that the backend can support dynamic modification of weights (i.e., after initialization, allow updating weights / biases, and then run forward again).\n\n**What I found** \n- In executorch/extension/training/examples/XOR/train.cpp, training code based on executorch is provided, but it does not mention the supported backends.\n- The official XNNPACK backend documentation describes that, during runtime initialization, weights / biases are \u201cpacked\u201d (i.e. weight packing) into XNNPACK\u2019s internal data structures, and the original preprocessed blob\u2019s data is freed. This seems to imply that weights become static / immutable from the perspective of the backend\u2019s execute graph. \n- I did not find description in the docs or runtime API of any mechanism to \u201cunlock\u201d or \u201cupdate\u201d those packed weights at runtime. \n- There is an existing issue (#11355) reporting that even dynamic quantization + XNNPACK + Android may fail to load \u201cforward\u201d method, which suggests that non-static quantization / dynamic behavior is fragile or unsupported. \n- For QNN backend, I saw open / triaged issues about compilation or binary loading, but none that explicitly mention support for runtime weight update. \n\n**My questions** \n1. Does ExecuTorch (any of its backends: QNN, XNNPACK, Vulkan, etc.) currently support *runtime in-place weight updates* (i.e. treat model weights as mutable parameters, allow updating them between forward calls, as required in fine-tuning / training / zeroth-order optimization)? \n2. If not supported, is there a recommended workflow / workaround for on-device fine-tuning with ExecuTorch? Or is this explicitly out of scope? \n3. 
If it\u2019s not currently supported, would the maintainers be open to considering such a feature in future (e.g. a \u201cmutable weight\u201d delegate, or mechanism to reload new weights into backend graph)? \n\nThank you for your time and for developing ExecuTorch \u2014 it is a great tool for on-device inference / deployment, and I hope it can support on-device fine-tuning in the future.\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_\n\ncc @JacobSzwejbka", "url": "https://github.com/pytorch/executorch/issues/16123", "state": "open", "labels": [ "module: training" ], "created_at": "2025-12-07T06:24:09Z", "updated_at": "2025-12-11T09:27:29Z", "comments": 5, "user": "qqqqqqqwy" }, { "repo": "pytorch/pytorch", "number": 169663, "title": "[CI] Inductor dashboards failing due to unused --quant arg", "body": "### \ud83d\udc1b Describe the bug\n\nThe offending code was added in https://github.com/pytorch/pytorch/pull/123419 which results in a failure as no quant type is provided.\nhttps://github.com/pytorch/pytorch/blob/a097e166db7077f1e8da94757ccd91a6a521550e/.ci/pytorch/test.sh#L767\n\nThis is causing unnecessary headaches when debugging inductor-dashboard runs. @huydhn is it possible for us to either provide a valid quant type or remove this?\n\nExample failure: https://ossci-raw-job-status.s3.amazonaws.com/log/56856027123 for AMD and similar on https://ossci-raw-job-status.s3.amazonaws.com/log/56764730017 for NV.\n```\n2025-12-02T02:00:17.0782532Z + python benchmarks/dynamo/timm_models.py --performance --cold-start-latency --inference --quant --backend inductor --device cuda --total-partitions 7 --partition-id 2 --output /var/lib/jenkins/pytorch/test/test-reports/inductor_cudagraphs_low_precision_timm_models_quant_inference_rocm_performance.csv\n2025-12-02T02:00:19.6203822Z [--channels-last]\n2025-12-02T02:00:19.6204176Z [--batch-size BATCH_SIZE]\n2025-12-02T02:00:19.6204547Z [--iterations ITERATIONS]\n2025-12-02T02:00:19.6204936Z [--batch-size-file BATCH_SIZE_FILE]\n2025-12-02T02:00:19.6205320Z [--cosine]\n2025-12-02T02:00:19.6205614Z [--freezing]\n2025-12-02T02:00:19.6205953Z [--inductor-config INDUCTOR_CONFIG]\n2025-12-02T02:00:19.6206333Z [--ci]\n2025-12-02T02:00:19.6206618Z [--dashboard]\n2025-12-02T02:00:19.6206949Z [--skip-fp64-check]\n2025-12-02T02:00:19.6207276Z [--fast]\n2025-12-02T02:00:19.6207565Z [--only ONLY]\n2025-12-02T02:00:19.6207883Z [--multiprocess]\n2025-12-02T02:00:19.6208197Z [--ddp]\n2025-12-02T02:00:19.6208490Z [--fsdp]\n2025-12-02T02:00:19.6208833Z [--optimize-ddp-mode OPTIMIZE_DDP_MODE]\n2025-12-02T02:00:19.6209530Z [--distributed-master-port DISTRIBUTED_MASTER_PORT]\n2025-12-02T02:00:19.6209993Z [--dynamic-shapes]\n2025-12-02T02:00:19.6210361Z [--propagate-real-tensors]\n2025-12-02T02:00:19.6210749Z [--dynamic-batch-only]\n2025-12-02T02:00:19.6211114Z [--specialize-int]\n2025-12-02T02:00:19.6211449Z [--use-eval-mode]\n2025-12-02T02:00:19.6211801Z [--skip-accuracy-check]\n2025-12-02T02:00:19.6212189Z [--generate-aot-autograd-stats]\n2025-12-02T02:00:19.6212593Z [--inductor-settings]\n2025-12-02T02:00:19.6213079Z [--suppress-errors]\n2025-12-02T02:00:19.6213417Z [--output OUTPUT]\n2025-12-02T02:00:19.6213816Z [--output-directory OUTPUT_DIRECTORY]\n2025-12-02T02:00:19.6214221Z [--disable-output]\n2025-12-02T02:00:19.6214560Z [--baseline BASELINE]\n2025-12-02T02:00:19.6214912Z [--part PART]\n2025-12-02T02:00:19.6215259Z [--export-profiler-trace]\n2025-12-02T02:00:19.6215725Z 
[--profiler-trace-name PROFILER_TRACE_NAME]\n2025-12-02T02:00:19.6216164Z [--profile-details]\n2025-12-02T02:00:19.6216514Z [--export-perfdoctor]\n2025-12-02T02:00:19.6216885Z [--diff-branch DIFF_BRANCH]\n2025-12-02T02:00:19.6217240Z [--tag TAG]\n2025-12-02T02:00:19.6217536Z [--explain]\n2025-12-02T02:00:19.6217826Z [--stats]\n2025-12-02T02:00:19.6218144Z [--use-warm-peak-memory]\n2025-12-02T02:00:19.6218510Z [--print-memory]\n2025-12-02T02:00:19.6218865Z [--print-compilation-time]\n2025-12-02T02:00:19.6219263Z [--print-dataframe-summary]\n2025-12-02T02:00:19.6219651Z [--disable-cudagraphs]\n2025-12-02T02:00:19.6220033Z [--disable-split-reductions]\n2025-12-02T02:00:19.6220450Z [--disable-persistent-reductions]\n2025-12-02T02:00:19.6220874Z [--disable-divisible-by-16]\n2025-12-02T02:00:19.6221324Z [--inductor-compile-mode INDUCTOR_COMPILE_MODE]\n2025-12-02T02:00:19.6221782Z [--print-graph-breaks]\n2025-12-02T02:00:19.6222146Z [--log-graph-breaks]\n2025-12-02T02:00:19.6222495Z [--trace-on-xla]\n2025-12-02T02:00:19.6222842Z [--xla-tolerance XLA_TOLERANCE]\n2025-12-02T02:00:19.6223230Z [--collect-outputs]\n2025-12-02T02:00:19.6223614Z [--enable-activation-checkpointing]\n2025-12-02T02:00:19.6224005Z [--timing]\n2025-12-02T02:00:19.6224298Z [--progress]\n2025-12-02T02:00:19.6224607Z [--timeout TIMEOUT]\n2025-12-02T02:00:19.6225046Z [--per_process_memory_fraction PER_PROCESS_MEMORY_FRACTION]\n2025-12-02T02:00:19.6225545Z [--no-translation-validation]\n2025-12-02T02:00:19.6225913Z [--minify]\n2025-12-02T02:00:19.6226225Z [--compiled-autograd]\n2025-12-02T02:00:19.6226595Z [--profile_dynamo_cache_lookup]\n2025-12-02T02:00:19.6226979Z [--snapshot-memory]\n2025-12-02T02:00:19.6227313Z [--retain-output]\n2025-12-02T02:00:19.6227656Z [--caching-precompile]\n2025-12-02T02:00:19.6228230Z [--save-model-outputs-to SAVE_MODEL_OUTPUTS_TO]\n2025-12-02T02:00:19.6228782Z [--compare-model-outputs-with COMPARE_MODEL_OUTPUTS_WITH]\n2025-12-02T02:00:19.6229340Z ", "url": "https://github.com/pytorch/pytorch/issues/169663", "state": "closed", "labels": [ "oncall: pt2", "module: inductor" ], "created_at": "2025-12-05T11:10:10Z", "updated_at": "2025-12-08T01:34:22Z", "comments": 0, "user": "jataylo" }, { "repo": "pytorch/pytorch", "number": 169659, "title": "[Export] Incosistent input validation when re-importing a .pt2 model on Linux vs. Windows", "body": "### \ud83d\udc1b Describe the bug\n\n## Summary:\nImporting the same .pt2 model on Windows and Linux yields a GraphModule() instance containing a guard function for input validation on Windows and a GraphModule _without_ that guard function on Linux (same device, Ubuntu running in WSL2). \n\n**Why is this an issue?**\nWhen trying to pass each model through `prepare_pt2e` for quantization, the one containing the guard function on Windows fails with:\n```\n[ ... stack trace ommitted ... ]\nexecutorch.exir.pass_base.ExportPassBaseError: call_module is not supported.\n\nWhile executing %_guards_fn : [num_users=0] = call_module[target=_guards_fn](args = (%x,), kwargs = {})\n```\nwhile the same model can be quantized with no issues on Linux.\n\nUltimately what I'm looking for is being able to consistently import .pt2 files and lower them to ExecuTorch with quantization, both on Windows and Linux hosts.\n\n## Steps to reproduce\n\n### 1. 
Create Model\nI am creating and exporting a minimal `torch.nn.Module` instance like this:\n```python\nimport torch\n\nclass DoubleModel(torch.nn.Module):\n def __init__(self) -> None:\n super().__init__()\n\n def forward(self, x):\n return x * 2\n\nexample_input = torch.tensor([1.0, 2.0, 3.0, 4.0])\nexported_model = torch.export.export(DoubleModel(), (example_input,))\ntorch.export.save(exported_model, \"double.pt2\")\n```\n\n### 2. Re-Import .pt2 file\nwhen I re-import the model on **Windows**, I get this result:\n```python\nimport torch\n\nmodel = torch.export.load('double.pt2').module()\nmodel.print_readable()\n```\n```\nclass GraphModule(torch.nn.Module):\n def forward(self, x):\n x: \"f32[4]\";\n\n x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)\n # No stacktrace found for following nodes\n _guards_fn = self._guards_fn(x); _guards_fn = None\n\n # File: /tmp/ipykernel_257892/1111071196.py:8 in forward, code: return x * 2\n mul: \"f32[4]\" = torch.ops.aten.mul.Tensor(x, 2); x = None\n return pytree.tree_unflatten((mul,), self._out_spec)\n```\nnotice the `_guards_fn` member.\n\nWhen I run the same code on **Linux**, I get:\n```\nclass GraphModule(torch.nn.Module):\n def forward(self, x):\n x: \"f32[4]\";\n\n x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)\n # File: /tmp/ipykernel_257892/1111071196.py:8 in forward, code: return x * 2\n mul: \"f32[4]\" = torch.ops.aten.mul.Tensor(x, 2); x = None\n return pytree.tree_unflatten((mul,), self._out_spec)\n```\n\n### 3. Check input validation implementation\nWhen I try a forward pass with an invalid input, the input validation also fails in different ways:\n```python\nmodel(torch.ones(3))\n```\non **Windows**:\n```\n[ ... ]\nAssertionError: Guard failed: x.size()[0] == 4\n```\non **Linux**:\n```\n[ ... ]\nRuntimeError: Expected input at *args[0].shape[0] to be equal to 4, but got 3. If you meant for this dimension to be dynamic, please re-export and specify dynamic_shapes (e.g. 
with Dim.DYNAMIC)\n```\n\n### Versions\n\n# Windows Environment\n```\nCollecting environment information...\nPyTorch version: 2.9.1+cpu\nIs debug build: False\nCUDA used to build PyTorch: None\nROCM used to build PyTorch: N/A\n\nOS: Microsoft Windows 11 Enterprise (10.0.22631 64-bit)\nGCC version: (MinGW-W64 x86_64-ucrt-posix-seh, built by Brecht Sanders, r8) 13.2.0\nClang version: Could not collect\nCMake version: version 3.29.2\nLibc version: N/A\n\nPython version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)] (64-bit runtime)\nPython platform: Windows-11-10.0.22631-SP0\nIs CUDA available: False\nCUDA runtime version: No CUDA\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: No CUDA\nNvidia driver version: No CUDA\ncuDNN version: No CUDA\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nName: 13th Gen Intel(R) Core(TM) i7-1365U\nManufacturer: GenuineIntel\nFamily: 198\nArchitecture: 9\nProcessorType: 3\nDeviceID: CPU0\nCurrentClockSpeed: 1800\nMaxClockSpeed: 1800\nL2CacheSize: 6656\nL2CacheSpeed: None\nRevision: None\n\nVersions of relevant libraries:\n[pip3] executorch==1.0.1\n[pip3] numpy==2.3.5\n[pip3] pytorch_tokenizers==1.0.1\n[pip3] torch==2.9.1\n[pip3] torchao==0.14.0\n[pip3] torchvision==0.24.1\n[conda] Could not collect\n```\n\n# Linux Environment\n```\nCollecting environment information...\nPyTorch version: 2.9.1+cpu\nIs debug build: False\nCUDA used to build PyTorch: None\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.3 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: 18.1.8 (++20240731025043+3b5b5c1ec4a3-1~exp1~20240731145144.92)\nCMake version: version 4.1.2\nLibc version: glibc-2.39\n\nPython version: 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39\nIs CUDA available: False\nCUDA runtime version: No CUDA\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: No CUDA\nNvidia driver version: No CUDA\ncuDNN version: No CUDA\nIs XPU avai", "url": "https://github.com/pytorch/pytorch/issues/169659", "state": "closed", "labels": [ "oncall: pt2", "oncall: export" ], "created_at": "2025-12-05T08:50:31Z", "updated_at": "2025-12-09T09:22:45Z", "comments": 3, "user": "etrommer" }, { "repo": "pytorch/pytorch", "number": 169597, "title": "Standardize Testing in OpenReg", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nDescribed in the following [issue](https://github.com/pytorch/pytorch/issues/158917): \n\nOpenReg aims to:\n- Track the evolution of community features and provide up-to-date standardized integration implementations, serving as the official reference and code example for integration documentation.\n- The goal is to cover all functional points of new device integration into PyTorch, ensuring that the integration mechanisms themselves are robust and complete.\n\nAs such, this requires a standardized set of tests that follow the same process as current in-tree devices in Pytorch.\n\nThe following are proposed additions to OpenReg to improve testing as well as documentation for future PrivateUse1 users:\n\n- Working example of [DeviceTypeTestBase](https://github.com/pytorch/pytorch/blob/31987d0eda56179bfbed565b8cbb937844cd300c/torch/testing/_internal/common_device_type.py#L317) included in OpenReg\n- Working example of `OpInfo` based test in OpenReg\n- Documentation alongside standard tests\n\nIncluding 
this in OpenReg provides the following benefits:\n\n- A clear documented reference on how to emulate this for new backends\n- Ensures the stability of these APIs\n\n### Alternatives\n\nGiven OpenReg strives to maintain a standard for pytorch device backends, I believe keeping the standard that in-tree devices require the above solution. Feel free to comment if an alternative is preferred.\n\n### Additional context\n\ncc: @fffrog @albanD @zeshengzong \n\ncc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens @albanD", "url": "https://github.com/pytorch/pytorch/issues/169597", "state": "open", "labels": [ "triaged", "module: PrivateUse1", "module: openreg" ], "created_at": "2025-12-04T19:22:19Z", "updated_at": "2025-12-04T19:34:26Z", "comments": 0, "user": "JRosenkranz" }, { "repo": "pytorch/ao", "number": 3436, "title": "Int4WeightOnly torch.bmm semantics", "body": "Currently, int4 weight only quantization does not work out of the box for llama4 scout. \n\n```python\nfqn_to_config = FqnToConfig(\n {\n r\"re:.*\\.feed_forward\\.experts\\.gate_up_proj\": Int4WeightOnlyConfig(),\n r\"re:.*\\.feed_forward\\.experts\\.down_proj\": Int4WeightOnlyConfig()\n }\n)\n\nquantized_model = AutoModelForCausalLM.from_pretrained(\n model_name,\n torch_dtype=\"bfloat16\",\n device_map=device_map,\n quantization_config=quantization_config,\n)\n```\nThis fails because the the int4 torch.bmm implementation expects the weights to be transposed already, while the dense version does not.\n\nThe dense torch.bmm(inputs, weights) expects for input of shape (B, M, K) and weights to be shape (B, K, N)) but the quantized version expects the weights to be of shape (B, N, K). \n\nAdding a line, `down_proj = down_proj.transpose(-2, -1). contiguous().transpose(-2, -1)`, after loading the model will fix this issue, but this is hacky and also means that we can't pass in the config as part of quantization_config. \n\nVasiliy ran into a similar issue with Float8Tensor bmm semantics in https://github.com/pytorch/ao/pull/3296, which solves this problem by transposing qdata and scale as part of the bmm op. \n\nHowever for int4 we pack to int8, so it's a bit more work, as we would need to unpack int4 -> int8, transpose, contiguous, repack -> int4. \n\nI think the easiest way is to add a flag to transpose the weight or not before the quantized data is computed, but open to any suggestions on how to best fix this\n\n", "url": "https://github.com/pytorch/ao/issues/3436", "state": "open", "labels": [ "triaged" ], "created_at": "2025-12-04T18:35:44Z", "updated_at": "2025-12-04T23:12:15Z", "comments": 0, "user": "jcaip" }, { "repo": "pytorch/torchtitan", "number": 2109, "title": "Knowledge Distillation template", "body": "Hi, I want to use torchtitan for knowledge distillation, what is the right way to do it? should I hold both models inside the main model? (then how can I exclude the teacher from being saved or .train()ed or exclude it from the optimizer) or is there a way to have two separate models (with parallelism handled correctly, especially PP)?\n\nif I have to hold both models in the same Model, then will this be a right forward()? 
(for now the assumption that both models have the same number of layers is ok)\n\n```python\n def forward(\n self,\n tokens: torch.Tensor,\n tokens_t: torch.Tensor,\n attention_masks: AttentionMasksType | None = None,\n ):\n \"\"\"\n Perform a forward pass through the Transformer model.\n \"\"\"\n # passthrough for nonexistent layers, allows easy configuration of pipeline parallel stages\n h = self.tok_embeddings(tokens) if self.tok_embeddings else tokens\n h_t = self.tok_embeddings_t(tokens_t) if self.tok_embeddings_t else tokens_t\n\n for layer, layer_t in zip(self.layers.values(), self.layers_t.values()):\n h = layer(h, self.freqs_cis, attention_masks=attention_masks)\n h_t = layer(h_t, self.freqs_cis_t, attention_masks=attention_masks)\n h = self.norm(h) if self.norm else h\n h_t = self.norm(h_t) if self.norm_t else h_t\n output = self.output(h) if self.output else h\n output_t = self.output_t(h_t) if self.output_t else h_t\n return output, output_t\n```", "url": "https://github.com/pytorch/torchtitan/issues/2109", "state": "open", "labels": [ "question" ], "created_at": "2025-12-04T17:37:51Z", "updated_at": "2025-12-04T23:35:26Z", "user": "Separius" }, { "repo": "pytorch/ao", "number": 3452, "title": "Any plans to support `USE_DISTRIBUTED=0` pytorch?", "body": "**Dec 6th EDIT:** simplified & expanded error and reproduction example from conversation below.\n\nIf not then please write in readme/requirements somewhere. The error below was cryptic.\n\nError that led me to this conception:\n
\n\n```\nTraceback (most recent call last):\n File \"/data/data/com.termux/files/home/dev/llm/sd/test/./to.py\", line 3, in \n import torchao\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/__init__.py\", line 127, in \n from torchao.quantization import (\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/quantization/__init__.py\", line 6, in \n from .autoquant import (\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/quantization/autoquant.py\", line 11, in \n from torchao.dtypes import (\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/dtypes/__init__.py\", line 1, in \n from . import affine_quantized_tensor_ops\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/dtypes/affine_quantized_tensor_ops.py\", line 14, in \n from torchao.dtypes.floatx.cutlass_semi_sparse_layout import (\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/dtypes/floatx/__init__.py\", line 4, in \n from .float8_layout import Float8Layout\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/dtypes/floatx/float8_layout.py\", line 21, in \n from torchao.float8.inference import (\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/float8/__init__.py\", line 12, in \n from torchao.float8.float8_linear_utils import (\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/float8/float8_linear_utils.py\", line 14, in \n from torchao.float8.float8_linear import Float8Linear\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/float8/float8_linear.py\", line 15, in \n from torchao.float8.distributed_utils import tensor_already_casted_to_fp8\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/float8/distributed_utils.py\", line 14, in \n from torchao.float8.float8_training_tensor import Float8TrainingTensor\n File \"/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/float8/float8_training_tensor.py\", line 10, in \n from torch.distributed._tensor import DTensor\n File \"/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/_tensor/__init__.py\", line 25, in \n sys.modules[f\"torch.distributed._tensor.{submodule}\"] = import_module(\n ^^^^^^^^^^^^^^\n File \"/data/data/com.termux/files/usr/lib/python3.12/importlib/__init__.py\", line 90, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/__init__.py\", line 4, in \n import torch.distributed.tensor._ops # force import all built-in dtensor ops\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/_ops/__init__.py\", line 2, in \n from ._conv_ops import * # noqa: F403\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/_ops/_conv_ops.py\", line 5, in \n from torch.distributed.tensor._dtensor_spec import DTensorSpec, TensorMeta\n File \"/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/_dtensor_spec.py\", line 6, in \n from torch.distributed.tensor.placement_types import (\n File \"/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/placement_types.py\", line 8, in \n import torch.distributed._functional_collectives as funcol\n File 
\"/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/_functional_collectives.py\", line 9, in \n import torch.distributed.distributed_c10d as c10d\n File \"/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py\", line 23, in \n from torch._C._distributed_c10d import (\nModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package\n```\n\n
\n\nReproduction example leading to the error above when used with `USE_DISTRIBUTED=0 USE_CUDA=0` built pytorch:\n```\nimport torchao\n```\n\nI tried guarding the imports/usage of `DTensor` with something like:\n```\nimport torch.distributed\nis_torch_distributed_available = torch.distributed.is_available()\nif is_torch_distributed_available:\n from torch.distributed._tensor import DTensor\n```\nBut DTensor usage turned out to be prolific and well integrated. Maybe there is some sort of subclass solution and guard anything DTensor specific? I'm out of my depth here.\n\nI don't know of anything else to try with torchao and i", "url": "https://github.com/pytorch/ao/issues/3452", "state": "open", "labels": [], "created_at": "2025-12-04T00:25:19Z", "updated_at": "2025-12-07T20:20:59Z", "comments": 7, "user": "rene-descartes2021" }, { "repo": "pytorch/pytorch", "number": 169461, "title": "torch compile + replicate, compute and communication not overlap", "body": "### \ud83d\udc1b Describe the bug\n\nWhen I use a combination of composable.replicate and torch.compile, I observe that all backward allreduce operations are executed only after the entire backward pass computation is complete.\n\nThis behavior prevents the overlap of computation and communication, which is typically achieved in DDP (DistributedDataParallel) + torch.compile by inserting a graph break during the backward pass (e.g., after the gradient calculation for a specific layer).\n\nI am looking for any potential workarounds or suggested methods to enable computation/communication overlap when using composable.replicate with torch.compile.\n\n```\nimport os\nimport time\nimport torch\nimport torch.distributed as dist\nimport torch.nn as nn\n\nfrom torch.distributed._composable.replicate import replicate\n\nfrom torch.profiler import profile, record_function, ProfilerActivity\n\ndef setup():\n rank = int(os.environ[\"RANK\"])\n local_rank = int(os.environ[\"LOCAL_RANK\"])\n world_size = int(os.environ[\"WORLD_SIZE\"])\n\n dist.init_process_group(\"nccl\")\n torch.cuda.set_device(local_rank)\n \n return rank, local_rank\n\ndef cleanup():\n dist.destroy_process_group()\n\nclass ToyModel(nn.Module):\n def __init__(self):\n super().__init__()\n self.layers = nn.Sequential(\n nn.Linear(2048, 4096),\n nn.ReLU(),\n nn.Linear(4096, 4096),\n nn.ReLU(),\n nn.Linear(4096, 4096),\n nn.ReLU(),\n nn.Linear(4096, 2048),\n )\n\n def forward(self, x):\n return self.layers(x)\n\ndef main():\n rank, local_rank = setup()\n\n model = ToyModel().to(local_rank)\n\n replicate(\n model, \n device_ids=[local_rank], \n bucket_cap_mb=25\n )\n\n opt_model = torch.compile(model, backend=\"inductor\")\n\n optimizer = torch.optim.SGD(opt_model.parameters(), lr=0.01)\n loss_fn = nn.MSELoss()\n\n input_tensor = torch.randn(32, 2048).to(local_rank)\n target_tensor = torch.randn(32, 2048).to(local_rank)\n\n log_dir = './profiler_logs_composable'\n \n with profile(\n activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],\n schedule=torch.profiler.schedule(\n wait=5,\n warmup=2,\n active=3,\n repeat=1\n ),\n on_trace_ready=torch.profiler.tensorboard_trace_handler(log_dir),\n record_shapes=True,\n profile_memory=True,\n with_stack=True\n ) as prof:\n \n for step in range(15):\n step_start = time.time()\n \n with record_function(\"model_training_step\"):\n optimizer.zero_grad()\n \n output = opt_model(input_tensor)\n loss = loss_fn(output, target_tensor)\n loss.backward()\n optimizer.step()\n \n torch.cuda.synchronize()\n step_end = time.time()\n\n if rank == 0:\n 
print(f\"Step {step}: Loss={loss.item():.4f}, Time={step_end - step_start:.4f}s\")\n \n prof.step()\n\n cleanup()\n\nif __name__ == \"__main__\":\n main()\n```\n\n![Image](https://github.com/user-attachments/assets/0af08b18-893b-4cff-b8fe-b23cb10fb4be)\n\n### Versions\n\nPyTorch version: 2.10.0a0+b558c986e8.nv25.11\nIs debug build: False\nCUDA used to build PyTorch: 13.0\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.3 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: Could not collect\nCMake version: version 3.31.6\nLibc version: glibc-2.39\n\nPython version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: 13.0.88\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA A100-PCIE-40GB\nGPU 1: NVIDIA A100-PCIE-40GB\n\nNvidia driver version: 570.133.20\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.15.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.15.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.15.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.15.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.15.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.15.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.15.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.15.0\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 112\nOn-line CPU(s) list: 0-111\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz\nCPU family: 6\nModel: 106\nThread(s) per core: 2\nCore(s) per socket: ", "url": "https://github.com/pytorch/pytorch/issues/169461", "state": "open", "labels": [ "oncall: distributed", "triaged", "oncall: pt2" ], "created_at": "2025-12-03T08:33:05Z", "updated_at": "2025-12-09T17:50:45Z", "comments": 6, "user": "peaceorwell" }, { "repo": "pytorch/torchtitan", "number": 2101, "title": "EP in latest main is slow", "body": "Hi team,\n\nI tried to duplicate the EP implementation in my model. But I find it's running much slowly with EP.\nI find there is a written cpu-gpu synchronization at the beginning of all2all in token dispatch, for input_split and output_split, which is kinda a blocker. Is it possible to avoid it without symmetric memory all2all?\n\nBesides, could you help to share which part of EP workflow needs torch.compile? I noticed the usage of torch.gather and torch.scatter_add may not be optimal. I guess they may need to be optimized by torch.compile.\n\nThanks!\n", "url": "https://github.com/pytorch/torchtitan/issues/2101", "state": "open", "labels": [], "created_at": "2025-12-03T00:10:46Z", "updated_at": "2025-12-03T00:10:46Z", "comments": 0, "user": "goldhuang" }, { "repo": "pytorch/torchtitan", "number": 2100, "title": "symmetric memory all2all integration for EP", "body": "Hi team,\n\nI find https://github.com/pytorch/torchtitan/tree/main/torchtitan/experiments/moe_symm_mem_kernels. But seems there is no progress update for a while according to \n\nExperiment | Test Status | Owners\n-- | -- | --\n[moe_symm_mem_kernels](https://github.com/pytorch/torchtitan/blob/main/torchtitan/experiments/moe_symm_mem_kernels)|TBA|[@kwen2501](https://github.com/kwen2501)\n\n\nIs there a plan for the integration? 
Is there any known issue that stops the release? \nThanks!", "url": "https://github.com/pytorch/torchtitan/issues/2100", "state": "open", "labels": [], "created_at": "2025-12-03T00:02:55Z", "updated_at": "2025-12-03T00:02:55Z", "comments": 0, "user": "goldhuang" }, { "repo": "pytorch/xla", "number": 9726, "title": "How is GetOutputShardings supposed to work for PJRT Implementers?", "body": "We have a custom shardy + stablehlo pipeline manage shard propagation inside our compiler stack. We're having trouble **communcating the correct output sharding back to the framework**, and cannot find any obvious interface to do so, and wanted to ask what the intended path for this looks like.\n\nTo be clear, this is the path our compiler takes:\n1. We get the SHLO in Shardy dialect from the torch-xla framework\n2. We run Shardy to solve the SHLO graph\n3. We lower it to our own custom dialect and execute from there.\n\nWe do _not_ convert the SHLO graph back to HLO (as Jax does). After the graph is solved in step 2, we would like to tell torch-xla what the correct output shardings are.\n\n## Observed Behavior\n\nIn torch_xla, we observe that output shardings are retrieved during compilation in ths path:\ntorch_xla::XLAGraphExecutor::Compile -> torch_xla::runtime::PjRtComputationClient::Compile -> [PjRtComputation constructor](https://github.com/tenstorrent/pytorch-xla/blob/a5be1f82e7906e09aa004cb99b08e29d3c102478/torch_xla/csrc/runtime/pjrt_computation_client.h#L329-L336) -> `output_shardings_ = this->executable->GetOutputShardings();`\n\nThis eventually calls into the base PJRTExecutable implementation of [GetOutputShardings](https://github.com/openxla/xla/blob/4ae2ec6f162569750c76dbdbe12071d7091f1988/xla/pjrt/pjrt_executable.cc#L350-L361).\n\nThe mechanism by which output shardings seem to be extracted from the implementer side is by calling `PJRT_Executable_OptimizedProgram` to retrieve the post-compile MLIR from our PJRT implementation in [xla::PjRtCApiExecutable::GetHloModules()](https://github.com/openxla/xla/blob/main/xla/pjrt/c_api_client/pjrt_c_api_client.cc#L2001-L2061).\n\nThe MLIR is then converted to an xla-internal HLO module construct and output shardings are [eventually extracted from that construct inside PjRtExecutable::GetOutputShardings()](https://github.com/openxla/xla/blob/4ae2ec6f162569750c76dbdbe12071d7091f1988/xla/pjrt/pjrt_executable.cc#L340-L361)\n\n## How should this work?\n\nThis existing path would suggest that the way a PJRT implementer \"communicates\" output shardings back to the framework post-compilation is by generating IR with output shardings in some format compatible with how they are ingested in XLA. 
This seems both complex and unidiomatic, because other paths to return data from compilation to the framework involve well-defined interfaces in PJRT (like PJRT_Executable_OutputDimensions) and [PjRtCApi overrides](https://github.com/openxla/xla/blob/main/xla/pjrt/c_api_client/pjrt_c_api_client.cc#L1898-L1921) to use those interfaces and cast the result to xla internal types.\n\nWhat is the recommended way to communicate output shardings to the framework from a lower-level compiler?", "url": "https://github.com/pytorch/xla/issues/9726", "state": "open", "labels": [ "question", "runtime", "stablehlo" ], "created_at": "2025-12-02T19:26:20Z", "updated_at": "2025-12-15T13:47:09Z", "user": "jameszianxuTT" }, { "repo": "pytorch/executorch", "number": 16041, "title": "CORTEX_M: Memory optimization", "body": "No work has been done yet on optimizing the memory usage of the runtime. This ticket covers a broad investigation into what can be done in this space:\n1. Can we optimize scratch buffer allocation (e.g. is it reused between kernels currently?)\n2. Can we strip away anything from the ELF to minimize runtime size?\n3. Any other ideas to optimize performance related to memory", "url": "https://github.com/pytorch/executorch/issues/16041", "state": "open", "labels": [], "created_at": "2025-12-02T14:24:20Z", "updated_at": "2025-12-15T12:01:21Z", "comments": 0, "user": "AdrianLundell" }, { "repo": "pytorch/executorch", "number": 16039, "title": "CORTEX_M: Target configuration", "body": "CMSIS-NN requires slightly different lowerings for different architecture extensions (scalar/DSP/vector). Currently the\nvector extension is assumed, so we might need to add a way to configure this and make modifications in the pass lowering where required. \n\nFor example, the linear operator currently only passes the kernel_sum scratch buffer and no bias to the operator call, which only works for the MVE implementation of the operator. Running this on another Cortex-M would involve passing the target CPU to the ConvertToCortexMPass which lowers the operator, and adding the bias as an argument if the target does not have MVE support. \n\nAlternatively, it might not be worth the effort to do this in the lowering, and it may be better to do the target configuration in the runtime flow only; the scratch buffer would then need to be computed in the runtime. Deciding how best to do this is part of the ticket.", "url": "https://github.com/pytorch/executorch/issues/16039", "state": "open", "labels": [], "created_at": "2025-12-02T14:19:34Z", "updated_at": "2025-12-03T15:34:04Z", "comments": 0, "user": "AdrianLundell" }, { "repo": "pytorch/pytorch", "number": 169371, "title": "C++ Generator API is platform dependent", "body": "When creating a tensor with the C++ API, one can do something like this:\n```\n\ttry {\n\t\tTensor t = torch::ones({200, 1, 28, 28});\n\t\tt.to(torch::DeviceType::MPS);\n\t} catch(const std::exception& e) {\n\t\t...\n\t}\n```\nThis code is going to compile and run on all platforms, obviously going into the `catch` block if not on macOS. 
The same thing happens for `t.to(torch::DeviceType::CUDA)`\n\nOn the other hand, Generators offer the utilities `at::cuda::detail::createCUDAGenerator` and `at::mps::detail::createMPSGenerator` which are not defined in libtorch unless the library was built with CUDA support and on macOS respectively, which needs special care at both compile time and run time (using macros to exclude code, force compilation with unresolved external symbols, checking `torch::cuda::is_available()`/`torch::mps::is_available()` before making the calls, ...).\n\nIs there a platform-independent way to deal with Generators just like there is with Tensors?\n\ncc @jbschlosser @albanD @guangyey @EikanWang", "url": "https://github.com/pytorch/pytorch/issues/169371", "state": "open", "labels": [ "module: cpp", "triaged", "module: accelerator" ], "created_at": "2025-12-02T12:53:52Z", "updated_at": "2025-12-04T02:07:26Z", "comments": 1, "user": "matteosal" }, { "repo": "pytorch/executorch", "number": 16034, "title": "How to add a new backend?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHi, I've already seen that some backends already supported from [here](https://docs.pytorch.org/executorch/main/backends-overview.html). Is there a convenient way to add a new backend, like [CANN](https://developer.huawei.com/consumer/en/doc/hiai-guides/introduction-0000001051486804) or OpenCL, into executorch? BTW, if executorch will support OpenCL backend in the future?\n", "url": "https://github.com/pytorch/executorch/issues/16034", "state": "open", "labels": [], "created_at": "2025-12-02T03:13:18Z", "updated_at": "2025-12-02T18:50:18Z", "user": "JingliangGao" }, { "repo": "pytorch/pytorch", "number": 169269, "title": "cannot import name 'get_num_sms' from 'torch._inductor.utils'", "body": "### \ud83d\udc1b Describe the bug\n\nI'm trying to run [nano-vllm](https://github.com/GeeeekExplorer/nano-vllm), and there is an error:\n```\n File \"/mnt/petrelfs/fengyuan/anaconda3/envs/qwen_copy/lib/python3.12/site-packages/torch/_inductor/kernel/mm_grouped.py\", line 20, in \n from ..utils import (\nImportError: cannot import name 'get_num_sms' from 'torch._inductor.utils'\n```\nI reviewed the source code of torch-2.5.1, and there is a reference to `get_num_sms` in `mm_grouped.py`, but this function is not defined in `_inductor/utils.py`.\n\nFor some reason, I can't update my torch-2.5.1 to the latest version. How can I fix it without updating? Can I simply copy the `get_num_sms()` and related functions from torch-2.9.1?\n\nBy the way, you can't get the information about CUDA and GPUs in the following texts because I'm running my code in a cluster, but `nvcc -V` shows:\n```\nnvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2023 NVIDIA Corporation\nBuilt on Tue_Feb__7_19:32:13_PST_2023\nCuda compilation tools, release 12.1, V12.1.66\nBuild cuda_12.1.r12.1/compiler.32415258_0\n```\n\nThanks a lot!\n\n### Versions\n\n```\nPyTorch version: 2.5.1\nIs debug build: False\nCUDA used to build PyTorch: 12.1\nROCM used to build PyTorch: N/A\n\nOS: CentOS Linux 7 (Core) (x86_64)\nGCC version: (Anaconda gcc) 11.2.0\nClang version: Could not collect\nCMake version: version 2.8.12.2\nLibc version: glibc-2.32\n\nPython version: 3.12.11 | packaged by Anaconda, Inc. 
| (main, Jun 5 2025, 13:09:17) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.32\nIs CUDA available: False\nCUDA runtime version: 12.1.66\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: Could not collect\nNvidia driver version: Could not collect\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nCPU(s): 256\nOn-line CPU(s) list: 0-255\nThread(s) per core: 2\nCore(s) per socket: 64\nSocket(s): 2\nNUMA node(s): 8\nVendor ID: AuthenticAMD\nCPU family: 23\nModel: 49\nModel name: AMD EPYC 7H12 64-Core Processor\nStepping: 0\nCPU MHz: 2600.000\nCPU max MHz: 2600.0000\nCPU min MHz: 1500.0000\nBogoMIPS: 5200.14\nVirtualization: AMD-V\nL1d cache: 32K\nL1i cache: 32K\nL2 cache: 512K\nL3 cache: 16384K\nNUMA node0 CPU(s): 0-15,128-143\nNUMA node1 CPU(s): 16-31,144-159\nNUMA node2 CPU(s): 32-47,160-175\nNUMA node3 CPU(s): 48-63,176-191\nNUMA node4 CPU(s): 64-79,192-207\nNUMA node5 CPU(s): 80-95,208-223\nNUMA node6 CPU(s): 96-111,224-239\nNUMA node7 CPU(s): 112-127,240-255\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca\n\nVersions of relevant libraries:\n[pip3] mkl_fft==1.3.11\n[pip3] mkl_random==1.2.8\n[pip3] mkl-service==2.4.0\n[pip3] numpy==2.2.6\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pip3] nvidia-curand-cu12==10.3.9.90\n[pip3] nvidia-cusolver-cu12==11.7.3.90\n[pip3] nvidia-cusparse-cu12==12.5.8.93\n[pip3] nvidia-cusparselt-cu12==0.7.1\n[pip3] nvidia-nccl-cu12==2.27.5\n[pip3] nvidia-nvjitlink-cu12==12.8.93\n[pip3] nvidia-nvtx-cu12==12.8.90\n[pip3] torch==2.5.1\n[pip3] torchaudio==2.5.1\n[pip3] torchvision==0.20.1\n[pip3] triton==3.1.0\n[conda] blas 1.0 mkl defaults\n[conda] cuda-cudart 12.1.105 0 nvidia\n[conda] cuda-cupti 12.1.105 0 nvidia\n[conda] cuda-libraries 12.1.0 0 nvidia\n[conda] cuda-nvrtc 12.1.105 0 nvidia\n[conda] cuda-nvtx 12.1.105 0 nvidia\n[conda] cuda-opencl ", "url": "https://github.com/pytorch/pytorch/issues/169269", "state": "closed", "labels": [], "created_at": "2025-11-30T19:57:48Z", "updated_at": "2025-12-02T20:54:05Z", "comments": 2, "user": "WangHaoZhe" }, { "repo": "pytorch/executorch", "number": 16010, "title": "How to run add operator in executorch ?", "body": "The result of the following code is \"Segmentation fault: 11\" ...\n\n```\nusing executorch::aten::ScalarType;\nusing 
executorch::aten::Tensor;\nusing executorch::aten::TensorImpl;\n\nint main() {\n\texecutorch::runtime::runtime_init();\n\t// Create our input tensor.\n\tfloat data[14465 * 3] = { 1 };\n\tTensorImpl::SizesType sizes[] = { 14465, 3 };\n\tTensorImpl impl(\n\t ScalarType::Float, // dtype\n\t 2, // number of dimensions\n\t sizes,\n\t data);\n\tTensor input_tensor(&impl);\n\tTensor output_tensor(&impl);\n\ttorch::executor::KernelRuntimeContext context_;\n\ttorch::executor::native::add_out(context_, input_tensor, input_tensor, 1.0, output_tensor);\n\t\n\treturn 0;\n}\n```\n\ncc @larryliu0820 @JacobSzwejbka @lucylq", "url": "https://github.com/pytorch/executorch/issues/16010", "state": "open", "labels": [ "module: runtime" ], "created_at": "2025-11-30T10:49:22Z", "updated_at": "2025-12-01T17:50:32Z", "user": "rscguo" }, { "repo": "pytorch/torchtitan", "number": 2091, "title": "question of `_op_sac_save_list` for op-sac", "body": "Hi, I have a noob question, is there any particular reason we dont put `torch.ops.aten._scaled_dot_product_cudnn_attention.default` (and maybe some other SDPA variants) into `_op_sac_save_list` to avoid recompute? ", "url": "https://github.com/pytorch/torchtitan/issues/2091", "state": "closed", "labels": [], "created_at": "2025-11-28T23:29:02Z", "updated_at": "2025-12-02T20:52:33Z", "comments": 4, "user": "rakkit" }, { "repo": "pytorch/FBGEMM", "number": 5176, "title": "How to apply gradient clip in fused optimizer?", "body": "I noticed that my embedding bag parameters exploded. Is there a way I could apply gradient clip. \nI'm using `EmbOptimType.EXACT_ROWWISE_ADAGRAD`\n\nHere is the code\n\n\n```\n sharder_with_optim_params = EmbeddingBagCollectionSharder(\n fused_params={\n 'optimizer': EmbOptimType.EXACT_ROWWISE_ADAGRAD,\n 'learning_rate': 0.01,\n 'eps': 1e-8,\n },\n )\n```", "url": "https://github.com/pytorch/FBGEMM/issues/5176", "state": "open", "labels": [], "created_at": "2025-11-28T16:07:57Z", "updated_at": "2025-11-28T16:07:57Z", "user": "acmilannesta" }, { "repo": "pytorch/pytorch", "number": 169175, "title": "Regarding this issue, how can I upgrade or replace the cuDNN version built into my current PyTorch installation?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nSignificant Memory Regression in F.conv3d with bfloat16 Inputs in PyTorch 2.9.0 (#166643) This release provides work around this issue. If you are impacted please install nvidia-cudnn package version 9.15+ from pypi. 
(#166480) (#167111) .\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/169175", "state": "closed", "labels": [], "created_at": "2025-11-27T09:32:00Z", "updated_at": "2025-11-27T20:19:07Z", "comments": 2, "user": "saberrroool" }, { "repo": "pytorch/pytorch", "number": 169174, "title": "Does torch.masked_select preserve the original order of the selected elements?", "body": "There is the following issue on this page: https://docs.pytorch.org/docs/stable/generated/torch.masked_select.html\n\nDoes torch.masked_select preserve the original order of the selected elements?\n\n`mask = torch.from_numpy(np.random.uniform(0, 1, 1234567) > 0.5)\n idx = torch.arange(len(mask))\n select = idx.masked_select(mask)\n assert (select == torch.sort(select)[0]).all()`", "url": "https://github.com/pytorch/pytorch/issues/169174", "state": "closed", "labels": [], "created_at": "2025-11-27T09:26:45Z", "updated_at": "2025-11-30T12:12:18Z", "comments": 0, "user": "wanglin03" }, { "repo": "pytorch/pytorch", "number": 169160, "title": "Is there any way to make pinned CPU tensors released back to the OS immediately", "body": "### \ud83d\udc1b Describe the bug\n\nThe pinned CPU tensors can't be released back to the OS immediately.\n\n```python\nimport torch\nimport gc\nimport ctypes\nimport psutil\nimport os\n\ndef get_memory_usage():\n \"\"\"Return current process RSS memory usage in MB.\"\"\"\n process = psutil.Process(os.getpid())\n return process.memory_info().rss / (1024 * 1024)\n\ndef trim_memory():\n \"\"\"Attempt to release unused memory back to the OS using malloc_trim.\"\"\"\n libc = ctypes.CDLL(\"libc.so.6\")\n libc.malloc_trim(0)\n\n# Initial memory usage\nprint(f\"[Before allocation] Memory usage: {get_memory_usage():.2f} MB\")\n\n# Allocate 1 GiB of pinned memory on CPU\nx = torch.empty(1024 * 1024 * 1024, dtype=torch.uint8, device=\"cpu\", pin_memory=True)\nprint(f\"[After allocation] Memory usage: {get_memory_usage():.2f} MB\")\n\n# Delete the tensor\ndel x\n\n# Run garbage collection\ngc.collect()\n\n# Try to trim memory\ntrim_memory()\n\nprint(f\"[After del + gc + malloc_trim] Memory usage: {get_memory_usage():.2f} MB\")\n\n```\n\n### Versions\n\nPyTorch version: 2.7.0a0+7c8ec84dab.nv25.03\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.1 LTS (x86_64)\nGCC version: (Ubuntu 11.5.0-1ubuntu1~24.04) 11.5.0\nClang version: Could not collect\nCMake version: version 3.31.6\nLibc version: glibc-2.39\n\nPython version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-5.10.134-008.18.kangaroo.al8.x86_64-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: 12.8.93\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA H20\nGPU 1: NVIDIA H20\nGPU 2: NVIDIA H20\nGPU 3: NVIDIA H20\nGPU 4: NVIDIA H20\nGPU 5: NVIDIA H20\nGPU 6: NVIDIA H20\nGPU 7: NVIDIA H20\n\nNvidia driver version: 550.54.15\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0\nIs XPU available: False\nHIP 
runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 160\nOn-line CPU(s) list: 0-159\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Processor\nCPU family: 6\nModel: 143\nThread(s) per core: 1\nCore(s) per socket: 160\nSocket(s): 1\nStepping: 8\nBogoMIPS: 5200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd avx512vbmi umip pku waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 3.8 MiB (80 instances)\nL1i cache: 2.5 MiB (80 instances)\nL2 cache: 160 MiB (80 instances)\nL3 cache: 195 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-79\nNUMA node1 CPU(s): 80-159\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers\nVulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\nVersions of relevant libraries:\n[pip3] intel-openmp==2021.4.0\n[pip3] mkl==2021.1.", "url": "https://github.com/pytorch/pytorch/issues/169160", "state": "closed", "labels": [], "created_at": "2025-11-27T03:19:54Z", "updated_at": "2025-11-27T20:24:46Z", "comments": 1, "user": "dashanji" }, { "repo": "pytorch/pytorch", "number": 169157, "title": "AOTI does not support fallback kernels with parameters of types other than int and tensor.", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nCurrently, AOTI does not support fallback kernels with parameters of types other than int and tensor. https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/cpp_wrapper_cpu.py#L2723-L2729.\nWhy does AOTI restrict the parameter types?\nDo we have any plans to add support for fallback kernels with more complex parameters ?\n\n\n### Alternatives\n\nI implemented a workaround, replacing `generate_fallback_kernel_with_runtime_lookup_aot` with `generate_fallback_kernel_with_runtime_lookup_nopython`, which worked in my experiments. 
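\n\nA rough sketch of the kind of monkey-patch described above (purely illustrative; the imported module path follows the file linked earlier, but the `CppWrapperCpu` class name is an assumption on my part, and the two methods may not share a compatible signature):\n\n```python\n# Illustrative sketch only -- not an official API or a recommended fix.\n# It redirects the AOT fallback-kernel codegen path to the nopython variant,\n# mirroring the replacement described above.\nfrom torch._inductor.codegen import cpp_wrapper_cpu  # path from the link above\n\ncpp_wrapper_cpu.CppWrapperCpu.generate_fallback_kernel_with_runtime_lookup_aot = (\n    cpp_wrapper_cpu.CppWrapperCpu.generate_fallback_kernel_with_runtime_lookup_nopython\n)\n```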
\n\n\n### Additional context\n\n_No response_\n\ncc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @yushangdi @benjaminglass1 @jataylo @iupaikov-amd", "url": "https://github.com/pytorch/pytorch/issues/169157", "state": "open", "labels": [ "triaged", "oncall: pt2", "oncall: export", "module: aotinductor" ], "created_at": "2025-11-27T02:10:26Z", "updated_at": "2025-12-18T02:30:56Z", "comments": 3, "user": "CaoE" }, { "repo": "pytorch/tutorials", "number": 3666, "title": "Feedback about What is torch.nn really?", "body": "There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/nn_tutorial.html\n\nIn the section \"Neural net from scratch (without torch.nn)\" there is a pre-training loss function evaluation on a batch of 64 instances, \n\n```\nyb = y_train[0:bs]\nprint(loss_func(preds, yb))\n```\n \nthen training is performed (comments my own)\n\n```\nfor epoch in range(epochs):\n for i in range((n - 1) // bs + 1):\n # set_trace()\n start_i = i * bs\n end_i = start_i + bs\n xb = x_train[start_i:end_i] # note that xb gets redefined\n yb = y_train[start_i:end_i] # note that yb gets redefined\n pred = model(xb)\n loss = loss_func(pred, yb)\n\n loss.backward()\n with torch.no_grad():\n weights -= weights.grad * lr\n bias -= bias.grad * lr\n weights.grad.zero_()\n bias.grad.zero_()\n```\n\nand the loss function is evaluated again to demonstrate a reduction in loss. \n\n`print(loss_func(model(xb), yb), accuracy(model(xb), yb))\n`\n\nThe final evaluation is not applied to the same data as the first one, though. Both invoke xb and yb, but in the pre-training evaluation xb and yb are the first 64 instances from the set; during training these variables are reassigned to subsequent batches, so the final evaluation is performed on the final batch. 
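\n\nFor example, a minimal sketch of making the comparison consistent, reusing the tutorial's variable names (`xb0`/`yb0` are just new names introduced here for clarity), is to re-run the post-training evaluation on the same first batch that produced the pre-training loss:\n\n```\nxb0 = x_train[0:bs]  # same first batch used for the pre-training loss\nyb0 = y_train[0:bs]\nprint(loss_func(model(xb0), yb0), accuracy(model(xb0), yb0))\n```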
\n\nPre and post-training evaluations should be performed on the same batch, either the original 64 instances from the first training batch, or (if the intent is to demonstrate generalization loss) the test dataset.\n\ncc @albanD @jbschlosser", "url": "https://github.com/pytorch/tutorials/issues/3666", "state": "open", "labels": [ "core" ], "created_at": "2025-11-26T21:16:14Z", "updated_at": "2025-11-26T21:35:10Z", "user": "bogpetre" }, { "repo": "pytorch/pytorch", "number": 169112, "title": "`torch.compile(fullgraph=True, dynamic=True)` on CUDA fails when using `torch.utils.dlpack.to_dlpack` / `from_dlpack` (`torch._C._to_dlpack` skipped by Dynamo)", "body": "### \ud83d\udc1b Describe the bug\n\n### Summary\nWhen compiling a simple model that uses `torch.utils.dlpack.to_dlpack` / `from_dlpack` with:\nbackend=\"inductor\", fullgraph=True, dynamic=True, device=\"cuda\"\n\nthe eager CUDA execution works fine, but `torch.compile` fails during Dynamo tracing with:\n> torch._dynamo.exc.Unsupported: Attempted to call function marked as skipped\nDynamo does not know how to trace the builtin torch._C._to_dlpack.\n\nIn some setups this shows up only as a warning + graph break, but with `fullgraph=True` it turns into a hard error and the script terminates.\n\n### Minimal Repro\n```python\n# -*- coding: utf-8 -*-\nimport torch\nimport torch.nn as nn\n\nclass MyModel(nn.Module):\n def forward(self, x):\n if x.dtype == torch.bool:\n # bool path: go through uint8 + dlpack roundtrip and back to bool\n x_uint8 = x.to(torch.uint8)\n dlpack = torch.utils.dlpack.to_dlpack(x_uint8)\n converted = torch.utils.dlpack.from_dlpack(dlpack)\n return converted.bool()\n else:\n # non-bool path: direct dlpack roundtrip\n dlpack = torch.utils.dlpack.to_dlpack(x)\n return torch.utils.dlpack.from_dlpack(dlpack)\n\ndef my_model_function():\n return MyModel()\n\ndef GetInput():\n # bool tensor, shape [2], to exercise the bool branch\n return torch.rand(2).bool()\n\ndef main():\n if not torch.cuda.is_available():\n raise RuntimeError(\n \"CUDA is not available, but this repro expects device='cuda'.\"\n )\n\n device = torch.device(\"cuda\")\n\n # ---------- 1. Eager on CUDA: works ----------\n model_eager = my_model_function().to(device).eval()\n inp = GetInput().to(device)\n\n with torch.no_grad():\n out_eager = model_eager(inp)\n\n print(\"=== Eager CUDA Output ===\")\n print(\"out_eager:\", out_eager)\n print(\"shape:\", out_eager.shape)\n print(\"dtype:\", out_eager.dtype)\n print(\"device:\", out_eager.device)\n\n # ---------- 2. 
torch.compile on CUDA ----------\n from torch._inductor import config as inductor_config\n old_max_autotune = inductor_config.max_autotune\n inductor_config.max_autotune = True # emulate 'max-autotune' mode\n\n try:\n compiled_model = torch.compile(\n model_eager,\n backend=\"inductor\",\n fullgraph=True,\n dynamic=True,\n )\n\n with torch.no_grad():\n out_compiled = compiled_model(inp) # <-- fails here\n\n print(\"\\n=== compiled Output ===\")\n print(\"out_compiled:\", out_compiled)\n print(\"shape:\", out_compiled.shape)\n print(\"dtype:\", out_compiled.dtype)\n print(\"device:\", out_compiled.device)\n\n same = torch.equal(out_eager, out_compiled)\n print(\"\\n=== eager vs compiled elementwise equal ===\", bool(same))\n finally:\n inductor_config.max_autotune = old_max_autotune\n\nif __name__ == \"__main__\":\n main()\n```\n\n### Console output (abridged):\n```\n=== Eager CUDA Output ===\nout_eager: tensor([True, True], device='cuda:0')\nshape: torch.Size([2])\ndtype: torch.bool\ndevice: cuda:0\n\n.../torch/_dynamo/variables/functions.py:1598: UserWarning:\nDynamo does not know how to trace the builtin `torch._C._to_dlpack.` ...\n torch._dynamo.utils.warn_once(explanation + \"\\n\" + \"\\n\".join(hints))\n\nTraceback (most recent call last):\n ...\n File \".../torch/_dynamo/eval_frame.py\", line 841, in compile_wrapper\n raise e.with_traceback(None) from e.__cause__ # User compiler error\ntorch._dynamo.exc.Unsupported: Attempted to call function marked as skipped\n Explanation: Dynamo does not know how to trace the builtin `torch._C._to_dlpack.` ...\n Hint: If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it ...\n Hint: If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator\n or, if it is traceable, use `torch.compiler.allow_in_graph`.\n\n Developer debug context: module: torch._C, qualname: _to_dlpack, skip reason: \n\nfrom user code:\n File \"for_test.py\", line 11, in forward\n dlpack = torch.utils.dlpack.to_dlpack(x_uint8)\n```\n\n\n\n### Versions\n\n```\nPyTorch: 2.9.0 (installed via pip)\nCUDA: 12.x\ncuDNN: 9.x\nPython: 3.10.x\nOS: Ubuntu 22.04 (x86_64)\nGPU: NVIDIA RTX A6000 (repro uses cuda:0)\n```\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @amjames @Lucaskabela @jataylo", "url": "https://github.com/pytorch/pytorch/issues/169112", "state": "open", "labels": [ "triaged", "module: dlpack", "oncall: pt2", "module: dynamo" ], "created_at": "2025-11-26T08:13:38Z", "updated_at": "2025-12-04T02:10:01Z", "comments": 3, "user": "tinywisdom" }, { "repo": "pytorch/pytorch", "number": 169106, "title": "Why is fusion restricted here in dynamic mode?", "body": "https://github.com/pytorch/pytorch/blob/3ab08946d5052eaeda11d683d6a58e801a032755/torch/_inductor/ir.py#L3555\n\nI wrote a small demo myself and the numerical accuracy is perfect \n```python\nimport torch\nfrom torch import nn\nfrom typing import List\n\n#concat in dynamic dim\nclass MyCatMul(nn.Module):\n def __init__(self, n: int):\n super().__init__()\n self.n = n\n self.W = nn.Parameter(torch.randn(64, 64))\n\n def forward(self, xs: List[torch.Tensor]):\n assert len(xs) == self.n, f\"need {self.n} tensors\"\n # last = torch.sigmoid(xs[-1] @ self.W)\n last = torch.sigmoid(xs[-1]) \n outs = list(xs[:-1]) + [last]\n return torch.cat(outs, dim=0)\n\n\nn = 15\nmodel = MyCatMul(n).cuda()\nx_list = []\nfor i in range(n):\n a = 
torch.randint(2, 120, (1,)).item() \n x_list.append(torch.randn(a, 64, device='cuda'))\n\nfrom torch.export import export, Dim\n\ndynamic_shapes = [\n {0: Dim(f\"b{i}\", min=1, max=2048), 1: 64}\n for i in range(n)\n]\n\nwith torch.no_grad():\n out = model(x_list)\n ep = export(model, (x_list,), dynamic_shapes=[dynamic_shapes])\n torch._inductor.aoti_compile_and_package(\n ep, package_path=\"./model.pt2\", \n inductor_configs={\"max_autotune\": True, \n \"epilogue_fusion\": True, \n \"permute_fusion\": True, \n \"max_autotune_pointwise\": True,\n \"max_autotune_gemm\":True,\n \"freezing\":True,\n }\n )\n aot_model = torch._inductor.aoti_load_package(\"./model.pt2\")\n ################diff##############\n for i in range(100):\n test_input = []\n for ii in range(n):\n a = torch.randint(2, 1024, (1,)).item()\n test_input.append(torch.randn(a, 64, device='cuda'))\n\n out_raw = model(test_input)\n out_aot = aot_model(test_input)\n\n diff = torch.abs(out_raw - out_aot)\n max_val, max_idx = diff.max(), diff.argmax()\n coord = torch.unravel_index(max_idx, out_raw.shape)\n val_raw = out_raw.flatten()[max_idx]\n val_aot = out_aot.flatten()[max_idx]\n avg_err = diff.mean().item()\n if max_val > 1e-5:\n print(f\"iter {i}: max_err {max_val.item():.8f} @ coord {coord} \"\n f\"raw={val_raw.item():.8f} aot={val_aot.item():.8f} \"\n f\"avg_err {avg_err:.8f}\")\n raise \"error\"\n\n print(\"pass\")\n\n\n```\n\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @aakhundov @coconutruben @jataylo", "url": "https://github.com/pytorch/pytorch/issues/169106", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: inductor" ], "created_at": "2025-11-26T03:56:02Z", "updated_at": "2025-12-10T04:43:43Z", "comments": 3, "user": "Jin-TaoZhang" }, { "repo": "pytorch/ao", "number": 3389, "title": "Is it possible to export a QAT model in AWQ Format?", "body": "I'm new to torchao and QAT but I'm pretty comfortable with PTQ techniques like AWQ and GPTQ. My deployment pipeline requires AWQ format (safetensors supported by autoawq or gptqmodel's new AWQ integration, needs to be in uint32 like Int4PackingFormat.PLAIN_INT32). I want to train a model with Int4WeightOnlyConfig and but it's confusing as to how I convert the final model into AWQ format, as AWQ format is supported but is this only for PTQ? Unless I'm missing something, you can save to roughly the same format (PLAIN_INT32 but only on xpu?) AND have AWQ support but there's no way to export to this format? If wrap my Int4WeightOnlyConfig in an AWQConfig, will it be trainable or only able to calibrate? Could I otherwise use something along the lines to the converter defined in [this project](https://github.com/gau-nernst/gemma3-int4/blob/92517e8cac07f5caa3e3c98f26931b9046a0fa38/convert_flax.py#L232)?", "url": "https://github.com/pytorch/ao/issues/3389", "state": "closed", "labels": [ "triaged" ], "created_at": "2025-11-25T17:30:03Z", "updated_at": "2025-12-12T17:27:25Z", "comments": 10, "user": "ambroser53" }, { "repo": "pytorch/executorch", "number": 15978, "title": "qnn_executor_runner - mismatch in the skel files ?", "body": "hi, \n\nim testing qnn_executor_runner on s25 ultra, \na Snapdragon 8 Gen 4 processor.\n\nit seems qnn backend choses libQnnHtpV79Skel.so as the backend\n\nbut these messages seem to point to some mismatch ? it tries to call hmx_v73_convf16 ? \ni.e. shouldnt it call hmx_v79_convf16 ? 
\n\n\n\nV b037a:4006: CDSP0:[R]: Process \"/frpc/f05c4930 qnn_executor_ru\" crashed in thread \"nn_3e56a57b\" due to TLBMISS RW occurrence\n2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: Crashed Shared Object \"./libQnnHtpV79Skel.so\" load address : 0x01000000 \n2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<015E5C3C>] hmx_v73_convf16_NxN_stride1+0x3C53C: (./libQnnHtpV79Skel.so) \n2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<015E5C38>] hmx_v73_convf16_NxN_stride1+0x3C538: (./libQnnHtpV79Skel.so) \n2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<015E5D74>] hmx_v73_convf16_NxN_stride1+0x3C674: (./libQnnHtpV79Skel.so) \n2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<01546168>] continue_execution_bkgrnd_thread+0xA8: (./libQnnHtpV79Skel.so) \n2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<0120EC94>] _ZN5Graph18exec_bkgrnd_workerEP12HexagonNNEnvPS_N9GraphData8ListTypeEN4hnnx3OsSE+0xD4: (./libQnnHtpV79Skel.so) \n2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<01219EA0>] _ZNK5Graph31ubwcd_get_corresponding_surfaceEPKv+0x9E0: (./libQnnHtpV79Skel.so) \n\n\nand output from adb shell\n#./qnn_executor_runner --model_path ./my_model_fp16.pte --input_list_path ./raw_list.txt \n\n[INFO] [Qnn ExecuTorch]: Deserializing processed data using QnnContextCustomProtocol\n[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 1\n[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2\n[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.\n[INFO] [Qnn ExecuTorch]: QnnContextCustomProtocol expected magic number: 0x5678abcd but get: 0x2000000\n[INFO] [Qnn ExecuTorch]: Running level=1 optimization.\nI 00:00:00.150474 executorch:qnn_executor_runner.cpp:313] Method loaded.\nE 00:00:00.156807 executorch:method.cpp:1274] Output 0 is memory planned, or is a constant. Cannot override the existing data pointer.\nI 00:00:00.156838 executorch:qnn_executor_runner.cpp:373] ignoring error from set_output_data_ptr(): 0x2\nE 00:00:00.157118 executorch:method.cpp:1274] Output 1 is memory planned, or is a constant. Cannot override the existing data pointer.\nI 00:00:00.157144 executorch:qnn_executor_runner.cpp:373] ignoring error from set_output_data_ptr(): 0x2\nE 00:00:00.158031 executorch:method.cpp:1274] Output 2 is memory planned, or is a constant. Cannot override the existing data pointer.\nI 00:00:00.158057 executorch:qnn_executor_runner.cpp:373] ignoring error from set_output_data_ptr(): 0x2\nI 00:00:00.158069 executorch:qnn_executor_runner.cpp:376] Inputs prepared.\nI 00:00:00.158198 executorch:qnn_executor_runner.cpp:382] Number of inputs: 1\nI 00:00:00.178327 executorch:qnn_executor_runner.cpp:490] Perform 0 inference for warming up\nI 00:00:00.178343 executorch:qnn_executor_runner.cpp:496] Start inference (0)\n[ERROR] [Qnn ExecuTorch]: QnnDsp DspTransport call failed, error 0x00000010\n\n[ERROR] [Qnn ExecuTorch]: QnnDsp Error from rpc transport\n\n[ERROR] [Qnn ExecuTorch]: QnnDsp Graph forward failed in execution with err 1003\n\n[ERROR] [Qnn ExecuTorch]: qnn_graph_execute failed. 
Error 1003\nE 00:00:00.192908 executorch:QnnExecuTorchBackend.cpp:176] Fail to execute graph\nE 00:00:00.192912 executorch:method.cpp:1426] CALL_DELEGATE execute failed at instruction 0: 0x1\nI 00:00:00.192924 executorch:qnn_executor_runner.cpp:514] 1 inference took 14.576000 ms, avg 14.576000 ms\nF 00:00:00.192943 executorch:qnn_executor_runner.cpp:519] In function main(), assert failed (status == Error::Ok): Execution of method forward failed with status 0x1\nAborted\n\ncc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin", "url": "https://github.com/pytorch/executorch/issues/15978", "state": "open", "labels": [ "partner: qualcomm", "module: qnn" ], "created_at": "2025-11-25T15:14:00Z", "updated_at": "2025-12-19T02:26:49Z", "comments": 3, "user": "eliyam32" }, { "repo": "pytorch/executorch", "number": 15973, "title": "What should I do if there is no SoC entry for my processor?", "body": "### \ud83d\udcda The doc issue\n\nHello. I have a device with a Snapdragon 685 processor, and it is not on the Qualcomm SoCs list. In this case, is the only option left for me to convert via XNNPACK? And will a model converted via XNNPACK work on Android?\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin", "url": "https://github.com/pytorch/executorch/issues/15973", "state": "open", "labels": [ "partner: qualcomm", "module: qnn" ], "created_at": "2025-11-25T13:29:32Z", "updated_at": "2025-11-26T01:50:30Z", "user": "kejndan" }, { "repo": "pytorch/torchtitan", "number": 2086, "title": "mxfp8 MoE training is slower for DeepSeekV3 16b and Qwen models", "body": "I have tested **mxfp8** training for **Qwen** MoE models and for **DeepSeekV3 16b** on **B200**. It did not show any speedup and even slowed down in some cases when I use mxfp8 (quantize.grouped_mm.mx).\n\nI found [this](https://github.com/pytorch/ao/tree/main/torchao/prototype/moe_training#low-precision-moe-training) in the torchao repo, saying that mxfp8 gives up to a 1.6x speedup for DeepSeekV3 671b. 
It looks like it only works for big MoE models?\n\nI have tried benchmarking a single MoE layer as described [here](https://github.com/pytorch/ao/tree/main/torchao/prototype/moe_training#benchmark-single-moe-layer-forward--backward-pass).\nThis is what I got with the dims used in [DeepSeekV3 16b](https://github.com/pytorch/torchtitan/blob/7e10d6052a8029592a37d1c843dc7949a6b30043/torchtitan/models/deepseek_v3/__init__.py#L78) [dim=2048, moe_inter_dim=1408]:\n```\n$ python -m benchmarks.prototype.moe_training.bench_moe_layer --recipe mxfp8 --local_batch_size=16 --dim=2048 --hidden_dim=1408 --local_num_experts=8\ntotal_M: 131072, N: 1408, K: 2048\nbf16 time: 16.882 ms\nmxfp8 time: 17.710 ms\nspeedup: 0.953x\n```\n\nI couldn't get any speedup on Qwen3 [235B-A22B](https://github.com/pytorch/torchtitan/blob/7e10d6052a8029592a37d1c843dc7949a6b30043/torchtitan/models/qwen3/__init__.py#L168) and [30B-A3B](https://github.com/pytorch/torchtitan/blob/7e10d6052a8029592a37d1c843dc7949a6b30043/torchtitan/models/qwen3/__init__.py#L145) either.\nBenchmarking of the MoE layer with the dims from Qwen3 235B-A22B [dim=4096, moe_inter_dim=1536] is as follows:\n```\n$ python -m benchmarks.prototype.moe_training.bench_moe_layer --recipe mxfp8 --local_batch_size=16 --dim=4096 --hidden_dim=1536 --local_num_experts=8\ntotal_M: 131072, N: 1536, K: 4096\nbf16 time: 34.154 ms\nmxfp8 time: 34.196 ms\nspeedup: 0.999x\n```\n\n\nIs there any way I can get a speedup using mxfp8 for the above models?\n", "url": "https://github.com/pytorch/torchtitan/issues/2086", "state": "open", "labels": [], "created_at": "2025-11-25T10:33:42Z", "updated_at": "2025-11-26T16:44:51Z", "comments": 2, "user": "Yerniyaz" }, { "repo": "pytorch/pytorch", "number": 169050, "title": "[Graph Partition] [Inductor] UnboundLocalError: cannot access local variable 'buf271' where it is not associated with a value", "body": "### \ud83d\udc1b Describe the bug\n\nUsing \"reduce-overhead\" mode and the \"inductor\" backend for training, with `torch._inductor.config.graph_partition = True`. 
Run into inductor gen-code bug:\n\n```\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py\", line 1044, in _fn\n[rank0]: return fn(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py\", line 1130, in forward\n[rank0]: return compiled_fn(full_args)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py\", line 339, in runtime_wrapper\n[rank0]: all_outs = call_func_at_runtime_with_args(\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py\", line 129, in call_func_at_runtime_with_args\n[rank0]: out = normalize_as_list(f(args))\n[rank0]: ^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py\", line 103, in g\n[rank0]: return f(*args)\n[rank0]: ^^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/autograd/function.py\", line 581, in apply\n[rank0]: return super().apply(*args, **kwargs) # type: ignore[misc]\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py\", line 2118, in forward\n[rank0]: fw_outs = call_func_at_runtime_with_args(\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py\", line 129, in call_func_at_runtime_with_args\n[rank0]: out = normalize_as_list(f(args))\n[rank0]: ^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py\", line 526, in wrapper\n[rank0]: return compiled_fn(runtime_args)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py\", line 690, in inner_fn\n[rank0]: unwrapped_outs = compiled_fn(unwrapped_args)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py\", line 724, in inner_fn\n[rank0]: outs = compiled_fn(args)\n[rank0]: ^^^^^^^^^^^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_inductor/output_code.py\", line 613, in __call__\n[rank0]: return self.current_callable(inputs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/home/tiger/.local/lib/python3.11/site-packages/torch/_inductor/utils.py\", line 3017, in run\n[rank0]: out = model(new_inputs)\n[rank0]: ^^^^^^^^^^^^^^^^^\n[rank0]: File \"/tmp/torchinductor_tiger/tmpngii2htx/na/cnabkmabktacecyr75a7sgnkip7pjfcd672lse2ndmzilbphpxxh.py\", line 5071, in call\n[rank0]: partition1_args = [buf301, buf305, buf306, primals_42, buf311, buf286, primals_45, buf294, buf271, s54, u0, u1]\n[rank0]: ^^^^^^\n[rank0]: UnboundLocalError: cannot access local variable 'buf271' where it is not associated with a value\n```\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.9.1+cu129\nIs debug build: False\nCUDA used to build PyTorch: 12.9\nROCM used to build PyTorch: N/A\n\nOS: Debian GNU/Linux 12 (bookworm) (x86_64)\nGCC version: (Debian 12.2.0-14+deb12u1) 12.2.0\nClang version: Could not collect\nCMake version: version 3.31.6\nLibc version: glibc-2.36\n\nPython version: 3.11.2 (main, Apr 28 2025, 14:11:48) 
[GCC 12.2.0] (64-bit runtime)\nPython platform: Linux-5.15.152.bsk.10-amd64-x86_64-with-glibc2.36\nIs CUDA available: True\nCUDA runtime version: 12.9.86\nCUDA_MODULE_LOADING set to: \nGPU models and configuration: GPU 0: NVIDIA H800\nNvidia driver version: 535.261.03\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.11.0\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByt", "url": "https://github.com/pytorch/pytorch/issues/169050", "state": "open", "labels": [ "triaged", "module: cuda graphs", "oncall: pt2", "module: inductor" ], "created_at": "2025-11-25T08:29:02Z", "updated_at": "2025-12-01T22:19:24Z", "user": "wmhst7" }, { "repo": "pytorch/pytorch", "number": 169035, "title": "[Question] Why torch.ops.symm_mem.multimem_all_reduce_() don't support e4m3, e5m2, fp16?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHi PyTorch developer,\n\nIs there any reason why torch.ops.symm_mem.multimem_all_reduce_() don't support e4m3, e5m2, fp16? From CUDA PTX doc https://docs.nvidia.com/cuda/parallel-thread-execution/#data-movement-and-conversion-instructions-multimem, those data type were supported in multimem.ld_reduce. From latest NCCL code https://github.com/NVIDIA/nccl/blob/master/src/device/symmetric/generate.py#L54, NCCL also support multimem.ld_reduce based fp8 & fp16.\n\nIt seems like enable those data type doesn't require much engineering efforts. My guess is there's likely some accuracy issue PyTorch folks have found that block fp16/e5m2/e4m3 integration? Can we get more info on this? Also, should we expected torch symmetric memory to support fp16 & fp8 in near future?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci", "url": "https://github.com/pytorch/pytorch/issues/169035", "state": "open", "labels": [ "oncall: distributed", "module: symm_mem" ], "created_at": "2025-11-25T02:39:22Z", "updated_at": "2025-11-26T15:00:34Z", "comments": 0, "user": "XiaoSong9905" }, { "repo": "pytorch/pytorch", "number": 169033, "title": "Pytorch CI is partially paused for the time being (updated 11/27)", "body": "## Current Status\n*ongoing*. Linux and Windows runners are re-enabled as of 12pm 11/27. Mac runners and ROCM/H100 still disabled.\n\n## Error looks like\n*No CI was running at all. 
No merges were processed.*\n\n## Incident timeline (all times pacific)\n*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*\n\n## User impact\n*How does this affect users of PyTorch CI?*\n\n## Root cause\n*What was the root cause of this issue?*\n\n## Mitigation\n*How did we mitigate the issue?*\n\n## Prevention/followups\n*How do we prevent issues like this in the future?*\n\n\ncc @seemethere @pytorch/pytorch-dev-infra", "url": "https://github.com/pytorch/pytorch/issues/169033", "state": "closed", "labels": [ "module: ci", "triaged" ], "created_at": "2025-11-25T01:57:30Z", "updated_at": "2025-12-07T20:08:54Z", "comments": 3, "user": "malfet" }, { "repo": "pytorch/pytorch", "number": 169002, "title": "Torch dynamo fails to do proper type promotion during export", "body": "### \ud83d\udc1b Describe the bug\n\nWhen I tried to use torch.where with a boolean tensor, a float, and and int, torch dynamo tripped up on doing type promotion, and gave me a really unclear error message on what was wrong. When I explicitly converted the int input to float, it worked. Can we develop proper type promotion in the tracer internally?\n\n\nError message:\n```\nExporting to ONNX with dynamo=True...\nW1124 11:47:57.487000 1846666 miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_compat.py:114] Setting ONNX exporter to use operator set version 18 because the requested opset_version 17 is a lower version than we have implementations for. Automatic version conversion will be performed, which may not be successful at converting to the requested version. If version conversion is unsuccessful, the opset version of the exported model will be kept at 18. Please consider setting opset_version >=18 to leverage latest ONNX features\n[torch.onnx] Obtain model graph for `TestModel()` with `torch.export.export(..., strict=False)`...\n[torch.onnx] Obtain model graph for `TestModel()` with `torch.export.export(..., strict=False)`... \u2705\n[torch.onnx] Run decomposition...\n[torch.onnx] Run decomposition... 
\u274c\nTraceback (most recent call last):\n File \"/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_core.py\", line 1416, in export\n decomposed_program = _prepare_exported_program_for_export(\n File \"/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_core.py\", line 984, in _prepare_exported_program_for_export\n _fx_passes.insert_type_promotion_nodes(graph_module)\n File \"/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_fx_passes.py\", line 28, in insert_type_promotion_nodes\n passes.InsertTypePromotion(module).run()\n File \"/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/_pass.py\", line 235, in run\n return self._run(*args, **kwargs)\n File \"/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py\", line 1666, in _run\n self.interpreter.run(*fake_args)\n File \"/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/fx/interpreter.py\", line 174, in run\n self.env[node] = self.run_node(node)\n File \"/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py\", line 1583, in run_node\n self._maybe_promote_node(n, rule)\n File \"/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py\", line 1564, in _maybe_promote_node\n self._rerun_node_after_type_promotion(node, type_promotion_info.out_dtype)\n File \"/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py\", line 1389, in _rerun_node_after_type_promotion\n node.target = find_compatible_op_overload(target.overloadpacket, args, kwargs)\n File \"/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py\", line 1318, in find_compatible_op_overload\n assert new_op_overload.overloadpacket == op, (\nAssertionError: Expected same OpOverload packet, got prim.device != aten.where\n```\nReproduce:\n```python\n\nimport os\n\nimport torch\n\n# Disable CUDA to match the user's environment\nos.environ['CUDA_VISIBLE_DEVICES'] = ''\ntorch.cuda.is_available = lambda: False\n\n\nclass TestModel(torch.nn.Module):\n \"\"\"Simple model that reproduces the torch.where type promotion issue.\"\"\"\n\n def __init__(self):\n super().__init__()\n\n def forward(self, attention_mask):\n \"\"\"Forward pass that uses torch.where with scalar arguments.\n \"\"\"\n # This fails with Expected same OpOverload packet, got prim.device != aten.where\n attention_mask = torch.where(attention_mask, 0, -1000.0)\n\n # This works!!\n # attention_mask = torch.where(attention_mask, float(0), -1000.0)\n return attention_mask\n\n\n\"\"\"Main function to run the reproduction.\"\"\"\nprint(\"Creating model...\")\nmodel = TestModel()\nmodel.eval()\nmodel = model.cpu()\n\n# Shape: [batch, num_heads, seq_len, seq_len] or similar 4D shape\nprint(\"Creating sample inputs...\")\nattention_mask = torch.randn(1, 1, 1505, 1505) > 0 # 4D boolean tensor\nattention_mask = attention_mask.cpu()\n\nprint(f\"Attention mask shape: {attention_mask.shape}\")\nprint(f\"Attention mask dtype: {attention_mask.dtype}\")\n\n# Test forward pass first\nprint(\"\\nTesting forward pass...\")\nwith torch.no_grad():\n output = model(attention_mask)\nprint(f\"Forward pass successful. 
Output shape: {output.shape}\")\nprint(f\"Output dtype: {output.dtype}\")\n\n# Export to ONNX with dynamo=True to trigger type promotion pass\nprint(\"\\nExporting to ONNX with dynamo=True...\")\n\nonnx_path = \"where_reproduce.onnx\"\n\ntorch.onnx.export(\n model,\n (atten", "url": "https://github.com/pytorch/pytorch/issues/169002", "state": "open", "labels": [ "oncall: pt2", "oncall: export" ], "created_at": "2025-11-24T19:51:33Z", "updated_at": "2025-12-02T20:20:47Z", "comments": 1, "user": "aboubezari" }, { "repo": "pytorch/pytorch", "number": 169000, "title": "Dr CI is temporarily not working due to API fairewall", "body": "\n## Current Status\nongoing\n\n## Incident timeline (all times pacific)\nSince Nov 21st, 2025\n\n## User impact\n*How does this affect users of PyTorch CI?*\nThe jobs and Pr that depends on Dr CI will see no update.\n\n## Root cause\n*What was the root cause of this issue?*\nWe changed the configuration of our firewall, this changes affected all bot jobs, and can make bots have failed api call \n\n## Mitigation\n*How did we mitigate the issue?*\ncurrently dev infra team is working on fixing it\n\n", "url": "https://github.com/pytorch/pytorch/issues/169000", "state": "closed", "labels": [ "ci: sev" ], "created_at": "2025-11-24T19:22:26Z", "updated_at": "2025-12-01T22:13:09Z", "comments": 3, "user": "yangw-dev" }, { "repo": "pytorch/pytorch", "number": 168993, "title": "[CI][B200] DGXB200-07 Is Having NVIDIA-CONTAINER-TOOLKIT Related Issues", "body": "## Current Status\nOn-going \n## Error looks like\nOnly affecting periodic jobs, not PR blocking. \nErrors are like: (Using https://github.com/pytorch/pytorch/actions/runs/19630438757/job/56210849037 for example) \n\ndocker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running prestart hook #0: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'\nnvidia-container-cli: detection error: driver rpc error: timed out: unknown\n\n\n\n## Incident timeline (all times pacific)\nFirst noticed this from this job: https://github.com/pytorch/pytorch/actions/runs/19626056398/job/56197556152\nWhich was Nov 23rd 10pm. \n\n## User impact\n*How does this affect users of PyTorch CI?*\nCommits landed in trunk may be run with B200 periodic job and 1/3 chance the job would land on dgxb200-07 runner, which is the broken one. \n\n## Root cause\n*What was the root cause of this issue?*\nNot root-caused yet. But only dgxb200-07 is affected. \n\n## Mitigation\n*How did we mitigate the issue?*\nTo be figured out.\n\n## Prevention/followups\n*How do we prevent issues like this in the future?*\nTo be figured out. \n\ncc @ptrblck @msaroufim @eqy @jerryzh168 @tinglvv @seemethere @malfet @pytorch/pytorch-dev-infra @atalman @huydhn ", "url": "https://github.com/pytorch/pytorch/issues/168993", "state": "closed", "labels": [ "module: cuda", "module: ci", "triaged" ], "created_at": "2025-11-24T18:35:16Z", "updated_at": "2025-12-02T19:18:32Z", "comments": 2, "user": "nWEIdia" }, { "repo": "pytorch/pytorch", "number": 168965, "title": "max_autotuned BMM produces wrong result when multiple threads are used", "body": "### \ud83d\udc1b Describe the bug\n\nI noticed that when I use aoti_compile_and_package with max_autotune, in certain conditions the result is wrong. Specifically:\n1. It's important to `set_num_threads(4)`. With 1 threads it doesn't reproduce\n2. 
It's important to do `import cv2`, without it the bug doesn't reproduce\n3. Adding `os.environ['OPENCV_FOR_OPENMP_DYNAMIC_DISABLE'] = '1'` before import fixes the issue\n\nMy explanation of this behavior is that code produced by max_autotune looks like this\n```\nvoid cpp_CppMicroGemmFP32Vec_threaded_mm(const float* X, const float* W, float* Y, const int64_t ks_b_index)\n...\n #pragma omp parallel num_threads(4)\n {\n \n const int tid = omp_get_thread_num();\n const int64_t k_group_id = tid / num_Kt_blocks;\n const int64_t k_slice_id = tid % num_Kt_blocks;\n...\n```\nand the code relies that this block would be really executed 4 times in parallel. But if you call `omp_set_dynamic`, openmp can ignore this thread hint and run the code less times that leads to wrong results and this behavior is documented [here](https://www.openmp.org/spec-html/5.0/openmpsu35.html#x55-860002.6.1). Unfortunatly omp_set_dynamic is called while I'm importing `cv2` library, specifically [here](https://github.com/opencv/opencv/blob/4.x/modules/core/src/parallel.cpp#L470) when just loading shared library.\nSo, I think it should be fixed somehow, to not depend on this kind of OMP behavior, and maybe even use at::parallel_for instead, because different parallelizing backends can be enabled, not necessary openmp\n\n[This](https://colab.research.google.com/drive/1fDz0ZcDbYhluSTQ-ldPcZebS65YPP5KX?usp=sharing) notebook should reproduce the bug, but I didn't manage to do it in colab because there max_autotune chooses different implementation and pytorch version is also different.\n\n[data.zip](https://github.com/user-attachments/files/23722728/data.zip)\n\nOn pytorch 2.9 it doesn't reproduce, but I noticed that the generated code is using different constants. Maybe layout of input tensors in BMM has changed, so the bug isn't triggered, but anyway the code still relies on the invariant that actuall executed count is equal to `#pragma omp parallel num_threads=N`\n\n### Error logs\n\n_No response_\n\n### Versions\n\n```\nCollecting environment information...\nPyTorch version: 2.7.0\nIs debug build: False\nCUDA used to build PyTorch: 12.4\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.3 LTS (aarch64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: Could not collect\nCMake version: Could not collect\nLibc version: glibc-2.39\n\nPython version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-5.15.0-134-generic-aarch64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA L40S\nNvidia driver version: 550.127.05\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: aarch64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nCPU(s): 128\nOn-line CPU(s) list: 0-127\nVendor ID: ARM\nModel name: Neoverse-N1\nModel: 1\nThread(s) per core: 1\nCore(s) per cluster: 128\nSocket(s): -\nCluster(s): 1\nStepping: r3p1\nFrequency boost: disabled\nCPU(s) scaling MHz: 41%\nCPU max MHz: 3000.0000\nCPU min MHz: 1000.0000\nBogoMIPS: 50.00\nFlags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp\nL1d cache: 8 MiB (128 instances)\nL1i cache: 8 MiB (128 instances)\nL2 cache: 128 MiB (128 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-127\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb 
multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; __user pointer sanitization\nVulnerability Spectre v2: Mitigation; CSV2, BHB\nVulnerability Srbds: Not affected", "url": "https://github.com/pytorch/pytorch/issues/168965", "state": "open", "labels": [ "triaged", "module: correctness (silent)", "oncall: pt2", "oncall: export", "oncall: cpu inductor", "module: aotinductor" ], "created_at": "2025-11-24T12:41:52Z", "updated_at": "2025-12-11T12:23:10Z", "comments": 6, "user": "mstebelev" }, { "repo": "pytorch/torchtitan", "number": 2077, "title": "Context Parallel for Qwen3", "body": "Thanks for supporting Qwen3 models!\n\n> CP is not supported currently because of RoPE embedding implementation details.\n\nAny plan to support CP + EP for Qwen3 MoE models?\nIf no plan in short time, can you help guide how can I implement it myself?", "url": "https://github.com/pytorch/torchtitan/issues/2077", "state": "open", "labels": [ "high priority", "triage review" ], "created_at": "2025-11-24T08:09:30Z", "updated_at": "2025-12-15T23:56:00Z", "comments": 8, "user": "unavailableun" }, { "repo": "pytorch/executorch", "number": 15956, "title": "[QNN] Support for in-place modification of mutable buffers (weights) within the QNN delegate?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n### Description\nI am working on a model where certain buffers (serving as weights) are updated in-place during the `forward` pass (e.g., zero-order optimization algorithm). \n\nI attempted to export this model and lower it to the QNN backend. My goal is to have the entire graph, including the weight update logic, executed on the QNN backend to avoid context switching between CPU and NPU.\n\n### Current Behavior\nCurrently, it seems that:\n1. The partitioner either rejects the node performing the mutation (fallback to CPU).\n2. Or, if forced, the compiled binary does not reflect the updated weights in subsequent runs (weights are treated as static constants baked into the context binary).\n\n### Question / Request\n1. **Is there native support in the QNN backend** to handle mutable buffers that are modified inside the delegated graph?\n2. If not, is the only recommended workaround to **lift the buffers to graph inputs/outputs** (managing state on the CPU)?\n3. Are there any specific compiler specs or flags (e.g., `take_over_mutable_buffer` equivalent for QNN) that I should be enabling?\n\n### Minimal Reproducible Example (MRE)\nHere is a simplified version of the logic:\n\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom executorch.backends.qualcomm.partition.qnn_partitioner import QnnPartitioner\n# ... 
other imports ...\n\nclass MutableModel(nn.Module):\n def __init__(self):\n super().__init__()\n # Registering a buffer that acts like a weight\n self.register_buffer(\"dynamic_weight\", torch.empty(10, 10))\n\n def forward(self, x):\n # Update the weight in-place during inference\n self.dynamic_weight.add_(0.01) \n # Use the updated weight for computation\n out = F.linear(x, self.dynamic_weight)\n return out\n\n# Standard export and lowering flow...\n# ...\n\n### Alternatives\n\nModify the QNN backend kernel to support weight updates\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_", "url": "https://github.com/pytorch/executorch/issues/15956", "state": "closed", "labels": [], "created_at": "2025-11-24T06:07:43Z", "updated_at": "2025-11-24T08:40:16Z", "comments": 0, "user": "qqqqqqqwy" }, { "repo": "pytorch/executorch", "number": 15954, "title": "qnn_llama_runner on SA8295 outputs repetitive \u201csp\u201d with Qwen3-1.7B after ExecuTorch export", "body": "### \ud83d\udc1b Describe the bug\n\nuse main commit b4d72f1e271915e9c0e1d313753a1eec840fbdee\n\nI have tried some settings, the setting:( when I use other setting, the convert would be failed, and the error \n\" some op has incorrect Value 68, expected >= 73\"\nor\n \" [ERROR] [Qnn ExecuTorch]: fa_alloc.cc:2462::ERROR:graph requires estimated allocation of 2315388 KB, limit is 2097152 KB [ERROR] [Qnn ExecuTorch]: graph_prepare.cc:845::ERROR:error during serialize: memory usage too large\",\n\nWhen using default_quant_dtype = QuantDtype.use_8a8w and disabling the 16a4w_block quantization, the quantization/conversion completes successfully\n`\nclass Qwen3_1_7BQuantRecipe(StaticLLMQuantRecipe):\n default_quant_dtype = QuantDtype.use_8a8w\n def __init__(self, verbose: bool = False):\n super().__init__()\n\n self.recipe = (\n QuantRecipe(\n self.default_quant_dtype,\n False,\n act_observer=MinMaxObserver,\n granularity=QuantGranularity.PER_TENSOR,\n verbose=verbose,\n )\n .add_regex(\n {\n r\"output\\.conv\",\n },\n QuantDtype.use_16a8w,\n False,\n act_observer=MinMaxObserver,\n granularity=QuantGranularity.PER_CHANNEL,\n )\n )\n self.recipe.custom_quant_annotations.append(annotate_kv_8bit)\n`\n\nhowever, when running qnn_llama_runner with Qwen3-1.7B converted via ExecuTorch (hybrid QNN .pte) on a Qualcomm SA8295 device, the model generates a long sequence of \u201csp\u201d . 
\n\n` <|im_start|>user\nwhat is 1+1<|im_end|>\n<|im_start|>assistant.addHandlertoHaveBeenCalled sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp`\n\nI hope to get your help or suggestions. Thanks very much.\n\n### Versions\n\ncommit b4d72f1e271915e9c0e1d313753a1eec840fbdee\n\ncc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin", "url": "https://github.com/pytorch/executorch/issues/15954", "state": "closed", "labels": [ "partner: qualcomm", "module: qnn" ], "created_at": "2025-11-24T03:28:00Z", "updated_at": "2025-12-04T03:41:00Z", "comments": 12, "user": "lansexinhu" }, { "repo": "pytorch/pytorch", "number": 168940, "title": "[DTensor] aten.max.dim returns wrong indices when using DTensor", "body": "### \ud83d\udc1b Describe the bug\n\nI found that current strategy of `aten.max.dim` may get incorrect indices output if sharded the dim for maximization.\n\nSample code:\n```python\nimport torch\nfrom torch.distributed.tensor import distribute_tensor, Shard\n\nfrom torch.testing._internal.common_utils import run_tests\nfrom torch.testing._internal.distributed._tensor.common_dtensor import DTensorTestBase, with_comms\n\n\nclass TestRegisterSharding(DTensorTestBase):\n @with_comms\n def test_max_dim(self):\n mesh = self.build_device_mesh()\n\n x = torch.randn(4, 4, device=\"cuda\")\n\n max_value, max_indices = torch.max(x, dim=1)\n\n dist_x = distribute_tensor(x, mesh, [Shard(1)])\n\n dist_max_value, dist_max_indices = torch.max(dist_x, dim=1)\n\n print(\"x:\", x)\n print(\"max_value:\", max_value)\n print(\"max_indices:\", max_indices)\n print(\"dist_max_value:\", dist_max_value.full_tensor())\n print(\"dist_max_indices:\", dist_max_indices.full_tensor())\n\n\nif __name__ == \"__main__\":\n run_tests()\n```\n\nResult:\n```python\nx: tensor([[-1.6165, 0.5685, -0.5102, -0.9113],\n [-1.1555, -0.2262, -1.2891, 1.0654],\n [-0.7167, -0.5333, 0.2078, -0.9798],\n [ 0.7447, -0.2395, 0.2737, 0.0920]], device='cuda:0')\nmax_value: tensor([0.5685, 1.0654, 0.2078, 0.7447], device='cuda:0')\nmax_indices: tensor([1, 3, 2, 0], device='cuda:0')\ndist_max_value: 
tensor([0.5685, 1.0654, 0.2078, 0.7447], device='cuda:0')\ndist_max_indices: tensor([0, 0, 0, 0], device='cuda:0')\n```\n\nEach rank gets a shape(4, 1) local tensor to call `max.dim` in this case, and the local result of max indices is [0, 0, 0, 0]. The framework doesn't process the offset of index, which leads to an incorrect global result when the relavant dim is sharded.\n\nIs there a good way to implement a strategy that supports sharding the index dim?\n\n### Versions\n\ntorch v2.9.0\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @tianyu-l @XilunWu @SherlockNoMad", "url": "https://github.com/pytorch/pytorch/issues/168940", "state": "open", "labels": [ "oncall: distributed", "module: dtensor" ], "created_at": "2025-11-24T02:36:58Z", "updated_at": "2025-12-12T14:40:32Z", "comments": 11, "user": "qqq6op" }, { "repo": "pytorch/torchtitan", "number": 2073, "title": "Slow Dataloader should use num_worker > 1", "body": "I am trying to use torchtitan with procedurally generated data (data augmentation). This process is CPU-intensive and I strongly do not want to store each sample before. Under this setup, `torchtitan` is really slow to train and I'm seeing my MFU dropping by 4-5x compared to unbottlenecked dataloader (no data augmentation). \n\nI have seen a related problem reported [here](https://github.com/pytorch/torchtitan/issues/1663) with some caveats on how to do multiprocess dataloader effectively. It would be cool to have an official implementation of multiprocess dataloader with `num_worker>1`", "url": "https://github.com/pytorch/torchtitan/issues/2073", "state": "closed", "labels": [], "created_at": "2025-11-21T08:13:27Z", "updated_at": "2025-12-19T01:45:50Z", "comments": 3, "user": "hypnopump" }, { "repo": "pytorch/FBGEMM", "number": 5161, "title": "Does anyone know how to build fbgemm_gpu from source without fbgemm", "body": "I'd like to only build fbgemm_gpu from source without building fbgemm.\n\nSeems that\n```\ncd fbgemm_gpu\npython setup.py install\n```\nmissed some arguments?", "url": "https://github.com/pytorch/FBGEMM/issues/5161", "state": "closed", "labels": [], "created_at": "2025-11-21T07:40:18Z", "updated_at": "2025-11-27T08:45:52Z", "user": "fmo-mt" }, { "repo": "pytorch/pytorch", "number": 168291, "title": "Remove unnecessary `ConstantVariable` wrapping in `raise_observed_exception`", "body": "~We currently convert arguments to `ConstantVariable` before calling `raise_observed_exception` in several places. This conversion is unnecessary as the Python objects can be used directly. 
Doing so also improves readability of some error reports.~\n\nBefore:\n```python\nObserved exception\n Explanation: ...\n Hint: ...\n Hint: ...\n\n Developer debug context: raised exception TypeError([ConstantVariable(str: \"unhashable type: \")])\n```\n\nAfter:\n```python\nObserved exception\n Explanation: ...\n Hint: ...\n Hint: ...\n\n Developer debug context: raised exception TypeError([\"unhashable type: \"])\n```\n\nExample of places that needs to be changed:\nhttps://github.com/pytorch/pytorch/blob/9396e69194e8e16801b08b1326e34708a859fa5f/torch/_dynamo/variables/functions.py#L196-L204\nhttps://github.com/pytorch/pytorch/blob/9396e69194e8e16801b08b1326e34708a859fa5f/torch/_dynamo/variables/functions.py#L211-L219\n\n\n### Versions\n\nmain\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela", "url": "https://github.com/pytorch/pytorch/issues/168291", "state": "closed", "labels": [ "good first issue", "triaged", "oncall: pt2", "module: dynamo" ], "created_at": "2025-11-20T19:28:03Z", "updated_at": "2025-12-03T13:48:14Z", "comments": 8, "user": "guilhermeleobas" }, { "repo": "pytorch/executorch", "number": 15923, "title": "1008 Giene-t2t-OnSM8850 chippet", "body": "### \ud83d\udc1b Describe the bug\n\n./genie-t2t-run -c genie_bundle_llama3.2-1b/genie_config.json -p \"<|begin_of_text|><|start_header_id|>user<|end_header_id|>\"$'\\n\\n'$\"What is France's capital?<|eot_id|><|sta>\nUsing libGenie.so version 1.13.0\n\n[ERROR] \"Failed to create device: 1008\"\n[ERROR] \"Device Creation failure\"\nFailure to initialize model. \nFailed to create the dialog.\n\n### Versions\n\npython version 3.11 ", "url": "https://github.com/pytorch/executorch/issues/15923", "state": "closed", "labels": [], "created_at": "2025-11-20T18:49:32Z", "updated_at": "2025-11-24T18:09:35Z", "comments": 3, "user": "pbtsvinaysukhesh" }, { "repo": "pytorch/pytorch", "number": 168253, "title": "nestedtensor inconsistency in `torch.masked_select`", "body": "### \ud83d\udc1b Describe the bug\n\nHere is the code that left me with questions: I am not sure if it is a bug, but I feel it is not it would be a great addition to the docs. I would expect padded nt and padded nt1 to have the same values at the end of the script, but they are not. 
If it is not a bug, how can I achieve it: create a nested tensor from a padded tensor and a mask that will have a proper max_len?\n\n```python\nimport torch\n\nlengths = [5,5,6,6,6,7,7,7,7,8,8,8,8,9]\nresults = []\nfor length in lengths:\n results.append(torch.ones((length,)))\n\nresults\n# [tensor([1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1., 1., 1.]),\n# tensor([1., 1., 1., 1., 1., 1., 1., 1., 1.])]\n\n\nnt = torch.nested.nested_tensor(results, layout=torch.jagged)\n\nnt\n# NestedTensor(size=(14, j1), offsets=tensor([ 0, 5, 10, 16, 22, 28, 35, 42, 49, 56, 64, 72, 80, 88, 97]), contiguous=True)\n\npt_infer = torch.nested.to_padded_tensor(nt, 0.0)\n\npt_infer.shape\n# torch.Size([14, 9])\n\nmask = pt_infer != 0\n\nmask.shape\n# torch.Size([14, 9])\n\nnt1 = torch.nested.masked_select(pt_infer, mask)\n\nnt1.shape\n# torch.Size([14, j2])\n\nnt.shape\n# torch.Size([14, j1])\n\nnt1.to_padded_tensor(0.0, ).shape\n# torch.Size([14, 97])\n\ntorch.nested.to_padded_tensor(nt1, 0.0).shape\n# torch.Size([14, 97])\n\ntorch.nested.to_padded_tensor(nt, 0.0).shape\n# torch.Size([14, 9])\n\n```\n\n\n### Versions\n\nPyTorch version: 2.9.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Rocky Linux 9.4 (Blue Onyx) (x86_64)\nGCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)\nClang version: Could not collect\nCMake version: Could not collect\nLibc version: glibc-2.34\n\nPython version: 3.12.12 | packaged by conda-forge | (main, Oct 22 2025, 23:25:55) [GCC 14.3.0] (64-bit runtime)\nPython platform: Linux-5.14.0-427.13.1.el9_4.x86_64-x86_64-with-glibc2.34\nIs CUDA available: True\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to:\nGPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3\nNvidia driver version: 575.57.08\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\n\nVersions of relevant libraries:\n[pip3] numpy==2.3.4\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pip3] nvidia-curand-cu12==10.3.9.90\n[pip3] nvidia-cusolver-cu12==11.7.3.90\n[pip3] nvidia-cusparse-cu12==12.5.8.93\n[pip3] nvidia-cusparselt-cu12==0.7.1\n[pip3] nvidia-nccl-cu12==2.27.5\n[pip3] nvidia-nvjitlink-cu12==12.8.93\n[pip3] nvidia-nvtx-cu12==12.8.90\n[pip3] pytorch-lightning==2.5.6\n[pip3] torch==2.9.0\n[pip3] torch-dct==0.1.6\n[pip3] torchaudio==2.9.0\n[pip3] torchmetrics==1.8.2\n[pip3] triton==3.5.0\n[conda] numpy 2.3.4 pypi_0 pypi\n[conda] nvidia-cublas-cu12 12.8.4.1 pypi_0 pypi\n[conda] nvidia-cuda-cupti-cu12 12.8.90 pypi_0 pypi\n[conda] nvidia-cuda-nvrtc-cu12 12.8.93 pypi_0 pypi\n[conda] nvidia-cuda-runtime-cu12 12.8.90 pypi_0 pypi\n[conda] nvidia-cudnn-cu12 9.10.2.21 pypi_0 pypi\n[conda] nvidia-cufft-cu12 11.3.3.83 pypi_0 pypi\n[conda] nvidia-curand-cu12 10.3.9.90 pypi_0 pypi\n[conda] nvidia-cusolver-cu12 11.7.3.90 pypi_0 
pypi\n[conda] nvidia-cusparse-cu12 12.5.8.93 pypi_0 pypi\n[conda] nvidia-cusparselt-cu12 0.7.1 pypi_0 pypi\n[conda] nvidia-nccl-cu12 2.27.5 pypi_0 pypi\n[conda] nvidia-nvjitlink-cu12 12.8.93 pypi_0 pypi\n[conda] nvidia-nvtx-cu12 12.8.90 pypi_0 pypi\n[conda] pytorch-lightning 2.5.6 pypi_0 pypi\n[conda] torch 2.9.0 pypi_0 pypi\n[conda] torch-dct 0.1.6 pypi_0 pypi\n[conda] torchaudio 2.9.0 pypi_0 pypi\n[conda] torchmetrics 1.8.2 pypi_0 pypi\n[conda] triton 3.5.0 pypi_0 pypi\n\ncc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ", "url": "https://github.com/pytorch/pytorch/issues/168253", "state": "open", "labels": [ "triaged", "module: nestedtensor" ], "created_at": "2025-11-20T14:08:48Z", "updated_at": "2025-11-21T17:47:04Z", "comments": 2, "user": "rustamzh" }, { "repo": "pytorch/torchrec", "number": 3567, "title": "how to use torch.distributed.checkpoint to save and load state dict", "body": "sparse_arch is a part of my model.\n\n\"Image\"\n\n\"Image\"", "url": "https://github.com/meta-pytorch/torchrec/issues/3567", "state": "open", "labels": [], "created_at": "2025-11-20T09:30:47Z", "updated_at": "2025-11-20T09:30:47Z", "comments": 0, "user": "haolujun" }, { "repo": "pytorch/pytorch", "number": 168186, "title": "2nd example of large numeric divergence for torch compile vs eager in bf16", "body": "### \ud83d\udc1b Describe the bug\n\nFirst example is https://github.com/pytorch/pytorch/issues/168126.\n\nHere's another smaller example where I'm seeing a significant difference (rtol 1.0) between eager and compiled when running under bf16. Somehow the call to `torch.chunk` in `Module2` causes a numeric divergence to occur. It's likely related to inductor because the results match when I set `torch.compile(..., backend='aot_eager')`.\n\n```python\nimport torch\nfrom torch import Tensor, nn\n\n\nclass BaseModule(nn.Module):\n def __init__(self, dim: int = 128) -> None:\n super().__init__()\n self.p_in = nn.Linear(dim, 2 * dim, bias=False)\n self.g_in = nn.Linear(dim, 2 * dim, bias=False)\n\n\nclass Module1(BaseModule):\n def forward(self, x: Tensor, mask: Tensor) -> Tensor:\n x = self.p_in(x) * self.g_in(x)\n return x\n\n\nclass Module2(BaseModule):\n def forward(self, x: Tensor, mask: Tensor) -> Tensor:\n x = self.p_in(x) * self.g_in(x)\n a, b = torch.chunk(x, 2, dim=-1)\n x = a + b\n return x\n\n\nif __name__ == \"__main__\":\n for module_cls in [Module1, Module2]:\n for dtype in [torch.float32, torch.bfloat16]:\n print(f\"Testing module {module_cls.__name__} with dtype: {dtype}\")\n with torch.autocast(device_type=\"cuda\", dtype=dtype):\n torch.manual_seed(42)\n x = torch.randn(16, 128, 128, 128, device=\"cuda\")\n mask = torch.randint(0, 2, (16, 128, 128), device=\"cuda\")\n\n eager_layer = module_cls().cuda()\n compiled_layer = torch.compile(module_cls().cuda(), fullgraph=True)\n\n # Copy weights from reference to optimized to ensure identical parameters\n with torch.no_grad():\n for param, ref_param in zip(\n compiled_layer.parameters(), eager_layer.parameters()\n ):\n param.data.copy_(ref_param.data)\n\n out_eager = eager_layer(x, mask)\n out_compiled = compiled_layer(x, mask)\n torch.testing.assert_close(out_eager, out_compiled)\n print(f\"Passed module {module_cls.__name__} with dtype: {dtype}\")\n```\n\n### Error logs\n\n```\n(repro) jamin@jamin-dev:~/deep-affinity$ python repro.py\nTesting module Module1 with dtype: torch.float32\nPassed module Module1 with dtype: torch.float32\nTesting module Module1 with dtype: torch.bfloat16\nPassed module Module1 with 
dtype: torch.bfloat16\nTesting module Module2 with dtype: torch.float32\nPassed module Module2 with dtype: torch.float32\nTesting module Module2 with dtype: torch.bfloat16\nTraceback (most recent call last):\n File \"/home/jamin/deep-affinity/repro.py\", line 47, in \n torch.testing.assert_close(out_eager, out_compiled)\n File \"/home/jamin/miniconda3/envs/repro/lib/python3.10/site-packages/torch/testing/_comparison.py\", line 1589, in assert_close\n raise error_metas[0].to_error(msg)\nAssertionError: Tensor-likes are not close!\n\nMismatched elements: 753240 / 33554432 (2.2%)\nGreatest absolute difference: 0.01171875 at index (1, 61, 83, 48) (up to 1e-05 allowed)\nGreatest relative difference: 127.0 at index (4, 33, 47, 23) (up to 0.016 allowed)\n```\n\nWith `TORCHINDUCTOR_EMULATE_PRECISION_CASTS=1`:\n```\n(repro) jamin@jamin-dev:~/deep-affinity$ TORCHINDUCTOR_EMULATE_PRECISION_CASTS=1 python repro.py\nTesting module Module1 with dtype: torch.float32\nPassed module Module1 with dtype: torch.float32\nTesting module Module1 with dtype: torch.bfloat16\nPassed module Module1 with dtype: torch.bfloat16\nTesting module Module2 with dtype: torch.float32\nPassed module Module2 with dtype: torch.float32\nTesting module Module2 with dtype: torch.bfloat16\nTraceback (most recent call last):\n File \"/home/jamin/deep-affinity/repro.py\", line 47, in \n torch.testing.assert_close(out_eager, out_compiled)\n File \"/home/jamin/miniconda3/envs/repro/lib/python3.10/site-packages/torch/testing/_comparison.py\", line 1589, in assert_close\n raise error_metas[0].to_error(msg)\nAssertionError: Tensor-likes are not close!\n\nMismatched elements: 554480 / 33554432 (1.7%)\nGreatest absolute difference: 0.0078125 at index (3, 87, 97, 127) (up to 1e-05 allowed)\nGreatest relative difference: 1.0 at index (0, 0, 13, 20) (up to 0.016 allowed)\n```\n\n### Versions\n\n```\nPyTorch version: 2.9.1+cu130\nIs debug build: False\nCUDA used to build PyTorch: 13.0\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.5 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version: 14.0.0-1ubuntu1.1\nCMake version: version 3.22.1\nLibc version: glibc-2.35\n\nPython version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-6.8.0-1043-gcp-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 13.0.88\nCUDA_MODULE_LOADING set to:\nGPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3\nNvidia driver version: 580.95.05\ncuDNN version: Could not collect\nIs XPU available: False\nHIP r", "url": "https://github.com/pytorch/pytorch/issues/168186", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: inductor" ], "created_at": "2025-11-19T21:19:58Z", "updated_at": "2025-12-01T19:20:59Z", "comments": 6, "user": "jamin-chen" }, { "repo": "pytorch/torchrec", "number": 3561, "title": "How can I export a trained model to the Triton inference server?", "body": "How can I export a trained model to the Triton inference server?\n\nAre there any examples of exporting models, whether using Torch-TensorRT or TorchScript?", "url": "https://github.com/meta-pytorch/torchrec/issues/3561", "state": "open", "labels": [], "created_at": "2025-11-19T08:20:51Z", "updated_at": "2025-11-19T08:20:51Z", "comments": 0, "user": "intfish123" }, { "repo": "pytorch/pytorch", "number": 168148, "title": "BF16 activation precision mismatch between eager ATen and compiled Triton", "body": "### \ud83d\udc1b Describe the bug\n\nI\u2019d like to report that for activation 
operators such as `sigmoid` and `tanh`, when the input dtype is `bf16`, the computation precision differs between eager mode and `compile[triton]`. In eager mode, ATen computes directly in `bf16`, but the generated Triton kernel upcasts to `fp32` \u2192 applies the activation \u2192 then downcasts to `bf16`. This can lead to accuracy differences between the eager and compiled paths for the same model. Why is this the current strategy?\n\n### Error logs\n\n_No response_\n\n### Versions\n\ntorch==2.7.0a0+git1169ded\ntriton==3.2.0\n\ncc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov @coconutruben", "url": "https://github.com/pytorch/pytorch/issues/168148", "state": "closed", "labels": [ "high priority", "triaged", "oncall: pt2", "module: inductor" ], "created_at": "2025-11-19T08:10:53Z", "updated_at": "2025-11-28T06:05:05Z", "comments": 6, "user": "zhaoying9105" }, { "repo": "pytorch/torchrec", "number": 3559, "title": "How to convert DistributedModelParallel to quantize_inference_model and use torch.jit.script to save?", "body": "I run a example in `https://github.com/facebookresearch/dlrm/tree/main/torchrec_dlrm`, and want to save model with `torch.jit.script`, but it has error.\n\ncommand:\n```\nexport LEARNING_RATE=0.5;\ntorchx run -s local_cwd dist.ddp -j 1x1 --script dlrm_main.py -- --batch_size 2048 --learning_rate $LEARNING_RATE --dataset_name criteo_kaggle --num_embeddings_per_feature 40000000,39060,17295,7424,20265,3,7122,1543,63,40000000,3067956,405282,10,2209,11938,155,4,976,14,40000000,40000000,40000000,590152,12973,108,36 --embedding_dim 128 --over_arch_layer_sizes 1024,1024,512,256,1 --dense_arch_layer_sizes 512,256,128 --epochs 1 --validation_freq_within_epoch 12802\n```\n\n\"Image\"\n\nlogs:\n```\ntorchx 2025-11-19 06:46:19 INFO Tracker configurations: {}\ntorchx 2025-11-19 06:46:19 INFO Log directory not set in scheduler cfg. Creating a temporary log dir that will be deleted on exit. To preserve log directory set the `log_dir` cfg option\ntorchx 2025-11-19 06:46:19 INFO Log directory is: /tmp/torchx_z2d00ny6\nlocal_cwd://torchx/dlrm_main-vm9krtsx5bpnjd\ntorchx 2025-11-19 06:46:19 INFO Waiting for the app to finish...\ndlrm_main/0 [0]:PARAMS: (lr, batch_size, warmup_steps, decay_start, decay_steps): (0.5, 2048, 0, 0, 0)\ndlrm_main/0 [0]:/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:860: UserWarning: `_get_pg_default_device` will be deprecated, it only stays for backward-compatiblity reason. If you need to find a device for object collectives, please use `_get_object_coll_device`. If you need to query the device types supported by group, please use `_device_capability(group)`. 
\ndlrm_main/0 [0]: warnings.warn(\ndlrm_main/0 [0]:\ndlrm_main/0 [0]:Epoch 0: 0%| | 0/10 [00:00\ndlrm_main/0 [0]:[rank0]: invoke_main() # pragma: no cover\ndlrm_main/0 [0]:[rank0]: File \"/workspace/dlrm/torchrec_dlrm/dlrm_main.py\", line 733, in invoke_main\ndlrm_main/0 [0]:[rank0]: main(sys.argv[1:])\ndlrm_main/0 [0]:[rank0]: File \"/workspace/dlrm/torchrec_dlrm/dlrm_main.py\", line 727, in main\ndlrm_main/0 [0]:[rank0]: script_model = torch.jit.script(quantize_model)\ndlrm_main/0 [0]:[rank0]: File \"/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_script.py\", line 1443, in script\ndlrm_main/0 [0]:[rank0]: ret = _script_impl(\ndlrm_main/0 [0]:[rank0]: File \"/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_script.py\", line 1152, in _script_impl\ndlrm_main/0 [0]:[rank0]: return torch.jit._recursive.create_script_module(\ndlrm_main/0 [0]:[rank0]: File \"/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_recursive.py\", line 554, in create_script_module\ndlrm_main/0 [0]:[rank0]: concrete_type = get_module_concrete_type(nn_module, share_types)\ndlrm_main/0 [0]:[rank0]: File \"/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_recursive.py\", line 503, in get_module_concrete_type\ndlrm_main/0 [0]:[rank0]: concrete_type = concrete_type_store.get_or_create_concrete_type(nn_module)\ndlrm_main/0 [0]:[rank0]: File \"/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_recursive.py\", line 435, in get_or_create_concrete_type\ndlrm_main/0 [0]:[rank0]: concrete_type_builder = infer_concrete_type_builder(nn_module)\ndlrm_main/0 [0]:[rank0]: File \"/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_recursive.py\", line 285, in infer_concrete_type_builder\ndlrm_main/0 [0]:[rank0]: sub_concrete_type = get_module_concrete_type(item, share_types)\ndlrm_main/0 [0]:[rank0]: File \"/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_recurs", "url": "https://github.com/meta-pytorch/torchrec/issues/3559", "state": "open", "labels": [], "created_at": "2025-11-19T06:51:01Z", "updated_at": "2025-11-19T06:53:01Z", "comments": 0, "user": "intfish123" }, { "repo": "pytorch/vision", "number": 9276, "title": "where did torchvision v0.10.0 go?", "body": "I am trying to download torchvision v0.10.0 to my Jetson Nano to build it but I am always getting this error:\n\n```\nams@ams-Alienware-m17-R3:~$ git ls-remote --tags https://github.com/pytorch/vision.git\nremote: Internal Server Error\nfatal: unable to access 'https://github.com/pytorch/vision.git/': The requested URL returned error: 500\n\n```\n\nI have navigated inside the repository to search for v0.10.0, but couldn't find it in the branches.", "url": "https://github.com/pytorch/vision/issues/9276", "state": "closed", "labels": [], "created_at": "2025-11-18T21:32:56Z", "updated_at": "2025-11-19T09:03:29Z", "comments": 1, "user": "abdosalem490" }, { "repo": "pytorch/pytorch", "number": 168099, "title": "Unify pointwise DTensor and NestedTensor OP Coverage. Adds over 100 op overloads to DTensor and about to 10 to NestedTensor", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nCurrently, DTensor maintains it's own list of which ops are pointwise. NestedTensor has a similar requirement and instead elected to add a pointwise tag to OpInfo. Maintaining two separate lists of pointwise ops is error prone. We should have both use a single source of information on which ops are pointwise. 
Doing so should improve op coverage for DTensor and perhaps NestedTensor and remove code duplication significantly.\n\nThese calculations I quickly did using PyTorch 2.8.0 in from Google Colab.\n\n> DTensor pointwise ops #: 364\n> OPinfo pointwise ops #: 537\n> DTensor pointwise ops missing in OpInfo #: 10\n> OpInfo pointwise ops missing in DTensor #: 185\n\n\nUnifying these would add 185 ops to DTensor coverage and 10 ops to NestedTensor coverage\n\nI would suggest checking if an op is pointwise\nwith `torch.Tag.pointwise in op.tags` for an arbitrary aten operator. I would then add the pointwise tags to any ops that are listed as pointwise in DTensor but not in opinfo and unify the lists. Doing so would ensure NestedTensor and DTensor have similar coverage\n\nTagging @ezyang \n\nSlack Discussion:\n> Anyone know why DTensor doesn\u2019t use optest\u2019s pointwise tag registration that NestedTensor already uses? It\u2019s weird to me it maintains a second list of all the pointwise ops when that info should be provided already by OpInfo registration?\n> @ezyang Reply: it probably should just use it\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\nExample of drift between DTensor and NestedTensor op coverage: https://github.com/pytorch/pytorch/pull/167973\n\nCurrent analysis:\n```\ncheck if op is pointwise: [torch.Tag.pointwise in op.tags for op in pointwise_ops] \n```\n\nThese ops are missing the op that are in the do not have pointwise tags currently, but are list in DTensor:\n```python\n[, , , , , , , , , ]\n```\n\n```python\ndef get_pointwise_overloads():\n pointwise = []\n\n # All registered operator schemas\n for schema in torch._C._jit_get_all_schemas():\n ns, op_name = schema.name.split(\"::\", 1)\n\n # Only care about aten ops; drop prim, quantized, etc.\n if ns != \"aten\":\n continue\n\n # Get the OpOverloadPacket, e.g. torch.ops.aten.add\n try:\n packet = getattr(getattr(torch.ops, ns), op_name)\n except AttributeError:\n continue # some schemas may not be exposed via torch.ops\n\n # Map JIT overload name -> Python overload attribute\n overload_name = schema.overload_name or \"default\"\n\n try:\n overload = getattr(packet, overload_name) # OpOverload\n except AttributeError:\n continue # can happen in weird cases\n\n # Check tag\n if torch.Tag.pointwise in overload.tags:\n pointwise.append(overload)\n\n return pointwise\n```\nand comparing to that the list in the DTensor:\nDTensor pointwise ops #: 364\nOPinfo pointwise ops #: 537\nDTensor pointwise ops missing in OpInfo #: 10\nOpInfo pointwise ops missing in DTensor #: 185\n\nSo unifying this would add 173 ops to DTensor and add 10 op coverage to NestedTensor!\n\nPointwoise opinfo tags are found here: https://github.com/pytorch/pytorch/blob/f9724db4921288a096e331cee835abd43257fbd6/aten/src/ATen/native/native_functions.yaml#L10242\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @tianyu-l @XilunWu @SherlockNoMad\n\nThe least invasive way to handle this is probably to add a fallback that checks if the op has a pointwise tags and tries to pointwise it similar to how NestedTensor currently works. 
Similar to: https://github.com/pytorch/pytorch/blob/33d4cf4fcb7f0cba6191b242dae53b48057e05b9/torch/distributed/tensor/_ops/_pointwise_ops.py#L626C1-L629C6 may need to check if the op supports out= arg though.", "url": "https://github.com/pytorch/pytorch/issues/168099", "state": "open", "labels": [ "oncall: distributed", "triaged", "module: dtensor", "llm-amenable" ], "created_at": "2025-11-18T19:47:48Z", "updated_at": "2025-11-24T19:04:58Z", "comments": 2, "user": "Skylion007" }, { "repo": "pytorch/torchtitan", "number": 2053, "title": "Training Qwen3-0.6B with loss mismatch.", "body": "### Bug description\n\nWhen using the config file 'torchtitan/models/qwen3/train_configs/qwen3_0.6b.toml', the starting loss of 12x suggests the weights may not have been loaded properly.\n\n\"Image\"\n\n### Versions\n\n[job]\ndump_folder = \"./outmodel\"\ndescription = \"Qwen 3 0.6B training\"\n\n[profiling]\nenable_profiling = false\nsave_traces_folder = \"profile_trace\"\nprofile_freq = 100\n\n[metrics]\nlog_freq = 1\nenable_tensorboard = false\nsave_tb_folder = \"tb\"\n\n[model]\nname = \"qwen3\"\nflavor = \"0.6B\"\nhf_assets_path = \"./assets/hf/Qwen3-0.6B\"\n# converters = [\"float8\"]\n\n[optimizer]\nname = \"AdamW\"\nlr = 3e-4\neps = 1e-8\n\n[lr_scheduler]\nwarmup_steps = 2 # lr scheduler warm up, 20% total steps\n\n[training]\nlocal_batch_size = 1\nseq_len = 4096\nmax_norm = 1.0 # grad norm clipping\nsteps = 100\ndataset = \"math\"\n\n[parallelism]\ndata_parallel_replicate_degree = 1\ndata_parallel_shard_degree = -1\nfsdp_reshard_after_forward = \"default\" # default / never / always\ntensor_parallel_degree = 1\ncontext_parallel_degree = 1\n\n[checkpoint]\nenable = false\nfolder = \"checkpoint\"\ninterval = 50\nlast_save_model_only = false\nexport_dtype = \"float16\"\nasync_mode = \"disabled\" # [\"disabled\", \"async\", \"async_with_pinned_mem\"]\n\n[activation_checkpoint]\nmode = \"full\" # [\"none\", \"selective\", \"full\"]\nselective_ac_option = \"op\" # \"int\" = ac every positive int layer or 'op', ac based on ops policy\n\n[compile]\nenable=false\ncomponents = [\"model\", \"loss\"]\n\n[quantize.linear.float8]\nenable_fsdp_float8_all_gather = false\nprecompute_float8_dynamic_scale_for_fsdp = false\nfilter_fqns = [\"output\"]\n", "url": "https://github.com/pytorch/torchtitan/issues/2053", "state": "closed", "labels": [ "question" ], "created_at": "2025-11-18T14:24:43Z", "updated_at": "2025-12-18T09:24:46Z", "user": "Joluck" }, { "repo": "pytorch/pytorch", "number": 168065, "title": "On aarch64, `pip install torch` resulted in the CPU version?", "body": "### \ud83d\udc1b Describe the bug\n\nHi, noticing that trying to `pip install torch` resulted in the CPU version of torch stable.\n\nRepro:\n1. Get an aarch64 machine, e.g. GB200\n2. `pip install torch`\n3. 
`pip list`, see if you see cudnn cublas etc\n\nIt can be bypassed with \n```\npip3 install torch --index-url https://download.pytorch.org/whl/cu128\n```\nbut just want to report this, in case this isn't intentional.\n\n\"Image\"\n\n### Versions\n\ntorch stable\n\ncc @svekars @sekyondaMeta @AlannaBurke @ptrblck @msaroufim @eqy @jerryzh168 @tinglvv @nWEIdia", "url": "https://github.com/pytorch/pytorch/issues/168065", "state": "open", "labels": [ "module: docs", "module: cuda", "triaged" ], "created_at": "2025-11-18T04:59:16Z", "updated_at": "2025-11-24T19:19:58Z", "comments": 3, "user": "henrylhtsang" }, { "repo": "pytorch/pytorch", "number": 167994, "title": "CI Not Detecting Failing Tests in test/distributed/elastic/*", "body": "A significant number of tests under `test/distributed/elastic/` are failing, but CI does **not** surface these failures, possibly same with test/distributed/launcher, Many of these tests appear to have been broken for a long time without detection. I opened a PR with fixes, but I believe this warrants an issue so the team can investigate why CI is not catching failures in this directory.\n\nPR with fixes: https://github.com/pytorch/pytorch/pull/167993\n\n### `test/distributed/elastic/rendezvous/c10d_rendezvous_backend_test.py`\n\n**Issue:** \nIn `test_create_backend_returns_backend_if_is_host_is_false` and \n`test_create_backend_returns_backend_if_is_not_specified_and_store_already_exists`, commit https://github.com/pytorch/pytorch/commit/d25e6e623fea0552d1a4b3124344d1b2c499f6f8 removed the unused `store` variable. This caused the `TCPStore` to be garbage-collected immediately, and the tests fail as a result.\n\n\n---\n\n### `test/distributed/elastic/rendezvous/dynamic_rendezvous_test.py`\n\n#### Issue 1 \n`datetime.utcnow` was replaced with `datetime.now` in the implementation PR https://github.com/pytorch/pytorch/pull/136141, but the tests were not updated.\n\n\n#### Issue 2 \nPR https://github.com/pytorch/pytorch/pull/145228 changed `create_handler()` to expect `keep_alive_interval` as an `int`, but the test `test_redundancy_transition_to_wait_list_then_join_rendezvous` passes `timedelta(seconds=1)`.\n\n\n#### Issue 3 \n`test_share_tcp_store_from_backend` mocks `dist.PrefixStore` but also calls \n`CustomPrefixStore(spec=dist.PrefixStore)`. Since `dist.PrefixStore` is already patched, this results in:\n\n> Cannot spec a Mock object\n\n\n---\n\n### `test/distributed/elastic/rendezvous/etcd_server_test.py`\n\n**Issue:** \nIn `test_etcd_server_with_rendezvous`, the `EtcdRendezvous` prefix does not include a leading slash, but etcd v2 always stores keys with one. This causes a hang during the `rdzv_handler.next_rendezvous()` \u2192 `RendezvousStoreInfo.build` \u2192 (`store.set` \u2192 `store.get`), because the key is written as `test/run_1/rdzv/v_1/kv/TUFTVEVSX0FERFI=` but etcd stores it as `/test/run_1/rdzv/v_1/kv/TUFTVEVSX0FERFI=`. Since `store.get` (via `ETCDStore._try_wait_get`) looks for the non\u2013slash-prefixed key, it never finds \n\n---\n\n### `test/distributed/elastic/rendezvous/out_of_tree_rendezvous_test.py`\n\n**Issue:** \n`test_out_of_tree_handler_loading` attempts to test out-of-tree handler registration by adding a directory to `sys.path`. \nHowever, the real mechanism uses Python entry points, which require pip installation. 
\nThe original PR https://github.com/pytorch/pytorch/pull/132633 used pip install, but after review it was replaced with `sys.path` modification \u2014 which probably only worked locally due to stale installations.\n\n\n---\n\n### `torch/distributed/elastic/rendezvous/etcd_rendezvous.py`\n\n**Issue:** \nPR https://github.com/pytorch/pytorch/pull/135262 added an optional `local_addr` parameter to `EtcdRendezvousHandler.__init__`, but did not define a default value. \nThis breaks `test_etcd_server_with_rendezvous` in `test/distributed/elastic/rendezvous/etcd_server_test.py`\n\n\n---\n\nFollowing test might actually be passing in some environments and configs.\n### `test/distributed/launcher/test_run.py`\n\n**Issue:** \n`nproc_type=\"auto\"` determines world size using `torch.accelerator.is_available()`, but the test incorrectly patches `torch.cuda.is_available()`.\n\n\ncc @seemethere @malfet @pytorch/pytorch-dev-infra @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci", "url": "https://github.com/pytorch/pytorch/issues/167994", "state": "open", "labels": [ "oncall: distributed", "module: ci" ], "created_at": "2025-11-17T15:39:56Z", "updated_at": "2025-11-17T18:18:56Z", "comments": 0, "user": "harikodali" }, { "repo": "pytorch/pytorch", "number": 167991, "title": "Warnings from inside Dynamo should include at least one level of stack trace", "body": "We saw the following in vLLM:\n```\n(Worker_TP6_EP6 pid=3247488) /home/robertgshaw2-redhat/vllm/.venv/lib64/python3.12/site-packages/torch/_dynamo/variables/functions.py:1692: UserWarning: Dynamo detected a call to a `functools.lru_cache`-wrapped function. Dynamo ignores the cache wrapper and directly traces the wrapped function. Silent incorrectness is only a *potential* risk, not something we have observed. Enable TORCH_LOGS=\"+dynamo\" for a DEBUG stack trace.\n```\nif we could *just* see one frame of the stack trace, we'd be able to tell the line of vLLM where this is coming from.\nNB: I don't know how to reproduce this yet (we didn't get a repro command). I assume TORCH_LOGS=+dynamo has that stack trace, but it's nice to be able to debug this one step by just looking at the logs\n\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @amjames @Lucaskabela @jataylo @chenyang78", "url": "https://github.com/pytorch/pytorch/issues/167991", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: dynamo", "vllm-compile", "module: compile ux", "module: vllm", "dynamo-triage-dec2025" ], "created_at": "2025-11-17T15:31:02Z", "updated_at": "2026-01-01T18:17:59Z", "comments": 1, "user": "zou3519" }, { "repo": "pytorch/audio", "number": 4132, "title": "How can I use one streamwriter to write multiple videos?", "body": "### \ud83d\ude80 The feature\n\nUse one streamwriter to write multiple videos.\n\n### Motivation, pitch\n\nCan the streamwriter support writing multiple videos using the same object, with each video corresponding to a different stream when I use gpu to encode? In current situation, this result in writing to the same buffer, ultimately producing one video. How can I do this? 
This can avoid the overhead caused by multiple initializations and destructions of the streamwriter.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/audio/issues/4132", "state": "open", "labels": [], "created_at": "2025-11-17T11:55:56Z", "updated_at": "2025-11-17T11:55:56Z", "user": "Z-NAVY" }, { "repo": "pytorch/pytorch", "number": 167977, "title": "[DTensor]Sharding propagation failed for custom operation with Tensor in kwargs", "body": "### \ud83d\udc1b Describe the bug\n\nI try to register strategy for my custom operation by ```@register_sharding```, which has Tensor params in kwargs. And my custom strategy function provides strategies for all DTensor in args and kwargs.\nDuring sharding propagation, an AssertionError `assert len(input_specs) == len(input_args_strategy)` occurs in function `expand_to_full_mesh_op_strategy`.\nThe cause is that `input_args_strategy` only considers DTensor in args, while `input_specs` contains sharding strategies of all DTensor in args and kwargs.\n\nIs it a limitation of DTensor? Or how can I adapt my operation with Tensor kwargs to Dtensor?\n\nHere is a simple demo using `aten.min.dim_min` to reproduce the problem:\n\n```python\nimport torch\nfrom torch.distributed.tensor import distribute_tensor, Replicate\nfrom torch.distributed.tensor.experimental import register_sharding\n\nfrom torch.testing._internal.common_utils import run_tests\nfrom torch.testing._internal.distributed._tensor.common_dtensor import DTensorTestBase, with_comms\n\naten = torch.ops.aten\n\nclass TestRegisterSharding(DTensorTestBase):\n @with_comms\n def test_register_sharding_for_tensor_kwargs(self):\n mesh = self.build_device_mesh()\n\n x = torch.randn(4, 4, device=\"cuda\")\n y = torch.randn(4, 4, device=\"cuda\")\n\n x = distribute_tensor(x, mesh, [Replicate()])\n y = distribute_tensor(y, mesh, [Replicate()])\n\n # aten::min.dim_min(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) min, Tensor(b!) min_indices) -> (Tensor(a!) values, Tensor(b!) 
indices)\n @register_sharding(aten.min.dim_min)\n def custom_strategy(x, dim, keepdim, min, min_indices):\n acceptable_shardings = []\n all_replicate = ([Replicate(), Replicate()], [Replicate(), None, None, Replicate(), Replicate()])\n acceptable_shardings.append(all_replicate)\n return acceptable_shardings\n\n value = torch.randn(4, 1, device=\"cuda\")\n indices = torch.randn(4, 1, device=\"cuda\").long()\n value = distribute_tensor(value, mesh, [Replicate()])\n indices = distribute_tensor(indices, mesh, [Replicate()])\n torch.min(x, dim=1, keepdim=True, out=(value, indices))\n\nif __name__ == \"__main__\":\n run_tests()\n```\n\nThe error message:\n\n```\nTraceback (most recent call last):\n File \"/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py\", line 156, in dispatch\n self.sharding_propagator.propagate(op_info)\n File \"/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/_sharding_prop.py\", line 327, in propagate\n OutputSharding, self.propagate_op_sharding(op_info.schema)\n File \"/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/_sharding_prop.py\", line 46, in __call__\n return self.cache(*args, **kwargs)\n File \"/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/_sharding_prop.py\", line 352, in propagate_op_sharding_non_cached\n op_strategy = self.op_strategy_funcs[op_schema.op](strategy_schema)\n File \"/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/experimental/_register_sharding.py\", line 98, in custom_strategy\n return expand_to_full_mesh_op_strategy(\n File \"/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/_ops/utils.py\", line 332, in expand_to_full_mesh_op_strategy\n assert len(input_specs) == len(input_args_strategy)\nAssertionError\n```\n\n### Versions\n\ntorch 2.9.0\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @tianyu-l @XilunWu @SherlockNoMad", "url": "https://github.com/pytorch/pytorch/issues/167977", "state": "closed", "labels": [ "oncall: distributed", "module: dtensor" ], "created_at": "2025-11-17T11:43:09Z", "updated_at": "2025-11-24T05:24:21Z", "comments": 3, "user": "qqq6op" }, { "repo": "pytorch/pytorch", "number": 167950, "title": "Insufficient documentation about the batching logic of `torch.linalg.solve`", "body": "### \ud83d\udcda The doc issue\n\nThe documentation for `torch.linalg.solve` states that\n> \n> Letting _*_ be zero or more batch dimensions,\n> If `A` has shape _(*, n, n)_ and `B` has shape _(*, n)_ (a batch of vectors) or shape _(*, n, k)_ (a batch of matrices or \u201cmultiple right-hand sides\u201d), this function returns _X_ of shape _(*, n)_ or _(*, n, k)_ respectively.\n\nHowever, from what I understand based on testing the code, the meaning of _*_ is different in these two cases. In the first case (batch of vectors), the batch dimensions _*_ of `A` and `B` must have the exact same shape, while in the second case (batch of matrices), the batch dimensions _*_ of `A` and `B` need only be broadcastable with each other. For example, an error is raised if `A` has shape _(2, 3, 4, 4)_ and `B` has shape _(3, 4)_ or _(1, 3, 4)_ (batch of vectors), while no error is raised if `A` has shape _(2, 3, 4, 4)_ and `B` has shape _(3, 4, 4)_ or _(1, 3, 4, 4)_ (batch of matrices). 
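A short script reproducing the shapes quoted above makes the distinction concrete (behavior as described in this report; the exact error type and message may differ):

```python
import torch

A = torch.randn(2, 3, 4, 4)

# Batch of matrices: the batch dims of A and B only need to be broadcastable.
torch.linalg.solve(A, torch.randn(3, 4, 4))     # ok, result has shape (2, 3, 4, 4)
torch.linalg.solve(A, torch.randn(1, 3, 4, 4))  # ok as well

# Batch of vectors: the batch dims of A and B must match exactly, so these raise.
for shape in [(3, 4), (1, 3, 4)]:
    try:
        torch.linalg.solve(A, torch.randn(*shape))
    except RuntimeError as err:
        print(shape, "->", err)
```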
I think the documentation needs to be clear about how the meaning of _*_ is different for these two cases.\n\nIt should also be clarified that the interpretation of `B` as a batch of vectors should take precedence over the interpretation of `B` as a zero-dimensional batch of matrices. For example, if `A` has shape _(n, n, n)_ and `B` has shape _(n, n)_, then the output X has shape _(n, n)_, indicating that `B` is interpreted as a batch of vectors, even though `B` can be interpreted as a 'batch' of matrices with zero-dimensional batch shape _* = ()_.\n\n### Suggest a potential alternative/fix\n\nUpdate the quoted part of the documentation to\n> Letting _*_ be zero or more batch dimensions, and _**_ be one or more batch dimensions, such that _*_ and _**_ are broadcastable with each other,\n> If `A` has shape _(*, n, n)_ and `B` has shape _(*, n)_ (a batch of vectors) or shape _(**, n, k)_ (a batch of matrices or \u201cmultiple right-hand sides\u201d), this function returns _X_ of shape _(*, n)_ or _(***, n, k)_ respectively, where _***_ is the shape obtained by broadcasting _*_ with _**_.\n\nNote that this revision also automatically clarifies the ambiguity as stated in the case where `A` has shape _(n, n, n)_ and `B` has shape _(n, n)_. This is because _**_ is defined as one or more (instead of zero or more) batch dimensions, so `B` cannot be interpreted as having shape _(**, n, k)_.\n\ncc @svekars @sekyondaMeta @AlannaBurke @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano", "url": "https://github.com/pytorch/pytorch/issues/167950", "state": "open", "labels": [ "module: docs", "triaged", "module: linear algebra" ], "created_at": "2025-11-17T02:02:25Z", "updated_at": "2025-11-19T16:44:07Z", "comments": 5, "user": "hchau630" }, { "repo": "pytorch/pytorch", "number": 167906, "title": "Avoid Exception Refcycle Problems", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n\nhttps://github.com/pytorch/pytorch/blob/d01a7b0241ed1c4cded7e7ca097249feb343f072/torch/_utils.py#L720-L726\n\nThe traceback refcycle problem can happen whenever an exception is stored in a local variable. This happened in many places across pytorch:\n\n```\n$ grep ' = e$' torch -R | wc -l\n22 # note: a few are false positives\n```\nThere are likely some potential refcycles that could cause tensors to not get freed at the earliest possible time.\n\nTake one of the detected results from `collective_utils.py` for example, we can create a repro of refcycle:\n\n```python\nimport sys\nimport torch\nfrom torch.distributed.collective_utils import all_gather\ndef f(obj):\n def f():\n raise RuntimeError('hhh')\n try:\n all_gather(f)\n except Exception as e:\n pass\nif __name__ == '__main__':\n torch.distributed.init_process_group(backend='gloo')\n rank = torch.distributed.get_rank()\n obj = object()\n for k in range(20):\n f(obj)\n if rank == 0:\n print(sys.getrefcount(obj)) # Refcount keep increasing!\n```\n\nrun it with\n```\ntorchrun --nproc_per_node=2 test.py\n```\n(Note: `collective_utils` seems not used anywhere, maybe a good idea to remove it. I'm not a user of it. )\n\nPyTorch users' callstacks often have giant objects that better not get leaked. 
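A torch-free illustration of the cycle created by the `= e` pattern (names are made up; `Big` stands in for a large tensor kept alive by the frame):

```python
import gc

class Big:
    pass

def leaky():
    big = Big()            # imagine a large tensor living in this frame
    saved = None
    try:
        raise RuntimeError("boom")
    except RuntimeError as e:
        saved = e          # the `= e` assignment flagged above
    # saved.__traceback__ -> leaky's frame -> locals (big, saved) -> saved,
    # so `big` is only reclaimed when the cyclic GC runs, not via refcounting.
    return saved

err = leaky()
del err
print(gc.collect())        # non-zero: the cycle (frame, traceback, Big) dies here
```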
Ideas to avoid similar issues:\n* Check if any of the 22 results are worth fixing.\n* Apply a lint rule (perhaps with https://github.com/ast-grep/ast-grep/) to disable assignment of exception, unless explicitly bypassed.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @albanD", "url": "https://github.com/pytorch/pytorch/issues/167906", "state": "open", "labels": [ "module: memory usage", "triaged", "better-engineering", "module: python frontend" ], "created_at": "2025-11-15T16:47:16Z", "updated_at": "2025-11-18T22:15:10Z", "comments": 1, "user": "ppwwyyxx" }, { "repo": "pytorch/pytorch", "number": 167901, "title": "nvalid _global_ write of size 16 bytes in torch.bmm with sparse tensors", "body": "### \ud83d\udc1b Describe the bug\n\nWhen using `torch.bmm` with sparse tensors, a CUDA `__global__` memory write out-of-bounds error occurs. \n\n```python\nimport torch\n\nm1 = torch.randn(2, 291105, 1).to_sparse().cuda()\nm2 = torch.randn(2, 1, 1).cuda()\nprint([m1.size(), m2.size()])\n\ntorch.bmm(m1, m2)\n```\n\n### How to Reproduce\n\n1. Save the code above as `poc.py`.\n2. Run the script using `compute-sanitizer`. The `Invalid __global__ write` error will be reported.\n\n```bash\ncompute-sanitizer python poc.py\n```\n\n### Observed Results\n\n```\n========= Invalid __global__ write of size 16 bytes\n========= at void cusparse::vector_scalar_multiply_kernel, long, float, float>(cusparse::KernelCoeff, T2, T4 *)+0x460\n========= by thread (32,0,0) in block (4,0,0)\n========= Address 0x7ffe9b0aa884 is misaligned\n========= and is inside the nearest allocation at 0x7ffe9a000000 of size 20,971,520 bytes\n========= Saved host backtrace up to driver entry point at kernel launch time\n========= Host Frame: [0x93c3fa] in libcusparse.so.12\n========= Host Frame: [0x99859a] in libcusparse.so.12\n========= Host Frame: [0x89fbfc] in libcusparse.so.12\n========= Host Frame: [0x17c999] in libcusparse.so.12\n========= Host Frame: [0x196a6b] in libcusparse.so.12\n========= Host Frame: cusparseSpMM [0xf3ed3] in libcusparse.so.12\n========= Host Frame: at::native::bmm_out_sparse_cuda(at::Tensor const&, at::Tensor const&, at::Tensor&)::{lambda()#1}::operator()() const::{lambda()#2}::operator()() const [0x2e54a29] in libtorch_cuda.so\n========= Host Frame: at::native::bmm_out_sparse_cuda(at::Tensor const&, at::Tensor const&, at::Tensor&) [0x2e573ff] in libtorch_cuda.so\n========= Host Frame: at::native::bmm_sparse_cuda(at::Tensor const&, at::Tensor const&) [0x2e59137] in libtorch_cuda.so\n========= Host Frame: at::(anonymous namespace)::(anonymous namespace)::wrapper_SparseCUDA__bmm(at::Tensor const&, at::Tensor const&) [0x3510e0a] in libtorch_cuda.so\n========= Host Frame: c10::impl::wrap_kernel_functor_unboxed_, at::Tensor, c10::guts::typelist::typelist >, at::Tensor (at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) [0x3510ebf] in libtorch_cuda.so\n========= Host Frame: at::_ops::bmm::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) [0x29ca7fd] in libtorch_cpu.so\n========= Host Frame: torch::autograd::VariableType::(anonymous namespace)::bmm(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) [0x47cbb1e] in libtorch_cpu.so\n========= Host Frame: c10::impl::wrap_kernel_functor_unboxed_, at::Tensor, c10::guts::typelist::typelist >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, 
at::Tensor const&) [0x47cc502] in libtorch_cpu.so\n========= Host Frame: at::_ops::bmm::call(at::Tensor const&, at::Tensor const&) [0x2a1a6fd] in libtorch_cpu.so\n========= Host Frame: torch::autograd::THPVariable_bmm(_object*, _object*, _object*) [0x6a8ad8] in libtorch_python.so\n========= Host Frame: cfunction_call in methodobject.c:542 [0x128db6] in python\n========= Host Frame: _PyObject_MakeTpCall in call.c:214 [0x10454b] in python\n========= Host Frame: _PyEval_EvalFrameDefault in ceval.c:4769 [0x111cf5] in python\n========= Host Frame: _PyEval_Vector in ceval.c:6434 [0x1cd0a9] in python\n========= Host Frame: PyEval_EvalCode in ceval.c:1148 [0x1cc77e] in python\n========= Host Frame: run_eval_code_obj in pythonrun.c:1741 [0x1ed556] in python\n========= Host Frame: run_mod in pythonrun.c:1762 [0x1e907f] in python\n========= Host Frame: pyrun_file in pythonrun.c:1657 [0x1fde71] in python\n========= Host Frame: _PyRun_SimpleFileObject in pythonrun.c:440 [0x1fd28e] in python\n========= Host Frame: _PyRun_AnyFileObject in pythonrun.c:79 [0x1fcfb2] in python\n========= Host Frame: Py_RunMain in main.c:684 [0x1f7dad] in python\n========= Host Frame: Py_BytesMain in main.c:738 [0x1bcdf8] in python\n========= Host Frame: __libc_start_call_main in libc_start_call_main.h:58 [0x2a1c9] in libc.so.6\n========= Host Frame: __libc_start_main in libc-start.c:360 [0x2a28a] in libc.so.6\n========= Host Frame: [0x1bcc42] ", "url": "https://github.com/pytorch/pytorch/issues/167901", "state": "open", "labels": [ "module: sparse", "triaged", "module: sanitizers" ], "created_at": "2025-11-15T03:25:41Z", "updated_at": "2025-11-24T04:30:02Z", "comments": 1, "user": "supermarkli" }, { "repo": "pytorch/torchtitan", "number": 2046, "title": "Any interest in adding MLPerf Llama 3 8B to TorchTitan models ?", "body": "It will be great to have MLPerf LLama 3 pre-training working OOB with TorchTitan, Here are some references on that .\n\n[MLPerf Training Adds Llama 3.1 8B Benchmark](https://mlcommons.org/2025/10/training-llama-3-1-8b/)\n\n[small_llm_pretraining/nemo](https://github.com/mlcommons/training/tree/master/small_llm_pretraining/nemo)", "url": "https://github.com/pytorch/torchtitan/issues/2046", "state": "open", "labels": [], "created_at": "2025-11-14T18:38:59Z", "updated_at": "2026-01-05T22:49:56Z", "comments": 14, "user": "githubsgi" }, { "repo": "pytorch/pytorch", "number": 167843, "title": "Some docs are outdated about how to access ctx object in forward function?", "body": "### \ud83d\udcda The doc issue\n\nI remember some docs said that the forward function (originally in torch.autograd.Function subclass) can pass anything to setup_context function by saving the data to ctx object. I was off for a while. Back in 2.6, the input param for forward function looks like (ctx, *input), but now it's(input_1, input_2, ...). I updated to 2.9 yesterday, and I found it's impossible to access the ctx object in forward function. I need to modify the old code a bit, which is ok. But the problem is, if I want to save anything for backward pass while I don't want to output it, in extreme case, do I have to compute it twice? 1st in forward function, 2nd in setup_context function.\nI saw some other docs said that, the 2 function style(forward+setup_context) is more similar to the vanilla torch implementation, so users are encouraged to do the 2 func style. 
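For reference, a minimal sketch of the two-function style in question (illustrative only). Under this API, whatever `setup_context` saves has to be reachable from `inputs`/`output`, which is exactly the double-compute concern described here:

```python
import torch

class MySin(torch.autograd.Function):
    @staticmethod
    def forward(x):                          # no ctx parameter anymore
        return torch.sin(x)

    @staticmethod
    def setup_context(ctx, inputs, output):  # only inputs/output are visible here
        (x,) = inputs
        ctx.save_for_backward(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * torch.cos(x)

y = MySin.apply(torch.randn(3, requires_grad=True))
y.sum().backward()
```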
I believe this docs is new and up to date, but the docs I mentioned uppon uppon is outdated.\nAnd, can you guys consider modifying the type hint of ctx object from *Any* to *[torch.autograd.function.]CtxFunction*. And add all the optional data member in __builtins__ or somewhere to help the auto completion in vs code. Thanks.\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @ezyang @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan", "url": "https://github.com/pytorch/pytorch/issues/167843", "state": "closed", "labels": [ "module: autograd", "triaged" ], "created_at": "2025-11-14T16:05:18Z", "updated_at": "2025-11-28T05:06:40Z", "user": "YagaoDirac" }, { "repo": "pytorch/xla", "number": 9712, "title": "Why isn't there a binding for clearing the XLAGraphExecutor::ComputationCache?", "body": "We have exposed this function in our [tenstorrent fork](https://github.com/tenstorrent/pytorch-xla/pull/16/files) and found that it works for post-test cleanup. \n\nMy assumption was that TPU runtime does not require such a feature because it does not bind scarce device resources to PJRTComputation lifetime. So, implementers did not find it necessary to implement such a function. Is that correct? Were there any other reasons to avoid exposing this to the user? ", "url": "https://github.com/pytorch/xla/issues/9712", "state": "open", "labels": [ "question", "runtime" ], "created_at": "2025-11-14T15:19:55Z", "updated_at": "2025-11-24T18:55:09Z", "user": "jameszianxuTT" }, { "repo": "pytorch/pytorch", "number": 167820, "title": "Why torch==2.9 compile qwen3 model with block ptr will crash?", "body": "### \ud83d\udc1b Describe the bug\n\ntorch==2.8 compile with \u201ctorch._inductor.config.triton.use_block_ptr = True\u201c is ok, 2.9 torch will crash as shown in the figure.\n\n\"Image\"\n\n```python\nimport torch\nfrom vllm import LLM, SamplingParams\nfrom vllm.config import CompilationConfig\nfrom torch._inductor.lowering import make_fallback\nprompts = [\n \"Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name\" ,\n]\nsampling_params = SamplingParams(temperature=0.0, top_p=0.95,max_tokens=2)\n\ndef main():\n torch._inductor.config.implicit_fallbacks = False\n torch._inductor.config.layout_optimization = False\n torch._inductor.config.prologue_fusion = True\n torch._inductor.config.permute_fusion = True\n torch._inductor.config.online_softmax = True\n torch._inductor.config.memory_planning = False\n torch._inductor.config.memory_pool = \"intermediates\"\n torch._inductor.config.autotune_local_cache = True\n torch._inductor.config.autotune_fallback_to_aten = False\n torch._inductor.config.max_autotune_gemm = True\n torch._inductor.config.max_autotune_gemm_backends = \"TRITON\"\n torch._inductor.config.triton.use_block_ptr = True\n torch._inductor.config.triton.prefer_nd_tiling = True\n torch._inductor.config.triton.tile_reductions = True\n torch._inductor.config.triton.codegen_upcast_to_fp32 = False\n\n llm = LLM(model=\"models/Qwen3-0.6B\",\n dtype=torch.float16,\n enforce_eager=False,\n compilation_config=CompilationConfig(\n mode=3,\n cache_dir=\"output/vllm/compile\"))\n outputs = llm.generate(prompts, sampling_params)\n print(\"\\nGenerated Outputs:\\n\" + \"-\" * 60)\n for output in outputs:\n prompt = output.prompt\n generated_text = output.outputs[0].text\n print(f\"Prompt: {prompt!r}\")\n print(f\"Output: 
{generated_text!r}\")\n print(\"-\" * 60)\n\n\nif __name__ == \"__main__\":\n main()\n```\n\n### Versions\n\ntorch >=2.9.0\nno other requirements\n\ncc @chauhang @penguinwu @zou3519", "url": "https://github.com/pytorch/pytorch/issues/167820", "state": "open", "labels": [ "needs reproduction", "triaged", "oncall: pt2", "vllm-compile", "module: vllm" ], "created_at": "2025-11-14T08:06:29Z", "updated_at": "2025-11-18T06:07:13Z", "comments": 2, "user": "TracyMac1" }, { "repo": "pytorch/pytorch", "number": 167818, "title": "undefined symbol for `at::meta::_index_put_impl_` when running or compiling executable on my own torch-related project.", "body": "### \ud83d\udc1b Describe the bug\n\nI have a torch extended backend(PrivateUse1), somewhere in my code, I invoked `at::meta::_index_put_impl_` API. undefined symbol error occurs when I try to create executable or running python.\n\n`at::meta::_index_put_impl_` seems like a LOCAL symbol in libtorch_cpu.so, and not exist in dynsym, but why? \n\nit marked as `TORCH_API` as `at::cpu::_index_put_impl_`, but I found `at::cpu::_index_put_impl_` in output of `nm -CD libtorch_cpu.so`, no `at::meta::_index_put_impl_`.\n\nhow can I use this API or some other APIs like this in my own shared lib?\n\n\n```bash\nnm -C libtorch_cpu.so| grep -E \"at::(cpu|meta)::_index_put_impl_\"\n0000000003403cc8 T at::cpu::_index_put_impl_(at::Tensor&, c10::List > const&, at::Tensor const&, bool, bool)\n00000000048592a3 t at::meta::_index_put_impl_(at::Tensor&, c10::List > const&, at::Tensor const&, bool, bool)\n```\n\n```bash\nnm -CD libtorch_cpu.so| grep -E \"at::(cpu|meta)::_index_put_impl_\"\n0000000003403cc8 T at::cpu::_index_put_impl_(at::Tensor&, c10::List > const&, at::Tensor const&, bool, bool)\n```\n\n```bash\nreadelf -CWs libtorch_cpu.so| grep -E \"at::(cpu|meta)::_index_put_impl_\"\n 32915: 0000000003403cc8 70 FUNC GLOBAL DEFAULT 12 at::cpu::_index_put_impl_(at::Tensor&, c10::List > const&, at::Tensor const&, bool, bool)\n4301074: 00000000048592a3 70 FUNC LOCAL DEFAULT 12 at::meta::_index_put_impl_(at::Tensor&, c10::List > const&, at::Tensor const&, bool, bool)\n4441498: 0000000003403cc8 70 FUNC GLOBAL DEFAULT 12 at::cpu::_index_put_impl_(at::Tensor&, c10::List > const&, at::Tensor const&, bool, bool)\n```\n\n\n\n### Versions\n\nPyTorch version: 2.9.0a0+git0fabc3b\nIs debug build: True\nCUDA used to build PyTorch: None\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.1 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version: Could not collect\nCMake version: version 4.1.0\nLibc version: glibc-2.35\n\nPython version: 3.12.12 | packaged by Anaconda, Inc. 
| (main, Oct 14 2025, 16:16:33) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.35\nIs CUDA available: False\nCUDA runtime version: No CUDA\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: No CUDA\nNvidia driver version: No CUDA\ncuDNN version: No CUDA\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 224\nOn-line CPU(s) list: 0-223\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8480+\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 56\nSocket(s): 2\nStepping: 8\nCPU max MHz: 3800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4000.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 5.3 MiB (112 instances)\nL1i cache: 3.5 MiB (112 instances)\nL2 cache: 224 MiB (112 instances)\nL3 cache: 210 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-55,112-167\nNUMA node1 CPU(s): 56-111,168-223\nVulnerability Itlb multihit: Not affected\nVulnerabil", "url": "https://github.com/pytorch/pytorch/issues/167818", "state": "open", "labels": [ "module: binaries", "triaged", "actionable", "module: PrivateUse1" ], "created_at": "2025-11-14T07:32:33Z", "updated_at": "2025-12-08T06:43:48Z", "comments": 4, "user": "sunjiabin17" }, { "repo": "pytorch/pytorch", "number": 167721, "title": "Minimal, comprehensive test suite", "body": "\nWe are building PyTorch from source using, among others, the system installed CUDA.\n\nCurrently we are running the full test suite to ensure nothing got broken due to e.g. wrong dependency versions or missing dependencies. I.e. `python test/run_test.py --continue-through-error`\n\nHowever, that takes up to 3 days on a GPU node of a HPC cluster and shows random failures due to small accuracy issues or \"unlucky\" random inputs.\n\nIs there some smaller test suite that can be used to verify sufficiently large parts of PyTorch but runs much faster?\n\nI noticed that requesting a PR merge has an anticipated delay of 3-4hrs for running some tests. What is exactly used there? 
Could that be enough for our use case too?\n\n\n\nSo what I request is some documentation next to the \"Building PyTorch from source\" section on how to verify the built package in a reasonable time frame.\n\n\n\ncc @svekars @sekyondaMeta @AlannaBurke", "url": "https://github.com/pytorch/pytorch/issues/167721", "state": "open", "labels": [ "module: docs", "feature", "triaged", "module: infra", "module: testing" ], "created_at": "2025-11-13T12:18:42Z", "updated_at": "2025-11-26T21:55:31Z", "comments": 5, "user": "Flamefire" }, { "repo": "pytorch/pytorch", "number": 167716, "title": "`torch.sparse.mm` returns corrupted sparse tensor causing Segmentation fault in `to_dense()` on PyTorch 2.9.0", "body": "### \ud83d\udc1b Describe the bug\n\nI experienced a problem while using the \"torch.sparse.mm()\" function, which prompted me to consult the official documentation for clarification. The documentation includes sample code that executes successfully. According to the documentation, the second matrix parameter accepts both sparse and dense matrices. In the official example provided, the second matrix is implemented as a dense matrix. The example code runs as follows:\n```python\nimport torch\na = torch.tensor([[1., 0, 2], [0, 3, 0]]).to_sparse().requires_grad_()\nb = torch.tensor([[0, 1.], [2, 0], [0, 0]], requires_grad=True)\ny = torch.sparse.mm(a, b)\nz = y.to_dense()\n```\nThe official example executes successfully. Out of curiosity about the behavior when the second matrix parameter is sparse, I created a custom sparse matrix to test the functionality. The sparse matrix multiplication operation itself completed without errors, but attempting to inspect the result using \"to_dense()\" caused a \"Segmentation fault\".\nThe code runs as follows:\n```python\nimport torch\n\ntorch.manual_seed(42)\n\nindices_A = torch.tensor([[0, 1, 2], [0, 2, 3]]) \nvalues_A = torch.tensor([1.0, 2.0, 3.0]) \nA = torch.sparse_coo_tensor(indices_A, values_A, size=(3, 4))\n\nindices_B = torch.tensor([[0, 1, 2, 3], [0, 1, 1, 2]]) \nvalues_B = torch.tensor([4.0, 5.0, 6.0, 7.0]) \nB = torch.sparse_coo_tensor(indices_B, values_B, size=(4, 2))\n\nC = torch.sparse.mm(A, B)\nC = C.to_dense()\n```\nit comes out:\n```\ntest2.py:13: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /pytorch/aten/src/ATen/SparseCsrTensorImpl.cpp:53.)\n C = torch.sparse.mm(A, B)\nSegmentation fault (core dumped)\n```\nTo further investigate the root cause, I proceeded to debug the code. During debugging, I found that simply printing the two matrices prior to invoking torch.sparse.mm()resolved the segmentation fault. 
The specific modification is shown below:\n```python\nimport torch\n\ntorch.manual_seed(42)\n\nindices_A = torch.tensor([[0, 1, 2], [0, 2, 3]]) \nvalues_A = torch.tensor([1.0, 2.0, 3.0]) \nA = torch.sparse_coo_tensor(indices_A, values_A, size=(3, 4))\n\nindices_B = torch.tensor([[0, 1, 2, 3], [0, 1, 1, 2]]) \nvalues_B = torch.tensor([4.0, 5.0, 6.0, 7.0]) \nB = torch.sparse_coo_tensor(indices_B, values_B, size=(4, 2))\n\nprint(\"A:\", A)\nprint(\"B:\", B)\n\nC = torch.sparse.mm(A, B)\nC = C.to_dense()\n```\nBased on this observation, I suspect that the print()function inadvertently triggers necessary initialization procedures that should occur within \"torch.sparse.mm()\", but due to an implementation flaw in the sparse matrix multiplication function, these critical initialization steps are not being properly executed, resulting in the \"Segmentation fault\".\nTo investigate the root cause of the issue, I proceeded to examine the internal data structures of the matrices by printing their properties. The diagnostic code is shown below:\n```python\nimport torch\nimport os\n\ntorch.manual_seed(42)\n\nindices_A = torch.tensor([[0, 1, 2], [0, 2, 3]])\nvalues_A = torch.tensor([1.0, 2.0, 3.0])\nA = torch.sparse_coo_tensor(indices_A, values_A, size=(3, 4))\n\nindices_B = torch.tensor([[0, 1, 2, 3], [0, 1, 1, 2]])\nvalues_B = torch.tensor([4.0, 5.0, 6.0, 7.0])\nB = torch.sparse_coo_tensor(indices_B, values_B, size=(4, 2))\n\nC = torch.sparse.mm(A, B)\n\nindices = C.indices()\nvalues = C.values()\nprint(f\" Indices shape: {indices.shape}\")\nprint(f\" Values shape: {values.shape}\")\nprint(f\" Indices range: [{indices.min().item()}, {indices.max().item()}]\")\nprint(f\" Values range: [{values.min().item()}, {values.max().item()}]\")\n\nC = C.to_dense()\n```\nit comes out:\n\n\"Image\"\n\nAs highlighted in the red box, the index data of the resulting matrix C appears to be corrupted, leading the \"to_dense()\" function to attempt accessing an invalid memory address. This concludes my current analysis of the issue. Since I'm not proficient in C++, I'm unable to conduct deeper source code investigation. I greatly appreciate any insights you can provide!\n\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.9.0+cpu\nIs debug build: False\nCUDA used to build PyTorch: None\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.3 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: Could not collect\nCMake version: Could not collect\nLibc version: glibc-2.39\n\nPython version: 3.13.9 (main, Oct 14 2025, 21:29:44) [Clang 20.1.4 ] (64-bit runtime)\nPython platform: Linux-6.8.0-86-generic-x86_64-with-glibc2.39\nIs CUDA available: False\nCUDA runtime version: No CUDA\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: No CUDA\nNvidia driver version: No CUDA\ncuDNN version: No CUDA\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runti", "url": "https://github.com/pytorch/pytorch/issues/167716", "state": "closed", "labels": [ "module: sparse", "module: crash" ], "created_at": "2025-11-13T07:25:20Z", "updated_at": "2025-11-13T16:57:47Z", "comments": 2, "user": "David-YB" }, { "repo": "pytorch/pytorch", "number": 167631, "title": "`jit.export` analoge for `torch.export`", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nAccording to the documentation, [`TorchScript` is deprecated in favor of `torch.export`](https://docs.pytorch.org/docs/stable/jit.html). 
\n\nHowever, `torch.jit.script` offered some functionality that does not seem to be covered by `torch.export`, specifically the ability to export multiple entry points via the `@jit.export` decorator. This is useful in many situations, for instance when working with Normalizing Flows and wanting to use both forward and inverse method, probabilistic models with multiple relevant methods, or for declaring additional state-modifying functions.\n\nWithout this functionality, it's unclear how to convert models that relied on `jit.export` to the new `torch.export` setup.\n\nRelated Discussions:\n\n- https://github.com/pytorch/executorch/issues/7458\n- https://discuss.pytorch.org/t/export-multiple-functions-of-a-pytorch-module/194816\n\n\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4", "url": "https://github.com/pytorch/pytorch/issues/167631", "state": "open", "labels": [ "oncall: pt2", "oncall: export" ], "created_at": "2025-11-12T10:24:10Z", "updated_at": "2025-11-17T18:41:46Z", "comments": 3, "user": "randolf-scholz" }, { "repo": "pytorch/pytorch", "number": 167630, "title": "Memory leak in aoti compile", "body": "### \ud83d\udc1b Describe the bug\n\nI want to compile many exported programs into an aoti .so file. However it seems like there is a memory leak\n```python\nimport contextlib\nimport gc\nimport logging\nimport os\nimport tempfile\nfrom pathlib import Path\n\nimport torch\nimport torch._inductor\nimport torch.nn as nn\n\nlogging.basicConfig(\n format=\"%(asctime)s %(levelname)s: %(message)s\",\n level=logging.INFO,\n)\n\n\ndef log_current_memory() -> None:\n total = torch.cuda.get_device_properties(0).total_memory\n allocated = torch.cuda.memory_allocated(0)\n reserved = torch.cuda.memory_reserved(0)\n msg = \"Current CUDA memory usage:\"\n msg += f\"\\n Total: {total / 1e9:.2f} GB\"\n msg += f\"\\n Allocated: {allocated / 1e9:.4} GB\"\n msg += f\"\\n Reserved: {reserved / 1e9:.4f} GB\"\n logging.info(msg)\n\n\n# ---------- toy model ----------\ndef make_mlp(in_dim=128, hidden=256, out_dim=64, depth=3):\n layers = []\n d = in_dim\n for _ in range(depth):\n layers += [nn.Linear(d, hidden), nn.ReLU()]\n d = hidden\n layers += [nn.Linear(d, out_dim)]\n return nn.Sequential(*layers)\n\n\ndef one_iter(i, device, batch, in_dim, hidden, out_dim, depth, workdir):\n model = make_mlp(in_dim, hidden, out_dim, depth).to(device).eval()\n x = torch.randn(batch, in_dim, device=device)\n with torch.inference_mode():\n _ = model(x)\n exported = torch.export.export(\n model,\n (x,),\n )\n\n pkg_path = Path(workdir) / f\"mlp_{i}.pt2\"\n path = torch._inductor.aoti_compile_and_package( # returns artifact path\n exported_program=exported,\n package_path=str(pkg_path),\n )\n\n logging.info(f\"[iter {i}] AOTI artifact: {path}\")\n\n log_current_memory()\n\n del _\n del model, x, exported\n torch.cuda.synchronize()\n torch.cuda.empty_cache()\n gc.collect()\n\n with contextlib.suppress(OSError):\n os.remove(path)\n\n\ndef main():\n assert torch.cuda.is_available(), \"CUDA is required for this MRE.\"\n\n device = \"cuda\"\n logging.info(f\"Running on {torch.cuda.get_device_name(0)}\")\n\n log_current_memory()\n for i in range(10):\n with tempfile.TemporaryDirectory() as tmp_workdir:\n one_iter(\n i=i,\n device=device,\n batch=32,\n in_dim=2048,\n hidden=512,\n out_dim=10,\n depth=6,\n workdir=tmp_workdir,\n )\n 
logging.info(\"Done.\")\n torch.cuda.synchronize()\n torch.cuda.empty_cache()\n gc.collect()\n log_current_memory()\n\n\nif __name__ == \"__main__\":\n main()\n```\nwill print\n```\n2025-11-12 09:14:08,074 INFO: Current CUDA memory usage:\n Total: 85.10 GB\n Allocated: 0.0 GB\n Reserved: 0.0000 GB\n/persist/envs/Fluyt312/lib/python3.12/site-packages/torch/_inductor/compile_fx.py:282: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.\n warnings.warn(\n2025-11-12 09:14:16,492 INFO: [iter 0] AOTI artifact: /tmp/tmpu7l62f87/mlp_0.pt2\n2025-11-12 09:14:16,493 INFO: Current CUDA memory usage:\n Total: 85.10 GB\n Allocated: 0.01825 GB\n Reserved: 0.0294 GB\n2025-11-12 09:14:20,846 INFO: [iter 1] AOTI artifact: /tmp/tmp_4ueq3r9/mlp_1.pt2\n2025-11-12 09:14:20,847 INFO: Current CUDA memory usage:\n Total: 85.10 GB\n Allocated: 0.02772 GB\n Reserved: 0.0336 GB\n2025-11-12 09:14:25,241 INFO: [iter 2] AOTI artifact: /tmp/tmpe3zldlu3/mlp_2.pt2\n2025-11-12 09:14:25,242 INFO: Current CUDA memory usage:\n Total: 85.10 GB\n Allocated: 0.03719 GB\n Reserved: 0.0608 GB\n2025-11-12 09:14:29,657 INFO: [iter 3] AOTI artifact: /tmp/tmptv6ucstj/mlp_3.pt2\n2025-11-12 09:14:29,657 INFO: Current CUDA memory usage:\n Total: 85.10 GB\n Allocated: 0.04667 GB\n Reserved: 0.0650 GB\n2025-11-12 09:14:36,116 INFO: [iter 4] AOTI artifact: /tmp/tmp_9hsaky4/mlp_4.pt2\n2025-11-12 09:14:36,117 INFO: Current CUDA memory usage:\n Total: 85.10 GB\n Allocated: 0.05614 GB\n Reserved: 0.0713 GB\n2025-11-12 09:14:40,528 INFO: [iter 5] AOTI artifact: /tmp/tmpi2q_wgap/mlp_5.pt2\n2025-11-12 09:14:40,529 INFO: Current CUDA memory usage:\n Total: 85.10 GB\n Allocated: 0.06561 GB\n Reserved: 0.0755 GB\n2025-11-12 09:14:44,982 INFO: [iter 6] AOTI artifact: /tmp/tmp_9xuabi5/mlp_6.pt2\n2025-11-12 09:14:44,982 INFO: Current CUDA memory usage:\n Total: 85.10 GB\n Allocated: 0.07508 GB\n Reserved: 0.0818 GB\n2025-11-12 09:14:49,412 INFO: [iter 7] AOTI artifact: /tmp/tmpeedfcd55/mlp_7.pt2\n2025-11-12 09:14:49,412 INFO: Current CUDA memory usage:\n Total: 85.10 GB\n Allocated: 0.08455 GB\n Reserved: 0.1070 GB\n2025-11-12 09:14:53,822 INFO: [iter 8] AOTI artifact: /tmp/tmpp2miv7ts/mlp_8.pt2\n2025-11-12 09:14:53,823 INFO: Current CUDA memory usage:\n Total: 85.10 GB\n Allocated: 0.09402 GB\n Reserved: 0.1132 GB\n2025-11-12 09:14:58,244 INFO: [iter 9] AOTI artifact: /tmp/tmp3mwylv5e/mlp_9.pt2\n2025-11-12 09:14:58,244 INFO: Current CUDA memory usage:\n", "url": "https://github.com/pytorch/pytorch/issues/167630", "state": "closed", "labels": [ "module: memory usage", "oncall: pt2", "oncall: export", "module: aotinductor" ], "created_at": "2025-11-12T09:23:03Z", "updated_at": "2025-11-19T03:42:13Z", "comments": 1, "user": "ben-da6" }, { "repo": "pytorch/pytorch", "number": 167624, "title": "\ud83d\udca1 Bounty Platform for PyTorch", "body": "Hi PyTorch team! \ud83d\udc4b\n\nI wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development.\n\n**What is Roxonn?**\n\u2705 Fund GitHub issues with crypto bounties (XDC, USDC, ROXN)\n\u2705 Notify 300+ AI/ML developers\n\u2705 Auto-pay when PRs merge via blockchain\n\u2705 Zero crypto setup needed\n\n**Quick flow:**\n1. Register repo (GitHub App)\n2. Fund pool with USDC (stable pricing)\n3. Assign bounties to features\n4. 
PR merged \u2192 automatic payment\n\n**Perfect for AI/ML:**\n- Access to research community\n- **Only 1% total platform fee**\n- Transparent payments\n\nLearn more: **https://roxonn.com**\n\n*No pressure - sharing a resource!*", "url": "https://github.com/pytorch/pytorch/issues/167624", "state": "closed", "labels": [], "created_at": "2025-11-12T07:49:51Z", "updated_at": "2025-11-13T12:35:34Z", "comments": 0, "user": "dineshroxonn" }, { "repo": "pytorch/pytorch", "number": 167613, "title": "UNSTABLE inductor-periodic / inductor-smoke-test / test (inductor_torchbench_smoketest_perf)", "body": "I can't figure out from the logs what is wrong\n\ncc @ezyang @gchanan @kadeng @msaroufim @mcarilli @eellison @penguinwu @BoyuanFeng @chauhang @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov @coconutruben @seemethere @malfet @pytorch/pytorch-dev-infra", "url": "https://github.com/pytorch/pytorch/issues/167613", "state": "closed", "labels": [ "high priority", "module: ci", "triaged", "module: cuda graphs", "oncall: pt2", "module: inductor", "unstable" ], "created_at": "2025-11-12T03:47:42Z", "updated_at": "2026-01-05T15:15:52Z", "comments": 6, "user": "zou3519" }, { "repo": "pytorch/pytorch", "number": 167596, "title": "[dynamo][feature] Guard on constants only if graph is specialized and not bytecode", "body": "### \ud83d\udc1b Describe the bug\n\nWhen Dynamo creates guards, it specializes not just for Fx graph, but also for residual bytecode. For example, in the following codebase, the graph is same, but the `summary` update leads to a recompilation. This causes unnecessary compile time issues. Is it possible to create guards only for those constants that actually end up changing graph? And somehow replay the constant variable compute in the resulting bytecode.\n\n```\nimport torch\n\nsummary = {}\n\nclass SubMod(torch.nn.Module):\n def __init__(self, name):\n super().__init__()\n self.name = name\n\n @torch.compile(backend=\"eager\")\n def forward(self, x):\n out = torch.sin(x)\n self.add_summary()\n return out\n\n def add_summary(self):\n global summary\n summary[self.name] = 0\n\nclass Mod(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.mod_a = SubMod(\"mod_a\")\n self.mod_b = SubMod(\"mod_b\")\n\n def forward(self, x):\n global summary\n summary = {}\n x = self.mod_a(x)\n x = self.mod_b(x)\n return x\n\nmod = Mod()\n\nx = torch.randn(4)\nmod(x)\nprint(summary)\n\nx = torch.randn(4)\nmod(x)\nprint(summary)\n\n```\n\n### Error logs\n\n_No response_\n\n### Versions\n\nNA\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela", "url": "https://github.com/pytorch/pytorch/issues/167596", "state": "open", "labels": [ "triaged", "enhancement", "oncall: pt2", "module: dynamo" ], "created_at": "2025-11-12T00:55:17Z", "updated_at": "2025-11-20T17:44:45Z", "comments": 2, "user": "anijain2305" }, { "repo": "pytorch/pytorch", "number": 167566, "title": "include string names of types in logs when dynamo guards on input types", "body": "When debugging recompile reasons in dynamo, it is convenient to look at a tlparse to understand what is causing recompiles.\n\nOne guard that dynamo has is a type_id guard, which guards on the id(type(x)) of an input. 
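For illustration, the gap between what the guard records and the string this issue asks to log (class name is hypothetical):

```python
class SamplingParams:            # hypothetical user-defined type reaching a compiled region
    pass

x = SamplingParams()
print(id(type(x)))               # e.g. 139872953164368 -- the opaque value a failed type_id guard shows
print(type(x).__qualname__)      # 'SamplingParams'     -- the readable name that would help in the tlparse
```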
In the tlparse, when one these guards fails it shows up as this:\n\n\"Image\"\n\nThis is not very easy to interpret - it would be great if dynamo can stash the string name of the type so it can include it in the tlparse.\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @amjames @Lucaskabela @jataylo @chenyang78", "url": "https://github.com/pytorch/pytorch/issues/167566", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: dynamo", "module: compile ux" ], "created_at": "2025-11-11T19:11:14Z", "updated_at": "2025-12-10T17:14:27Z", "comments": 0, "user": "bdhirsh" }, { "repo": "pytorch/pytorch", "number": 167560, "title": "naming of periodic-dynamo-benchmarks-cpu-test / test (cpu_inductor_amp_freezing_torchbench, 1, 2, linux.8xlarge.amx) seems wrong", "body": "Why is it a dynamo benchmark but also running cpu_inductor_amp_freezing ?\n\ncc @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/167560", "state": "open", "labels": [ "triaged", "module: benchmark", "oncall: pt2" ], "created_at": "2025-11-11T18:20:30Z", "updated_at": "2025-11-17T16:51:51Z", "comments": 0, "user": "zou3519" }, { "repo": "pytorch/pytorch", "number": 167558, "title": "per_page=1000 doesn't work in hud.pytorch.org", "body": "e.g. https://hud.pytorch.org/hud/pytorch/pytorch/main/31?per_page=50&mergeEphemeralLF=true\n\nWhatever I set it to, it seems to just be 50.\nMy use case is that I am trying to find the first date that a test began to fail. The test has been failing for weeks. I have to hit the next button a lot.\n\ncc @ZainRizvi @huydhn @clee2000", "url": "https://github.com/pytorch/pytorch/issues/167558", "state": "open", "labels": [ "triaged", "module: devx" ], "created_at": "2025-11-11T18:06:21Z", "updated_at": "2025-11-11T19:30:25Z", "comments": 1, "user": "zou3519" }, { "repo": "pytorch/pytorch", "number": 167540, "title": "[Dtensor]:change the test_mm shape from (12,8) * (8,16) to (512, 512) * (512, 512), throw assert error", "body": "### \ud83d\udc1b Describe the bug\n\nwhen I try to use (512, 512) * (512, 512) instead of the original shape in the testcase, it throw assert error.\n```python\n @with_comms\n def test_mm(self):\n device_mesh = self.build_device_mesh()\n shard0_spec = Shard(0)\n shard1_spec = Shard(1)\n replica_spec = Replicate()\n\n t1 = torch.randn(512, 512, requires_grad=True)\n t2 = torch.randn(512, 512, requires_grad=True)\n local_res = torch.mm(t1, t2)\n\n def test_placement_comb(\n placements1: list[Placement], placements2: list[Placement]\n ) -> None:\n dt1 = distribute_tensor(t1, device_mesh, placements1)\n dt2 = distribute_tensor(t2, device_mesh, placements2)\n dist_res: DTensor = cast(DTensor, torch.mm(dt1, dt2)).redistribute(\n device_mesh, [replica_spec]\n )\n self.assertEqual(dist_res.to_local(), local_res)\n # backward\n grad_dist_res = torch.ones_like(dist_res)\n dist_res.backward(grad_dist_res)\n self.assertIsNotNone(dt1.grad)\n\n placement_specs = [shard0_spec, shard1_spec, replica_spec]\n shard_specs_comb = list(itertools.product(placement_specs, placement_specs))\n for spec in shard_specs_comb:\n test_placement_comb([spec[0]], [spec[1]])\n```\n\nCUDA:12.8, driver 550.54.15\npytorch:2.9.0\n\n\n\n### Versions\n\nIs there anything that needs to be supplemented.\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @tianyu-l @XilunWu @SherlockNoMad", "url": "https://github.com/pytorch/pytorch/issues/167540", 
"state": "open", "labels": [ "oncall: distributed", "module: dtensor" ], "created_at": "2025-11-11T13:14:49Z", "updated_at": "2025-11-12T08:21:45Z", "comments": 2, "user": "zhanghanleo93" }, { "repo": "pytorch/pytorch", "number": 167526, "title": "Missing documentation for CUTLASS backend", "body": "### \ud83d\udcda The doc issue\n\nThe release notes of PyTorch 2.8.0 report \n\n> Inductor CUTLASS backend support\n\nBut it is missing information on how to activate/use that.\n\nThere are multiple NVIDIA PYPI packages that are related: nvidia-cutlass, nvidia-cutlass-dsl \nAnd there is the CUTLASS repository on GitHub included under the `third_party` submodule folder.\n\nFor PyTorch 2.9.0 the submodule points to the CUTLASS 4.1.0 tag, but there is no corresponding release of nvidia-cutlass on PYPI, but there is one for nvidia-cutlass-dsl.\n\nSimilar for PyTorch 2.8.0 it points to v3.9.2 but neither nvidia PYPI package has a corresponding release.\n\nThere is a message \"Please check whether _inductor.config.cuda.cutlass_dir [...] is set correctly\" but no information what that is supposed to. In the source then env variable `TORCHINDUCTOR_CUTLASS_DIR` can be found but that is nowhere mentioned in the docs either. See this 2 searches:\n\n- https://docs.pytorch.org/docs/stable/search.html?q=TORCHINDUCTOR_CUTLASS_DIR\n- https://docs.pytorch.org/docs/stable/search.html?q=_inductor.config.cuda.cutlass_dir\n\n### Suggest a potential alternative/fix\n\nAdd documentation how to use CUTLASS:\n- Requirements\n- Setup\n- Expected results/example/tutorial\n\ncc @svekars @sekyondaMeta @AlannaBurke @ptrblck @msaroufim @eqy @jerryzh168 @tinglvv", "url": "https://github.com/pytorch/pytorch/issues/167526", "state": "open", "labels": [ "module: docs", "module: cuda", "triaged" ], "created_at": "2025-11-11T08:33:22Z", "updated_at": "2025-12-17T15:25:44Z", "comments": 1, "user": "Flamefire" }, { "repo": "pytorch/pytorch", "number": 167499, "title": "check_compiler_is_gcc() fails to detect versioned GCC compilers (g++-13, g++-14, etc.)", "body": "### \ud83d\udc1b Describe the bug\n\n\ud83d\udc1b Describe the bug\n\nThe torch.utils.cpp_extension.check_compiler_is_gcc() function only returns True when the compiler basename is exactly 'c++', failing to detect other GCC variants like g++, gcc, g++-13, g++-14, etc.\n\nThis affects any PyTorch functionality that relies on GCC detection, causing features to be silently disabled or tests to be incorrectly skipped on systems using versioned GCC compilers.\n\nHow to reproduce:\n\nOn a system where the default C++ compiler is g++-13 (common on Fedora/RHEL):\n\n```python\nfrom torch.utils.cpp_extension import get_cxx_compiler, check_compiler_is_gcc\n\ncompiler = get_cxx_compiler() # Returns /usr/bin/g++-13\nresult = check_compiler_is_gcc(compiler)\n\nprint(f\"Compiler: {compiler}\")\nprint(f\"Detected as GCC: {result}\") # False (incorrect!)\n```\n\nExpected result: Detected as GCC: True\nActual result: Detected as GCC: False\n\nRoot cause:\n\nIn torch/utils/cpp_extension.py, the check_compiler_is_gcc() function only checks if the compiler basename is exactly 'c++':\n\n```python\ncompiler_path = os.path.realpath(results[0].strip())\nif os.path.basename(compiler_path) == 'c++' and 'gcc version' in version_string:\n return True\nreturn False\n```\n\nImpact:\n- Any feature/test that uses check_compiler_is_gcc() will fail to detect GCC on systems with versioned compilers\n- GCC-specific optimizations or features may be silently disabled\n\nEnvironment:\n- PyTorch version: 
main/viable/strict\n- OS: Fedora/RHEL with versioned GCC\n- Compiler: g++-13, g++-14, or similar\n\nI am working on a PR to fix this issue.\n\n\n### Versions\n\nCollecting environment information...\nPyTorch version: N/A\nIs debug build: N/A\nCUDA used to build PyTorch: N/A\nROCM used to build PyTorch: N/A\n\nOS: CentOS Stream 9 (x86_64)\nGCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11)\nClang version: Could not collect\nCMake version: version 3.26.5\nLibc version: glibc-2.34\n\nPython version: 3.9.23 (main, Aug 19 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (64-bit runtime)\nPython platform: Linux-5.14.0-615.el9.x86_64-x86_64-with-glibc2.34\nIs CUDA available: N/A\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: \nGPU 0: NVIDIA H200\nGPU 1: NVIDIA H200\nGPU 2: NVIDIA H200\nGPU 3: NVIDIA H200\nGPU 4: NVIDIA H200\nGPU 5: NVIDIA H200\nGPU 6: NVIDIA H200\nGPU 7: NVIDIA H200\n\nNvidia driver version: 580.82.07\ncuDNN version: Probably one of the following:\n/usr/lib64/libcudnn.so.9.13.0\n/usr/lib64/libcudnn_adv.so.9.13.0\n/usr/lib64/libcudnn_cnn.so.9.13.0\n/usr/lib64/libcudnn_engines_precompiled.so.9.13.0\n/usr/lib64/libcudnn_engines_runtime_compiled.so.9.13.0\n/usr/lib64/libcudnn_graph.so.9.13.0\n/usr/lib64/libcudnn_heuristic.so.9.13.0\n/usr/lib64/libcudnn_ops.so.9.13.0\nIs XPU available: N/A\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: N/A\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 160\nOn-line CPU(s) list: 0-159\nVendor ID: GenuineIntel\nModel name: Intel Xeon Processor (SapphireRapids)\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 40\nSocket(s): 2\nStepping: 4\nBogoMIPS: 4200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities\nVirtualization: VT-x\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 5 MiB (160 instances)\nL1i cache: 5 MiB (160 instances)\nL2 cache: 320 MiB (80 instances)\nL3 cache: 32 MiB (2 instances)\nNUMA node(s): 2", "url": "https://github.com/pytorch/pytorch/issues/167499", "state": "closed", "labels": [ "module: cpp-extensions" ], "created_at": "2025-11-11T01:11:22Z", "updated_at": "2025-11-11T05:14:08Z", "comments": 0, "user": "razaaliraza" }, { "repo": "pytorch/pytorch", "number": 167480, "title": "OS command injection via torch.utils.cpp_extension precompiled-header build (use_pch path)", "body": "**Summary**\nThere is an OS command injection risk in `torch/utils/cpp_extension.py` in the precompiled-header build helper. 
The helper constructs a compiler command including user-supplied values (e.g., `extra_cflags`, `extra_include_paths`) and executes the command via `subprocess.check_output(..., shell=True)`. If untrusted input reaches those parameters (for example, a service calling `load_inline(..., extra_cflags=..., use_pch=True)` with user-provided flags), an attacker can inject shell metacharacters and execute arbitrary commands as the Python process user.\n\nOriginal Thread : https://github.com/pytorch/pytorch/security/advisories/GHSA-gfrj-f355-6v3r#advisory-comment-139444\n\n**Affected versions**\n- Introduced in Aug 2023 (commit `5ed6047`, PR #106696).\n- Present in PyTorch releases starting with 2.1 and later main/nightly as of 2025.\n\n**Severity**\n- Meta Security and @malfet asked me to file me as general bug. Exact comment from Security Issue \n\n```\n _ @sumantro93 your example highlights the point I was trying to make: it's not framework's responsibility to sanitize the inputs.\n\nFor example, In the sample that you've shared, one is allowed to compile and run any untrusted code, which is a huge security issue on its own, so even if this issue is fixed, one is already allowed to execute arbitrary code on the host by the Flask endpoint developer.\n\nClosing, but please do not hesitate to report it as regular issue or propose a pull request that would sanitize the inputs _\n``` \n\n**Technical details & PoC**\n- The vulnerable pattern constructs a single command string and calls: `subprocess.check_output(cmd_string, shell=True, stderr=subprocess.STDOUT)`.\n- Proof-of-concept (local):\n```python\n# repro_pch_bug.py\nimport os, subprocess\ndef build_precompile_header(pch_cmd):\n try:\n subprocess.check_output(pch_cmd, shell=True, stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n print('Error:', e)\n\npayload = \"false; echo vulnerable > /tmp/pch_exploit\"\nbuild_precompile_header(payload)\nprint('Exploit file exists?', os.path.exists('/tmp/pch_exploit'))\n\n\ncc @janeyx99 @malfet", "url": "https://github.com/pytorch/pytorch/issues/167480", "state": "closed", "labels": [ "module: cpp-extensions", "module: error checking", "triaged", "actionable" ], "created_at": "2025-11-10T20:36:14Z", "updated_at": "2025-11-11T07:27:44Z", "comments": 1, "user": "sumantro93" }, { "repo": "pytorch/pytorch", "number": 167467, "title": "Tensor creation documentation: example code not consistent with its description", "body": "https://docs.pytorch.org/cppdocs/notes/tensor_creation.html#configuring-properties-of-the-tensor says \u201cHere is an example of creating a `TensorOptions` object that represents a **64-bit float**, strided tensor that requires a gradient, and lives on CUDA device 1\u201d, but then calls `.dtype(torch::kFloat32)`.\n\ncc @svekars @sekyondaMeta @AlannaBurke", "url": "https://github.com/pytorch/pytorch/issues/167467", "state": "open", "labels": [ "module: docs", "triaged", "actionable" ], "created_at": "2025-11-10T15:00:11Z", "updated_at": "2025-11-10T21:04:17Z", "comments": 0, "user": "sboukortt" }, { "repo": "pytorch/pytorch", "number": 167459, "title": "Dynamic number of omp threads of torch.compile cache", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIt looks like torch.compile hardcodes the number of omp threads in the cache. I can see things like `#pragma omp parallel num_threads(8)` in the cache. And if different number threads is used the performance is much worse. Is it possible to make it compatible for different number of threads? 
It's quite useful when running on HPC. Naively it sounds to be something very simple. Hopefully one just need to change `#pragma omp parallel num_threads(xxx)` to `#pragma omp parallel`?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/167459", "state": "open", "labels": [ "triaged", "oncall: pt2", "oncall: cpu inductor" ], "created_at": "2025-11-10T10:24:23Z", "updated_at": "2025-12-22T19:49:32Z", "comments": 1, "user": "SUSYUSTC" }, { "repo": "pytorch/torchtitan", "number": 2008, "title": "On the TorchTitan Infrastructure Build-out (VLM)", "body": "In the past, I\u2019ve always trained models with the Lightning framework; now I\u2019d like to switch to a more efficient one (TorchTitan or Megatron). However, I\u2019ve run into a few questions and would appreciate your advice:\nCan I simply import the encoder part straight from Hugging Face Transformers? (In VLM, the encoder usually accounts for only a small fraction of the parameters, so in my view it doesn\u2019t need tensor-parallelism, etc.)", "url": "https://github.com/pytorch/torchtitan/issues/2008", "state": "open", "labels": [ "question" ], "created_at": "2025-11-09T15:03:35Z", "updated_at": "2025-11-10T09:56:00Z", "user": "Joluck" }, { "repo": "pytorch/pytorch", "number": 167412, "title": "How can I train in C++ using a Pytorch torchscript model", "body": "### \ud83d\udc1b Describe the bug\n\ndd\n\n### Versions\n\nI trained a model in the PyTorch, and then saved it to Torchscript format using torch.jit.save.\nNow, I want to retrain on this model. I have a question about whether the torchscript model can be used for training.\n\n I have a few different questions about how to train the Torchscript model in C++.\nI want to use a trained model for fine tuning. I generated the Torchscript model in pytorch. In C++ API, I load the model using torch::jit::load function. And then I want to retrain the model.\nIn my code:\ntorch::jit::script::Module m_model = torch::jit::load(m_modulePath);\ntorch::optim::SGD optimizer(m_model.parameters(), SGDoptions);\n\nWhen I set up the optimizer, I was told that the first parameter was incorrect.\n\ncc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel", "url": "https://github.com/pytorch/pytorch/issues/167412", "state": "open", "labels": [ "oncall: jit" ], "created_at": "2025-11-08T13:11:14Z", "updated_at": "2025-11-10T19:11:24Z", "comments": 1, "user": "mullerhai" }, { "repo": "pytorch/ao", "number": 3314, "title": "Loading 8bit optimizer state from checkpoint causes dtype mismatch", "body": "We are using torch2.8. Optimizer states are quantized to [8bit](https://github.com/pytorch/ao/blob/main/torchao/optim/subclass_8bit.py). Normal training jobs are fine, but jobs that resume from checkpoint fail at `optimizer.step()`. We use AdamW optimizer copied from some older version of torch/torchao, where computation is done at fp32 precision:\n```\nexp_avg_f32 = exp_avg.float().lerp(grad_f32, 1 - beta1)\n```\n\nThis fails with error that indicates `exp_avg.float()` is somehow bf16. 
\n```\ntorch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_method lerp(*(DTensor(local_tensor=OptimState8bit(signed=True, block_size=256, shape=(1408, 2048), device=cuda:0, requires_grad=False), device_mesh=DeviceMesh('cuda', [0], mesh_dim_names=('fsdp_cp',)), placements=(Shard(dim=0),)), DTensor(local_tensor=FakeTensor(..., device='cuda:0', size=(1408, 2048)), device_mesh=DeviceMesh('cuda', [0], mesh_dim_names=('fsdp_cp',)), placements=(Shard(dim=0),)), 0.09999999999999998), **{}): got RuntimeError('expected dtype torch.bfloat16 for `end`, but got dtype torch.float32')\n\nfrom user code:\n File \"/traindata/yunfan/lotus/lotus/components/optim/adamw.py\", line 165, in single_param_adam\n exp_avg_f32 = exp_avg_f32.lerp(grad_f32, 1 - beta1)\n```\n\nThe casting in load_state_dict() is suspicious that it converts state values like exp_avg to bf16 to match model weights' precision. So I tried to make both `DTensor` wrapper and `OptimState8bit` local tensor converted to fp32 if they appear to be bf16 after checkpoint loading, and added assert statement before `lerp()` to make sure `exp_avg.float()`'s dtype is fp32. But these efforts don't help. It seems somewhere in DTensor operation bf16 is enforced without triggering the assert statement. Can I get help on understanding the behavior and making correct fix? Thanks in advance!\n\nBelow is more detailed stacktrace:\n```\nTraceback (most recent call last):\n File \"/traindata/yunfan/lotus/lotus/grpo.py\", line 1051, in \n recipe_main()\n File \"/traindata/yunfan/lotus/lotus/utils/config.py\", line 184, in wrapper\n recipe_main(conf)\n File \"/traindata/yunfan/lotus/lotus/grpo.py\", line 1046, in recipe_main\n recipe.train()\n File \"/traindata/yunfan/lotus/lotus/grpo.py\", line 813, in train\n step_output = self.train_step(\n ^^^^^^^^^^^^^^^^\n File \"/traindata/yunfan/lotus/lotus/grpo.py\", line 694, in train_step\n self._optimizer.step()\n File \"/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/optim/lr_scheduler.py\", line 133, in wrapper\n return func.__get__(opt, opt.__class__)(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/optim/optimizer.py\", line 516, in wrapper\n out = func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py\", line 120, in decorate_context\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/traindata/yunfan/lotus/lotus/components/optim/adamw.py\", line 166, in step\n adamw8bit_step_helper(self, self.param_groups, self._new_buffer, self.bf16_stochastic_round, self.is_adamw)\n File \"/traindata/yunfan/lotus/lotus/components/optim/adamw.py\", line 280, in adamw8bit_step_helper\n single_param_adam(\n File \"/traindata/yunfan/lotus/lotus/components/optim/adamw.py\", line 208, in single_param_adam\n exp_avg_f32 = exp_avg_float.lerp(grad_f32, 1 - beta1)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/_compile.py\", line 53, in inner\n return disable_fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py\", line 929, in _fn\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_api.py\", line 350, in __torch_dispatch__\n return 
DTensor._op_dispatcher.dispatch(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_dispatch.py\", line 154, in dispatch\n self.sharding_propagator.propagate(op_info)\n File \"/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py\", line 266, in propagate\n OutputSharding, self.propagate_op_sharding(op_info.schema)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py\", line 45, in __call__\n return self.cache(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py\", line 279, in propagate_op_sharding_non_cached\n out_tensor", "url": "https://github.com/pytorch/ao/issues/3314", "state": "open", "labels": [ "optimizer", "triaged" ], "created_at": "2025-11-08T00:27:00Z", "updated_at": "2025-12-05T01:12:07Z", "comments": 6, "user": "yz-ppl" }, { "repo": "pytorch/pytorch", "number": 167369, "title": "Dynamo fails to trace repr", "body": "### \ud83d\udc1b Describe the bug\n\n```python\nimport torch\nimport torch.nn as nn\n\n\nclass Config:\n def __repr__(self):\n return \"Config()\"\n\n\ndef forward(x, config):\n # Calling repr() on non-constant user object\n # This triggers the bug without the fix\n return x * len(repr(config))\n\n\nconfig = Config()\nx = torch.randn(2, 2)\n\ncompiled = torch.compile(forward, fullgraph=True)\n```\n\nErrors with:\n```\nUnsupported: Failed to trace builtin operator\n Explanation: Dynamo does not know how to trace builtin operator `repr` with argument types ['Config'] (has_kwargs False)\n Hint: Avoid calling builtin `repr` with argument types ['Config']. Consider using an equivalent alternative function/method to `repr`.\n Hint: If you are attempting to call a logging function (e.g. `print`), you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.\n Hint: Please report an issue to PyTorch.\n\n Developer debug context: builtin repr [] False\n```\n\n### Versions\n\nmain\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela", "url": "https://github.com/pytorch/pytorch/issues/167369", "state": "closed", "labels": [ "oncall: pt2", "module: dynamo" ], "created_at": "2025-11-07T22:02:51Z", "updated_at": "2025-11-10T21:06:41Z", "comments": 0, "user": "tugsbayasgalan" }, { "repo": "pytorch/pytorch", "number": 167344, "title": "UnboundLocalError: cannot access local variable 'tracer_output' where it is not a ssociated with a value", "body": "(Worker_TP1 pid=243560) ERROR 11-07 10:44:16 [multiproc_executor.py:699] if tracer_output:\n(Worker_TP1 pid=243560) ERROR 11-07 10:44:16 [multiproc_executor.py:699] ^^^^^^^^^^^^^\n(Worker_TP1 pid=243560) ERROR 11-07 10:44:16 [multiproc_executor.py:699] UnboundLocalError: cannot access local variable 'tracer_output' where it is not associated with a value\n\nOnly in 2.9.0. 
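For context, this is the standard Python pitfall of a name that is only bound when the `try` body succeeds but is read unconditionally afterwards; a self-contained sketch with hypothetical names (not the actual `convert_frame.py` code):

```python
def compile_frame(should_fail: bool):
    try:
        if should_fail:
            raise RuntimeError("tracing failed")
        tracer_output = "traced"      # only bound on the success path
    except RuntimeError:
        pass                          # exception swallowed, name stays unbound
    if tracer_output:                 # UnboundLocalError when should_fail=True
        return tracer_output

compile_frame(should_fail=True)
```

Pre-assigning `tracer_output = None` before the `try` block is the usual fix.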
Can we fix for 2.9.1?\n\nhttps://github.com/pytorch/pytorch/blame/release/2.9/torch/_dynamo/convert_frame.py#L1473\n\ncc @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/167344", "state": "closed", "labels": [ "oncall: pt2" ], "created_at": "2025-11-07T18:48:42Z", "updated_at": "2025-11-07T22:31:29Z", "user": "zou3519" }, { "repo": "pytorch/pytorch", "number": 167331, "title": "[TEST FAILURE UT] TestForeachCUDA.test_foreach_copy_with_multi_dtypes_large_input_cuda fails", "body": "**TDLR** for_each test fails when ran with: \n`TEST_CONFIG=default python3 test/run_test.py --verbose --keep-going -i test_foreach`\n\nAdding @serialTest() decorator to the test function `test_foreach_copy_with_multi_dtypes_large_input` fixes this issue.\n\n```\n_____ TestForeachCUDA.test_foreach_copy_with_multi_dtypes_large_input_cuda _____\nTraceback (most recent call last):\n File \"/pytorch/torch/testing/_comparison.py\", line 1289, in not_close_error_metas\n pair.compare()\n File \"/pytorch/torch/testing/_comparison.py\", line 740, in compare\n self._compare_values(actual, expected)\n File \"/pytorch/torch/testing/_comparison.py\", line 898, in _compare_values\n compare_fn(\n File \"/pytorch/torch/testing/_comparison.py\", line 1077, in _compare_regular_values_close\n matches = torch.isclose(\ntorch.OutOfMemoryError: CUDA out of memory. Tried to allocate 8.00 GiB. GPU 0 has a total capacity of 139.80 GiB of which 94.40 GiB is free. Process 32188 has 518.00 MiB memory in use. Process 32189 has 518.00 MiB memory in use. Process 32190 has 518.00 MiB memory in use. Including non-PyTorch memory, this process has 39.76 GiB memory in use. Process 33858 has 518.00 MiB memory in use. Process 33860 has 518.00 MiB memory in use. Process 33859 has 518.00 MiB memory in use. Process 34062 has 520.00 MiB memory in use. Process 35455 has 518.00 MiB memory in use. Process 35453 has 518.00 MiB memory in use. Process 35454 has 518.00 MiB memory in use. Process 35670 has 520.00 MiB memory in use. 46.13 GiB allowed; Of the allocated memory 39.00 GiB is allocated by PyTorch, and 12.00 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/pytorch/test/test_foreach.py\", line 1376, in test_foreach_copy_with_multi_dtypes_large_input\n self.assertEqual(self_tensor, ref_out)\n File \"/pytorch/torch/testing/_internal/common_utils.py\", line 4139, in assertEqual\n error_metas = not_close_error_metas(\n File \"/pytorch/torch/testing/_comparison.py\", line 1295, in not_close_error_metas\n raise RuntimeError(\nRuntimeError: Comparing\n\nTensorOrArrayPair(\n id=(),\n actual=tensor([1., 1., 1., ..., 1., 1., 1.], device='cuda:0'),\n expected=tensor([1., 1., 1., ..., 1., 1., 1.], device='cuda:0'),\n rtol=1.3e-06,\n atol=1e-05,\n equal_nan=True,\n check_device=False,\n check_dtype=True,\n check_layout=False,\n check_stride=False,\n)\n\nresulted in the unexpected exception above. If you are a user and see this message during normal operation please file an issue at https://github.com/pytorch/pytorch/issues. 
If you are a developer and working on the comparison functions, please except the previous error and raise an expressive `ErrorMeta` instead.\n\nTo execute this test, run the following from the base repo dir:\n python test/test_foreach.py TestForeachCUDA.test_foreach_copy_with_multi_dtypes_large_input_cuda\n\nThis message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0\n```\n\n\nI can send a pr if that is okay.\n\ncc @crcrpar @mcarilli @janeyx99", "url": "https://github.com/pytorch/pytorch/issues/167331", "state": "open", "labels": [ "triaged", "actionable", "module: mta" ], "created_at": "2025-11-07T17:09:56Z", "updated_at": "2025-11-07T17:16:33Z", "comments": 2, "user": "arkadip-maitra" }, { "repo": "pytorch/pytorch", "number": 167304, "title": "RPC cannot run in jetson orin because of the specific uuid of orin", "body": "### \ud83d\udc1b Describe the bug\n\nWhen run RPC demo in jetson orin, the uuid issue were shown as below:\n\ntensorpipe/channel/cuda_ipc/context_impl.cc:65 \"uuidStr.substr(0, 4) != \"GPU-\"Couldn\u2019t obtain valid UUID for GPU #0 from CUDA driver.\n\nThe uuid of jetson does not begin with characters \u201cGPU-\u201d like RTX series, the failure message will appear at once.\n\nI think that tensorpipe didnot support jetson because of the specific characters \u201cGPU-\u201c check, and i do not know how to run RPC in jetson. How should i do to solve that. Thanks.\n\n### Versions\n\nWhen run RPC demo in jetson orin, the uuid issue were shown as below:\n\ntensorpipe/channel/cuda_ipc/context_impl.cc:65 \"uuidStr.substr(0, 4) != \"GPU-\"Couldn\u2019t obtain valid UUID for GPU #0 from CUDA driver.\n\nThe uuid of jetson does not begin with characters \u201cGPU-\u201d like RTX series, the failure message will appear at once.\n\nI think that tensorpipe didnot support jetson because of the specific characters \u201cGPU-\u201c check, and i do not know how to run RPC in jetson. How should i do to solve that. Thanks.\n@scw @svenstaro @JackDanger @infil00p \n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @ptrblck @eqy @jerryzh168 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @jjlilley @osalpekar @jiayisuse @mrzzd", "url": "https://github.com/pytorch/pytorch/issues/167304", "state": "open", "labels": [ "oncall: distributed", "module: cuda", "module: rpc" ], "created_at": "2025-11-07T09:20:00Z", "updated_at": "2025-11-07T15:33:35Z", "comments": 0, "user": "mamba824824" }, { "repo": "pytorch/torchrec", "number": 3525, "title": "Could Torchrec support PyTorch's PrivateUse1 Dispatch Key?", "body": "Hello,\n\nI've noticed that there are many conditional checks like if device.type == \"cuda\" in our TorchRec codebase. Without modifying TorchRec's source code, such fixed conditional logic might not be flexible enough to conveniently support third-party devices. From what I understand, PyTorch has introduced the PrivateUse1 DispatchKey to address third-party device extension issues. I'd like to ask if our TorchRec repository could add support for PyTorch's PrivateUse1 DispatchKey? 
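As an illustration of what such support could look like (a sketch only, not TorchRec's actual code): the hard-coded `device.type == "cuda"` branches could consult a configurable set of accelerator device types, where `"privateuseone"` is the default name PyTorch assigns to the PrivateUse1 key and third-party backends may rename it via `torch.utils.rename_privateuse1_backend`:

```python
import torch

# Hypothetical helper: a pluggable device-type check instead of scattered
# `device.type == "cuda"` comparisons.
ACCELERATOR_DEVICE_TYPES = {"cuda", "privateuseone"}

def is_accelerator(device: torch.device) -> bool:
    return device.type in ACCELERATOR_DEVICE_TYPES

print(is_accelerator(torch.device("cuda")))  # True
print(is_accelerator(torch.device("cpu")))   # False
```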
This would enable third-party devices to seamlessly adapt TorchRec's functionality through PrivateUse1 without requiring code modifications.", "url": "https://github.com/meta-pytorch/torchrec/issues/3525", "state": "open", "labels": [], "created_at": "2025-11-07T07:17:42Z", "updated_at": "2026-01-05T22:39:04Z", "comments": 1, "user": "kwgqjj" }, { "repo": "pytorch/pytorch", "number": 167291, "title": "[FSDP] Support param step with fp32", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIn Megatron, we can keep a fp32 version of params. while doing optimizer.step, the gradient is used to update the fp32 version of params, and the cast the fp32 param to fp16 version. Can we do this in FSDP?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci", "url": "https://github.com/pytorch/pytorch/issues/167291", "state": "open", "labels": [ "oncall: distributed" ], "created_at": "2025-11-07T04:37:48Z", "updated_at": "2025-11-07T15:34:42Z", "comments": 0, "user": "yikaizhu-baseten" }, { "repo": "pytorch/pytorch", "number": 167276, "title": "Dynamo Fails to Trace Python Built-in Function print in Compile Mode", "body": "### \ud83d\udc1b Describe the bug\n\nDescription\uff1a\nWhen running a PyTorch model in Compile mode with torch.compile(), the Dynamo tracing mechanism fails to trace the Python built-in print() function, resulting in the following error.\ncode:\n```\nimport torch\nimport torch.nn as nn\n\nclass SimpleModel(nn.Module):\n def forward(self, x):\n print(f'Input stats - min: {min(x.flatten())}, max: {max(x.flatten())}, mean: {sum(x.flatten()) / len(x.flatten())}')\n return x\n\ndef run_eager_and_compile():\n device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n model = SimpleModel().to(device)\n x = torch.randn(2, 3, device=device)\n\n try:\n print(\"Running in Eager mode\")\n out_eager = model(x)\n print(\"Eager output:\", out_eager)\n except Exception as e:\n print(\"Eager error:\", e)\n\n try:\n print(\"Running in Compile mode\")\n compiled_model = torch.compile(model, fullgraph=True)\n out_compile = compiled_model(x)\n print(\"Compile output:\", out_compile)\n except Exception as e:\n print(\"Compile error:\", e)\n\nif __name__ == \"__main__\":\n run_eager_and_compile()\n```\noutput:\n```\nRunning in Eager mode\nInput stats - min: -0.002417617244645953, max: 1.318856120109558, mean: 0.6973526477813721\nEager output: tensor([[ 1.2410, 0.0111, 1.3189],\n [ 1.3116, 0.3040, -0.0024]])\nRunning in Compile mode\nCompile error: Failed to trace builtin operator\n Explanation: Dynamo does not know how to trace builtin operator `print` with argument types [''] (has_kwargs False)\n Hint: Avoid calling builtin `print` with argument types ['']. Consider using an equivalent alternative function/method to `print`.\n Hint: If you are attempting to call a logging function (e.g. `print`), you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.\n Hint: Please report an issue to PyTorch.\n\n Developer debug context: builtin print [] False\n\n\nfrom user code:\n line 6, in forward\n print(f'Input stats - min: {min(x.flatten())}, max: {max(x.flatten())}, mean: {sum(x.flatten()) / len(x.flatten())}')\n\nSet TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). 
For even more developer context, set TORCH_LOGS=\"+dynamo\"\n\n```\n\n### Versions\n\nPyTorch version: 2.7.1+cu126\nIs debug build: False\nCUDA used to build PyTorch: 12.6\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.1 LTS (x86_64)\nGCC version: (Ubuntu 9.5.0-6ubuntu2) 9.5.0\nClang version: Could not collect\nCMake version: version 4.0.3\nLibc version: glibc-2.39\n\nPython version: 3.9.7 (default, Jul 16 2025, 16:34:47) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39\nIs CUDA available: False\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU\nNvidia driver version: 580.65.06\ncuDNN version: Could not collect\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 39 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: GenuineIntel\nModel name: Intel(R) Core(TM) i9-14900HX\nCPU family: 6\nModel: 183\nThread(s) per core: 2\nCore(s) per socket: 24\nSocket(s): 1\nStepping: 1\nCPU(s) scaling MHz: 31%\nCPU max MHz: 5800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4838.40\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 896 KiB (24 instances)\nL1i cache: 1.3 MiB (24 instances)\nL2 cache: 32 MiB (12 instances)\nL3 cache: 36 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-31\nVulnerability Gather data sampling: Not affected\nVulnerability Ghostwrite: Not affected\nVulnerability Indirect target selection: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file dat", "url": "https://github.com/pytorch/pytorch/issues/167276", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: dynamo" ], "created_at": "2025-11-07T01:34:42Z", "updated_at": "2025-11-18T19:05:11Z", "comments": 2, "user": "Blooming-Tree" }, { "repo": "pytorch/pytorch", "number": 167266, "title": "TorchDynamo Tracing Error: Unable to Trace Builtin bool() Operator on Tensor", "body": "### \ud83d\udc1b Describe the bug\n\nDescription\nWhen compiling a model with torch.compile, TorchDynamo fails to trace the builtin bool() operator when applied to PyTorch tensors, resulting in a compilation error.\nError Details:\n\nError Type: Tracing failure for builtin operator\n\nFailed Operation: bool operator applied to Tensor\n\nSpecific Code: make_causal = 
bool((mask == 0).all())\n\nError Message: \"Dynamo does not know how to trace builtin operator bool with argument types ['Tensor']\"\ncode:\n```\nimport torch\nimport torch.nn as nn\nimport torch._dynamo\n\ntorch._dynamo.config.suppress_errors = False\ntorch._dynamo.config.verbose = True\n\nclass BoolTensorModel(nn.Module):\n def forward(self, x, mask):\n make_causal = bool((mask == 0).all())\n print(f\"[Forward] make_causal={make_causal}\")\n return x + 1\n\ndef main():\n x = torch.randn(2, 3)\n mask = torch.zeros(2, 3)\n\n model = BoolTensorModel()\n\n eager_out = model(x, mask)\n print(\"Eager mode output shape::\\n\", eager_out)\n\n try:\n compiled_model = torch.compile(model, fullgraph=True)\n compile_out = compiled_model(x, mask)\n print(\"Compiled mode output shape:\\n\", compile_out)\n except Exception as e:\n print(\"Compile error:\\n\", e)\n\nif __name__ == \"__main__\":\n main()\n\n```\n\noutput:\n```\n[Forward] make_causal=True\nEager mode output shape:\n tensor([[-0.0879, 1.7579, 1.2001],\n [ 2.2467, 2.0874, 0.1205]])\n\nCompile error:\n Failed to trace builtin operator\n Explanation: Dynamo does not know how to trace builtin operator `bool` with argument types ['Tensor'] (has_kwargs False)\n Hint: Avoid calling builtin `bool` with argument types ['Tensor']. Consider using an equivalent alternative function/method to `bool`.\n Hint: If you are attempting to call a logging function (e.g. `print`), you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.\n Hint: Please report an issue to PyTorch.\n\n Developer debug context: builtin bool [] False\n\n\nfrom user code:\n line 10, in forward\n make_causal = bool((mask == 0).all())\n\n```\n\n### Versions\n\nPyTorch version: 2.7.1+cu126\nIs debug build: False\nCUDA used to build PyTorch: 12.6\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.1 LTS (x86_64)\nGCC version: (Ubuntu 9.5.0-6ubuntu2) 9.5.0\nClang version: Could not collect\nCMake version: version 4.0.3\nLibc version: glibc-2.39\n\nPython version: 3.9.7 (default, Jul 16 2025, 16:34:47) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39\nIs CUDA available: False\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU\nNvidia driver version: 580.65.06\ncuDNN version: Could not collect\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 39 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: GenuineIntel\nModel name: Intel(R) Core(TM) i9-14900HX\nCPU family: 6\nModel: 183\nThread(s) per core: 2\nCore(s) per socket: 24\nSocket(s): 1\nStepping: 1\nCPU(s) scaling MHz: 31%\nCPU max MHz: 5800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4838.40\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt 
sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 896 KiB (24 instances)\nL1i cache: 1.3 MiB (24 instances)\nL2 cache: 32 MiB (12 instances)\nL3 cache: 36 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-31\nVulnerability Gather data sampling: Not affected\nVulnerability Ghostwrite: Not affected\nVulnerability Indirect target selection: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Mitigation; Clear Register File\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerabilit", "url": "https://github.com/pytorch/pytorch/issues/167266", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: dynamo", "dynamo-triage-dec2025" ], "created_at": "2025-11-07T00:26:30Z", "updated_at": "2025-12-24T03:49:22Z", "comments": 1, "user": "Blooming-Tree" }, { "repo": "pytorch/pytorch", "number": 167242, "title": "CUDNN version in nightly pytorch 2.10.0 builds", "body": "Hi, I mainly use pytorch with ComfyUI. I know there is an issue with pytorch and CUDNN for which there have been made workarounds in ComfyUI code.\n\nI have seen here https://github.com/pytorch/pytorch/issues/166122 that CUDNN 9.15 solves the problem (from I can understand, as I'm not a developer). Checking today's torch nightly 2.10.0+cu130 for Windows it shows, if I'm not mistaken, CUDNN version 9.12:\n\n```\n>>> import torch\n>>> print(torch.__version__)\n2.10.0.dev20251106+cu130\n>>> torch.backends.cudnn.version()\n91200\n```\nMy question is: when is to be expected to see CUDNN v 9.15 in the nightly 2.10.0+cu130 builds?\n\nAnd another question is: seeing that today CUDNN 9.15 is available from nvidia (in fact is already downloaded and installed on my computer) is there a way to use 9.15 in the current torch build, as this comment suggests?\nhttps://github.com/pytorch/pytorch/issues/166122#issuecomment-3487979692\n\n> We have aligned not to bump this for the minor version release; as a workaround, we encourage users to manually install cudnn 9.15+ if they want to work around\n\nMy apologies, if I ask, maybe, trivial questions.\n\ncc @seemethere @malfet @atalman @csarofeen @ptrblck @xwang233 @eqy", "url": "https://github.com/pytorch/pytorch/issues/167242", "state": "open", "labels": [ "module: binaries", "module: cudnn", "triaged" ], "created_at": "2025-11-06T20:16:08Z", "updated_at": "2025-11-30T16:25:21Z", "comments": 13, "user": "jovan2009" }, { "repo": "pytorch/ao", "number": 3305, "title": "[MXFP8 MoE] What's the expected inference solution on H100s, after training with TorchAO MXFP8 MoE?", "body": "Hi team,\n\nThanks for your great implementation of the new MXFP8 MoE! I have integrated it and consider to use it for prod training.\nBut I got a concern about how to do inference.\n\nMXFP8 is only available on B200. What is the expected inference solution on H100 or even non-Nvidia GPUs after training with MXFP8. 
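One concrete part of that decision is simply detecting whether the deployment GPU can run MXFP8 kernels at all; a small capability check, assuming Blackwell-class parts report compute capability 10.x while H100 reports 9.0:

```python
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    supports_mxfp8 = major >= 10  # B200 / Blackwell and newer
    print(f"compute capability {major}.{minor}, MXFP8-capable: {supports_mxfp8}")
```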
Other quantizations, even another FP8 quantization, is not guaranteed to work well with the model trained with MXFP8.\n\nIs a QAT finetuning with another quantization method expected?\nShould we just inference with another quantization method without finetuning?\n\nI guess FP4 training is a similar case. \n\nI think the question is not only to TorchAO team. Anyone please share your ideas/insights if you would like to.\n\nThanks in advance!", "url": "https://github.com/pytorch/ao/issues/3305", "state": "open", "labels": [ "question", "mx", "moe" ], "created_at": "2025-11-06T18:45:31Z", "updated_at": "2025-11-07T19:20:18Z", "user": "goldhuang" }, { "repo": "pytorch/pytorch", "number": 167219, "title": "Are there limitations to dtensor's registration strategy?", "body": "I have a IR schema like this\nfunc: my_scatter_add(Tensor x, Tensor(a!) y, Tensor index, Tensor? scale=None, bool use_high_prec=False) -> ()\nThis function has no return value, and the second parameter is an in-place parameter\nI tried the `register_sharding` method described in the Dtensor documentation. However, it threw an error. It seems this method doesn't support IR schema without outputs. \nCan this IR schema support Dtensor registration?\n\n\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @tianyu-l @XilunWu @SherlockNoMad", "url": "https://github.com/pytorch/pytorch/issues/167219", "state": "open", "labels": [ "oncall: distributed", "module: dtensor" ], "created_at": "2025-11-06T14:50:40Z", "updated_at": "2025-11-11T13:37:24Z", "comments": 4, "user": "Bin1024" }, { "repo": "pytorch/pytorch", "number": 167186, "title": "scripts/build_android.sh missing", "body": "### \ud83d\udc1b Build scripts for android deleted, README outdated\n\nI was trying to build pytorch v2.9.0 for android, but it seems build_android.sh script was deleted. 
Is there any reason why it was deleted?\n\nThe odd thing is that https://github.com/pytorch/pytorch/blob/v2.9.0/android/README.md\nreferences bash ./scripts/build_pytorch_android.sh which doesn't exit.\n\n```\ncommit 91602a92548d1dd351979cdc6e778c505c32c2b9\nAuthor: albanD \nDate: Wed Jul 23 01:21:25 2025 +0000\n\n Cleanup old caffe2 scripts (#158475)\n \n Testing on this one is grep based: if there were no reference to that script I can find, I deleted.\n We can easily add any of these back if needed!\n Pull Request resolved: https://github.com/pytorch/pytorch/pull/158475\n Approved by: https://github.com/seemethere, https://github.com/huydhn, https://github.com/cyyever\n\n```\n\n### Versions\n\nv2.9.0", "url": "https://github.com/pytorch/pytorch/issues/167186", "state": "closed", "labels": [ "triaged", "oncall: mobile" ], "created_at": "2025-11-06T04:15:16Z", "updated_at": "2025-11-07T00:56:14Z", "comments": 1, "user": "ppavacic" }, { "repo": "pytorch/torchtitan", "number": 1998, "title": "[Documentation] [BE] Add docs for MXFP8 training on Blackwell", "body": "We have [float8](https://github.com/pytorch/torchtitan/blob/main/docs/float8.md) docs, we should add mxfp8 docs as well, especially since we have a public blog post on accelerating training with torchtitan mxfp8 training: https://pytorch.org/blog/accelerating-2k-scale-pre-training-up-to-1-28x-with-torchao-mxfp8-and-torchtitan-on-crusoe-b200-cluster/", "url": "https://github.com/pytorch/torchtitan/issues/1998", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-11-06T02:53:06Z", "updated_at": "2025-12-03T21:54:51Z", "comments": 0, "user": "danielvegamyhre" }, { "repo": "pytorch/pytorch", "number": 167172, "title": "[Profiler][XPU] Is there a miss?", "body": "Found something:\nhttps://github.com/pytorch/pytorch/blob/943227f57bcd638ab288331442748769f907d8c1/torch/csrc/autograd/init.cpp#L390-L419\n\nIs the XPU code should also be in the #if branch? Seems the XPU depends on macro `LIBKINETO_NOXPUPTI`?\nHmmmm, or the #if control misses the `|| !defined(LIBKINETO_NOXPUPTI)` also?\nNot a pro to XPU, so please correct me if something here is wrong.\n\ncc @gujinghui @EikanWang @fengyuan14 @guangyey", "url": "https://github.com/pytorch/pytorch/issues/167172", "state": "closed", "labels": [ "triaged", "module: xpu" ], "created_at": "2025-11-06T02:15:45Z", "updated_at": "2025-11-19T05:42:57Z", "comments": 1, "user": "KarhouTam" }, { "repo": "pytorch/pytorch", "number": 167118, "title": "[CI][CUDA][B200] Why does job keep encountering \"No devices were found\" while \"nvidia-smi\" on bare-metal returns normal results", "body": "### \ud83d\udc1b Describe the bug\n\nJOB link: https://github.com/pytorch/pytorch/actions/runs/19096449521/job/54559623146 \nRunner/user: dgxb200-08-1003 \n\nNvidia-smi output when logged on the machine: \n\n\"Image\"\n\n\n### Versions\n\nInfra \n\n\ncc @ezyang @gchanan @kadeng @msaroufim @ptrblck @eqy @tinglvv @atalman @malfet @huydhn @seemethere ", "url": "https://github.com/pytorch/pytorch/issues/167118", "state": "closed", "labels": [ "high priority", "triage review" ], "created_at": "2025-11-05T20:06:16Z", "updated_at": "2025-11-10T17:16:16Z", "comments": 4, "user": "nWEIdia" }, { "repo": "pytorch/ao", "number": 3295, "title": "Examples of using llms with PT2E workflow?", "body": "Are there examples of using llms with PT2E workflow? I'm interested in static quantization using qwen3 . 
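There is no LLM-specific end-to-end example here, but the PT2E static-quantization flow itself looks like the sketch below on a toy module; the same prepare/calibrate/convert steps would apply to a Qwen3-style model block by block. The module paths are the `torch.ao` ones that ship with torch 2.x (they are being migrated into torchao), and depending on the version you may need `torch.export.export_for_training` instead of `torch.export.export`:

```python
import torch
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

class Toy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x))

example_inputs = (torch.randn(1, 16),)
m = torch.export.export(Toy().eval(), example_inputs).module()

quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
prepared = prepare_pt2e(m, quantizer)   # inserts observers
prepared(*example_inputs)               # calibration pass(es) for static quant
quantized = convert_pt2e(prepared)      # lowers to quantized ops
```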
", "url": "https://github.com/pytorch/ao/issues/3295", "state": "closed", "labels": [ "triaged" ], "created_at": "2025-11-05T18:33:13Z", "updated_at": "2025-12-05T01:12:56Z", "comments": 3, "user": "cjm715" }, { "repo": "pytorch/pytorch", "number": 167062, "title": "How to use torch.compile on Windows GPU?", "body": "### \ud83d\udc1b Describe the bug\n\nI have installed Python 3.13.9 and PyTorch 2.9+cuda3.13\npip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130,\n And my GPU is RTX 380 12 GB. I have Windows 11\n\nI followed up on those steps\n\n- MSVC v143 - VS 2022 C++ x64/x86 build tools\n- Windows 11 SDK\n- C++ CMake tools for Windows\n- C++ core features\n\nand added the cl.exe into my environment path\n\"C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Tools\\MSVC\\14.44.35207\\bin\\Hostx64\\x64\"\n\nI tried this code\n\n```\nimport torch\ndevice=\"cuda\"\ndef foo(x, y):\n a = torch.sin(x)\n b = torch.cos(x)\n return a + b\nopt_foo1 = torch.compile(foo)\nprint(opt_foo1(torch.randn(10, 10).to(device), torch.randn(10, 10).to(device)))\n```\n\n### Error logs\n\nCppCompileError: C++ compile error Command: cl /I c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/Include /I c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/Lib/site-packages/torch/include /I c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/Lib/site-packages/torch/include/torch/csrc/api/include /D NOMINMAX /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /O2 /DLL /MD /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /Zc:__cplusplus /permissive- /openmp /openmp:experimental C:/temp/torch_compile/bi/cbil2ud2wplsgzj6esiu72j2t7zq6phvrdyun5pl56vn2g26y5qg.main.cpp /FeC:/temp/torch_compile/bi/cbil2ud2wplsgzj6esiu72j2t7zq6phvrdyun5pl56vn2g26y5qg.main.pyd /LD /link /LIBPATH:c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/libs /LIBPATH:c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/Lib/site-packages/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib c10.lib Output: Microsoft (R) C/C++ Optimizing Compiler Version 19.44.35219 for x64 Copyright (C) Microsoft Corporation. All rights reserved. cl : Command line warning D9025 : overriding \u2018/openmp\u2019 with \u2018/openmp:experimental\u2019 cbil2ud2wplsgzj6esiu72j2t7zq6phvrdyun5pl56vn2g26y5qg.main.cpp c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/Lib/site-packages/torch/include\\torch/csrc/inductor/cpp_prefix.h(3): fatal error C1083: Cannot open include file: \u2018omp.h\u2019: No such file or directory\n\n### Versions\n\npip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130\n\ncc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/167062", "state": "open", "labels": [ "module: windows", "triaged", "oncall: pt2" ], "created_at": "2025-11-05T09:04:27Z", "updated_at": "2025-11-11T18:16:46Z", "user": "emadyounan" }, { "repo": "pytorch/pytorch", "number": 167042, "title": "Requesting Cuda 13 support", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHi! I am trying to run Torch with GPU support. I am running on Windows, with CUDA toolkit 13 installed, and the latest nvidia drivers. `torch.cuda.is_available()` is showing as False. 
Is it safe to assume this is because it needs CUDA 12?\n\nI'm brand new to Torch, but do a bit of CUDA FFI from rust in my own code, and have been able to get Python FFI working with that. The gist is, if you just use the CUDA Driver API, the application (In this case me running Pytorch) doesn't even need CUDA installed; just compatible drivers. The PC *compiling* the program needs CUDA. For things like cuFFT, you can ship the DLL/SO with the program, then it will work. Maybe we need something like that? What specific things beyond the Driver API does Torch use? Or do you think something else is wrong?\n\nThank you! Happy to help narrow this down and solve.\n\n### Why this is something we should add\nWhen you go to the nvidia site and download CUDA, it is downloading by default a version that doesn't work with Torch (?).", "url": "https://github.com/pytorch/pytorch/issues/167042", "state": "closed", "labels": [], "created_at": "2025-11-05T01:41:01Z", "updated_at": "2025-11-05T01:51:37Z", "comments": 1, "user": "David-OConnor" }, { "repo": "pytorch/pytorch", "number": 167027, "title": "combine compiled vectorized function without recompiling already compiled part", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe nice thing of `torch.compile` is that it fuses the vectorized operations and avoid big intermediate tensors. For example, if I have\n```\ndef func(x):\n y = f1(x)\n z = f2(y)\n return z\n```\nAfter `torch.compile` it becomes something like\n```\nfor(int i=0;i \u201cWe also applied an auxiliary sequence-level balance loss with a 0.0001 weight to avoid extreme imbalance within any single sequence.\u201d\n\n(We could open a PR for the sequence-level balance loss if you\u2019re interested.)\n\nTo make this work, we need to compute the extra loss at each block, either by:\n\n- Caching the per-layer aux_loss loss (which breaks compile, but not PP), or\n\n- Passing both activations and aux_loss to the next PP stage (which doesn\u2019t affect compile).\n\nThe second option basically requires the PP API to support multiple-args input and output. We tried earlier this year to explicitly pass arguments when building PP stages, but it didn\u2019t work. I\u2019m wondering if there have been any updates since then, or if we might have missed something.\n\nDo you have any other suggestions or better solutions? @tianyu-l @H-Huang \nCC: @janEbert @garrett361 \n\n", "url": "https://github.com/pytorch/torchtitan/issues/1979", "state": "open", "labels": [], "created_at": "2025-11-03T13:37:44Z", "updated_at": "2025-11-20T02:22:30Z", "comments": 13, "user": "rakkit" }, { "repo": "pytorch/pytorch", "number": 166802, "title": "add ability to automatically set `set_per_process_memory_fraction` using env variable", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHi,\nIn multi-user / multi-tenant GPU environments (e.g., Slurm clusters, Kubernetes GPU slicing, or MPS-based sharing), it is often desirable to constrain the GPU memory usage of a process externally, without modifying the application code.\n\nCurrently, torch.cuda.set_per_process_memory_fraction(fraction, device) can only be applied programmatically in Python. 
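Until something like this is built in, the requested behavior can be approximated with a small startup hook such as a `sitecustomize.py` on the `PYTHONPATH`; the environment-variable names below are the ones proposed in this issue and are not recognized by PyTorch itself:

```python
# sitecustomize.py -- imported automatically at interpreter startup when it is
# importable; sketch of a user-level workaround, not a PyTorch feature.
import os

import torch

_frac = os.environ.get("TORCH_CUDA_MEMORY_FRACTION")
if _frac is not None and torch.cuda.is_available():
    _dev = os.environ.get("TORCH_CUDA_MEMORY_FRACTION_DEVICE", "all")
    _devices = range(torch.cuda.device_count()) if _dev == "all" else [int(_dev)]
    for d in _devices:
        torch.cuda.set_per_process_memory_fraction(float(_frac), d)
```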
If there was a way to automatically set it via bash env variable it would be very efficent, as it will remove the requirement of adding something to each python script \n\n**Proposed Feature**\n\nSupport an optional environment variable, for example:\n```\nTORCH_CUDA_MEMORY_FRACTION= # e.g., 0.25\nTORCH_CUDA_MEMORY_FRACTION_DEVICE= # e.g., 0 or \"all\"\n```\n\nIf set at process startup, PyTorch would internally call:\n```\ntorch.cuda.set_per_process_memory_fraction(\n float(os.environ[\"TORCH_CUDA_MEMORY_FRACTION\"]),\n device = os.environ.get(\"TORCH_CUDA_MEMORY_FRACTION_DEVICE\", \"all\")\n)\n```\n\n**Motivation & Use Cases**\n\n1. Slurm GPU shards: e.g., cluster configured with GRES=shard or MIG. We want processes to auto-scale memory usage based on how many shards they were allocated.\n2. JupyterHub / multi-user labs: enforce memory fairness without requiring users to modify their notebooks.\n3. Inference services: multiple models share one GPU; memory partitioning prescribed via environment-level configuration.\n4. Containerized deployments (Kubernetes): memory constraints should be set from deployment manifests (yaml), not Python code.\n\n\n### Alternatives\n\nadding the suggested code to each of my python scripts.\n\n### Additional context\n\nConversation with ChatGPT - https://chatgpt.com/share/69065d93-3f28-8013-b3a2-52b2dd01dd5d it already has a pull request ready. \n\ncc @ptrblck @msaroufim @eqy @jerryzh168", "url": "https://github.com/pytorch/pytorch/issues/166802", "state": "closed", "labels": [ "module: cuda", "module: memory usage", "triaged" ], "created_at": "2025-11-01T19:22:40Z", "updated_at": "2025-11-07T16:58:15Z", "comments": 4, "user": "orena1" }, { "repo": "pytorch/pytorch", "number": 166796, "title": "[ROCm][CI] Machines under the label linux.rocm.gpu.2, label linux.rocm.gpu.4, linux.rocm.gpu.gfx1100 are undergoing maintenance.", "body": "> NOTE: Remember to label this issue with \"`ci: sev`\"\n> If you want autorevert to be disabled, keep the ci: disable-autorevert label\n\n \n\n## Current Status\n*Status could be: preemptive, ongoing, mitigated, closed. Also tell people if they need to take action to fix it (i.e. rebase)*.\nongoing\n\n## Error looks like\n*Provide some way users can tell that this SEV is causing their issue.*\nOccasional rocm workflow failures for workflows with label linux.rocm.gpu.2, linux.rocm.gpu.4, linux.rocm.gpu.gfx1100. Also, potentially longer queue times for linux.rocm.gpu.2, linux.rocm.gpu.4, linux.rocm.gpu.gfx1100 workflows.\n\n## Incident timeline (all times pacific)\n*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*\n11/01/2025\n\n## User impact\n*How does this affect users of PyTorch CI?*\nOccasional rocm workflow failures for workflows with label linux.rocm.gpu.2, linux.rocm.gpu.4, linux.rocm.gpu.gfx1100. 
Also, potentially longer queue times for linux.rocm.gpu.2, linux.rocm.gpu.4, linux.rocm.gpu.gfx1100 workflows.\n\n## Root cause\n*What was the root cause of this issue?*\nSystem Maintenance\n\n## Mitigation\n*How did we mitigate the issue?*\nWill be resolve by EOD 11/01/2025\n\n## Prevention/followups\n*How do we prevent issues like this in the future?*\nN/A\n\ncc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd", "url": "https://github.com/pytorch/pytorch/issues/166796", "state": "closed", "labels": [ "module: rocm", "ci: sev" ], "created_at": "2025-11-01T14:59:52Z", "updated_at": "2025-11-03T11:04:50Z", "comments": 0, "user": "amdfaa" }, { "repo": "pytorch/ao", "number": 3274, "title": "Proposal to add a beginner-friendly introduction tutorial for TorchAO", "body": "Hello TorchAO community,\n\nI would like to contribute a beginner-friendly notebook tutorial that introduces TorchAO to users who are new to model optimization and to TorchAO (or even PyTorch in general).\n\nAs someone coming from a different background with limited experience in quantization and model optimization, I found that it can be challenging to understand:\n\n- What TorchAO is,\n- What its main capabilities are, and\n- How someone can start using it effectively in a simple workflow. \n\nWhile TorchAO already provides strong documentation and tutorials for quantization, some of them seem to assume a level of prior familiarity that newcomers might not yet have or they may target more advanced workflows. I would like to put together a simple notebook tutorial that demonstrates one simple TorchAO quantization flow on a very small model/toy model (e.g. 2-layer MLP or simple CNN). The goal isn't to duplicate the Quick Start or advanced tutorials, but to provide a high-level guide that can help absolute beginners understand what TorchAO is and when to use it.\n\nThe notebook would include clear descriptions and references to relevant PyTorch blog posts and documentation pages that already exist, so that users can easily explore more advanced material as well.\n\nWould this be useful to the community to add under tutorials/ or examples/? I\u2019m also open to suggestions on which specific tutorial topics might be most helpful for newcomers who are just starting out with TorchAO.\n\nI appreciate your consideration and feedback!", "url": "https://github.com/pytorch/ao/issues/3274", "state": "open", "labels": [ "topic: documentation" ], "created_at": "2025-11-01T07:47:08Z", "updated_at": "2025-11-04T04:25:26Z", "comments": 2, "user": "smishra8" }, { "repo": "pytorch/torchtitan", "number": 1977, "title": "Why is the ep mesh derived from a factoring of the dp mesh, instead of its own dimension?", "body": "I see that the data parallel shard dimension is factored into two dimensions, `dp_shard_mod_ep` and `dp_shard_in_ep`.\n\nThe experts use `dp_shard_mod_ep` submesh for FSDP while the rest of the blocks use the regular `dp_shard_cp` submesh. Why can't the experts use FSDP on the regular `dp_mesh`? The reason for this is unclear after reading the code. If only expert parallelism is used without data parallel or if the data parallel size is less than expert parallel, then the `dp_shard_mod_ep` dimension size would be 0, which doesn't make sense.\n\nFurthermore, the `ep` submesh is not actually a bona fide actual dimension, but rather a combination of `dp_shard_in_ep`, `cp` and `tp`. Why can't `ep` be its own dimension? 
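To make the factoring concrete, here is a toy sketch of the layout as I read it (illustrative names and sizes, run under `torchrun --nproc-per-node 8`; `DeviceMesh._flatten` is the private helper this issue refers to, so treat it as an implementation detail):

```python
# Toy layout, not torchtitan's actual code: 8 ranks factored as
# dp_shard_mod_ep=2, dp_shard_in_ep=2, tp=2 (pp, dp_replicate, cp omitted).
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh

dist.init_process_group("gloo")
mesh = init_device_mesh(
    "cpu",
    (2, 2, 2),
    mesh_dim_names=("dp_shard_mod_ep", "dp_shard_in_ep", "tp"),
)

# Experts shard (FSDP) over dp_shard_mod_ep only ...
expert_fsdp_mesh = mesh["dp_shard_mod_ep"]
# ... while the EP group is assembled from the remaining factored dims.
ep_mesh = mesh["dp_shard_in_ep", "tp"]._flatten("ep")
```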
Currently `ep` is like some weird factored submesh of `dp_shard` instead of being its own dimension, and I don't understand why.\n\nI understand the combining of various mesh dimensions into `dp_shard_cp` is used to limit those dimensions to a 1D mesh as FSDP accepts a 1D mesh and HSDP a 2D mesh.\n\nBut why can't the mesh dims be for example:\n\n(assuming cp = 1, tp = 1, etp = 1)\nworld mesh: `['pp', 'dp_replicate', 'dp_shard', 'ep', 'cp', 'tp']`\ndp_shard mesh: `['dp_shard']` (not flattening of `['dp_shard_in_ep', 'dp_shard_mod_ep']`\nep mesh: `['ep']` (not `'dp_shard_in_ep'`)\n\nSorry for all the questions I'm just pretty confused as to whats going on. The most important question is why does dp_shard need to be factored into two dimensions? I also think the ._flatten() function should be exposed publicly if so many places use that function.", "url": "https://github.com/pytorch/torchtitan/issues/1977", "state": "open", "labels": [ "question" ], "created_at": "2025-11-01T02:07:24Z", "updated_at": "2025-12-02T01:34:16Z", "user": "man2machine" }, { "repo": "pytorch/ao", "number": 3270, "title": "[DOCS] Quick Start Guide PT2E Example does not work as is. Undefined objects", "body": "PT2E example in quick start guide does not work as is. Many undefined objects. No import for `convert_pt2e` and `example_inputs` is not defined for example. Also some indentation issues.\n\nSee:\nhttps://docs.pytorch.org/ao/0.13/quick_start.html#pytorch-2-export-quantization", "url": "https://github.com/pytorch/ao/issues/3270", "state": "open", "labels": [ "topic: documentation", "triaged" ], "created_at": "2025-10-31T18:46:28Z", "updated_at": "2025-12-05T01:14:53Z", "comments": 1, "user": "cjm715" }, { "repo": "pytorch/pytorch", "number": 166736, "title": "Aarch64 unit test failures from nightly/manylinux build, jammy upgrade to gcc13 needed", "body": "### \ud83d\udc1b Describe the bug\n\nWe have noticed 2 test failures on AArch64 ( neoverse-v2 / c8g ) which are not happening in https://github.com/pytorch/pytorch/actions/workflows/linux-aarch64.yml\n\n```\nMismatched elements: 1 / 513 (0.2%)\nGreatest absolute difference: 253 at index (512,)\nGreatest relative difference: 1.0 at index (512,)\n\nTo execute this test, run the following from the base repo dir:\n python test/test_unary_ufuncs.py TestUnaryUfuncsCPU.test_contig_vs_every_other__refs__conversions_byte_cpu_float32\n```\n\nand\n\n```\nMismatched elements: 9 / 40 (22.5%)\nGreatest absolute difference: 1 at index (0, 0, 5)\nGreatest relative difference: 1.0 at index (0, 0, 5)\n\nThe failure occurred for item [3]\n\nTo execute this test, run the following from the base repo dir:\n python test/inductor/test_torchinductor.py CpuTests.test_to_dtype_cpu\n```\n\nThese problems exist on nightly build. 
We have investigated and it looks like it happens since nightly 10.25 which looks like this commit https://github.com/pytorch/pytorch/commit/b31bad1b8f1331bf43d47f46602cf6141db56844\n\nActions Requested.\n\nCan we upgrade jammy images to GCC13 @malfet which should show these problems and then we might need to revert https://github.com/pytorch/pytorch/commit/b31bad1b8f1331bf43d47f46602cf6141db56844 \n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.10.0.dev20251031+cpu\nIs debug build: False\nCUDA used to build PyTorch: None\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.5 LTS (aarch64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version: Could not collect\nCMake version: version 3.31.6\nLibc version: glibc-2.35\n\nPython version: 3.10.19 | packaged by conda-forge | (main, Oct 22 2025, 22:26:30) [GCC 14.3.0] (64-bit runtime)\nPython platform: Linux-6.8.0-1040-aws-aarch64-with-glibc2.35\nIs CUDA available: False\nCUDA runtime version: No CUDA\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: No CUDA\nNvidia driver version: No CUDA\ncuDNN version: No CUDA\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: aarch64\nCPU op-mode(s): 64-bit\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: ARM\nModel name: Neoverse-V2\nModel: 1\nThread(s) per core: 1\nCore(s) per cluster: 32\nSocket(s): -\nCluster(s): 1\nStepping: r0p1\nBogoMIPS: 2000.00\nFlags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 flagm2 frint svei8mm svebf16 i8mm bf16 dgh rng bti\nL1d cache: 2 MiB (32 instances)\nL1i cache: 2 MiB (32 instances)\nL2 cache: 64 MiB (32 instances)\nL3 cache: 36 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-31\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; __user pointer sanitization\nVulnerability Spectre v2: Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\nVersions of relevant libraries:\n[pip3] mypy==1.16.0\n[pip3] mypy_extensions==1.1.0\n[pip3] numpy==1.22.4\n[pip3] onnx==1.19.1\n[pip3] onnx-ir==0.1.11\n[pip3] onnxscript==0.5.4\n[pip3] optree==0.13.0\n[pip3] torch==2.10.0.dev20251031+cpu\n[pip3] torchvision==0.25.0.dev20251031\n[conda] No relevant packages\n\ncc @seemethere @malfet @atalman @pytorch/pytorch-dev-infra @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01", "url": "https://github.com/pytorch/pytorch/issues/166736", "state": "closed", "labels": [ "module: binaries", "module: ci", "triaged", "module: arm" ], "created_at": "2025-10-31T17:25:47Z", "updated_at": "2025-12-09T20:47:45Z", "comments": 11, "user": "robert-hardwick" }, { "repo": "pytorch/pytorch", "number": 166721, "title": "Reference cycle in PyCodegen keeps tensors alive longer than necessary leading to OOM issues", "body": "### \ud83d\udc1b Describe the bug\n\nPR with fix: 
https://github.com/pytorch/pytorch/pull/166714\n\nRecursive function call creates a reference cycle: closure <- function <- cell inside closure\nCapturing self (PyCodegen instance) in same closure prolongs it's life until next gc.collect() which might result in worse resource management\n\nAfter the introduction of https://github.com/pytorch/pytorch/commit/e9209e08540e9edc69259ef0c6c715e0aa7c1b07 OOM issues has been observed. Looking for reference cycles one has been uncovered that would result in the prolonging lifetime of tensors. As the result of that OOM issues might occur. Such a dependency chain has been uncovered:\n\n\"Image\"\n\nAt the end of it a reference cycle can be found that consists of a closure for function collect_temp_source, the function itself, and a cell object inside closure that would point to the function due to the recursive call.\n\nThis issue can either be resolved by removing recurrency or removing PyCodegen instance from the closure.\nAnother precaution that can be made is to explicitly empty f_locals dict. This way we cut the tensor from the chain leading to reference cycle.\n\n### Error logs\n\n_No response_\n\n### Versions\n\nPyTorch version: 2.9.0+hpu_1.24.0-97.git4c6d653\nIs debug build: False\nCUDA used to build PyTorch: None\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.3 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: Could not collect\nCMake version: version 3.28.3\nLibc version: glibc-2.39\n\nPython version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.39\nIs CUDA available: False\nCUDA runtime version: No CUDA\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: No CUDA\nNvidia driver version: No CUDA\ncuDNN version: No CUDA\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True 13:56:57 [32/1983]\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 224\nOn-line CPU(s) list: 0-223\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8480+\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 56\nSocket(s): 2\nStepping: 8\nCPU(s) scaling MHz: 34%\nCPU max MHz: 3800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4000.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr\n sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid ap\nerfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic\n movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cd\np_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpci\nd cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xget\nbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat\n pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni av\nx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd 
fsrm md_clear serialize tsxldtrk pconfig\narch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 5.3 MiB (112 instances)\nL1i cache: 3.5 MiB (112 instances)\nL2 cache: 224 MiB (112 instances)\nL3 cache: 210 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-55,112-167\nNUMA node1 CPU(s): 56-111,168-223\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: N", "url": "https://github.com/pytorch/pytorch/issues/166721", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: dynamo" ], "created_at": "2025-10-31T12:02:30Z", "updated_at": "2025-11-07T17:52:57Z", "comments": 1, "user": "jwieczorekhabana" }, { "repo": "pytorch/pytorch", "number": 166633, "title": "Command '['ninja', '-v']' returned non-zero exit status 255.", "body": "### \ud83d\udc1b Describe the bug\n\nI'm not sure it's linked to this warning message #[166580](https://github.com/pytorch/pytorch/issues/166580) and if it's a bug or how to correct it\n\n```\nptxas info : Used 128 registers, used 16 barriers, 104 bytes cumulative stack size\nptxas info : Compile time = 486.393 ms\nptxas info : Compiling entry function '_ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb0ELb1ELb0ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb0ELb1ELb1ELb1EEENS1_19SingleTileSchedulerILb0ELb1ELb1ELi192EEEEEEEEEvNT_6ParamsE' for 'sm_90a'\nptxas info : Function properties for _ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb0ELb1ELb0ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb0ELb1ELb1ELb1EEENS1_19SingleTileSchedulerILb0ELb1ELb1ELi192EEEEEEEEEvNT_6ParamsE\n 0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads\nptxas info : Used 128 registers, used 9 barriers\nptxas info : Compile time = 187.196 ms\nptxas info : Compiling entry function '_ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb1ELb1ELb0ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb1ELb1ELb1ELb1EEENS1_36VarlenDynamicPersistentTileSchedulerILi192ELi384ELi128ELb1ELb1ELb1EEEEEEEEEvNT_6ParamsE' for 'sm_90a'\nptxas info : Function properties for _ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb1ELb1ELb0ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb1ELb1ELb1ELb1EEENS1_36VarlenDynamicPersistentTileSchedulerILi192ELi384ELi128ELb1ELb1ELb1EEEEEEEEEvNT_6ParamsE\n 64 bytes stack frame, 140 bytes spill stores, 156 bytes spill loads\nptxas info : Used 128 registers, used 9 barriers, 
64 bytes cumulative stack size\nptxas info : Compile time = 260.783 ms\nptxas info : Compiling entry function '_ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb1ELb1ELb1ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb1ELb1ELb1ELb1EEENS1_36VarlenDynamicPersistentTileSchedulerILi192ELi384ELi128ELb1ELb1ELb1EEEEEEEEEvNT_6ParamsE' for 'sm_90a'\nptxas info : Function properties for _ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb1ELb1ELb1ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb1ELb1ELb1ELb1EEENS1_36VarlenDynamicPersistentTileSchedulerILi192ELi384ELi128ELb1ELb1ELb1EEEEEEEEEvNT_6ParamsE\n 104 bytes stack frame, 280 bytes spill stores, 344 bytes spill loads\nptxas info : Used 128 registers, used 16 barriers, 104 bytes cumulative stack size\nptxas info : Compile time = 384.035 ms\nninja: build stopped: subcommand failed.\nTraceback (most recent call last):\n File \"/workspace/LightX2V/venv/lib/python3.11/site-packages/torch/utils/cpp_extension.py\", line 2506, in _run_ninja_build\n subprocess.run(\n File \"/usr/lib/python3.11/subprocess.py\", line 571, in run\n raise CalledProcessError(retcode, process.args,\nsubprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 255.\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/workspace/LightX2V/flash-attention/hopper/setup.py\", line 622, in \n setup(\n File \"/workspace/LightX2V/venv/lib/python3.11/site-packages/setuptools/__init__.py\", line 87, in setup\n return distutils.core.setup(**attrs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/workspace/LightX2V/venv/lib/python3.11/site-packages/setuptools/_distutils/core.py\", line 185, in setup\n return run_commands(dist)\n ^^^^^^^^^^^^^^^^^^\n File \"/workspace/LightX2V/venv/lib/python3.11/site-packages/setuptools/_distuti", "url": "https://github.com/pytorch/pytorch/issues/166633", "state": "open", "labels": [ "needs reproduction", "module: cpp-extensions", "module: cuda", "triaged" ], "created_at": "2025-10-30T11:07:43Z", "updated_at": "2025-12-31T18:42:43Z", "comments": 2, "user": "christopher5106" }, { "repo": "pytorch/torchtitan", "number": 1968, "title": "Avoiding device-to-host sync for input/output split sizes in expert parallel", "body": "I want to use the torchtitan code for a different MoE model, and I saw that if EP is used, then for FSDP, the module prefetching for forward and backward has to be manually set. This would be quite cumbersome as more models are used, and there would not be an easy standard way to do EP + FSDP.\n\nI looked through the code in expert_parallel.py and it seems that the input_sizes and output_sizes are set based on the number of tokens assigned to each expert. Since the input/output split size arguments to dist.all_to_all_single is a list of ints, I understand that the expert counts must be moved from GPU -> CPU which causes the D2H sync.\n\nHowever, it seems that dist.all_to_all just accepts a list of tensors, without any split size arguments. 
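For concreteness, here is a rough sketch of the two call shapes being compared. The function names, tensor shapes, and the way the counts are exchanged are purely illustrative and not torchtitan's actual implementation:

```python
# Illustrative only -- not torchtitan code. Assumes a process group is already
# initialized and that send/recv token counts have been exchanged beforehand.
import torch
import torch.distributed as dist

def dispatch_with_split_sizes(tokens, send_counts, recv_counts, group):
    # all_to_all_single takes the split sizes as plain Python ints, so the
    # per-expert counts have to leave the GPU first: .tolist() is the D2H sync.
    input_splits = send_counts.tolist()
    output_splits = recv_counts.tolist()
    out = tokens.new_empty((sum(output_splits), tokens.shape[-1]))
    dist.all_to_all_single(
        out,
        tokens,
        output_split_sizes=output_splits,
        input_split_sizes=input_splits,
        group=group,
    )
    return out

def dispatch_with_tensor_lists(send_chunks, recv_chunks, group):
    # all_to_all takes lists of tensors instead of int split sizes, but the
    # receive-side tensors still have to be pre-allocated with known shapes.
    dist.all_to_all(recv_chunks, send_chunks, group=group)
    return recv_chunks
```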
Would that avoid the D2H sync altogether? Or is the implementation underneath the same? For example, you could retrieve the list of inputs to each expert by using a mask or using index_select (instead of token reordering), and then use that as the input to dist.all_to_all. Would such a implementation simplify things and remove the D2H sync?\n\nFurthermore, in deepseed's moe implementation they seem to utilize the maximum capacity among all the experts and then don't specify the input/output split sizes (as it is an even split). Would this circumvent the D2H sync (at the expense of extra padded communication)?\n", "url": "https://github.com/pytorch/torchtitan/issues/1968", "state": "closed", "labels": [ "question" ], "created_at": "2025-10-30T10:00:34Z", "updated_at": "2025-11-12T22:29:19Z", "user": "man2machine" }, { "repo": "pytorch/pytorch", "number": 166580, "title": "torch/utils/cpp_extension.py:531] There are no /usr/bin/g++-14 version bounds defined for CUDA version 13.0", "body": "### \ud83d\udc1b Describe the bug\n\nHi,\n\ni'm getting that error message whatever torch version I'm trying >= 2.7\n\n```\nW1029 20:55:47.576000 79341 torch/utils/cpp_extension.py:531] There are no /usr/bin/g++-14 version bounds defined for CUDA version 13.0\nbuilding 'flash_attn_3._C' extension\n```\n\nwhat does that mean exactly ? \n\nI noticed I'm a nvidia driver 12.4 and I have cuda tools 12.8. \n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.7.1+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.3 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: Could not collect\nCMake version: Could not collect\nLibc version: glibc-2.39\n\nPython version: 3.11.13 (main, Jun 4 2025, 08:57:30) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-6.1.62-nvidia-gpu-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA H100 PCIe\nNvidia driver version: 550.127.08\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 15\nOn-line CPU(s) list: 0-14\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 9354 32-Core Processor\nCPU family: 25\nModel: 17\nThread(s) per core: 1\nCore(s) per socket: 1\nSocket(s): 15\nStepping: 1\nBogoMIPS: 6499.99\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean flushbyasid pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities\nVirtualization: 
AMD-V\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 960 KiB (15 instances)\nL1i cache: 960 KiB (15 instances)\nL2 cache: 7.5 MiB (15 instances)\nL3 cache: 240 MiB (15 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-14\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Vulnerable\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\nVersions of relevant libraries:\n[pip3] numpy==2.2.6\n[pip3] nvidia-cublas==13.0.0.19\n[pip3] nvidia-cublas-cu12==12.8.3.14\n[pip3] nvidia-cuda-cupti==13.0.48\n[pip3] nvidia-cuda-cupti-cu12==12.8.57\n[pip3] nvidia-cuda-nvrtc==13.0.48\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.61\n[pip3] nvidia-cuda-runtime==13.0.48\n[pip3] nvidia-cuda-runtime-cu12==12.8.57\n[pip3] nvidia-cudnn-cu12==9.7.1.26\n[pip3] nvidia-cudnn-cu13==9.13.0.50\n[pip3] nvidia-cufft==12.0.0.15\n[pip3] nvidia-cufft-cu12==11.3.3.41\n[pip3] nvidia-curand==10.4.0.35\n[pip3] nvidia-curand-cu12==10.3.9.55\n[pip3] nvidia-cusolver==12.0.3.29\n[pip3] nvidia-cusolver-cu12==11.7.2.55\n[pip3] nvidia-cusparse==12.6.2.49\n[pip3] nvidia-cusparse-cu12==12.5.7.53\n[pip3] nvidia-cusparselt-cu12==0.6.3\n[pip3] nvidia-cusparselt-cu13==0.8.0\n[pip3] nvidia-nccl-cu12==2.26.2\n[pip3] nvidia-nccl-cu13==2.27.7\n[pip3] nvidia-nvjitlink==13.0.39\n[pip3] nvidia-nvjitlink-cu12==12.8.61\n[pip3]", "url": "https://github.com/pytorch/pytorch/issues/166580", "state": "closed", "labels": [ "module: cpp-extensions", "triaged", "actionable" ], "created_at": "2025-10-29T22:24:15Z", "updated_at": "2025-12-29T10:58:14Z", "comments": 1, "user": "christopher5106" }, { "repo": "pytorch/pytorch", "number": 166563, "title": "[RFC] Modifying Getting started page for Experimental Wheel Variant Support", "body": "### Release highlight for proposed Feature\n\nRelated to Wheel Next Initiative: https://github.com/pytorch/pytorch/issues/159714\n\nThis proposal is for changes to the PyTorch \"Getting Started\" page to better promote variant enabled wheels and increase their visibility. This is a strategic move to ensure users are more aware of these new options, which can improve adoption and usage.\n\nPyTorch team have been producing an experimental set of Wheels for Release 2.8 and Release 2.9. \nPyTorch Release 2.8 Q&A: https://www.youtube.com/watch?v=amx4zUyfl3I\n\n#### What are Wheel Variants ?\n- Wheel variants are a mechanism for publishing platform-dependent Python wheels and selecting the most suitable package variant for a given platform.\n- This approach helps to remove the need for local identifier experience in PyTorch packaging and enhance user experience installing PyTorch\n\n\n**Disclaimer:** \nThis is a draft proposal. We are presenting only a schematic version at this stage. 
\n\nv1: https://wheelnext.github.io/pytorch_selector_revamp/v1.html\n\n\"Image\"\n\nv2: https://wheelnext.github.io/pytorch_selector_revamp/v2.html\n\n\"Image\"\n\n\ncc @svekars @sekyondaMeta @AlannaBurke @malfet @seemethere @anitakat @albanD @DEKHTIARJonathan @rgommers @mgorny @emmatyping @bdice @warsaw @msarahan @vyasr @aterrel @charliermarsh @konstin @geofft @zanieb @jezdez\n\n### Release Version\n\n2.10\n", "url": "https://github.com/pytorch/pytorch/issues/166563", "state": "open", "labels": [ "module: docs", "triaged", "release-feature-request" ], "created_at": "2025-10-29T20:11:37Z", "updated_at": "2025-10-31T15:22:23Z", "comments": 3, "user": "atalman" }, { "repo": "pytorch/pytorch", "number": 166555, "title": "[dynamo, docs] Suggest torch.compiler.set_stance(\"force_eager\") to determine if eager code causes issues", "body": "We should include in the programming model docs for users to try running their code on eager to see if eager-errors are causing graph breaks.\n\n`torch.compiler.set_stance(\"force_eager\")` is the preferred way to do this since users don't have to change their `torch.compile` decorators or `module.compile` calls.\n\nSee https://docs.pytorch.org/tutorials/recipes/torch_compiler_set_stance_tutorial.html#crashing-sooner for an existing example of `set_stance` usage for debugging.\n\ncc @svekars @sekyondaMeta @AlannaBurke @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela", "url": "https://github.com/pytorch/pytorch/issues/166555", "state": "open", "labels": [ "module: docs", "triaged", "oncall: pt2", "module: dynamo", "compile-docs", "module: compile ux" ], "created_at": "2025-10-29T19:15:49Z", "updated_at": "2025-12-03T00:48:27Z", "comments": 0, "user": "williamwen42" }, { "repo": "pytorch/vision", "number": 9253, "title": "Patch versions of the wheel available in the CPU only pypi registry", "body": "In the CPU only pypi registry, https://download.pytorch.org/whl/torchvision/, I can see some dev/patch versions of the wheels:\n\n```\ntorchvision-0.24.0+0429d73-cp311-cp311-win_arm64.whl\ntorchvision-0.24.0+0429d73-cp312-cp312-win_arm64.whl\ntorchvision-0.24.0+0429d73-cp313-cp313-win_arm64.whl\ntorchvision-0.24.0+7a9db90-cp311-cp311-win_arm64.whl\ntorchvision-0.24.0+7a9db90-cp312-cp312-win_arm64.whl\ntorchvision-0.24.0+7a9db90-cp313-cp313-win_arm64.whl\ntorchvision-0.24.0+b919bd0-cp311-cp311-win_arm64.whl\ntorchvision-0.24.0+b919bd0-cp312-cp312-win_arm64.whl\ntorchvision-0.24.0+b919bd0-cp313-cp313-win_arm64.whl\ntorchvision-0.24.0+e437e35-cp311-cp311-win_arm64.whl\ntorchvision-0.24.0+e437e35-cp312-cp312-win_arm64.whl\ntorchvision-0.24.0+e437e35-cp313-cp313-win_arm64.whl\n```\n\nI don't think they should be here in the wild, and it causes some confusion with `uv` where it's trying to use these, but is unable to download them. \n\nIs there a valid reason for these wheels to be here? 
If not could they be removed?\n", "url": "https://github.com/pytorch/vision/issues/9253", "state": "open", "labels": [], "created_at": "2025-10-29T16:42:44Z", "updated_at": "2026-01-04T11:06:45Z", "comments": 3, "user": "aandrestrumid" }, { "repo": "pytorch/pytorch", "number": 166519, "title": "Long queue for ROCM runners, also B200 and XPU queueing is observed", "body": "## Current Status\nmitigated\n\n## Error looks like\nJobs requiring following runners will be queueing:\n\n\"Image\"\n\nPlease see:\nhttps://hud.pytorch.org/metrics\n\n## Incident timeline (all times pacific)\nStarted Oct 28, 2PM PDT ~1hr queueing. Notified AMD team on the issue\nOct 29 5AM observing 7hrs queuing SEV is created\nOct 29 5AM also observing XPU and B200 queuing\nOct 30 11AM confirmed ROCm runners are no longer queueing\n\n## User impact\nRocm jobs will not start\n\n## Root cause\n*What was the root cause of this issue?*\n\n## Mitigation\n*How did we mitigate the issue?*\n\n## Prevention/followups\n*How do we prevent issues like this in the future?*\n\n\ncc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra", "url": "https://github.com/pytorch/pytorch/issues/166519", "state": "closed", "labels": [ "module: rocm", "module: ci", "triaged" ], "created_at": "2025-10-29T12:20:19Z", "updated_at": "2025-11-03T17:55:53Z", "comments": 4, "user": "atalman" }, { "repo": "pytorch/pytorch", "number": 166516, "title": "Performance issue of torch._higher_order_ops.scan", "body": "### \ud83d\udc1b Describe the bug\n\nI have a Monte Carlo code on CPU, and I want to get one sample each from many discrete distributions pi = Ai * Bi, where A and B are N x n, with n ~ 20 and N ~ 10^6. So I generate N random numbers from 0 ~ 1, and count the cumsum of pi below the random numbers. Ideally I want to loop over the n axis and keep only the cumsum value (instead of the full length-n vector). It leads to the `cumsum_count_fast` in the following, which is fast enough but requires recompilation for different n. If I keep the whole cumsum vector, it leads to the `cumsum_count_slow` in the following, and is much slower. I think it fits the `fori_loop` function but it's not available yet, so I just use `scan` with an empty y. Unfortuately it's just slightly faster than `cumsum_count_slow`, and much slower than `cumsum_count_fast`. 
Is there any solution to this?\n```python\nimport time\nimport functools\nimport torch\ntorch._dynamo.config.capture_scalar_outputs = True\nfrom torch._higher_order_ops import scan\n\ntorch.set_default_dtype(torch.float64)\n\n# C is the normalization coefficient\n\n@functools.partial(torch.compile, dynamic=True)\ndef cumsum_count_fast(A, B, C):\n N = len(C)\n counts = torch.zeros(N, dtype=torch.int)\n cumsum = torch.zeros(N)\n for i in range(n):\n cumsum = cumsum + torch.abs(A[i] * B[i])\n counts = counts + (cumsum < C)\n return counts\n\n\n@functools.partial(torch.compile, dynamic=True)\ndef cumsum_count_slow(A, B, C):\n cumsum = torch.cumsum(torch.abs(A * B), 0)\n return torch.sum((cumsum < C).to(torch.int), dim=0)\n\n\ndef fn(carry, ab):\n a, b = ab\n cumsum, counts = carry\n new_cumsum = cumsum - torch.abs(a * b)\n new_counts = counts + (cumsum > 0).to(torch.int)\n new_carry = (new_cumsum, new_counts)\n y = torch.tensor(0)\n return new_carry, y\n\n\n@functools.partial(torch.compile, dynamic=True)\ndef cumsum_count_scan(A, B, C):\n n, N = A.shape\n\n cumsum = C\n counts = torch.zeros((N, ), dtype=torch.int)\n carry = (cumsum, counts)\n (_, counts), _ = scan(fn, carry, (A, B))\n return counts\n\n\nN = 1000000\nfor func in [cumsum_count_fast, cumsum_count_slow, cumsum_count_scan]:\n print(f\"{func.__name__}\")\n for n in [20, 30]:\n A = torch.rand((n, N))\n B = torch.rand((n, N))\n total = torch.sum(torch.abs(A * B), dim=0)\n random = torch.rand((N, ))\n C = total * random\n for _ in range(3):\n t1 = time.time()\n counts = func(A, B, C)\n t2 = time.time()\n print(n, t2 - t1)\n print()\n```\n\nOutput:\n```\ncumsum_count_fast\n20 2.491441488265991\n20 0.012846231460571289\n20 0.012821197509765625\n30 0.6074924468994141\n30 0.016744375228881836\n30 0.01685929298400879\n\ncumsum_count_slow\n20 0.18932175636291504\n20 0.05529618263244629\n20 0.05519819259643555\n30 0.07843923568725586\n30 0.08022069931030273\n30 0.08077597618103027\n\ncumsum_count_scan\n20 0.755068302154541\n20 0.038268089294433594\n20 0.03831148147583008\n30 0.05788612365722656\n30 0.05818939208984375\n30 0.05914735794067383\n```\n\n### Versions\n\n```\nCollecting environment information...\nPyTorch version: 2.9.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.2 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: 9.0.0 (https://github.com/conda-forge/clangdev-feedstock 284a3d5d88509307bcfba64b055653ee347371db)\nCMake version: version 3.28.3\nLibc version: glibc-2.39\n\nPython version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-6.14.0-33-generic-x86_64-with-glibc2.39\nIs CUDA available: False\nCUDA runtime version: 12.4.131\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: Could not collect\nNvidia driver version: Could not collect\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 39 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 8\nOn-line CPU(s) list: 0-7\nVendor ID: GenuineIntel\nModel name: Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz\nCPU family: 6\nModel: 158\nThread(s) per core: 1\nCore(s) per socket: 8\nSocket(s): 1\nStepping: 13\nCPU(s) scaling MHz: 96%\nCPU max MHz: 4900.0000\nCPU min MHz: 800.0000\nBogoMIPS: 7200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse", "url": "https://github.com/pytorch/pytorch/issues/166516", "state": "open", "labels": [ "module: autograd", "triaged", "oncall: pt2", "module: higher order operators", "module: pt2-dispatcher" ], "created_at": "2025-10-29T11:57:03Z", "updated_at": "2025-11-04T21:32:28Z", "comments": 2, "user": "SUSYUSTC" }, { "repo": "pytorch/torchtitan", "number": 1950, "title": "Break the tests/integration_tests/run_tests.py UT", "body": "### Bug description\n\nhttps://github.com/pytorch/torchtitan/pull/1922 this patch break the existing tests/integration_tests/run_tests.py \n\nError :\n[rank0]:[rank0]: Traceback (most recent call last):\n[rank0]:[rank0]: File \"/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n[rank0]:[rank0]: return _run_code(code, main_globals, None,\n[rank0]:[rank0]: File \"/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/runpy.py\", line 86, in _run_code\n[rank0]:[rank0]: exec(code, run_globals)\n[rank0]:[rank0]: File \"/home/dvasanth/workspace/torchtitan_repos/torchtitan/torchtitan/train.py\", line 683, in \n[rank0]:[rank0]: trainer.train()\n[rank0]:[rank0]: File \"/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 358, in wrapper\n[rank0]:[rank0]: return f(*args, **kwargs)\n[rank0]:[rank0]: File \"/home/dvasanth/workspace/torchtitan_repos/torchtitan/torchtitan/train.py\", line 608, in train\n[rank0]:[rank0]: self.train_step(data_iterator)\n[rank0]:[rank0]: File \"/home/dvasanth/workspace/torchtitan_repos/torchtitan/torchtitan/train.py\", line 508, in train_step\n[rank0]:[rank0]: loss = self.forward_backward_step(input_dict, labels)\n[rank0]:[rank0]: File \"/home/dvasanth/workspace/torchtitan_repos/torchtitan/torchtitan/train.py\", line 453, in forward_backward_step\n[rank0]:[rank0]: self.pp_schedule.step(\n[rank0]:[rank0]: File \"/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/pipelining/schedules.py\", line 626, in step\n[rank0]:[rank0]: self._step_microbatches(args_split, kwargs_split, targets_split, losses)\n[rank0]:[rank0]: File \"/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/pipelining/schedules.py\", line 728, in _step_microbatches\n[rank0]:[rank0]: self._initialize_stage(arg_mbs[0], kwarg_mbs[0])\n[rank0]:[rank0]: File \"/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/pipelining/schedules.py\", line 585, in _initialize_stage\n[rank0]:[rank0]: self._stage._prepare_forward_infra(self._n_microbatches, args, kwargs)\n[rank0]:[rank0]: File \"/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/pipelining/stage.py\", line 1525, in _prepare_forward_infra\n[rank0]:[rank0]: outputs = self._shape_inference(args, kwargs)\n[rank0]:[rank0]: File \"/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/pipelining/stage.py\", line 1455, in _shape_inference\n[rank0]:[rank0]: outputs = self.submod(*args, **kwargs)\n[rank0]:[rank0]: File \"/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1780, in _wrapped_call_impl\n[rank0]:[rank0]: return self._call_impl(*args, **kwargs)\n[rank0]:[rank0]: File 
\"/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1886, in _call_impl\n[rank0]:[rank0]: return inner()\n[rank0]:[rank0]: File \"/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1834, in inner\n[rank0]:[rank0]: result = forward_call(*args, **kwargs)\n[rank0]:[rank0]: TypeError: Transformer.forward() got an unexpected keyword argument 'return_outputs'\n\n### Versions\n\ncommit 8228c0845aa8b2e6e9672c30f40fe4af9588dca2 (HEAD -> main, origin/main, origin/HEAD)", "url": "https://github.com/pytorch/torchtitan/issues/1950", "state": "closed", "labels": [ "question" ], "created_at": "2025-10-28T13:14:37Z", "updated_at": "2025-10-29T08:55:34Z", "user": "dayanandav" }, { "repo": "pytorch/tutorials", "number": 3625, "title": "Will you release the TorchRL C++ API in the future, similar to the PyTorch C++ API?", "body": "Will you release the TorchRL C++ API in the future, similar to the PyTorch C++ API? We look forward to using the TorchRL C++ API in the future.", "url": "https://github.com/pytorch/tutorials/issues/3625", "state": "open", "labels": [ "question", "Reinforcement Learning" ], "created_at": "2025-10-28T11:27:52Z", "updated_at": "2025-10-28T15:36:30Z", "user": "hyl20012" }, { "repo": "pytorch/pytorch", "number": 166363, "title": "All Docker build failed due to Ubuntu archive outage", "body": "## Current Status\nClosed\n\n## Error looks like\nDocker build Error:\n```\n#9 82.65 W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/jammy-updates/InRelease Could not connect to archive.ubuntu.com:80 (185.125.190.82), connection timed out Could not connect to archive.ubuntu.com:80 (185.125.190.81), connection timed out Could not connect to archive.ubuntu.com:80 (91.189.91.81), connection timed out Could not connect to archive.ubuntu.com:80 (91.189.91.82), connection timed out Could not connect to archive.ubuntu.com:80 (91.189.91.83), connection timed out Could not connect to archive.ubuntu.com:80 (185.125.190.83), connection timed out [IP: 185.125.190.81 80]\n```\nMultiple CI/CD jobs are timing out on Calculate Image step\n\n## Incident timeline (all times pacific)\nStarted - Oct 27, 2025 06:00:36 PM\nMarket resolved on Oct 27, 2025 06:48:43 PM https://status.canonical.com/#/incident/KNms6QK9ewuzz-7xUsPsNylV20jEt5kyKsd8A-3ptQFMY_6s8e7AbWcGbatrjSU_aoghGrAcVK7slWXgWMkizA==\nHowever observed failure on Oct 27, 2025 7:10 PM - https://github.com/pytorch/pytorch/actions/runs/18859957307/job/53816104822\nReverted a PR causing trigger docker rebuild https://github.com/pytorch/pytorch/pull/165470 Oct 27, 7:15PM\nConfirmed that issue is resolved at Oct 28, 2025 6:00AM\n\n\n## User impact\nMultiple CI/CD failures\n\n## Root cause\nComponent \"archive.ubuntu.com\" and a few other components are Down\nhttps://status.canonical.com/#/incident/KNms6QK9ewuzz-7xUsPsNylV20jEt5kyKsd8A-3ptQFMY_6s8e7AbWcGbatrjSU_aoghGrAcVK7slWXgWMkizA==\n\n## Mitigation\nReverted the PR that trigger docker rebuild https://github.com/pytorch/pytorch/pull/165470\n\n## Prevention/followups\n*How do we prevent issues like this in the future?*\n", "url": "https://github.com/pytorch/pytorch/issues/166363", "state": "closed", "labels": [], "created_at": "2025-10-28T02:42:58Z", "updated_at": "2025-10-28T13:57:51Z", "comments": 0, "user": "atalman" }, { "repo": "pytorch/vision", "number": 9251, "title": "roi_align onnx export fails while seemingly supported in torchvision code", "body": "### \ud83d\udc1b Describe 
the bug\n\nONNX export of a model using roi_align fails:\n\nCode:\n\n```\nimport torch\nfrom torch import nn\nfrom torchvision.ops import roi_align\n\nclass TestModel(nn.Module):\n def forward(self, x, b):\n return roi_align(x, b, output_size=(7, 7), spatial_scale=1/16.0)\n\nx = torch.zeros((1, 128, 40, 40))\nb = torch.zeros((300, 5))\nmodel = TestModel()\nonnx_model = torch.onnx.export(model, (x, b), opset_version=22, report=True, verbose=True)\n```\n\nThe strange thing is that I am seeing support for ROIAlign ops in the code: https://github.com/pytorch/vision/blob/218d2ab791d437309f91e0486eb9fa7f00badc17/torchvision/ops/_register_onnx_ops.py\n\nJust unsure how to use it or activate the support.\n\nThe ONNX conversion report is attached.\n\n[onnx_export_2025-10-27_17-13-55-895426_conversion.md](https://github.com/user-attachments/files/23168838/onnx_export_2025-10-27_17-13-55-895426_conversion.md)\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.9.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Debian GNU/Linux 13 (trixie) (x86_64)\nGCC version: (Debian 14.2.0-19) 14.2.0\nClang version: Could not collect\nCMake version: Could not collect\nLibc version: glibc-2.41\n\nPython version: 3.13.5 (main, Jun 25 2025, 18:55:22) [GCC 14.2.0] (64-bit runtime)\nPython platform: Linux-6.12.48+deb13-amd64-x86_64-with-glibc2.41\nIs CUDA available: True\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: \nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti\nNvidia driver version: 550.163.01\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 16\nOn-line CPU(s) list: 0-15\nVendor ID: AuthenticAMD\nModel name: AMD Ryzen 7 7700X 8-Core Processor\nCPU family: 25\nModel: 97\nThread(s) per core: 2\nCore(s) per socket: 8\nSocket(s): 1\nStepping: 2\nFrequency boost: enabled\nCPU(s) scaling MHz: 74%\nCPU max MHz: 5573.0000\nCPU min MHz: 400.0000\nBogoMIPS: 8983.06\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d\nVirtualization: AMD-V\nL1d cache: 256 KiB (8 instances)\nL1i cache: 256 KiB (8 instances)\nL2 cache: 8 MiB (8 instances)\nL3 cache: 32 MiB (1 
instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-15\nVulnerability Gather data sampling: Not affected\nVulnerability Indirect target selection: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Mitigation; Safe RET\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs ba", "url": "https://github.com/pytorch/vision/issues/9251", "state": "open", "labels": [], "created_at": "2025-10-27T16:21:07Z", "updated_at": "2025-10-28T11:58:28Z", "comments": 2, "user": "timstokman" }, { "repo": "pytorch/pytorch", "number": 166303, "title": "Pytorch Operators on older pytorch version", "body": "### \ud83d\udcda The doc issue\n\nHi team,\n\nI've seen that PyTorch has recently been transitioning to `pip install` (https://github.com/pytorch/pytorch/issues/152276).\n\nFor projects doing custom operators like Kaolin we want to support a reasonable version matrix of PyTorch, what are we supposed to do?\n\nThe documentation for custom operators is not accessible on older versions (automatically lead to latest version).\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @svekars @sekyondaMeta @AlannaBurke", "url": "https://github.com/pytorch/pytorch/issues/166303", "state": "open", "labels": [ "needs reproduction", "module: docs", "triaged" ], "created_at": "2025-10-27T14:04:02Z", "updated_at": "2025-10-27T16:55:38Z", "comments": 2, "user": "Caenorst" }, { "repo": "pytorch/torchtitan", "number": 1936, "title": "Is it possible to train Vision-Language Model with different parallelism plan for vision and language parts of the model?", "body": "can we train a Vision-Language Model using torchtitan? \n\nAnd can we set different parallelism plan for different parts of the model: fsdp2+dp for vision part, and fsdp2+dp+sp+ep+pp for the llm part? 
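For illustration, per-submodule plans written directly against PyTorch's composable APIs might look roughly like the sketch below. The module names (`vision_tower`, `language_model`), mesh shapes, and layer layout are invented, EP/PP are omitted, and this is not a torchtitan recipe:

```python
# Hand-wavy sketch, assuming a recent PyTorch (fully_shard exposed under
# torch.distributed.fsdp) and 16 GPUs with torch.distributed already initialized.
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import fully_shard
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)

# Vision part: plain FSDP2 over all 16 ranks.
vision_mesh = init_device_mesh("cuda", (16,), mesh_dim_names=("dp",))
# Language part: 4-way FSDP2 x 4-way tensor parallel.
llm_mesh = init_device_mesh("cuda", (4, 4), mesh_dim_names=("dp", "tp"))

def parallelize_vlm(model):
    # FSDP2 only for the vision encoder.
    fully_shard(model.vision_tower, mesh=vision_mesh)

    # TP + FSDP2 for each transformer block of the language model.
    for block in model.language_model.layers:
        parallelize_module(
            block,
            llm_mesh["tp"],
            {
                "attn.q_proj": ColwiseParallel(),
                "attn.k_proj": ColwiseParallel(),
                "attn.v_proj": ColwiseParallel(),
                "attn.o_proj": RowwiseParallel(),
            },
        )
        fully_shard(block, mesh=llm_mesh["dp"])
    fully_shard(model.language_model, mesh=llm_mesh["dp"])
    return model
```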
If it is possible, how to do it?\n\nThanks very much.", "url": "https://github.com/pytorch/torchtitan/issues/1936", "state": "open", "labels": [], "created_at": "2025-10-27T06:47:47Z", "updated_at": "2025-10-27T14:16:04Z", "comments": 2, "user": "airlsyn" }, { "repo": "pytorch/pytorch", "number": 166282, "title": "Why does my PR still show \"Missing CLA Authorization\" even though I have already signed the CLA document?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nWhy does my PR still show \"Missing CLA Authorization\" even though I have already signed the CLA document?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/166282", "state": "closed", "labels": [], "created_at": "2025-10-27T01:19:21Z", "updated_at": "2025-10-27T16:45:23Z", "comments": 1, "user": "wenlinchong17-web" }, { "repo": "pytorch/pytorch", "number": 166238, "title": "[Dynamo][BUG] Regression about `collections.defaultdict` creation", "body": "### \ud83d\udc1b Describe the bug\n\nSee CI error log: https://github.com/pytorch/pytorch/actions/runs/18803810990/job/53655896530#step:27:2137\n\n### Error logs\n\n```pytb\n----------------------------- Captured stdout call -----------------------------\ninline_call [(\"Unsupported function call\n Explanation: Dynamo does not know how to trace the function ``\n Hint: Avoid calling `` in your code.\n Hint: Please report an issue to PyTorch.\n\n Developer debug context:\ncall_function UserDefinedClassVariable() [GetAttrVariable(DefaultDictVariable(), default_factory), ConstDictVariable()] {}\n\n For more details about this graph break, please visit: https://meta-pytorch.github.io/compile-graph-break-site/gb/gb0147.html\", 1)]\n- generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/dynamo.test_misc/dynamo.test_misc-271de5e392c25fc0.xml -\n=========================== short test summary info ============================\nFAILED [0.2732s] dynamo/test_misc.py::MiscTestsPyTree::test_pytree_tree_map_dict_order_cxx - torch._dynamo.exc.Unsupported: Unsupported function call\n Explanation: Dynamo does not know how to trace the function ``\n Hint: Avoid calling `` in your code.\n Hint: Please report an issue to PyTorch.\n\n Developer debug context: call_function UserDefinedClassVariable() [GetAttrVariable(DefaultDictVariable(), default_factory), ConstDictVariable()] {}\n```\n\n\n### Versions\n\nmain\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela", "url": "https://github.com/pytorch/pytorch/issues/166238", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: dynamo", "dynamo-must-fix", "dynamo-variable-tracker" ], "created_at": "2025-10-25T15:26:06Z", "updated_at": "2025-11-05T06:09:41Z", "comments": 4, "user": "XuehaiPan" }, { "repo": "pytorch/pytorch", "number": 166233, "title": "license: Is it possible to stop using Conda in the Dockerfile? Due to Conda\u2019s licensing issues, many companies have already received legal warning letters.", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nStarting this year, many companies have received legal letters from Conda\u2019s lawyers, explicitly stating that using Conda requires a paid license. Although I have checked Conda\u2019s official website, it does not clearly specify this. 
I also noticed that the current PyTorch Dockerfile still uses Conda, which makes me very concerned. Therefore, I strongly recommend removing Conda and using **uv** or building Python from source as the base environment instead.\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @seemethere @malfet @atalman", "url": "https://github.com/pytorch/pytorch/issues/166233", "state": "open", "labels": [ "module: binaries", "triaged", "module: docker", "better-engineering" ], "created_at": "2025-10-25T08:50:40Z", "updated_at": "2025-10-28T03:42:37Z", "comments": 2, "user": "WangxuP" }, { "repo": "pytorch/pytorch", "number": 166219, "title": "Why are there so many warnings when building the C++ libtorch project? How to resolve it?", "body": "### \ud83d\udc1b Describe the bug\n\nWhen I compile the C++ libtorch project, there are many warnings. How can I resolve them? My configuration: Win11, MSVC, libtorch 2.8.0. My C++ code is as follows:\n```cpp\n#include \n#include \n\nint main() {\n torch::Tensor tensor_zeros = torch::zeros({3, 3});\n std::cout << \"Zeros Tensor:\\n\" << tensor_zeros << \"\\n\\n\";\n\n torch::Tensor tensor_ones;\n if (torch::cuda::is_available()) {\n tensor_ones = torch::ones({2, 2}, torch::kFloat).to(torch::kCUDA);\n std::cout << \"Ones Tensor on CUDA:\\n\" << tensor_ones << \"\\n\\n\";\n } else {\n tensor_ones = torch::ones({2, 2}, torch::kFloat);\n std::cout << \"CUDA not available. Ones Tensor on CPU:\\n\" << tensor_ones << \"\\n\\n\";\n }\n\n\n std::vector data = {1.0, 2.0, 3.0, 4.0};\n torch::Tensor tensor_from_vector = torch::from_blob(data.data(), {2, 2});\n std::cout << \"Tensor from vector:\\n\" << tensor_from_vector << \"\\n\";\n\n if (torch::cuda::is_available()) {\n auto cpu_tensor = torch::rand({5, 5});\n auto gpu_tensor = cpu_tensor.to(torch::kCUDA);\n std::cout << \"Tensor on GPU:\\n\" << gpu_tensor << \"\\n\";\n }\n\n return 0;\n}\n```\nThe warnings during compilation are as follows:\n```\n[1/5] Copying DLL files to build directory\n[2/5] Scanning E:\\coding\\cppcode\\libtorchtest\\test.cpp for CXX dependencies\n[3/5] Generating CXX dyndep file CMakeFiles\\test.dir\\CXX.dd\n[4/5] Building CXX object CMakeFiles\\test.dir\\test.cpp.obj\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\optional(82): warning C4267: \u201c\u521d\u59cb\u5316\u201d: \u4ece\u201csize_t\u201d\u8f6c\u6362\u5230\u201cint\u201d\uff0c\u53ef\u80fd\u4e22\u5931\u6570\u636e\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\optional(82): note: \u6a21\u677f\u5b9e\u4f8b\u5316\u4e0a\u4e0b\u6587(\u6700\u65e9\u7684\u5b9e\u4f8b\u5316\u4e0a\u4e0b\u6587)\u4e3a\nG:\\software\\libtorch280_cu126Release\\include\\ATen/core/function_schema.h(438): note: \u67e5\u770b\u5bf9\u6b63\u5728\u7f16\u8bd1\u7684\u51fd\u6570 \u6a21\u677f \u5b9e\u4f8b\u5316\u201cstd::optional::optional(_Ty2 &&) noexcept\u201d\u7684\u5f15\u7528\n with\n [\n I=size_t,\n _Ty2=size_t\n ]\nG:\\software\\libtorch280_cu126Release\\include\\ATen/core/function_schema.h(438): note: \u8bf7\u53c2\u9605 \"c10::FunctionSchema::argumentIndexWithName\" \u4e2d\u5bf9 \"std::optional::optional\" \u7684\u7b2c\u4e00\u4e2a\u5f15\u7528\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\optional(258): note: \u67e5\u770b\u5bf9\u6b63\u5728\u7f16\u8bd1\u7684\u51fd\u6570 \u6a21\u677f \u5b9e\u4f8b\u5316\u201cstd::_Optional_construct_base<_Ty>::_Optional_construct_base(std::in_place_t,const unsigned __int64 &&)\u201d\u7684\u5f15\u7528\n with\n [\n _Ty=int\n 
]\nE:\\coding\\cppcode\\libtorchtest\\test.cpp(32): note: \u67e5\u770b\u5bf9\u6b63\u5728\u7f16\u8bd1\u7684\u51fd\u6570 \u6a21\u677f \u5b9e\u4f8b\u5316\u201cstd::_Optional_destruct_base<_Ty,true>::_Optional_destruct_base(std::in_place_t,const unsigned __int64 &&) noexcept\u201d\u7684\u5f15\u7528\n with\n [\n _Ty=int\n ]\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\xutility(492): warning C4267: \u201c\u521d\u59cb\u5316\u201d: \u4ece\u201csize_t\u201d\u8f6c\u6362\u5230\u201c_Ty\u201d\uff0c\u53ef\u80fd\u4e22\u5931\u6570\u636e\n with\n [\n _Ty=unsigned int\n ]\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\xutility(492): note: \u6a21\u677f\u5b9e\u4f8b\u5316\u4e0a\u4e0b\u6587(\u6700\u65e9\u7684\u5b9e\u4f8b\u5316\u4e0a\u4e0b\u6587)\u4e3a\nG:\\software\\libtorch280_cu126Release\\include\\torch/csrc/dynamo/compiled_autograd.h(236): note: \u67e5\u770b\u5bf9\u6b63\u5728\u7f16\u8bd1\u7684\u51fd\u6570 \u6a21\u677f \u5b9e\u4f8b\u5316\u201cunsigned int &std::vector>::emplace_back(const _Ty &)\u201d\u7684\u5f15\u7528\n with\n [\n _Ty=size_t\n ]\nG:\\software\\libtorch280_cu126Release\\include\\torch/csrc/dynamo/compiled_autograd.h(236): note: \u8bf7\u53c2\u9605 \"torch::dynamo::autograd::TensorArgs::lookup\" \u4e2d\u5bf9 \"std::vector>::emplace_back\" \u7684\u7b2c\u4e00\u4e2a\u5f15\u7528\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\vector(909): note: \u67e5\u770b\u5bf9\u6b63\u5728\u7f16\u8bd1\u7684\u51fd\u6570 \u6a21\u677f \u5b9e\u4f8b\u5316\u201c_Ty &std::vector<_Ty,std::allocator<_Ty>>::_Emplace_one_at_back(const unsigned __int64 &)\u201d\u7684\u5f15\u7528\n with\n [\n _Ty=std::_Vbase\n ]\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\vector(830): note: \u67e5\u770b\u5bf9\u6b63\u5728\u7f16\u8bd1\u7684\u51fd\u6570 \u6a21\u677f \u5b9e\u4f8b\u5316\u201c_Ty &std::vector<_Ty,std::allocator<_Ty>>::_Emplace_back_with_unused_capacity(const unsigned __int64 &)\u201d\u7684\u5f15\u7528\n with\n [\n _Ty=std::_Vbase\n ]\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\vector(845): note: \u67e5\u770b\u5bf9\u6b63\u5728\u7f16\u8bd1\u7684\u51fd\u6570 \u6a21\u677f \u5b9e\u4f8b\u5316\u201cvoid std::_Construct_in_place(unsigned int &,const _Ty &) noexcept\u201d\u7684\u5f15\u7528\n with\n [\n _Ty=size_t\n ]\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\xutility(502): note: \u67e5\u770b\u5bf9\u6b63\u5728\u7f16\u8bd1\u7684\u51fd\u6570 \u6a21\u677f \u5b9e\u4f8b\u5316\u201c_Ty *std::construct_at<_Ty,const unsigned __int64&>(_Ty *const ,const unsigned __int64 &) noexcept()\u201d\u7684\u5f15\u7528\n with\n [\n _Ty=unsigned int\n ]\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\xutility(506): warning C4267: \u201c\u521d\u59cb\u5316\u201d: \u4ece\u201csize_t\u201d\u8f6c\u6362\u5230\u201cunsigned int\u201d\uff0c\u53ef\u80fd\u4e22\u5931\u6570\u636e\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\xutility(492): warning C4267: \u201c\u521d\u59cb\u5316\u201d: \u4ece\u201csize_t\u201d\u8f6c\u6362\u5230\u201c_Ty\u201d\uff0c\u53ef\u80fd\u4e22\u5931\u6570\u636e\n with\n [\n _Ty=int\n ]\nH:\\software\\Visual_Studio_022\\VC\\Tools\\MSVC\\14.43.34808\\include\\xutility(492): note: \u6a21\u677f\u5b9e\u4f8b\u5316\u4e0a\u4e0b\u6587(\u6700\u65e9\u7684\u5b9e", "url": "https://github.com/pytorch/pytorch/issues/166219", "state": "open", "labels": [ "module: windows", "module: cpp-extensions", "triaged" ], "created_at": "2025-10-25T03:09:34Z", 
"updated_at": "2025-10-25T15:39:20Z", "user": "hyl20012" }, { "repo": "pytorch/pytorch", "number": 166180, "title": "AOTI _register_aoti_cleanup line 47", "body": "### \ud83d\udc1b Describe the bug\n\nHi,\nTrying to run [this code](https://huggingface.co/spaces/zerogpu-aoti/wan2-2-fp8da-aoti-faster/tree/main) on Modal, I got this error message I absolute don't know how to interpret\n\n### Error logs\n\n```\n File \":/usr/local/lib/python3.12/site-packages/torch/utils/_contextlib.py\", line 120, in decorate_context\n File \":/usr/local/lib/python3.12/site-packages/diffusers/pipelines/wan/pipeline_wan_i2v.py\", line 756, in __call__\n File \":/usr/local/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1775, in _wrapped_call_impl\n File \":/usr/local/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1786, in _call_impl\n File \":/usr/local/lib/python3.12/site-packages/diffusers/models/transformers/transformer_wan.py\", line 663, in forward\n File \":/usr/local/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1775, in _wrapped_call_impl\n File \":/usr/local/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1786, in _call_impl\n File \":/usr/local/lib/python3.12/site-packages/spaces/zero/torch/aoti.py\", line 77, in __call__\n File \":/usr/local/lib/python3.12/contextlib.py\", line 137, in __enter__\n return next(self.gen)\n^^^^^^^^^^^^^^^\n File \":/usr/local/lib/python3.12/site-packages/spaces/zero/torch/aoti.py\", line 47, in _register_aoti_cleanup\n File \":/usr/local/lib/python3.12/pathlib.py\", line 1056, in iterdir\n for name in os.listdir(self):\n ^^^^^^^^^^^^^^^^^\nFileNotFoundError: [Errno 2] No such file or directory: '/proc/2/map_files'\n```\n\n### Versions\n\nPyTorch version: 2.9.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.5 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version: Could not collect\nCMake version: version 3.27.6\nLibc version: glibc-2.35\n\nPython version: 3.12.1 | packaged by Anaconda, Inc. | (main, Jan 19 2024, 15:51:05) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.35\nIs CUDA available: True\n\nVersions of relevant libraries:\n[pip3] numpy==1.26.4\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pip3] nvidia-curand-cu12==10.3.9.90\n[pip3] nvidia-cusolver-cu12==11.7.3.90\n[pip3] nvidia-cusparse-cu12==12.5.8.93\n[pip3] nvidia-cusparselt-cu12==0.7.1\n[pip3] nvidia-nccl-cu12==2.27.5\n[pip3] nvidia-nvjitlink-cu12==12.8.93\n[pip3] nvidia-nvtx-cu12==12.8.90\n[pip3] torch==2.9.0\n[pip3] torchao==0.14.1\n[pip3] torchaudio==2.9.0\n[pip3] torchvision==0.24.0\n[pip3] triton==3.5.0\n[conda] numpy 1.26.4 pypi_0 pypi\n\ncc @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/166180", "state": "closed", "labels": [ "oncall: pt2" ], "created_at": "2025-10-24T18:58:27Z", "updated_at": "2025-10-28T09:20:17Z", "comments": 2, "user": "christopher5106" }, { "repo": "pytorch/ao", "number": 3243, "title": "TorchAO Missing 3.13T (free-threading) Wheels", "body": "Latest `0.14.1` cuda builds does produce wheels for `3.13t` which is the `nogil` build of Python. 
\n\nOn Ubuntu 24.04 x86_64\n\n```py\n# pip install torchao==0.14.1 --index-url https://download.pytorch.org/whl/cu130 -U\nLooking in indexes: https://download.pytorch.org/whl/cu130\nERROR: Could not find a version that satisfies the requirement torchao==0.14.1 (from versions: none)\nERROR: No matching distribution found for torchao==0.14.1\n\n# python --version\nPython 3.13.8\n\n# python --version\nPython 3.13.8\n\n# pip show torch\nName: torch\nVersion: 2.9.0+cu130\nSummary: Tensors and Dynamic neural networks in Python with strong GPU acceleration\nHome-page: https://pytorch.org\nAuthor: \nAuthor-email: PyTorch Team \nLicense: BSD-3-Clause\nLocation: /root/vm313t/lib/python3.13t/site-packages\nRequires: filelock, fsspec, jinja2, networkx, nvidia-cublas, nvidia-cuda-cupti, nvidia-cuda-nvrtc, nvidia-cuda-runtime, nvidia-cudnn-cu13, nvidia-cufft, nvidia-cufile, nvidia-curand, nvidia-cusolver, nvidia-cusparse, nvidia-cusparselt-cu13, nvidia-nccl-cu13, nvidia-nvjitlink, nvidia-nvshmem-cu13, nvidia-nvtx, setuptools, sympy, triton, typing-extensions\nRequired-by: accelerate, bitblas, causal_conv1d, flash_attn, GPTQModel, lm_eval, MemLord, peft, torchvision\n```\n\nReported here\nhttps://github.com/pytorch/ao/issues/2919#issuecomment-3443814140\n\nAnd reproduced by another user here:\nhttps://github.com/pytorch/ao/issues/2919#issuecomment-3444060877\n", "url": "https://github.com/pytorch/ao/issues/3243", "state": "open", "labels": [], "created_at": "2025-10-24T16:53:03Z", "updated_at": "2025-10-30T19:30:57Z", "comments": 1, "user": "Qubitium" }, { "repo": "pytorch/ao", "number": 3232, "title": "nvfp4: why do we need to call weight.contiguous for Qwen3 during lm-eval?", "body": "TODO @andrewor14 add repro", "url": "https://github.com/pytorch/ao/issues/3232", "state": "open", "labels": [], "created_at": "2025-10-23T21:20:54Z", "updated_at": "2025-10-28T22:36:03Z", "comments": 1, "user": "vkuzo" }, { "repo": "pytorch/pytorch", "number": 166116, "title": "[CCA] CUDACachingAllocator always release physical memory handle when the expandable segment unmaps.", "body": "This may not be a bug. I'm just confused about the CUDACachingAllocator behavior.\n\nWhen enable expandable segments, CCA uses the CUDA virtual memory API.([cuMemCreate](https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__VA.html#group__CUDA__VA_1g899d69a862bba36449789c64b430dc7c)/[cuMemRelease](https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__VA.html#group__CUDA__VA_1g3014f0759f43a8d82db951b8e4b91d68), etc.). \n\nI've noticed that CCA will call `cuMemRelease` any time it unmaps a physical memory handle as shown here: https://github.com/pytorch/pytorch/blob/bf5aa9e42eb4049aad56264dacefd638233924b5/c10/cuda/CUDACachingAllocator.cpp#L704\nAnd when map virtual address to physical memory, it will re-create the physical memory handle : https://github.com/pytorch/pytorch/blob/bf5aa9e42eb4049aad56264dacefd638233924b5/c10/cuda/CUDACachingAllocator.cpp#L441\n\nMy question is , does the physical memory handle really need to be released anytime when we do unmap? Can we reuse the handle for next mapping? 
I think maybe there will be some performance gain when we reuse these handles?", "url": "https://github.com/pytorch/pytorch/issues/166116", "state": "open", "labels": [ "triaged", "module: CUDACachingAllocator" ], "created_at": "2025-10-23T07:30:24Z", "updated_at": "2025-10-29T02:57:00Z", "comments": 3, "user": "PHLens" }, { "repo": "pytorch/pytorch", "number": 166106, "title": "[Feature][BUG] need support for DispatchKey.AutocastXPU", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\ndetails information in this [issue](https://github.com/intel/intel-xpu-backend-for-triton/issues/5366#issuecomment-3433362148).\ni get error when i use torch.compile+autocast+triton:\n\n```\n File \"D:\\miniconda3\\envs\\compile\\Lib\\site-packages\\torch\\_ops.py\", line 493, in dispatch\n raise NotImplementedError(\ntorch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:\nNotImplementedError: could not find kernel for HigherOrderOperator triton_kernel_wrapper_mutation at dispatch key DispatchKey.AutocastXPU (resolved from DispatchKey.AutocastXPU)\n```\n\ni found DispatchKey.AutocastCPU and DispatchKey.AutocastCUDA in [https://github.com/pytorch/pytorch/blame/c746feb86a1459db5f6294730d1d72ed15f16dd3/torch/_higher_order_ops/triton_kernel_wrap.py#L1364](https://github.com/pytorch/pytorch/blame/c746feb86a1459db5f6294730d1d72ed15f16dd3/torch/_higher_order_ops/triton_kernel_wrap.py#L1364)\nbut no DispatchKey.AutocastXPU.\nso i think it's not a bug. i think pytorch need support this feature. does pytorch have some plan?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @gujinghui @EikanWang @fengyuan14 @guangyey", "url": "https://github.com/pytorch/pytorch/issues/166106", "state": "open", "labels": [ "triaged", "module: xpu" ], "created_at": "2025-10-23T01:58:34Z", "updated_at": "2025-10-23T14:47:11Z", "comments": 1, "user": "xiaohoua" }, { "repo": "pytorch/vision", "number": 9249, "title": "Non-local versions of torch are only available for linux(/mac) aarch64", "body": "When checking https://download.pytorch.org/whl/torchvision/ for e.g. 0.24.0 on Python 3.12, the following list of wheels is available for non-local (no `+`) versions:\n\n```\ntorchvision-0.24.0-cp312-cp312-macosx_11_0_arm64.whl\ntorchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl\ntorchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl\ntorchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl\ntorchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl\ntorchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl\n```\n\nThis caused resolution problems for uv users on x86_64 linux (https://github.com/astral-sh/uv/issues/16386), I'm not sure if that's intentional? 
It also seems that the `manylinux_2_28_aarch64` wheels are duplicated.\n\nAnother user reported a different problem with https://download.pytorch.org/whl/nightly/cu128 in the uv discord (https://discord.com/channels/1039017663004942429/1039017663512449056/1430596302764249100):\n\n```\nResolved 176 packages in 22ms\nerror: Distribution `torchvision==0.25.0.dev20251012 @ registry+https://download.pytorch.org/whl/nightly/cu128` can't be installed because it doesn't have a source distribution or wheel for the current platform\n\nhint: You're on Linux (`manylinux_2_35_x86_64`), but `torchvision` (v0.25.0.dev20251012) only has wheels for the following platform: `manylinux_2_28_aarch64`; consider adding your platform to `tool.uv.required-environments` to ensure uv resolves to a version with compatible wheels\n```\n\nI'm not sure if this is a bug or intentional, I wanted to discuss how we can improve the user experience either on the torch side or on the uv side.", "url": "https://github.com/pytorch/vision/issues/9249", "state": "closed", "labels": [], "created_at": "2025-10-22T17:04:55Z", "updated_at": "2025-12-15T19:09:29Z", "comments": 3, "user": "konstin" }, { "repo": "pytorch/ao", "number": 3226, "title": "question of blockwise quant fp8 training", "body": "Hi, the [blockwise_fp8_training](https://github.com/pytorch/ao/tree/7e68d5ee6fe6749a667edd2510d5fd2b599a27e2/torchao/prototype/blockwise_fp8_training) has been there for a while. Is there any reason we dont merge it into [float8](https://github.com/pytorch/ao/tree/main/torchao/float8) folder?\n\nAnd current moe training only supports `FP8_ROWWISE` and `MXFP8`, will `FP8_BlockWise` be considered to be added into `torchao` in the near future? (mainly for h100 users)\n\nthanks!", "url": "https://github.com/pytorch/ao/issues/3226", "state": "open", "labels": [ "float8", "moe" ], "created_at": "2025-10-22T13:18:40Z", "updated_at": "2025-10-24T04:00:47Z", "comments": 3, "user": "rakkit" }, { "repo": "pytorch/pytorch", "number": 166020, "title": "[doc] Clarify that torch.mean doesn't support integer dtypes like torch.long", "body": "### \ud83d\udcda The doc issue\n\n[doc] Clarify that torch.mean doesn't support integer dtypes like torch.long\n\n**Page:** `torch.mean` documentation\n\n**Problem:** The documentation for `torch.mean` doesn't explicitly mention that integer dtypes (like `torch.long`) are not supported and will raise a runtime error.\n\n**Current behavior:** When users try:\n```python\ntorch.mean(torch.tensor([1, 2, 3], dtype=torch.long))\n```\nThey get the error: `RuntimeError: mean not implemented for 'Long'`\n\nHowever, this limitation isn't mentioned in the current documentation, leading to confusion about whether this is a bug or intended behavior.\n\n**Expected:** The documentation should clearly state that `torch.mean` requires floating-point input types and explain why integer types are not supported.\n\n**Location:** This affects the `torch.mean` documentation page at https://pytorch.org/docs/stable/generated/torch.mean.html\n\n### Suggest a potential alternative/fix\n\nAdd a note in the \"Notes\" section of `torch.mean` documentation:\n\n\"Note: `torch.mean` requires floating-point dtypes for input tensors. Integer dtypes (like `torch.long`, `torch.int`) are not supported because the mean operation typically results in floating-point values. 
If you need integer division, consider using `torch.div` with the `rounding_mode` parameter instead.\"\n\ncc @svekars @sekyondaMeta @AlannaBurke", "url": "https://github.com/pytorch/pytorch/issues/166020", "state": "closed", "labels": [ "triaged" ], "created_at": "2025-10-21T19:27:50Z", "updated_at": "2025-10-21T22:13:29Z", "comments": 1, "user": "har5hdeep5harma" }, { "repo": "pytorch/pytorch", "number": 166014, "title": "Make Inductor Fallback Nodes Less Reliant on Invariants from Functionalization / AOT Autograd", "body": "### \ud83d\udc1b Describe the bug\n\n\nInductor has generic support for invoking operators as they would have been [in eager execution](https://github.com/pytorch/pytorch/blob/3dfd0c75847aad61a24e63d91bb330083db11857/torch/_inductor/graph.py#L1626-L1630). This path is hardened and works well both for custom ops and for bisecting a bad inductor lowering. However, it relies on invariants provided by AOT Autograd. If we want to compile without functionalization and decomposition, we may need to make it less reliant.\n\n#### Problem 1: Aliasing Relationships\n\nThe aliasing relationships of the graph must be statically known and correct. Likely the easiest and best path forward is to make sure that we have runtime checking of the aliasing relationships of custom ops. See https://github.com/pytorch/pytorch/issues/165349. When things are incorrect, there are two failure modes:\n\n**Incorrectly marked as aliasing**\n\nAn operator signature or meta may statically indicate that an input and output are aliasing when at execution time a new tensor will be returned. In this case, we will delay deleting the input until the output's final use, which can increase peak memory. \n\nSee: https://github.com/pytorch/pytorch/pull/163182#discussion_r2380201053 There's no reason why we can't delete the input eagerly here, since the view should keep the tensor alive.\n\n**Incorrectly marked as non-aliasing**\n\nThe failure mode here is that we may reuse the buffer with `config.inplace_buffers = True`. See this discussion on an operator which was incorrectly marked: https://github.com/pytorch/pytorch/issues/165349\n\nBoth of these failure modes interact with the scheduler in a) [DCE (Dead Code Elimination)](https://github.com/pytorch/pytorch/blob/c40048472cc4e28f44e8e5835cae319add231bf5/torch/_inductor/scheduler.py#L2860) and b) [Weak dependency mutation ordering](https://github.com/pytorch/pytorch/blob/c40048472cc4e28f44e8e5835cae319add231bf5/torch/_inductor/scheduler.py#L1102-L1104)\n\n\n#### Problem 2: Limited Mutation Support\n\nMutation has a limited form, mostly on inputs, and aliasing is limited with fallback nodes. 
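To make the "incorrectly marked as non-aliasing" case from Problem 1 concrete, here is a hypothetical sketch (the `demo::fake_copy` op, its schema, and the kernel are invented for illustration): the schema declares a fresh, non-aliasing output, but the kernel returns a view of its input, which is the situation where buffer reuse under `config.inplace_buffers = True` becomes unsafe.

```python
# Hypothetical example -- the op and its schema are invented to illustrate the mismatch.
import torch
from torch.library import Library

lib = Library("demo", "DEF")
# The schema declares a plain (non-aliasing) output ...
lib.define("fake_copy(Tensor x) -> Tensor")

def fake_copy_cpu(x):
    # ... but the kernel actually returns a view of the input.
    return x.view(-1)

lib.impl("fake_copy", fake_copy_cpu, "CPU")

x = torch.randn(2, 2)
y = torch.ops.demo.fake_copy(x)
# y silently shares storage with x, contradicting the schema; a runtime check of
# aliasing relationships (as suggested above) is one way to catch this.
print(y.data_ptr() == x.data_ptr())  # True
```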
\n\nSee related issue: https://github.com/pytorch/pytorch/issues/166009\n\n### Versions\n\nmain\n\ncc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben", "url": "https://github.com/pytorch/pytorch/issues/166014", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: inductor" ], "created_at": "2025-10-21T18:59:09Z", "updated_at": "2025-10-21T18:59:31Z", "comments": 0, "user": "eellison" }, { "repo": "pytorch/pytorch", "number": 165985, "title": "Can I provide a Chinese version of the readme file to submit", "body": "### \ud83d\udcda The doc issue\n\nCan I provide a Chinese version of the readme file to submit?\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/165985", "state": "closed", "labels": [], "created_at": "2025-10-21T11:32:28Z", "updated_at": "2025-10-27T23:04:37Z", "comments": 1, "user": "wenlinchong17-web" }, { "repo": "pytorch/xla", "number": 9684, "title": "RFC: Evolving PyTorch/XLA for a more native experience on TPU", "body": "### Motivation\n\nFor many years, `torch_xla` has been the primary way for the community to run PyTorch programs on Cloud TPUs. It has successfully enabled the training of massive models by bringing the power of the XLA compiler to the PyTorch ecosystem.\n\nThe current implementation, while powerful, presents a developer experience that can sometimes feel distinct from \"native\" PyTorch. The reliance on a lazy tensor model and explicit graph tracing (`xm.mark_step`) creates a separation from PyTorch's eager-first philosophy. This can introduce challenges in debugging, complicates integration with the broader PyTorch ecosystem, and requires users to learn a `torch_xla`-specific set of APIs and concepts.\n\nWe believe we can deliver a more seamless and native experience for PyTorch users on TPUs. The goal is to provide the best of both worlds: the interactive, flexible development experience of PyTorch's eager mode and the world-class performance of the XLA compiler for scaled-out workloads.\n\n---\n\n### Proposal: A Native TPU Backend\n\nWe propose a TPU backend for PyTorch that is designed to align with modern PyTorch architecture and eager-first design. The goal is to make a \"native\" device in PyTorch, where `tensor.to('tpu')` feels just as natural and intuitive as `tensor.to('cuda')`. This new direction aims to fully embrace PyTorch's eager mode while still leveraging the powerful XLA compiler for performance-critical code paths.\n\nThe core principles of this new stack are:\n\n1. **XLA**: Similarly to `torch_xla`, our proposal assumes that we can continue to rely on XLA as the underlying compiler infrastructure. However, we would call it in a profoundly different way which enables new techniques and a better user experience. Note that on TPU, compilation is required for the best performance \u2014 but it should be possible to hide the compile times. \n1. **Eager Mode with Deferred Execution**: Similar to standard PyTorch eager mode, ops are being dispatched. However, the new stack can then choose to compile and execute individual ops, shorter or longer sequences of ops, or potential candidates for fusion clusters\u2014all the way up to a full compile of a forward or backward pass.
\nCompilation would happen asynchronously, which means compilation of graphs and their execution could overlap, and compilation results would be cached. We would work with the XLA team to further reduce overall compile time overhead with techniques such as persistent deduping and by limiting inlining and unrolling. As a result, the compile time overhead would be drastically minimized even for larger incrementally compiled graphs.\n1. **JIT**: This approach would enable a true just-in-time compilation engine with recompilation, feedback-directed optimizations, autotuning, and active memory management to avoid OOMs. With this, users would get the eager experience but with compiled performance after just a few inferences or training steps.\n\nWith these principles in mind, we could deliver on the following features:\n\n1. **Eager Execution by Default**: As described above, operations will appear as being eagerly executed, just as they do on CPU or GPU, even though they are being compiled in the background with minimal, and mostly hidden, compile time overhead. This would provide a familiar, intuitive, and much easier-to-debug workflow where users can inspect tensors and use standard Python tooling. \n1. **Integration with `torch.compile`**: For maximizing performance, TPU would integrate as a first-class backend for `torch.compile`. This would allow users to get the performance benefits of XLA compilation and TPUs at scale on their performance-critical code with a simple `@torch.compile` decorator. \n1. **Distributed Training via DTensor**: The new backend would natively support PyTorch's distributed APIs. This would allow users to leverage advanced, large-scale distributed training strategies like Fully Sharded Data Parallel (FSDP) and other model parallelism techniques out of the box, making it much simpler to scale up models. \n1. **A More \"PyTorch Native\" Feel**: The end goal is to abstract away the complexities of the underlying compiler. Developing for a TPU should not require a fundamentally different programming model. This would mean moving away from `torch_xla`-specific APIs and toward the standard PyTorch API surface. This approach would provide the best of both worlds: the interactive, flexible development experience of PyTorch's eager mode and the world-class performance of the XLA compiler for scaled-out workloads.\n\n---\n\n### We Want Your Feedback!\n\nWe're excited for this direction, and to bring together PyTorch's eager mode and the XLA compiler in a way that helps the community achieve new levels of performance and scale. This is a significant undertaking, and we want to build it with the community. We're open to feedback on this direction.\n\n- Does this proposal address the pain points you've experienced with `torch_xla?`\n- Are there specific work", "url": "https://github.com/pytorch/xla/issues/9684", "state": "open", "labels": [ "RFC" ], "created_at": "2025-10-20T22:12:20Z", "updated_at": "2025-12-19T04:58:36Z", "comments": 18, "user": "qcc4cp" }, { "repo": "pytorch/pytorch", "number": 165933, "title": "[Distributed] fully_shard: support no_shard (ddp) strategy?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIt looks like the `fully_shard` API is recommended these days over `torch.distributed.FSDP`. The latter allows a `ShardingStrategy` argument to control the degree of sharding (i.e. 
zero1/2/3) - this is useful in some cases where we don't want to shard the params, only grads, or not shard anything at all, and just use FSDP for its CPU offload / mixed precision features. \n\nChecking the `fully_shard` docs: https://docs.pytorch.org/docs/stable/distributed.fsdp.fully_shard.html, it appears to support zero-2/HSDP but not `NO_SHARD`. A couple of questions:\n\n1) Are there any plans to add no_shard (DDP) support? \n2) If not for (1), would `torch.distributed.FSDP` be supported and recommended for these use cases? \n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci", "url": "https://github.com/pytorch/pytorch/issues/165933", "state": "open", "labels": [ "oncall: distributed" ], "created_at": "2025-10-20T20:48:14Z", "updated_at": "2025-10-22T14:44:13Z", "comments": 0, "user": "rohan-varma" }, { "repo": "pytorch/pytorch", "number": 165909, "title": "AWS was down, GHA infrastructure affected / recovering", "body": "> NOTE: Remember to label this issue with \"`ci: sev`\"\n> If you want autorevert to be disabled, keep the ci: disable-autorevert label\n\n \n\n## Current Status\n\nMitigated, queues are recovering.\n\nAWS experienced a big outage (https://health.aws.amazon.com/health/status) this morning resulting in most of our GHA infra going down with them.\n\nWe are still in the process of recovering and will update as soon as our services are able to recover.\n\n## Error looks like\n*Provide some way users can tell that this SEV is causing their issue.*\n\n## Incident timeline (all times pacific)\n*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*\n\n## User impact\n*How does this affect users of PyTorch CI?*\n\n## Root cause\n*What was the root cause of this issue?*\n\n## Mitigation\n*How did we mitigate the issue?*\n\n## Prevention/followups\n*How do we prevent issues like this in the future?*\n", "url": "https://github.com/pytorch/pytorch/issues/165909", "state": "closed", "labels": [ "ci: sev", "ci: sev-mitigated" ], "created_at": "2025-10-20T15:28:48Z", "updated_at": "2025-10-21T16:41:19Z", "comments": 0, "user": "seemethere" }, { "repo": "pytorch/pytorch", "number": 165907, "title": "Feedback on profiler key_averages documentation", "body": "### \ud83d\udcda The doc issue\n\nIt would be great to have more documentation on how to use key_averages beyond the Table method. 
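As a purely illustrative sketch (mine, not from the issue, and assuming pandas is available) of the kind of example the docs could add, building a dataframe from the averaged events; the attribute names used here are what current releases happen to expose and may differ across versions:

```python
import pandas as pd
import torch
from torch.profiler import ProfilerActivity, profile

with profile(activities=[ProfilerActivity.CPU]) as prof:
    torch.mm(torch.randn(256, 256), torch.randn(256, 256))

# key_averages() returns an EventList of FunctionEventAvg objects.
rows = [
    {
        "name": evt.key,
        "count": evt.count,
        "cpu_time_total_us": evt.cpu_time_total,
        "self_cpu_time_total_us": evt.self_cpu_time_total,
    }
    for evt in prof.key_averages()
]
df = pd.DataFrame(rows).sort_values("self_cpu_time_total_us", ascending=False)
print(df.head())
```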
Right now there is no documentation for the EventList and FunctionEventAvg data types.\n\n### Suggest a potential alternative/fix\n\nAdding pages for EventList and FunctionEventAvg classes would be a good start, and it would be nice to have an easy way to create a dataframe from the results.\n\ncc @svekars @sekyondaMeta @AlannaBurke @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise", "url": "https://github.com/pytorch/pytorch/issues/165907", "state": "closed", "labels": [ "module: docs", "actionable", "oncall: profiler" ], "created_at": "2025-10-20T14:56:48Z", "updated_at": "2025-11-14T02:03:22Z", "comments": 0, "user": "alexracape" }, { "repo": "pytorch/pytorch", "number": 165902, "title": "torchcodec in pytorch url", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIs it possible to have torchcodec in pytorch url?\npip3 install torch torchvision torchaudio torchcodec--index-url https://download.pytorch.org/whl/cu130\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @seemethere @malfet @atalman", "url": "https://github.com/pytorch/pytorch/issues/165902", "state": "open", "labels": [ "module: binaries", "triaged" ], "created_at": "2025-10-20T12:11:01Z", "updated_at": "2025-10-20T14:27:16Z", "comments": 0, "user": "johnnynunez" }, { "repo": "pytorch/pytorch", "number": 165900, "title": "Converting weights `.pt` content between `dict` and `RecursiveScriptModule`", "body": "When using PyTorch inside Isaac Lab to train RL policies, the program saves weights `.pt` file as a Python dict (policy, value, and optimizer keys). It can be further loaded with `torch.load` function.\n\nHowever, Isaac Sim's policy loader expects a `torch.jit._script.RecursiveScriptModule` object to be loaded with `torch.jit.load` and attempting `torch.jit.load` leads to errors like:\n\n`RuntimeError: PytorchStreamReader failed locating file constants.pkl: file not found`\n\nIs there any way to convert between these file content formats? This may be the crucial issue regarding usage of PyTorch inside Isaac Lab / Sim, so I posted the original thread also on their repo if you find this useful: https://github.com/isaac-sim/IsaacLab/issues/3697\n\ncc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel", "url": "https://github.com/pytorch/pytorch/issues/165900", "state": "open", "labels": [ "oncall: jit" ], "created_at": "2025-10-20T11:25:01Z", "updated_at": "2025-10-20T14:27:26Z", "comments": 0, "user": "PsorTheDoctor" }, { "repo": "pytorch/pytorch", "number": 165861, "title": "Reflect padding: CUDA errror when one of the batch dimensions is larger than uint16 max value (2**16)", "body": "### \ud83d\udc1b Describe the bug\n\nReflect padding breaks when one of the batch dimensions is larger than uint16 max value (2**16).\nThe total memory footprint is not important, as when the tensor holds more numbers, but all but last dimension is within the uint16 range everything is fine. \n\nOther padding modes behave fine, the problem is only with the reflection one.\n\n## why is this important?\n`torch.stft` only accepts 2D tensors (B, L), requiring flattening of higher dimensions into the batch dimension. 
This commonly produces batch sizes > 65536 for large batches or multi-dimensional audio/signal data.\n\n## reproduce\n```python\nimport torch\nimport torch.nn.functional as F\n\n# these break cuda\nx = torch.rand(2**16, 2, device=\"cuda\")\n# x = torch.rand(1, 2**16, 2, device=\"cuda\")\n# x = torch.rand(2**16, 1, 2, device=\"cuda\")\n\n# these are fine even if the total number of samples is more than 2**16, but not along a single dimension\n# x = torch.rand(2**16 - 1, 200, device=\"cuda\") # everything ok\n# x = torch.rand(8, 2**16 - 1, 200, device=\"cuda\") # everything ok\n# x = torch.rand(2**16 - 1, 8, 200, device=\"cuda\") # everything ok\n\n# x = torch.rand(2, 2**18, device=\"cuda\") # everything ok\n\nF.pad(x, (1, 1), mode=\"constant\")\nprint(\"constant pad ok\")\nF.pad(x, (1, 1), mode=\"circular\")\nprint(\"circular pad ok\")\nF.pad(x, (1, 1), mode=\"replicate\")\nprint(\"replicate pad ok\")\n\nF.pad(x, (1, 1), mode=\"reflect\")\nprint(\"this won't print\")\n```\n\noutput (error message):\n```\nconstant pad ok\ncircular pad ok\nreplicate pad ok\nTraceback (most recent call last):\n File \"/home/milu10/src/temp/torch-pad-cuda-bug.py\", line 23, in \n F.pad(x, (1, 1), mode=\"reflect\")\n File \"/home/milu10/src/temp/.pixi/envs/default/lib/python3.12/site-packages/torch/nn/functional.py\", line 5294, in pad\n return torch._C._nn.pad(input, pad, mode, value)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\ntorch.AcceleratorError: CUDA error: invalid configuration argument\nSearch for `cudaErrorInvalidConfiguration' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n```\n\n### Versions\n\n`python collect_env.py`: \n\nCollecting environment information...\nPyTorch version: 2.9.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Rocky Linux 9.5 (Blue Onyx) (x86_64)\nGCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)\nClang version: Could not collect\nCMake version: version 3.26.5\nLibc version: glibc-2.34\n\nPython version: 3.12.12 | packaged by conda-forge | (main, Oct 13 2025, 14:34:15) [GCC 14.3.0] (64-bit runtime)\nPython platform: Linux-5.14.0-503.14.1.el9_5.x86_64-x86_64-with-glibc2.34\nIs CUDA available: True\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: \nGPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB\nNvidia driver version: 565.57.01\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 43 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 64\nOn-line CPU(s) list: 0-63\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 7502 32-Core Processor\nCPU family: 23\nModel: 49\nThread(s) per core: 1\nCore(s) per socket: 32\nSocket(s): 2\nStepping: 0\nFrequency boost: enabled\nCPU(s) scaling MHz: 97%\nCPU max MHz: 2500.0000\nCPU min MHz: 1500.0000\nBogoMIPS: 4990.34\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 
movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthres", "url": "https://github.com/pytorch/pytorch/issues/165861", "state": "closed", "labels": [ "module: cuda", "triaged", "module: edge cases" ], "created_at": "2025-10-19T12:07:23Z", "updated_at": "2025-10-22T21:53:53Z", "comments": 2, "user": "michal-lukomski" }, { "repo": "pytorch/xla", "number": 9681, "title": "Improve PyTorch/XLA Documentation and Clarify SPMD Usage", "body": "## \ud83d\udcda Documentation\n\n### [Feature Request / Documentation Improvement] Improve PyTorch/XLA Documentation and Clarify SPMD Usage\n\nHello PyTorch/XLA team,\n\nDuring my TPU grant I encountered many undocumented pitfalls and unclear behaviors, which made the setup process very time-consuming and confusing.\n\nI\u2019d like to ask for clarification and improvement on several key points that caused me significant confusion and wasted time. \nPerhaps the documentation seems clear to experienced users, but when reading it for the first time, there are many implicit assumptions and missing explanations.\n\n---\n\n### General Request\nPlease improve the documentation \u2014 make it more **explicit** and **practical**, especially for multi-host and SPMD setups. \n\nFor example, while it\u2019s indeed mentioned in the [*Running on TPU Pods*](https://docs.pytorch.org/xla/master/learn/pytorch-on-xla-devices.html#running-on-tpu-pods) section that the code must be launched on all hosts, this information is **buried too deep** and is **not referenced** in other critical sections like \u201cTroubleshooting Basics.\u201d \nIt would be much clearer if you placed a visible note near the top of documentation saying something like:\n\n> \u26a0\ufe0f For multi-host TPU setups, you must launch the code on all hosts simultaneously. \n> See [Running on TPU Pods (multi-host)](...) for details.\n\nThis would help avoid confusion, since right now it\u2019s easy to miss and leads to situations where the code just hangs with no clear reason.\n\n---\n\n### Specific Questions and Issues\n\n1. What is recommended to use \u2014 `.launch` or `spmd`? \n2. Should SPMD be started on all hosts as well? \n3. In SPMD, is the batch size **global** or **per-host**? \n - How is data distributed if each process sees all devices and I have 4 hosts with 4 devices each? \n - If the batch size is global, what is the purpose of having multiple hosts? Only for data loading? \n - How does XLA decide what data goes to which device \u2014 does it shard across all devices globally or only locally per host? \n4. How to correctly use `scan/scan_layers` if the transformer block takes multiple arguments and one of them is of type `torch.bool`? \n5. `assume_pure` seems to break if the model contains `nn.Parameter`. Is it even correct to use it like that? \n - Can I reuse \u201cparams and buffers\u201d between steps, or should I retrieve them every time before a training pass? \n6. 
`syncfree.AdamW(model.parameters(), lr=lr, betas=(0.9, 0.95), weight_decay=0)` seems to trigger recompilation around step ~323 (possibly due to `beta2`, not sure). \n7. In SPMD, how to correctly get the process ID? `world_size` and `global_ordinal` don\u2019t work. Should I use `process_index`? `is_master_ordinal(local=False)` also doesn\u2019t work. \n8. Please add a note to the docs: when logging, it\u2019s better to use `flush=True`, otherwise logs might not appear (which is confusing). Also, wrap training code in `try/except`, since exceptions sometimes don\u2019t log either. \n9. How can I perform **sampling and logging** in SPMD mode if I want **only one host** to handle these tasks (not all hosts)? \n10. Please provide **fully explicit examples** \u2014 with comments, no abstractions, step-by-step explanations of what each part does and how it can be modified. \n11. Compilation caching seems broken \u2014 when trying to load, it says \u201cnot implemented.\u201d \n12. Can I pass only one `input_sharding=xs.ShardingSpec(mesh, ('fsdp', None))` to `MpDeviceLoader` if my dataset returns a tuple of 10 tensors with different shapes? \n13. `xm.rendezvous` seems to do nothing in SPMD mode (at least before the training loop). \n14. How to verify that all hosts are actually training **one shared model**, and not each training separately? \n15. In the docs, `HybridMesh(ici_mesh_shape, dcn_mesh_shape, ('data','fsdp','tensor'))` is shown, \n but in practice it only works if you pass named arguments like `ici_mesh_shape=ici_mesh_shape`, otherwise it errors out. \n16. How to correctly do **gradient checkpointing** per layer with FSDP? \n17. How to correctly do **gradient clipping**? \n18. If model weights are expected to remain in FP32 when using `autocast`, please **explicitly state that in the training docs** \u2014 it would help avoid second-guessing. \n19. What is a **reasonable compilation time** during training? Mine can take **20\u201330 minutes**.\n20. What are the actual intended purposes of `torch_xla.step()` and `torch_xla.compile()`? \n - Since PyTorch/XLA already compiles and executes lazily, it\u2019s unclear when and why these should be used explicitly. \n\n---\n\nAll of this was tested on `v4-32 TPU`. \nMaybe some of it is covered somewhere in the docs and I just missed it, but I hope you can clarify and improve the documentation.\n\nThank you for your time and support.\n", "url": "https://github.com/pytorch/xla/issues/9681", "state": "open", "labels": [ "distributed", "documentation" ], "created_at": "2025-10-19T04:58:44Z", "updated_at": "2025-10-20T13:27:34Z", "comments": 1, "user": "Muinez" }, { "repo": "pytorch/torchtitan", "number": 1920, "title": "Potentially incorrect attention flop calculation due to wrong head_dim?", "body": "### Bug description\n\nhttps://github.com/pytorch/torchtitan/blob/a8899e4b2cab74eadbe4b9a2ca2776ceb8829db3/torchtitan/models/utils.py#L432-L437\n\nHowever, `head_dim` is not necessarily equal to `dim / n_heads`\n\ne.g. 
Qwen3-4B, dim=2560, n_heads=32, head_dim=128\n\n### Versions\n\nlatest main", "url": "https://github.com/pytorch/torchtitan/issues/1920", "state": "closed", "labels": [ "high priority", "triage review" ], "created_at": "2025-10-18T15:56:57Z", "updated_at": "2025-10-29T22:03:17Z", "comments": 4, "user": "gau-nernst" }, { "repo": "pytorch/pytorch", "number": 165836, "title": "[ROCm][CI] Machines under the label linux.rocm.gpu.2 are undergoing maintenance.", "body": "> NOTE: Remember to label this issue with \"`ci: sev`\"\n> If you want autorevert to be disabled, keep the ci: disable-autorevert label\n\n \n\n## Current Status\n*Status could be: preemptive, ongoing, mitigated, closed. Also tell people if they need to take action to fix it (i.e. rebase)*.\nongoing\n\n## Error looks like\n*Provide some way users can tell that this SEV is causing their issue.*\nWe may expect higher queue times for PyTorch ROCm linux.rocm.gpu.2 workflows.\n\n\n## Incident timeline (all times pacific)\n*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*\n10/18/2025\n\n## User impact\n*How does this affect users of PyTorch CI?*\nWe may expect higher queue times for PyTorch ROCm linux.rocm.gpu.2 workflows.\n\n\n## Root cause\n*What was the root cause of this issue?*\nMaintenance\n\n## Mitigation\n*How did we mitigate the issue?*\nWill be resolve by EOD 10/19/2025\n\n## Prevention/followups\n*How do we prevent issues like this in the future?*\nN/A\n\n\ncc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd", "url": "https://github.com/pytorch/pytorch/issues/165836", "state": "closed", "labels": [ "module: rocm", "ci: sev" ], "created_at": "2025-10-18T12:54:28Z", "updated_at": "2025-10-20T16:09:25Z", "comments": 0, "user": "amdfaa" }, { "repo": "pytorch/pytorch", "number": 165811, "title": "[RFC] A Python backend registration API", "body": "In this dev post (https://dev-discuss.pytorch.org/t/embrace-tensor-subclass-as-a-python-device-registration-api/2771) I have talked about creating a PyTorch backend purely in Python. After chatting with few folks (@FFFrog @gabrieldemarmiesse), we decided that it's a good idea to formalize APIs around registering Backend in Python.\n\nPlease take a look, looking forward on any feedbacks. \nhttps://github.com/pytorch/rfcs/pull/83\n\nThanks!\n\ncc @bdhirsh @albanD", "url": "https://github.com/pytorch/pytorch/issues/165811", "state": "open", "labels": [ "triaged", "module: backend", "module: python frontend" ], "created_at": "2025-10-18T00:46:37Z", "updated_at": "2025-10-27T17:28:37Z", "comments": 1, "user": "qihqi" }, { "repo": "pytorch/pytorch", "number": 165799, "title": "`torch.where` does not accept scalar argument when `out=` is passed", "body": "### \ud83d\udc1b Describe the bug\n\n`torch.where` accepts scalar arguments as per documentation. 
This works fine for the most part, but when the `out` argument is provided, then a `TypeError` is raise complaining that scalar arguments are not accepted.\n\nTo reproduce the error, run\n```\nimport torch\nx = torch.tensor([1.0, 2.0])\ncond = torch.tensor([True, False])\nprint(torch.where(cond, x, 3.0)) # works fine, prints `tensor([1., 3.])`\nprint(torch.where(cond, x, 3.0, out=x))\n```\nwhich raises error\n```\nTraceback (most recent call last):\n File \"\", line 1, in \nTypeError: where(): argument 'other' (position 3) must be Tensor, not float\n```\nI have tested this both on Linux and MacOS.\n\n### Versions\n\n```\nPyTorch version: 2.9.0\nIs debug build: False\nCUDA used to build PyTorch: None\nROCM used to build PyTorch: N/A\n\nOS: macOS 15.6.1 (arm64)\nGCC version: Could not collect\nClang version: 12.0.0 (clang-1200.0.32.28)\nCMake version: version 3.31.3\nLibc version: N/A\n\nPython version: 3.11.11 | packaged by conda-forge | (main, Mar 3 2025, 20:44:07) [Clang 18.1.8 ] (64-bit runtime)\nPython platform: macOS-15.6.1-arm64-arm-64bit\nIs CUDA available: False\nCUDA runtime version: No CUDA\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: No CUDA\nNvidia driver version: No CUDA\ncuDNN version: No CUDA\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nApple M1 Pro\n\nVersions of relevant libraries:\n[pip3] numpy==2.2.4\n[pip3] torch==2.9.0\n[conda] numpy 2.2.4 pypi_0 pypi\n[conda] torch 2.9.0 pypi_0 pypi\n```\n\ncc @albanD", "url": "https://github.com/pytorch/pytorch/issues/165799", "state": "open", "labels": [ "triaged", "module: python frontend" ], "created_at": "2025-10-17T22:30:10Z", "updated_at": "2025-10-19T19:21:34Z", "user": "hchau630" }, { "repo": "pytorch/executorch", "number": 15222, "title": "How to support custom LLMs with qualcomm backend?", "body": "``examples/qualcomm/oss_scripts/llama/llama.py`` gives an example on how to export LLMs. \n\nI would like to know if there are any guidelines for supporting custom LLMs with architectures similar to LLaMA. Specifically, I have a huggingface-style checkpoint folder. \n\ncc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin", "url": "https://github.com/pytorch/executorch/issues/15222", "state": "closed", "labels": [ "partner: qualcomm", "module: qnn" ], "created_at": "2025-10-17T15:22:28Z", "updated_at": "2025-10-30T21:20:11Z", "user": "xiaoxiaosuaxuan" }, { "repo": "pytorch/torchtitan", "number": 1903, "title": "Promlem with converting dcp ceckpoint to huggingface format", "body": "Hi ! I started a run with Llama_3_8b and saved the DCP checkpoint of step 0 (the original model). Then I used https://github.com/pytorch/torchtitan/blob/main/scripts/checkpoint_conversion/convert_to_hf.py\nto convert the step-0 DCP checkpoint into .safetensors files, and copied the config.json and tokenizer from meta-llama/Llama-3.1-8B. Then I used the converted checkpoint to generate simple test but got unreadable results.\nThe result and the code for generation:\n\n\"Image\"\n\n\"Image\"\n\nI wander if the issue is caused by incorrect config.json ? 
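One way to narrow this down (a hedged sketch of mine, not something from the report; the local path is hypothetical): load the converted folder and the reference checkpoint side by side and compare last-token logits on a short prompt. A large gap points at the weight conversion rather than at config.json or the tokenizer.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

converted_dir = "./outputs/step-0-hf"      # hypothetical path to the converted folder
reference_id = "meta-llama/Llama-3.1-8B"

tok = AutoTokenizer.from_pretrained(reference_id)
ids = tok("The capital of France is", return_tensors="pt").input_ids

ref = AutoModelForCausalLM.from_pretrained(reference_id)
conv = AutoModelForCausalLM.from_pretrained(converted_dir)

with torch.no_grad():
    ref_logits = ref(ids).logits[:, -1]
    conv_logits = conv(ids).logits[:, -1]

# Near-zero difference: conversion is fine, suspect config/tokenizer.
# Large difference: the DCP -> safetensors weight mapping is the problem.
print((ref_logits - conv_logits).abs().max())
```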
Thanks a lot !", "url": "https://github.com/pytorch/torchtitan/issues/1903", "state": "closed", "labels": [ "question" ], "created_at": "2025-10-17T03:00:52Z", "updated_at": "2025-10-17T05:04:22Z", "user": "kv-wang" }, { "repo": "pytorch/torchtitan", "number": 1900, "title": "checkpoint.initial_load_in_hf should overwrite everything and load from hf weights.", "body": "### Bug description\n\nI have a `checkpoint` folder and I set `initial_load_in_hf: true` in yaml config like [this](https://github.com/meta-pytorch/forge/blob/main/apps/grpo/qwen3_1_7b.yaml#L78), when running `python -m apps.grpo.main --config apps/grpo/qwen3_1_7b.yaml`, I will get the error `step-1` not found. From the log I saw the warning :\n```\n[0] WARNING checkpoint.initial_load_path is provided but the checkpoint.folder exists. Checkpointer will use the checkpoints from the checkpoint.folder checkpoint.\n[0] WARNING checkpoint.initial_load_in_hf is True but the checkpoint.folder exists. Checkpointer will not load from HF safetensors\n```\nLooking closer, I noticed that `If the checkpoint folder for the current run is not empty, located at {--job.dump_folder}/{--checkpoint.folder}` at this [line](https://github.com/pytorch/torchtitan/blob/main/torchtitan/config/job_config.py#L464). Since the checkpoint.folder will by default be `checkpoints`, it will check if `checkpoints` folder exist or not and try to search from `checkpoints` folder.. totally ignore the setting `initial_load_in_hf: true`.\n\nI hope we can change it so that when `initial_load_in_hf=True` , it will load from HF weights not matter if `checkpoint.folder` exist or not. This is more user-friendly as the user already configured explicitly `initial_load_in_hf=True` and expect the program to load from HF weights.\n\n### Versions\n\nLatest main", "url": "https://github.com/pytorch/torchtitan/issues/1900", "state": "open", "labels": [ "question" ], "created_at": "2025-10-16T21:08:59Z", "updated_at": "2025-10-16T21:33:32Z", "user": "wukaixingxp" }, { "repo": "pytorch/xla", "number": 9679, "title": "PJRT Computation Client Teardown Function", "body": "## \u2753 Questions and Help\n\nIs there a teardown function that can be hooked from PJRT Plugin implementers for system teardown purposes? For example, graceful device closure at session termination?\n\nIt seems like the PJRT Computation Client is instantiated with a [leaky singleton](https://github.com/pytorch/xla/blob/d291621f583574f575888da33eaabe866056592c/torch_xla/csrc/runtime/runtime.cpp#L58-L60) pattern, so its destructor is not called, and we cannot leverage our PJRT Client's destructor.\n\nIs there some client shutdown hook that can be used? It seems like [PJRT_Client_Destroy](https://github.com/openxla/xla/blob/71a4e6e6e4e9f0f8b8f25c07a32ad489aff19239/xla/pjrt/c/pjrt_c_api.h#L374-L375C21) would be a suitable candidate, except that I don't see it ever being called from pytorch/xla.\n\nThe reason for this is that we would like to have some automatic device cleanup / other system resource teardown implemented in our plugin that triggers at the end of a session. 
It would also be nice to have a user-accessible API that permits session teardown within PJRT, for example to reset devices between pytests within the same process.", "url": "https://github.com/pytorch/xla/issues/9679", "state": "open", "labels": [ "question" ], "created_at": "2025-10-16T20:21:27Z", "updated_at": "2025-10-17T16:52:08Z", "user": "jameszianxuTT" }, { "repo": "pytorch/pytorch", "number": 165612, "title": "RFC: Optionally accept NumPy dtypes in all APIs where torch dtypes are accepted", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nOn behalf of the Python Data API Consortium / Python array API standard, to follow up with the conclusion we reached in the September 18 meeting I am filing this RFC for PyTorch stakeholders to consider \ud83d\ude42 \n \nThe Python array API standard currently specifies that each array library should make the supported dtypes available under the array library's namespace: https://data-apis.org/array-api/latest/API_specification/data_types.html. It does not specify how the dtype objects should be implemented, however, and in theory each library can have its own dtype object implementation. As a result, questions such as \n- How to translate `libA.float32` to `libB.float32`? ([example](https://github.com/data-apis/array-api/issues/972))\n- How to write portable library code without constantly checking which/whose dtype object to use?\n\ndo not have a definite answer today. \n\nThere exist workarounds, of course. For example, one could extract the string name of `libA.float32`, do a module lookup through [`__array_namespace__`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.array.__array_namespace__.html), and then `getattr` to map it to `libB.float32`. But generally speaking the current state remains challenging for writing array-library-agnostic code. This is [one example](https://github.com/NVIDIA/nvmath-python/blob/6bddfa71c39c07804127adeb23f5b0d2168ae38c/nvmath/internal/ndbuffer/package_utils.pyx#L25-L44) from `nvmath-python`, NVIDIA's Python math library.\n\nAfter examining the Python ecosystem, however, we found that PyTorch is by far the only major Python array/tensor library that does not already use (alias) NumPy dtype objects; NumPy, CuPy, Jax, Dask, ndonnx, dpctl, ... all already do so. \n\nAs a result, one arguably \"simple\" solution to solve such interoperability/portability problems is to simply recognize NumPy dtype objects wherever a PyTorch dtype is accepted, including but not limited to `empty()`, `zeros()`, `.to()`, `.type_as()`, ... \n\nA further step we should evaluate is whether to return `Tensor.dtype` as a NumPy dtype object, if a tensor was created with a NumPy dtype. This might require extra efforts to keep track of the input state, so based on discussions for this RFC we can decide whether we want to include this extra step.\n\nThe proposal seeks for **optional**, **backward compatible** support for NumPy dtype types (ex: `np.float32`) and objects (ex: `np.dtype(np.float32)`). 
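To make the request concrete, a small sketch (mine, not part of the RFC text): the top half is the kind of name-based mapping users write today, and the commented lines show what would become legal if torch factory functions optionally accepted NumPy dtypes.

```python
import numpy as np
import torch

# Today: an explicit bridge, e.g. going through NumPy's canonical dtype name
# and looking the attribute up on the torch namespace.
def to_torch_dtype(np_dtype) -> torch.dtype:
    return getattr(torch, np.dtype(np_dtype).name)  # "float32" -> torch.float32

x = torch.zeros(4, dtype=to_torch_dtype(np.float32))

# Under this RFC (proposed, not current behavior):
# y = torch.zeros(4, dtype=np.float32)
# z = x.to(np.dtype(np.int64))
```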
PyTorch need not introduce NumPy as a required dependency, unless there are other strong reasons; such optional support can be easily hidden behind a try-import-except guard, and this RFC does not mean to introduce any new dependency to PyTorch.\n\nThe benefits of adding this optional support includes:\n- Avoid ecosystem fragmentation\n- Help the array API from having to standardize yet another protocol for dtype exchange (DLPack is not and should not be the solution)\n- Allow writing array-library-agnostic code\n- Centralize all efforts in hardware-accelerated exotic (narrow precision) dtypes behind [ml_dtypes](https://github.com/jax-ml/ml_dtypes), a dtype extension based on NumPy's dtype registration system (and therefore provides proper NumPy dtype types and objects)\n\nRelated past discussions: https://github.com/pytorch/pytorch/issues/40471, https://github.com/pytorch/pytorch/issues/40568\n\ncc @albanD @rgommers (NumPy/SciPy) @lucascolley @ev-br (SciPy) @kmaehashi (CuPy) @aterrel @rparolin (CUDA Python) @jrhemstad (CUDA C++, aka CCCL) @kkraus14 (CUDA C++/Python) @seberg (NumPy) @brycelelbach (cuTile Python) @samaid (nvmath-python) @ptrblck (NVIDIA) @tqchen (DLPack) @jacobtomlinson (Dask) @jakevdp @hawkinsp (Jax/ml_dtype) @betatim (sklearn) @tomwhite (cubed) @kgryte @asmeurer (array API) for vis.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/165612", "state": "open", "labels": [ "triaged", "enhancement", "module: python frontend", "module: floatx (formerly float8)" ], "created_at": "2025-10-16T04:24:52Z", "updated_at": "2025-11-27T00:42:21Z", "comments": 1, "user": "leofang" }, { "repo": "pytorch/pytorch", "number": 165590, "title": "RuntimeError: non-positive groups is not supported", "body": "### \ud83d\udc1b Describe the bug\n\ntorch==2.7.1\nI got an RuntimeError: non-positive groups is not supported while using conv1d in my model. I tried to add more logs and asserts to find what is going wrong, but it didn't help. Even I set groups parameter to 128 the error remains\n\nfrom output i got sizes if input tensors\n```\ntorch.Size([4, 1, 128]) torch.Size([128, 1, 4]) torch.Size([128]) 4 \n```\ncode below\n```\n assert hasattr(self, 'd_inner'), \"self.d_inner is not defined!\"\n assert self.d_inner > 0, f\"self.d_inner must be positive, got {self.d_inner}\"\n if conv_weight.dim() == 3:\n print(f'{x_proj_out.shape} {conv_weight.shape} {conv_bias.shape} {self.d_conv}')\n assert 128 > 0, \"groups must be > 0\"\n x_conv = F.conv1d(\n x_proj_out.transpose(1, 2),\n conv_weight, # (d_inner, 1, d_conv)\n bias=conv_bias, # (d_inner,)\n padding=self.d_conv - 1,\n groups=128#self.d_inner\n )\n```\n\n### Versions\n\nCollecting environment information... \nPyTorch version: 2.7.1+cu126 \nIs debug build: False \nCUDA used to build PyTorch: 12.6 \nROCM used to build PyTorch: N/A \n \nOS: Debian GNU/Linux 12 (bookworm) (x86_64) \nGCC version: (Debian 12.2.0-14) 12.2.0 \nClang version: Could not collect \nCMake version: version 3.25.1 \nLibc version: glibc-2.36 \n \nPython version: 3.13.1 | packaged by Anaconda, Inc. 
| (main, Dec 11 2024, 16:29:23) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-6.1.0-34-amd64-x86_64-with-glibc2.36 \nIs CUDA available: True \nCUDA runtime version: 12.4.131 \nCUDA_MODULE_LOADING set to: LAZY \nGPU models and configuration: \nGPU 0: NVIDIA A100-PCIE-40GB \nGPU 1: NVIDIA A100-PCIE-40GB \nGPU 2: NVIDIA A100-PCIE-40GB \nGPU 3: NVIDIA A100-PCIE-40GB \nGPU 4: NVIDIA A100-PCIE-40GB \nGPU 5: NVIDIA A100-PCIE-40GB \nGPU 6: NVIDIA A100-PCIE-40GB \n \nNvidia driver version: 570.133.20 \ncuDNN version: Could not collect \nIs XPU available: False \nHIP runtime version: N/A \nMIOpen runtime version: N/A \nIs XNNPACK available: True \n \nCPU: \nArchitecture: x86_64 \nCPU op-mode(s): 32-bit, 64-bit \nAddress sizes: 43 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 256\nOn-line CPU(s) list: 0-255\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 7742 64-Core Processor\nCPU family: 23\nModel: 49\nThread(s) per core: 2 \nCore(s) per socket: 64 \nSocket(s): 2 \nStepping: 0 \nFrequency boost: enabled ", "url": "https://github.com/pytorch/pytorch/issues/165590", "state": "open", "labels": [ "needs reproduction", "module: nn", "triaged" ], "created_at": "2025-10-15T22:44:54Z", "updated_at": "2025-10-17T18:59:35Z", "comments": 1, "user": "st085318" }, { "repo": "pytorch/pytorch", "number": 165578, "title": "Out of tree backend documentation does not seem accurate", "body": "### \ud83d\udcda The doc issue\n\nLooking at the \"How does this mechanism apply to out-of-tree extensions\" section of [the autoloading tutorial](https://docs.pytorch.org/tutorials/unstable/python_extension_autoload.html#how-to-apply-this-mechanism-to-out-of-tree-extensions), it looks to me like importing setting a backend `torch_foo = torch_foo:_autoload` is going to automagically attach either the `torch_foo` or `torch_foo.foo` module to `torch` directly, since there is no code in there that manipulates the `torch` namespace, but when I try to do this, like in [this MWE](https://github.com/pganssle-google/torch-backend-mwe), it doesn't work.\n\n### Suggest a potential alternative/fix\n\nEither this documentation is inaccurate and should be made accurate to show how to attach your backend to the `torch` namespace or maybe the problem is that the namespace attachment is smuggled in under the assumption that `foo` is \"a backend\" (and the assumption is that backends show up in that namespace). Preferably the relevant code would be extracted into this tutorial to show how it works, but failing that it would be nice to get a link to something showing the essential elements of \"a backend\" that make this code example work.\n\ncc @svekars @sekyondaMeta @AlannaBurke @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens @albanD", "url": "https://github.com/pytorch/pytorch/issues/165578", "state": "open", "labels": [ "module: docs", "triaged", "module: PrivateUse1" ], "created_at": "2025-10-15T20:49:12Z", "updated_at": "2025-10-17T04:30:01Z", "comments": 1, "user": "pganssle-google" }, { "repo": "pytorch/pytorch", "number": 165577, "title": "CI: What is the purpose of `slow.yml`", "body": "### \ud83d\udc1b Describe the bug\n\nWhat is the purpose of `slow.yml` job, when we can shard more and probably can rely on TD to skip slow tests if they are not needed? 
\n\nIn the past `slow.yml` job was a way of keeping time to signal low, while running some tests post commit, but now that we have TD we probably can get rid of concept of slow test and just run them conditionally on TD's decision\n\n\nAt the very least, I think this job should be viable/strict blocking, as it just runs a subset of tests from pull request, that are decorated with `@slowTest` or take more than 90 sec to finish\n### Versions\n\nCI\n\ncc @seemethere @pytorch/pytorch-dev-infra", "url": "https://github.com/pytorch/pytorch/issues/165577", "state": "open", "labels": [ "module: ci", "triaged", "needs research" ], "created_at": "2025-10-15T20:33:34Z", "updated_at": "2025-10-16T09:24:45Z", "user": "malfet" }, { "repo": "pytorch/rl", "number": 3197, "title": "[Question] How to handle MultiDiscrete action spaces in TorchRL", "body": "I have created a custom Parallel API PettingZoo environment with **MultiDiscrete action spaces**. The _env.action_spec()_ function succeeds. \n\nI am using the **Multi-Agent PPO tutorial of TorchRL**, but I\u2019m struggling to understand how to modify the architecture so it supports **MultiDiscrete action spaces**. Specifically, I\u2019d like to know how to correctly adapt the `MultiAgentMLP`, `TensorDictModule`, and `ProbabilisticActor` so that the policy network outputs a `MultiDiscrete` (or equivalently, `MultiCategorical`) action distribution for each agent.\n\nShould I create number of `ProbabilisticActor` modules as the length of the MultiDiscrete action space? In the case where a single `ProbabilisticActor` module is used, which distribution class should replace `Categorical` to support a MultiDiscrete action space? Is there an existing script or tutorial in TorchRL that demonstrates how to handle `MultiDiscrete` action spaces (or `MultiCategorical` distributions) in a multi-agent setup?", "url": "https://github.com/pytorch/rl/issues/3197", "state": "open", "labels": [], "created_at": "2025-10-15T11:56:00Z", "updated_at": "2025-10-16T19:38:12Z", "user": "AnastasiaPsarou" }, { "repo": "pytorch/xla", "number": 9678, "title": "Heterogeneous execution across multiple PJRT clients (GPU + custom accelerator)", "body": "## \u2753 Questions and Help\nHi, I\u2019m developing a PJRT plugin for a custom accelerator, and I\u2019m exploring whether PyTorch/XLA can support heterogeneous execution across multiple PJRT clients \u2014 for example, splitting a model or HLO module between GPU, CPU, and the custom accelerator.\n\nConcretely, I\u2019d like to enable availability-aware, cost-driven partitioning so that:\n\n1. If only CPU + accelerator are available, the model runs using those.\n\n2. 
If a GPU is also available, certain subgraphs can automatically offload to the accelerator when it\u2019s beneficial.\n\nI have a few questions:\n\nDoes PyTorch/XLA or its PJRT integration layer support running a single model using multiple PJRT clients/devices (e.g., GPU + custom accelerator) at the same time?\n\nIf not, is there any supported or recommended way to partition the computation graph manually and execute subgraphs on different PJRT backends?\n\nWould implementing this orchestration externally (via multiple PJRT clients) be more realistic today, or can PyTorch/XLA\u2019s runtime be extended to handle multi-client coordination?\n\nAny pointers to examples, design discussions, or relevant code paths would be really helpful.\n\nThanks!", "url": "https://github.com/pytorch/xla/issues/9678", "state": "closed", "labels": [ "question" ], "created_at": "2025-10-15T02:56:43Z", "updated_at": "2025-10-16T14:52:40Z", "user": "milinbhade1214" }, { "repo": "pytorch/pytorch", "number": 165444, "title": "AOTInductor not updating buffers inplace", "body": "Hey all, \n\nI'd like to double check whether updating buffers inplace is currently supported with AOTInductor? Based on the answers on this issue https://github.com/pytorch/pytorch/issues/159124 I think it should be, but it does not seem to work when I load the module from file. If not, is there any workaround we can use at this time (short of making the function pure)? I'm currently on libtorch 2.8.0.\n\n```\nclass DummyModel(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.register_buffer(\"counter\", torch.ones(1))\n \n def forward(self):\n self.counter = self.counter + 1.0\n return self.counter\n \nep = torch.export.export(DummyModel(), tuple())\nso_path = torch._inductor.aoti_compile_and_package(\n ep,\n inductor_configs={\"always_keep_tensor_constants\": True}\n)\nloaded_module = torch._inductor.aoti_load_package(so_path)\n\nprint(ep.module()())\nprint(ep.module()())\nprint(ep.module()())\nprint(loaded_module())\nprint(loaded_module())\nprint(loaded_module())\n```\n```\ntensor([2.])\ntensor([3.])\ntensor([4.])\ntensor([2.])\ntensor([2.])\ntensor([2.])\n```\n@desertfire @ezyang \n\n\ncc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1", "url": "https://github.com/pytorch/pytorch/issues/165444", "state": "open", "labels": [ "oncall: pt2", "oncall: export", "module: aotinductor" ], "created_at": "2025-10-14T16:48:34Z", "updated_at": "2025-10-19T23:42:43Z", "comments": 2, "user": "olarucatalin" }, { "repo": "pytorch/pytorch", "number": 165428, "title": "Using NCCL for Global Group and MPI for Sub-Groups in torch.distributed", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI want to mix NCCL and MPI backends in the `torch.distributed` package. Does torch.distributed support using NCCL as the backend when initializing the global process group with `torch.distributed.init_process_group()`, and then using MPI as the backend when creating a sub-process group with `torch.distributed.new_group()`? Or is the opposite operation supported? 
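For concreteness (my sketch, not from the issue; it assumes a torchrun-style launch that sets the usual rendezvous env vars and an MPI-enabled build), the pattern being asked about looks roughly like this:

```python
import torch.distributed as dist

# Global process group on NCCL...
dist.init_process_group(backend="nccl")

# ...then a sub-group that requests MPI via new_group's `backend` argument.
# Whether this mixed NCCL/MPI setup (or the reverse) is supported is the question above.
mpi_group = dist.new_group(ranks=[0, 1], backend="mpi")

dist.barrier()  # collective on the default (NCCL) group
if dist.get_rank() in (0, 1):
    dist.barrier(group=mpi_group)  # collective on the MPI sub-group only
```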
I encountered errors when I tried this myself.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/165428", "state": "closed", "labels": [], "created_at": "2025-10-14T09:26:47Z", "updated_at": "2025-10-15T13:44:42Z", "comments": 11, "user": "cq-eng" }, { "repo": "pytorch/pytorch", "number": 165419, "title": "[RFC] Make PyTorch Expandable Segments interoperate with CUDA VMM-based allocators (NCCL ncclMemAlloc)", "body": "## Summary\nPyTorch\u2019s expandable segments reduce fragmentation by using CUDA Virtual Memory Management (VMM) to grow/shrink virtual segments instead of relying on cudaMalloc blocks. \n\nSeparately, NCCL\u2019s user buffer registration\u2014including NVLS, General (intra-node) buffer registration, and Window Registration\u2014expects buffers to come from VMM-backed allocators (e.g., ncclMemAlloc or any allocator that produces VMM handles with the documented properties). These registrations lower memory pressure and [can improve overlap/latency for collectives](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/bufferreg.html).\n\nToday these two don\u2019t compose in PyTorch #147851 enabling expandable segments can prevent custom/VMM allocators from being used (breaking NCCL registration flows), and the current NCCL mem-pool registration path is brittle when expandable segments is on.\n\nThis RFC proposes incremental changes so expandable segments and VMM-based allocators interoperate cleanly, enabling users to (a) keep expandable segments on to reduce fragmentation and (b) opt into NCCL registration (NVLS / General / Window) for zero-copy and better communication\u2013computation overlap.\n\nSolving this problem could lead to other beneficial outcomes, #158029 maybe related\n\n## Design overview\n\nWe propose two compatible tracks. Plan 2 is minimal risk and unblocks NCCL users quickly; Plan 1 is a deeper integration that generalizes expandable segments to any VMM source, including ncclMemAlloc.\n\n### Plan 2 (near-term): Make NCCL registerMemPool fully work when expandable segments is on\n\nWhen users register tensors with ncclCommRegister, it should just work with expandable segments enabled. \n\n[NCCL\u2019s docs](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/bufferreg.html#mem-allocator) explicitly allow buffer registration with any VMM-based allocator so long as the allocation/handle/align rules are met (recommended granularity, shared handle types for NVLS, etc.). Expandable segments already use CUDA VMM under the hood; the missing piece is to ensure the buffers we hand to NCCL truly originate from CUDACachingAllocator that NCCL can recognize/retain, and that our allocator bookkeeping & snapshots remain consistent with expandable segments. \n\nWe currently depend on c10::cuda::CUDACachingAllocator::snapshot to dump segments for ncclCommRegister import. We must ensure this process functions correctly.\n\n### Plan 1 (mid-term): Let expandable segments adopt external VMM allocations (generalize expandable segments to any VMM allocator)\n\nIf users (or plugins) allocate memory via ncclMemAlloc or another VMM allocator, expandable segments can \u201cimport\u201d those physical allocations by retaining the underlying VMM handle and mapping them into expandable segments virtual address ranges. 
Then expandable segments can manage growth/shrink and all the usual segment lifecycle while keeping NCCL registration happy.\n\nWe can use cuMemRetainAllocationHandle to recover the CUmemGenericAllocationHandle from any mapped address the external allocator returned. The API guarantees the returned handle equals the one used for mapping; any address within the mapped range works. Once we have the handle, expandable segments can unmap/remap subranges into its own reserved VA space (cuMemAddressReserve, cuMemMap, cuMemSetAccess) and track page-level occupancy in expandable segments bookkeeping, enabling co-existence with expandable segments growth policies and freeing fully unused pages.\n\ncc @ptrblck @msaroufim @eqy @jerryzh168 @ngimel @syed-ahmed \n", "url": "https://github.com/pytorch/pytorch/issues/165419", "state": "closed", "labels": [ "module: cuda", "triaged", "module: nccl", "module: CUDACachingAllocator" ], "created_at": "2025-10-14T07:53:01Z", "updated_at": "2025-12-10T17:12:45Z", "comments": 14, "user": "eee4017" }, { "repo": "pytorch/pytorch", "number": 165324, "title": "How to enable Bfloat16 when using torch.func.jvp", "body": "### \ud83d\udc1b Describe the bug\n\n```python\nmodel_partial = partial(model_fn, **inputs)\njvp_args = (\n lambda z, t, r: model_partial(latents=z, timestep=t, r_timestep=r),\n (z, t, r),\n (v_hat, torch.ones_like(t).to(x.dtype), torch.zeros_like(r).to(x.dtype)),\n)\n\nwith torch.autocast(device_type=\"cuda\", dtype=torch.bfloat16, enabled=False):\n if self.create_graph:\n u, dudt = self.jvp_fn(*jvp_args, create_graph=True)\n else:\n u, dudt = self.jvp_fn(*jvp_args)\n```\n\nI was training models in mixed precision of bfloat16. And I used deepspeed stage3, and enabled gradient checkpointing. But while calling the torch.func.jvp, the input seems to be automatically converted to float type. 
The error is as follows:\n\n```\n File \"/datadrive/DiffSynth-Studio/diffsynth/models/qwen_image_dit.py\", line 283, in forward\n img_q, img_k, img_v = self.to_q(image), self.to_k(image), self.to_v(image)\n File \"/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1739, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File \"/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1750, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/peft/tuners/lora/layer.py\", line 758, in forward\n result = self.base_layer(x, *args, **kwargs)\n File \"/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1739, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File \"/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1750, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/torch/nn/modules/linear.py\", line 127, in forward\n return F.linear(input.to(self.weight.dtype), self.weight, self.bias)\nRuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16\n```\n\nHow to force the jvp function to use bfloat16 type in its calculation?\n\n### Versions\n\nPyTorch version: 2.6.0+cu124\nIs debug build: False\nCUDA used to build PyTorch: 12.4\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 20.04.6 LTS (x86_64)\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\nClang version: Could not collect\nCMake version: version 3.16.3\nLibc version: glibc-2.31\n\nPython version: 3.10.18 (main, Jun 5 2025, 13:14:17) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-5.15.0-1017-azure-x86_64-with-glibc2.31\nIs CUDA available: True\nCUDA runtime version: 12.1.66\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA A100 80GB PCIe\nGPU 1: NVIDIA A100 80GB PCIe\nGPU 2: NVIDIA A100 80GB PCIe\nGPU 3: NVIDIA A100 80GB PCIe\n\nNvidia driver version: 530.30.02\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nAddress sizes: 48 bits physical, 48 bits virtual\nCPU(s): 96\nOn-line CPU(s) list: 0-95\nThread(s) per core: 1\nCore(s) per socket: 48\nSocket(s): 2\nNUMA node(s): 4\nVendor ID: AuthenticAMD\nCPU family: 25\nModel: 1\nModel name: AMD EPYC 7V13 64-Core Processor\nStepping: 1\nCPU MHz: 2445.434\nBogoMIPS: 4890.86\nHypervisor vendor: Microsoft\nVirtualization type: full\nL1d cache: 3 MiB\nL1i cache: 3 MiB\nL2 cache: 48 MiB\nL3 cache: 384 MiB\nNUMA node0 CPU(s): 0-23\nNUMA node1 CPU(s): 24-47\nNUMA node2 CPU(s): 48-71\nNUMA node3 CPU(s): 72-95\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\nFlags: fpu 
vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid ", "url": "https://github.com/pytorch/pytorch/issues/165324", "state": "open", "labels": [ "triaged", "module: amp (automated mixed precision)", "release notes: torch.func" ], "created_at": "2025-10-13T14:52:15Z", "updated_at": "2025-10-27T15:23:35Z", "user": "pnotp" }, { "repo": "pytorch/pytorch", "number": 165319, "title": "Memory leak when converting from numpy array", "body": "### \ud83d\udc1b Describe the bug\n\nJust faced a weird memory leak in my code that uses both numpy and pytorch on cpu (to exploit some scipy functionalities first, before using pytorch ones). Here is a minimal example that reproduces the leak on my laptop. I faced it on python 3.10 and then python 3.13 with pytorch 2.8.0.\n\n```python\nfrom typing import List, Tuple\nimport time\n\nimport numpy as np\nimport torch\nimport tqdm\nimport psutil\n\n\ndef minimal_leak(n: int, shape: Tuple[int, ...]) -> List:\n L = []\n for _ in tqdm.trange(n):\n time.sleep(0.001) # Let's not be to fast to see how memory explodes\n x = np.zeros(shape, dtype=np.float64)\n # x = f(x) # Use some numpy fct\n\n # Convert to torch to use some torch fct\n x_pt = torch.from_numpy(x).to(torch.float32)\n L.append(x_pt) # Add x_pt = f_pt(x_pt)\n\n # But in fact, let's remove x_pt and keep only something related to it\n # The memory for x/x_pt should be released at some point\n L[-1] = torch.ones(10, dtype=torch.int64)\n\n return L\n\n\ndef minimal_no_leak(n: int, shape: Tuple[int, ...]) -> List:\n \"\"\"Same as minimal link, but store a float instead of a tensor in L\"\"\"\n L = []\n for _ in tqdm.trange(n):\n time.sleep(0.001) # Let's not be to fast to see how memory explodes\n x = np.zeros(shape, dtype=np.float64)\n # x = f(x) # Use some numpy fct\n\n # Convert to torch to use some torch fct\n x_pt = torch.from_numpy(x).to(torch.float32)\n L.append(x_pt) # Add x_pt = f_pt(x_pt)\n\n # But in fact, let's remove x_pt and keep only something related to it\n # The memory for x/x_pt should be released at some point\n L[-1] = 0.0\n\n return L\n\n\ndef minimal_no_leak_2(n: int, shape: Tuple[int, ...]) -> List:\n \"\"\"Same as minimal link, but don't clone (or move to another dtype) x\"\"\"\n L = []\n for _ in tqdm.trange(n):\n time.sleep(0.001) # Let's not be to fast to see how memory explodes\n x = np.zeros(shape, dtype=np.float64)\n # x = f(x) # Use some numpy fct\n\n # Convert to torch to use some torch fct\n x_pt = torch.from_numpy(x)\n L.append(x_pt) # Add x_pt = f_pt(x_pt)\n\n # But in fact, let's remove x_pt and keep only something related to it\n # The memory for x/x_pt should be released at some point\n L[-1] = torch.ones(10, dtype=torch.int64)\n\n return L\n\n\nprocess = psutil.Process()\nresults = []\nfor i in tqdm.trange(50):\n tqdm.tqdm.write(f\"Memory used: {process.memory_info().rss / 1024**3}\")\n\n results.extend(minimal_leak(1000, (1, 500, 500))) # Leak\n # results.extend(minimal_leak(1000, (50, 500, 500))) # With large tensors the leak vanishes (probably from specific reuse of \"small\" tensors by torch?)\n # results.extend(minimal_no_leak(1000, (1, 500, 500))) # No leak\n # results.extend(minimal_no_leak_2(1000, (1, 500, 500))) # No leak\n```\n\nClearly minimal_leak should not leak memory (though I agree my example is a bit far-fetched, my code somehow does something similar, but going through more complex structure and 
operations). I provided two similar version of the code that do not leak memory, which clearly shows that something weird is happening.\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.8.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.5 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version: 14.0.0-1ubuntu1.1\nCMake version: version 4.1.2\nLibc version: glibc-2.35\n\nPython version: 3.13.7 | packaged by Anaconda, Inc. | (main, Sep 9 2025, 19:59:03) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-5.15.0-153-generic-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU\nNvidia driver version: 535.247.01\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 39 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 16\nOn-line CPU(s) list: 0-15\nVendor ID: GenuineIntel\nModel name: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz\nCPU family: 6\nModel: 141\nThread(s) per core: 2\nCore(s) per socket: 8\nSocket(s): 1\nStepping: 1\nCPU max MHz: 4600,0000\nCPU min MHz: 800,0000\nBogoMIPS: 4608.00\nFlags: ", "url": "https://github.com/pytorch/pytorch/issues/165319", "state": "open", "labels": [ "module: memory usage", "triaged", "module: numpy" ], "created_at": "2025-10-13T13:58:22Z", "updated_at": "2025-10-14T08:06:37Z", "comments": 4, "user": "raphaelreme" }, { "repo": "pytorch/ao", "number": 3157, "title": "Is there no tutorial for dynamic quantization of BERT model in torch.ao?", "body": "I saw that some quant related tutorials in [the PyTorch tutorials repo](https://github.com/pytorch/tutorials) have been deleted, and [the PR](https://github.com/pytorch/tutorials/pull/3432) stated that these tutorials will be moved to torchao. However, I can't find [the BERT dynamic quantization tutorial](https://github.com/pytorch/tutorials/pull/3432/files#diff-ffe2cf0ed3702611468c41af499f514e6fb0d4e5497a296df75e99422a200353) in the torchao repository. Where can I find it?", "url": "https://github.com/pytorch/ao/issues/3157", "state": "open", "labels": [ "triaged" ], "created_at": "2025-10-12T15:47:06Z", "updated_at": "2026-01-03T14:43:56Z", "comments": 6, "user": "Esttelle" }, { "repo": "pytorch/pytorch", "number": 165177, "title": "cryptic symbolic shape error with FSDP2 and torch.compile", "body": "### \ud83d\udc1b Describe the bug\n\nUsing FSDP2 and torch.compile with Llama3 (and most other generative models on HuggingFace). I get the following error:\n```\nAssertionError: s52 (could be from [\"L['position_ids']._base.size()[0]\"]) not in {\ns53: [\"L['attention_mask'].size()[1]\", \"L['attention_mask'].stride()[0]\"],\ns58: [\"L['cache_position'].size()[0]\", \"L['position_ids']._base.size()[0]\"],\ns55: [\"L['input_embeds'].size()[1]\"],\ns9: [\"L['position_ids'].size()[1]\", \"L['position_ids'].stride()[0]\"],\ns52: []\n}. If this assert is failing, it could be due to the issue described in https://github.com/pytorch/pytorch/pull/90665\n``` \n\nA reasonable suggestion would be there is something in the code that isn't torch.compile friendly. That is certainly possible. However, without the `fully_shard` and with `torch.compile()` there is no error. 
Hence, I'm inclined to believe it's a bug with how torch.compile and FSDP2 interacts. The link https://github.com/pytorch/pytorch/pull/90665 was not instructive as to the cause. \n\nThe following code reproduces the error. The error produces with 1 GPU, 2 GPUs, 2x8 GPUs, and possibly more settings.\n\n```py\nimport os\nimport numpy as np\n\nimport torch\nfrom torch.distributed.fsdp import fully_shard\nimport torch.nn.functional as F\n\nfrom transformers import AutoModelForCausalLM\nfrom transformers.models.llama.modeling_llama import LlamaDecoderLayer\n\ndef init_distributed() -> tuple[int,torch.device]:\n world_size = int(os.environ.get(\"WORLD_SIZE\", 1))\n rank = int(os.environ.get(\"RANK\", 0))\n local_rank = int(os.environ.get(\"LOCAL_RANK\", 0))\n\n if \"SLURM_NTASKS\" in os.environ:\n world_size = int(os.environ[\"SLURM_NTASKS\"])\n rank = int(os.environ.get(\"SLURM_PROCID\", os.environ.get(\"SLURM_TASK_PID\", 0)))\n local_rank = int(os.environ.get(\"SLURM_LOCALID\", int(os.environ.get(\"SLURM_PROCID\", 0)) % (torch.cuda.device_count() or 1)))\n\n torch.cuda.set_device(local_rank)\n init_method = os.environ.get(\"INIT_METHOD\", \"env://\")\n torch.distributed.init_process_group(\n backend=\"nccl\",\n init_method=init_method,\n world_size=world_size,\n rank=rank\n )\n return rank, torch.device(f\"cuda:{local_rank}\")\n\ndef main():\n # Setup distributed + device\n rank, device = init_distributed()\n\n # Only rank 0 downloads/prepares model weights; others wait.\n gen_model = AutoModelForCausalLM.from_pretrained(\n 'meta-llama/Meta-Llama-3-8B-Instruct',\n use_safetensors=True,\n dtype=torch.bfloat16,\n pad_token_id=0,\n use_cache=False\n )\n\n torch.distributed.barrier()\n\n if gen_model is None:\n gen_model = AutoModelForCausalLM.from_pretrained(\n 'meta-llama/Meta-Llama-3-8B-Instruct',\n use_safetensors=True,\n pad_token_id=0,\n use_cache=False\n )\n\n gen_model.to(device)\n\n for submodule in gen_model.model.layers:\n if isinstance(submodule, LlamaDecoderLayer):\n fully_shard(submodule)\n fully_shard(gen_model)\n\n gen_model = torch.compile(gen_model) # type: ignore\n\n torch.distributed.barrier()\n\n dataset = [\n np.array([[1, 27, 91, 882, 91, 397]], dtype=np.int64),\n np.array([[1, 27, 91, 882, 91, 397, 45, 45, 45, 45, 45, 45]], dtype=np.int64)\n ]\n assert gen_model is not None\n\n for input_ids in dataset:\n batch = torch.from_numpy(input_ids).to(device)\n # padding changes the issue to a crash with no error.\n # batch = F.pad(torch.from_numpy(input_ids), (0, 200 - input_ids.shape[1]), value=0).to(device)\n\n logits = gen_model(input_ids=batch,\n attention_mask=torch.ones_like(batch)).logits # AssertionError\n\n print('SUCCESS')\n\nif __name__ == \"__main__\":\n torch.set_float32_matmul_precision('high')\n main()\n```\n\nFull error:\n\n```\nRunning setup on gpu-54\nUsing CPython 3.13.7\nCreating virtual environment at: /tmp/pyenv\nActivate with: source /tmp/pyenv/bin/activate\nCloning into '/tmp/code'...\ndone.\n/tmp/code ~/workspace/reward-based-ift\nHEAD is now at 72d2b68 job submission\nUsing Python 3.13.7 environment at: /tmp/pyenv\nResolved 134 packages in 1.15s\n Building gl @ file:///tmp/code\n Built gl @ file:///tmp/code\nPrepared 1 package in 1.48s\nInstalled 134 packages in 3.04s\n + ai2-olmo-eval==0.8.5\n + aiofiles==24.1.0\n + aiohappyeyeballs==2.6.1\n + aiohttp==3.11.18\n + aiosignal==1.4.0\n + annotated-types==0.7.0\n + attrs==25.4.0\n + boto3==1.40.49\n + botocore==1.40.49\n + cached-path==1.8.0\n + cachetools==6.2.0\n + certifi==2025.10.5\n + 
charset-normalizer==3.4.3\n + click==8.3.0\n + datasets==4.0.0\n + deepspeed==0.16.9\n + dill==0.3.8\n + distlib==0.4.0\n + docker-pycreds==0.4.0\n + einops==0.8.1\n + filelock==3.20.0\n + frozenlist==1.8.0\n + fsspec==2025.3.0\n + gitdb==4.0.12\n + gitpython==3.1.45\n + gl==0.1.0 (from file:///tmp/code)\n + google-api-core==2.26.0\n + google-auth==2.41.1\n + google-cloud-core==2.4.3\n + google-cloud-storage==2.19.0\n + google-crc32c==1.7.1\n + google-resumable-media==2.7.2", "url": "https://github.com/pytorch/pytorch/issues/165177", "state": "closed", "labels": [ "high priority", "oncall: distributed", "triaged", "oncall: pt2", "module: dynamic shapes" ], "created_at": "2025-10-10T19:53:04Z", "updated_at": "2025-10-30T18:03:53Z", "comments": 8, "user": "AndreasMadsen" }, { "repo": "pytorch/tutorials", "number": 3611, "title": "Feedback about Quickstart", "body": "There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html#optimizing-the-model-parameters\n\nSystem specs: Windows 11, python3.11, pytorch==2.8.0+xpu, Intel oneAPI 2025.2.\n\nBeen following this tut, I got this error raising from test function\n```\ncorrect += (pred.argmax(1) == y).type(torch.float).sum().item()\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nRuntimeError: UR error\n```\n\nChecked all the compatibility of oneAPI, pytorch, and intel_extension_for_pytorch\n```\nprint(torch.xpu._is_compiled())\nprint(torch.xpu.is_available())\n```\nBoth prints True\n\n\nReally new to ML and NN, but not dev, so tried using torch.FloatTensor\n`correct += (pred.argmax(1) == y).type(torch.FloatTensor).sum().item()\n`\nIt works and output almost matches to what's given in tut.\n\nI hope what I did is correct in terms of ML.\nIf not please suggest where can I lookup to understand this better.\n\ncc @albanD @jbschlosser @gujinghui @EikanWang @fengyuan14 @guangyey", "url": "https://github.com/pytorch/tutorials/issues/3611", "state": "open", "labels": [ "question", "core", "module: xpu", "windows" ], "created_at": "2025-10-10T17:06:21Z", "updated_at": "2025-10-20T03:25:53Z", "user": "BhavneetSingh7" }, { "repo": "pytorch/pytorch", "number": 165100, "title": "Header files not found during build", "body": "### \ud83d\udc1b Describe the bug\n\nI'm trying to build pytorch from source but getting the following error:\n\n```\npytorch/aten/src/ATen/core/ivalue.h:4:10: fatal error: ATen/core/TensorBody.h: No such file or directory\n```\n\nSeems these files are generated and I see this line printed before\n\n```\ncore header install: pytorch/build/aten/src/ATen/core/TensorBody.h\n```\n\nHow can I get pytorch to build without these errors?\n\nI'm running the following command\n\n```\nTORCH_CUDA_ARCH_LIST=\"8.0 9.0\" BUILD_TEST=0 USE_DISTRIBUTED=1 USE_NCCL=1 USE_CUDA=1 python setup.py install\n```\n\n### Versions\n\n```\nCollecting environment information...\nPyTorch version: N/A\nIs debug build: N/A\nCUDA used to build PyTorch: N/A\nROCM used to build PyTorch: N/A\n\nOS: CentOS Stream 9 (x86_64)\nGCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11)\nClang version: Could not collect\nCMake version: version 3.27.0\nLibc version: glibc-2.34\n\nPython version: 3.12.11 | packaged by Anaconda, Inc. 
| (main, Jun 5 2025, 13:09:17) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-6.4.3-0_fbk15_hardened_2630_gf27365f948db-x86_64-with-glibc2.34\nIs CUDA available: N/A\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: \nGPU 0: NVIDIA PG509-210\nGPU 1: NVIDIA PG509-210\n\nNvidia driver version: 550.90.07\ncuDNN version: Could not collect\nIs XPU available: N/A\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: N/A\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 44\nOn-line CPU(s) list: 0-43\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz\nCPU family: 6\nModel: 85\nThread(s) per core: 1\nCore(s) per socket: 44\nSocket(s): 1\nStepping: 11\nBogoMIPS: 3591.76\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 arat vnmi umip pku ospke avx512_vnni md_clear flush_l1d arch_capabilities\nVirtualization: VT-x\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 1.4 MiB (44 instances)\nL1i cache: 1.4 MiB (44 instances)\nL2 cache: 176 MiB (44 instances)\nL3 cache: 16 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-43\nVulnerability Gather data sampling: Unknown: Dependent on hypervisor status\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Vulnerable\nVulnerability Retbleed: Vulnerable\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers\nVulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Mitigation; TSX disabled\n\nVersions of relevant libraries:\n[pip3] flake8==7.3.0\n[pip3] flake8-bugbear==24.12.12\n[pip3] flake8-comprehensions==3.16.0\n[pip3] flake8-executable==2.1.3\n[pip3] flake8-logging-format==2024.24.12\n[pip3] flake8-pyi==25.5.0\n[pip3] flake8_simplify==0.22.0\n[pip3] mypy_extensions==1.1.0\n[pip3] numpy==2.3.1\n[pip3] nvidia-cublas-cu12==12.6.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.6.80\n[pip3] nvidia-cuda-nvrtc-cu12==12.6.77\n[pip3] nvidia-cuda-runtime-cu12==12.6.77\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cufft-cu12==11.3.0.4\n[pip3] nvidia-curand-cu12==10.3.7.77\n[pip3] nvidia-cusolver-cu12==11.7.1.2\n[pip3] nvidia-cusparse-cu12==12.5.4.2\n[pip3] nvidia-cusparselt-cu12==0.7.1\n[pip3] nvidia-nccl-cu12==2.27.5\n[pip3] nvidia-nvjitlink-cu12==12.6.85\n[pip3] nvidia-nvtx-cu12==12.6.77\n[pip3] pytorch-triton==3.5.0+git27664085\n[pip3] torch==2.10.0.dev20251008+cu126\n[pip3] torchaudio==2.8.0.dev20251009+cu126\n[pip3] torc", "url": "https://github.com/pytorch/pytorch/issues/165100", "state": "open", "labels": [ "module: 
build", "triaged", "has workaround" ], "created_at": "2025-10-09T20:51:23Z", "updated_at": "2025-10-10T13:43:50Z", "comments": 1, "user": "tushar00jain" }, { "repo": "pytorch/ao", "number": 3137, "title": "README should highlight our huggingface models", "body": "We've got a few quantized models here and plan to keep adding to it: https://huggingface.co/pytorch. This should be highlighted close to the top of the README", "url": "https://github.com/pytorch/ao/issues/3137", "state": "open", "labels": [ "topic: documentation" ], "created_at": "2025-10-09T18:07:51Z", "updated_at": "2025-10-09T18:08:06Z", "comments": 0, "user": "andrewor14" }, { "repo": "pytorch/pytorch", "number": 165051, "title": "`[__recompiles] - 0/3: expected type of 'args[1]' to be a tensor type, ' but found ` cryptic recompilation cause", "body": "### \ud83d\udc1b Describe the bug\n\nHello,\n\nIn some private workload I am running (unfortunately I don't have a minimal repro - I can try to get one if needed), the recompilation cause:\n\n```\nV1009 11:33:51.404000 3024 site-packages/torch/_dynamo/guards.py:3006] [0/5] [__recompiles] Recompiling function inner in /root/miniforge3/lib/python3.12/site-packages/torch/_dynamo/external_utils.py:68\nV1009 11:33:51.404000 3024 site-packages/torch/_dynamo/guards.py:3006] [0/5] [__recompiles] triggered by the following guard failure(s):\nV1009 11:33:51.404000 3024 site-packages/torch/_dynamo/guards.py:3006] [0/5] [__recompiles] - 0/1: expected type of 'args[1]' to be a tensor type, ' but found \n```\n\ngets printed.\n\nThe log `expected type of 'args[1]' to be a tensor type, ' but found ` is surprising to me. What does it mean?\n\nThank you.\n\n(this is on torch 2.7 - I'll test on 2.8 shortly)\n\n### Versions\n\n```\nCollecting environment information...\nPyTorch version: 2.7.1+cu126\nIs debug build: False\nCUDA used to build PyTorch: 12.6\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.1 LTS (x86_64)\nGCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0\nClang version: Could not collect\nCMake version: version 3.31.6\nLibc version: glibc-2.39\n\nPython version: 3.12.10 | packaged by conda-forge | (main, Apr 10 2025, 22:21:13) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: 12.6.85\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration:\nGPU 0: NVIDIA H100 80GB HBM3\nGPU 1: NVIDIA H100 80GB HBM3\nGPU 2: NVIDIA H100 80GB HBM3\nGPU 3: NVIDIA H100 80GB HBM3\nGPU 4: NVIDIA H100 80GB HBM3\nGPU 5: NVIDIA H100 80GB HBM3\nGPU 6: NVIDIA H100 80GB HBM3\nGPU 7: NVIDIA H100 80GB HBM3\n\nNvidia driver version: 570.133.20\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 384\nOn-line CPU(s) list: 0-383\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 9654 96-Core Processor\nCPU family: 25\nModel: 17\nThread(s) per core: 2\nCore(s) per socket: 96\nSocket(s): 2\nStepping: 1\nFrequency boost: enabled\nCPU(s) scaling MHz: 46%\nCPU max MHz: 3707.8120\nCPU min MHz: 1500.0000\nBogoMIPS: 4800.35\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid 
sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc amd_ibpb_ret arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap\nVirtualization: AMD-V\nL1d cache: 6 MiB (192 instances)\nL1i cache: 6 MiB (192 instances)\nL2 cache: 192 MiB (192 instances)\nL3 cache: 768 MiB (24 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-95,192-287\nNUMA node1 CPU(s): 96-191,288-383\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: M", "url": "https://github.com/pytorch/pytorch/issues/165051", "state": "open", "labels": [ "needs reproduction", "triaged", "oncall: pt2", "module: dynamo" ], "created_at": "2025-10-09T11:43:27Z", "updated_at": "2025-10-10T17:59:08Z", "comments": 3, "user": "fxmarty-amd" }, { "repo": "pytorch/pytorch", "number": 164971, "title": "[dynamo] Keep stack trace where mutations happened", "body": "### \ud83d\udc1b Describe the bug\n\nThis is essential to figure out where we want to use strict-export but there is a side effect, and we want to inform the user about how to rewrite their code to remove the side-effect.\n\n### Error logs\n\n_No response_\n\n### Versions\n\nNA\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4", "url": "https://github.com/pytorch/pytorch/issues/164971", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: dynamo", "oncall: export" ], "created_at": "2025-10-08T18:52:09Z", "updated_at": "2025-10-09T17:23:32Z", "comments": 1, "user": "anijain2305" }, { "repo": "pytorch/pytorch", "number": 164966, "title": "XPU OOM when allocate tensor according to its reported available memory", "body": "### \ud83d\udc1b Describe the bug\n\nrun below\n```\nimport torch\n\ntorch.xpu.empty_cache()\n\n## bring up the context, it may occupy memory\na = torch.rand(5).to(\"xpu:0\")\n\nfree_memory_bytes = torch.xpu.mem_get_info(\"xpu:0\")[0]\nrequired_memory_bytes = 5000 * 5000 * (32 // 8)\n\n# Leaving 50 MB of free memory for possible buffers, etc.\nn_vals = (free_memory_bytes - required_memory_bytes - int(50e6)) // (32 // 8)\nfoo = torch.rand(n_vals, device=\"xpu:0\") \n```\n\nYou'll get exception as below:\n\n> Traceback (most recent call last):\n> File \"/workspace/accelerate/./test.py\", line 13, in \n> foo = torch.rand(n_vals, device=\"xpu:0\")\n> 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> torch.OutOfMemoryError: XPU out of memory. Tried to allocate 63.71 GiB. GPU 0 has a total capacity of 63.98 GiB. Of the allocated memory 512 bytes is allocated by PyTorch, and 2.00 MiB is reserved by PyTorch but unallocated. Please use `empty_cache` to release all unoccupied cached memory.\n\n### Versions\n\nlatest xpu pytorch\n\ncc @gujinghui @EikanWang @fengyuan14 @guangyey", "url": "https://github.com/pytorch/pytorch/issues/164966", "state": "open", "labels": [ "module: memory usage", "triaged", "module: xpu" ], "created_at": "2025-10-08T18:39:18Z", "updated_at": "2025-10-11T01:40:46Z", "comments": 3, "user": "yao-matrix" }, { "repo": "pytorch/pytorch", "number": 164951, "title": "Docker checkouts take 30+ min on H100 runners", "body": "### \ud83d\udc1b Describe the bug\n\nSee https://github.com/pytorch/pytorch/actions/runs/18344478781/job/52264153169 for example where \"Pull docker image\" takes 37 min!!! Can we cache/slim the docker? Or connect those runners to more powerful IO system\n\n### Versions\n\nCI\n\ncc @seemethere @pytorch/pytorch-dev-infra", "url": "https://github.com/pytorch/pytorch/issues/164951", "state": "open", "labels": [ "module: ci", "triaged" ], "created_at": "2025-10-08T17:12:15Z", "updated_at": "2025-10-08T17:12:25Z", "comments": 0, "user": "malfet" }, { "repo": "pytorch/pytorch", "number": 164922, "title": "`torch.compile` fails to trace `datetime.now()` with Dynamo guard check failure", "body": "### \ud83d\udc1b Describe the bug\n\nWhen compiling a model that uses `datetime.now()` function, `torch.compile` fails with a Dynamo guard check error. The warning message explicitly identifies this as a Python builtin that Dynamo cannot trace, and suggests filing an issue to add support.\n```python\nimport torch\nfrom datetime import datetime\n\nclass TestModel(torch.nn.Module):\n def forward(self, x):\n current_time = datetime.now()\n return x + current_time.second\n\nx = torch.randn(5)\nmodel = TestModel()\nprint(\"Eager output:\", model(x))\nprint(\"Compiled output:\", torch.compile(model)(x))\n```\n\n### Error logs\n\n```\nEager output: tensor([51.0676, 52.0309, 52.6077, 50.6691, 53.6591])\nD:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_dynamo\\variables\\functions.py:1598: UserWarning: Dynamo does not know how to trace the builtin `.datetime.now.` This function is either a Python builtin (e.g. 
_warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind).\nIf it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround.\nIf it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use `torch.compiler.allow_in_graph`.\n torch._dynamo.utils.warn_once(explanation + \"\\n\" + \"\\n\".join(hints))\nTraceback (most recent call last):\n File \"E:\\DL_Compiler_Test\\torch_code\\test.py\", line 12, in \n print(\"Compiled output:\", torch.compile(model)(x))\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_dynamo\\eval_frame.py\", line 418, in __call__\n return super().__call__(*args, **kwargs)\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1777, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1788, in _call_impl\n return forward_call(*args, **kwargs)\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_dynamo\\eval_frame.py\", line 886, in compile_wrapper\n return fn(*args, **kwargs)\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1777, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1788, in _call_impl\n return forward_call(*args, **kwargs)\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_dynamo\\convert_frame.py\", line 2010, in __call__\n result = self._torchdynamo_orig_backend(\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_dynamo\\convert_frame.py\", line 1760, in __call__\n result = self._inner_convert(\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_dynamo\\convert_frame.py\", line 691, in __call__\n result = _compile(\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_dynamo\\convert_frame.py\", line 1569, in _compile\n guarded_code, tracer_output = compile_inner(code, one_graph, hooks)\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_utils_internal.py\", line 97, in wrapper_function\n return function(*args, **kwargs)\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_dynamo\\convert_frame.py\", line 1251, in compile_inner\n return _compile_inner(code, one_graph, hooks)\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_dynamo\\convert_frame.py\", line 1385, in _compile_inner\n check_fn = dynamo_output.build_guards(\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_dynamo\\convert_frame.py\", line 860, in build_guards\n return CheckFunctionManager(\n File \"D:\\Programs\\Python\\virtualenvs\\torch_code-afvE469o\\lib\\site-packages\\torch\\_dynamo\\guards.py\", line 3593, in __init__\n raise AssertionError(f\"Guard check failed: {reasons}\")\nAssertionError: Guard check failed: 
0/0: ___check_obj_id(G['datetime'].now, 2702242757856) # current_time = datetime.now() # E:\\DL_Compiler_Test\\torch_code\\test.py:6 in forward\n```\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.10.0.dev20251005+cpu\nIs debug build: False\nCUDA used to build PyTorch: None\nROCM used to build PyTorch: N/A\n\nOS: Microsoft Windows 11\nGCC version: Could not collect\nClang version: Could not collect\nCMake version: version 4.0.2\nLibc version: N/A\n\nPython version: 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)\nPython platform: Windows-10-10.0.26100-SP0\nIs CUDA available: Fal", "url": "https://github.com/pytorch/pytorch/issues/164922", "state": "open", "labels": [ "triaged", "function request", "oncall: pt2", "module: dynamo" ], "created_at": "2025-10-08T10:19:41Z", "updated_at": "2025-10-14T20:25:33Z", "comments": 9, "user": "LiSsHhUuAaIi" }, { "repo": "pytorch/pytorch", "number": 164878, "title": "Ban and remove plain asserts with no message in our python code", "body": "In a similar spirit to https://github.com/pytorch/pytorch/issues/148114\n\nWe should remove asserts without any message explaining what is happening.\nOn top of that, we should move them to proper errors to avoid any issue with python -O.\n\nThere are two parts here:\n- [x] Enable Ruff lint for this https://docs.astral.sh/ruff/rules/assert/ (with appropriate skips)\n- [ ] Remove all the existing ones\n\ncc @malfet", "url": "https://github.com/pytorch/pytorch/issues/164878", "state": "open", "labels": [ "module: error checking", "triaged", "actionable", "module: python frontend" ], "created_at": "2025-10-07T21:36:50Z", "updated_at": "2025-12-16T20:02:43Z", "comments": 26, "user": "albanD" }, { "repo": "pytorch/torchtitan", "number": 1805, "title": "TP gradient update is wrong during MoE backward", "body": "### Bug description\n\nhttps://github.com/pytorch/torchtitan/blob/main/torchtitan/experiments/llama4/infra/parallelize.py#L454\n\nTP used Dtensor's local tensor by calling to_local(), and the local tensor's gradient can not be correctly propagated back to the DTensor , because we didn't set grad_placements to tell autograd how to back propagete the gradients. So we missed a reduce_scatter() during backward in this line here.\n\n### Versions\n\nCurrent main torchtitan", "url": "https://github.com/pytorch/torchtitan/issues/1805", "state": "closed", "labels": [ "high priority", "triage review" ], "created_at": "2025-10-07T03:43:55Z", "updated_at": "2025-10-15T03:32:04Z", "comments": 1, "user": "wwwjn" }, { "repo": "pytorch/pytorch", "number": 164786, "title": "How should we handle PyTorch build flags in torch/headeronly for custom ops?", "body": "### \ud83d\udc1b Describe the bug\n\nThis isn't exactly a bug, per s\u00e9, but it is misleading. Thanks to @mikaylagawarecki pointing out the following phenomenon in a parallel file, I'm realizing we have the following behavior in torch/headeronly/util/Half.h today:\n\nConsider the following ifdef\nhttps://github.com/pytorch/pytorch/blob/6861fa43e5fee7fedc0213e352fa983edea8aa78/torch/headeronly/util/Half.h#L44-L47\n\nWhen libtorch is compiling Half.h, it will properly generate the fast vectorization logic depending on how CPU_CAPABILITY_AVX2 and CPU_CAPABILITY_AVX512 is set. Great. 
This is expected.\n\nWhat may be unexpected is that custom ops including the headeronly Half.h will _not_ have CPU_CAPABILITY_AVX2 or CPU_CAPABILITY_AVX512 set and so will not have performant CPU code for `float2half_scalar` and `half2float_scalar` of Half.h.\n\n### Versions\n\non main\n\ncc @malfet @seemethere @chauhang @penguinwu @zou3519 @bdhirsh @swolchok ", "url": "https://github.com/pytorch/pytorch/issues/164786", "state": "open", "labels": [ "module: build", "triaged", "module: custom-operators", "oncall: pt2", "module: pt2-dispatcher" ], "created_at": "2025-10-06T21:22:09Z", "updated_at": "2025-10-07T15:26:28Z", "comments": 1, "user": "janeyx99" }, { "repo": "pytorch/ao", "number": 3122, "title": "Access to compact internal representation for `target_dtype=torch.uint4`", "body": "Hello, for my use case, I need to access and store the internal representation of 4-bit quantization. This is because I'd like to quantize and write back part of the full buffer. Think about \"add some new channels\" or \"overwrite content of a channel\".\n\nI have problems getting to the compressed representation. I wrote this:\n\n```\n from torchao.quantization.observer import AffineQuantizedMinMaxObserver\n from torchao.quantization.granularity import PerAxis\n from torchao.quantization.quant_primitives import MappingType\n from torchao.dtypes import to_affine_quantized_intx_static\n from torchao.dtypes.affine_quantized_tensor import (\n get_tensor_impl_constructor,\n AffineQuantizedTensor,\n )\n from torchao.dtypes.utils import PlainLayout\n\n source_dtype = torch.float32\n target_dtype = torch.uint4\n blocksize = 4096\n num_slots = 128\n\n input_float = torch.randn((num_slots, blocksize), dtype=source_dtype)\n print(f\"shape={(num_slots, blocksize)}: Compute scales, zero_points\")\n obs = AffineQuantizedMinMaxObserver(\n mapping_type=MappingType.ASYMMETRIC,\n target_dtype=target_dtype,\n granularity=PerAxis(axis=0),\n eps=torch.finfo(torch.float32).eps,\n scale_dtype=torch.float32,\n zero_point_dtype=torch.float32,\n )\n obs(input_float)\n scales, zero_points = obs.calculate_qparams()\n # Quantize\n print(\"Quantize\")\n quant_tensor = to_affine_quantized_intx_static(\n input_float=input_float,\n scale=scales,\n zero_point=zero_points,\n block_size=(1, blocksize),\n target_dtype=target_dtype,\n )\n int_data = quant_tensor.tensor_impl.get_plain()[0]\n print(f\"input_float {input_float.shape}, blocksize={blocksize}, int_data {int_data.shape}, int_data.dtype={int_data.dtype}\")\n print(f\"int_data.min={int_data.min().item()}, int_data.max={int_data.max().item()}\")\n # Dequantize\n print(\"Dequantize\")\n tensor_impl_ctr = get_tensor_impl_constructor(PlainLayout)\n tensor_impl = tensor_impl_ctr(\n int_data, scales, zero_points, PlainLayout(),\n )\n reconstructed = AffineQuantizedTensor(\n tensor_impl,\n block_size=(1, blocksize),\n shape=int_data.shape,\n dtype=source_dtype,\n ).dequantize(output_dtype=source_dtype)\n print(f\"reconstructed {reconstructed.shape}, dtype={reconstructed.dtype}\")\n```\n\nFrom this, I get that `quant_tensor.tensor_impl.get_plain()[0]` returns an array of the correct shape, but with `dtype=torch.uint8`, yet values are in fact in the `uint4` range.\n\nThis cannot be how you store these things internally, otherwise this would not be 4-bit quantization.\n\nIs there a way to get to the internal representation? I suppose this is something like a `(num_slots, blocksize // 2)` array of `uint8` type. 
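(For reference, the by-hand packing alluded to here is only a few lines. The sketch below packs and unpacks two uint4 values per uint8 byte; it is just an illustration of that layout, not necessarily torchao's actual internal format, which may be tiled or interleaved.)

```python
import torch

def pack_uint4(int_data: torch.Tensor) -> torch.Tensor:
    # int_data: uint8 tensor with values in [0, 15]; last dim must be even.
    assert int_data.dtype == torch.uint8 and int_data.shape[-1] % 2 == 0
    lo = int_data[..., 0::2]   # even columns -> low nibble
    hi = int_data[..., 1::2]   # odd columns  -> high nibble
    return (hi << 4) | lo      # shape (..., blocksize // 2), dtype uint8

def unpack_uint4(packed: torch.Tensor) -> torch.Tensor:
    lo = packed & 0x0F
    hi = (packed >> 4) & 0x0F
    out = torch.stack((lo, hi), dim=-1)          # restore original element order
    return out.reshape(*packed.shape[:-1], packed.shape[-1] * 2)

x = torch.randint(0, 16, (128, 4096), dtype=torch.uint8)
assert torch.equal(unpack_uint4(pack_uint4(x)), x)
```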
I can compute this myself, but this seems a detour.\n\nI know it is not nice to have to use internal representations, but your external API just does not support what I need.\n\nEssentially, I want to maintain the quantized version of a buffer of shape `(all_slots, blocksize)`, but be able to modify slices. Say `buffer[a:b, :]` changes, I want to only re-quantize this part and write it back. I don't want to compute and store your supported representations for every single slot, that would be slow. So, getting to the internal representation seems the way to go, unless you'd support such use cases directly.", "url": "https://github.com/pytorch/ao/issues/3122", "state": "open", "labels": [ "question", "triaged" ], "created_at": "2025-10-06T11:02:12Z", "updated_at": "2025-10-09T08:29:55Z", "user": "mseeger" }, { "repo": "pytorch/xla", "number": 9670, "title": "`all_reduce` does not apply `scale` when `xr.world_size == 1`", "body": "## \u2753 Questions and Help\n\nHi, I have noticed that when `world_size == 1`, `all_reduce` is a no-op and does not apply `scale`:\n\nIn `torch_xla.core.xla_model` in `def all_reduce`:\n```\n# No-op if there is only one device\n if runtime.world_size() == 1 and not xu.getenv_as('XLA_ALWAYS_ALLREDUCE',\n bool, False):\n if isinstance(inputs, torch.Tensor):\n return inputs.clone()\n else:\n return inputs\n```\n\nIs this intended behavior? If it is indeed intended, it makes the use of `all_reduce` inconsistent when using `world_size == 1` vs `world_size > 1`. The issue manifests, for example, when you are logging running average loss value:\n\n```\nepoch_loss = xm.all_reduce(xm.REDUCE_SUM, loss_accum, scale=1.0 / ((idx + 1) * world_size))\n``` ", "url": "https://github.com/pytorch/xla/issues/9670", "state": "open", "labels": [ "question", "distributed" ], "created_at": "2025-10-06T04:40:24Z", "updated_at": "2025-10-17T06:31:12Z", "user": "afzalxo" }, { "repo": "pytorch/pytorch", "number": 164696, "title": "Support torch._inductor.config.inplace_buffers for custom_op whenever possible", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIs it possible to add this support to custom_op?\nThe user would annotate what buffers can be used for in_place and torch compile should reuse buffers whenever possible (if they are not required by other ops or backward etc).\n\nThis is to reduce mem usage.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @coconutruben @zou3519 @bdhirsh", "url": "https://github.com/pytorch/pytorch/issues/164696", "state": "open", "labels": [ "triaged", "module: custom-operators", "function request", "oncall: pt2", "module: inductor", "module: pt2-dispatcher" ], "created_at": "2025-10-05T08:30:21Z", "updated_at": "2025-11-12T20:52:44Z", "comments": 6, "user": "mayank31398" }, { "repo": "pytorch/pytorch", "number": 164662, "title": "Improper batch processing in torch.linalg.eig with cuda", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nWhen calculating large eigenvalues of non-symmetric matrices, I noticed that torch processes the matrices one by one, with only one core getting loaded. The processing time of multiple matrices is more or less similar between a Python loop and a batched execution of linalg.eig. 
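(A minimal timing sketch of that comparison, looped versus batched `torch.linalg.eig`, with arbitrary sizes and assuming a CUDA build where `torch.linalg.eig` is available on the GPU, e.g. via MAGMA:)

```python
import time
import torch

device = "cuda"
batch, n = 8, 512
A = torch.randn(batch, n, n, device=device)  # non-symmetric matrices

def timeit(fn, warmup=1, iters=3):
    for _ in range(warmup):
        fn()
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters

t_batched = timeit(lambda: torch.linalg.eig(A))
t_looped = timeit(lambda: [torch.linalg.eig(A[i]) for i in range(batch)])
print(f"batched: {t_batched:.3f}s  looped: {t_looped:.3f}s")
```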
I think this could drastically improve the performance of eigenvalue calculations, basically in the order of magnitude of the available CPU cores.\n\nI think the issue comes from torch only using magma's geev implementation. This implementation seems to largely rely on magma's geev solver. This solver seems to be single-threaded for large parts of the execution. Py parallelizing the execution using multiple GPUs (with none of them seeing any load) or using the CPU as another device, speedups in the order of 2x for 2 simultaneous calls. Therefore, I think it would be beneficial to try and perform the eigenvalue decomposition of multiple matrices in parallel.\n\nPlease also see [discuss.pytorch.org](https://discuss.pytorch.org/t/torch-linalg-eig-parallelisation/223386/7) for the discussion on the matter and to see the code I have used in order to evaluate this.\n\n### Alternatives\n\nAlternatively, it might also be beneficial to implement cuSolvers' geev implementation cusolverDnXgeev. I am trying to evaluate its performance compared to magmas geev implementation, but I only have limited experience in C++.\n\n### Additional context\n\nI would love to contribute myself, but I have only used C++ in the context of Arduino and ESP32 Microcontrollers, so this would probably only make sense if someone with some more experience could share some advice on how to tackle this. \n\ncc @ptrblck @msaroufim @eqy @jerryzh168 @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano", "url": "https://github.com/pytorch/pytorch/issues/164662", "state": "open", "labels": [ "module: cuda", "triaged", "module: linear algebra" ], "created_at": "2025-10-04T16:38:37Z", "updated_at": "2025-10-07T21:39:03Z", "comments": 0, "user": "johannesz-codes" }, { "repo": "pytorch/ao", "number": 3120, "title": "Question: How to implement my quantization algorithm?", "body": "The docs mention that one could ask here for help if unsure how to implement a new quantization algorithm with `torchao`, so I'll use that chance.\n\nFirst, in general, the current situation around pytorch quantization seems a bit unclear to me. As far as I understand:\n- there used to be two quantization APIs: \"Eager\" and \"FX Graph Mode\"\n- now there is a third API: \"PT2E\"\n- the quantization implementation moved from `torch` to `ao` package\n- all of this is still in experimental phase\n\nBut the docs still seem very unpolished (broken links, missing explanations), so I'm confused about the current state of this. In particular, let's say I have a new quantization algorithm (say similar to GPTQ), and I want to make experiments to evaluate it on large models like Llama4, gpt-oss, etc. Could I already use PT2E for that or is it still too unstable? Would I rather use GPTQModel perhaps? Or something else?\n\nAnd then, my question is how I would implement it, because I'm not sure if `torchao` supports the correct \"flow\". Let me explain the necessary flow based on a concrete example. Let's say we have a neural network with the following layers:\n- `Linear(28*28, 512)`\n- `ReLU`\n- `Linear(512, 128)`\n- `ReLU`\n- `Linear(128, 10)`\n\nNow I want to turn the linear layers (which use `float32`) into quantized linear layers whose scalars are 4-bit or so. My algorithm needs calibration data. Let's call the calibration data (i.e. sample inputs) `X`. Now the flow for quantization would look like this:\n1. **Quantize `Linear(28*28, 512)` into `QuantizedLinear(28*28,512)`**.\n This needs the calibration data `X`. 
(as well as the original `float32` weights of linear layer of course)\n2. **Quantize `Linear(512, 128)` into `QuantizedLinear(512,128)`**.\n Here comes the crux. Because I sort-of need two kinds of calibration data. First, I need the result of passing X through `Linear(28*28, 512)` and `ReLU`. (I guess that's already possible?!) But second, I also need the result of passing X through `QuantizedLinear(28*28,512)` and `ReLU`, i.e., the result of passing X through the already-quantized network.\n\n The idea is that in the quantization of second layer one can \"correct\" some inaccuracies that the quantization of the first layer caused. For this one needs to know the difference of the calibration data passed through the original first layer versus passed through the quantized first layer.\n3. **Quantize `Linear(128, 10)` into `QuantizedLinear(128,10)`**.\n Again, I need both of the following:\n - X passed through the first 4 original units\n - X passed through the first 4 quantized units\n\nIs it possible to implement that with pytorch quantization, either with the new PT2E API or with one of the older APIs?\n\nThank you very much in advance!", "url": "https://github.com/pytorch/ao/issues/3120", "state": "closed", "labels": [], "created_at": "2025-10-03T19:18:39Z", "updated_at": "2025-10-04T19:56:39Z", "user": "jbirnick" }, { "repo": "pytorch/pytorch", "number": 164559, "title": "fwd_rng_state show up in the aot_export_joint grpah input", "body": "See https://github.com/pytorch/torchtitan/pull/1794\n\nP1975157784: rank0_autograd_function_0fea2786.py\n\nSetting `torch._functorch.config.graphsafe_rng_functionalization = False` doesn't work. \n\nHow to avoid `fwd_rng_state` from showing up?\n\ncc @chauhang @penguinwu @zou3519 @bdhirsh", "url": "https://github.com/pytorch/pytorch/issues/164559", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: aotdispatch", "module: pt2-dispatcher" ], "created_at": "2025-10-03T07:28:10Z", "updated_at": "2025-10-06T19:10:58Z", "comments": 1, "user": "SherlockNoMad" }, { "repo": "pytorch/pytorch", "number": 164536, "title": "Very confused about conda-forge", "body": "### \ud83d\udc1b Describe the bug\n\nIs this the cpu or gpu version? https://anaconda.org/conda-forge/pytorch\nWhat is this? https://anaconda.org/pytorch/pytorch-cuda\nHow should it be used? Is conda no longer a good way to install?\n\n### Versions\n\nIs this the cpu or gpu version? https://anaconda.org/conda-forge/pytorch\nWhat is this? https://anaconda.org/pytorch/pytorch-cuda\nHow should it be used? Is conda no longer a good way to install?", "url": "https://github.com/pytorch/pytorch/issues/164536", "state": "closed", "labels": [], "created_at": "2025-10-03T01:25:26Z", "updated_at": "2025-10-03T05:26:01Z", "comments": 1, "user": "7735986" }, { "repo": "pytorch/pytorch", "number": 164529, "title": "[RFC] Implement shrink_group API to expose ncclCommShrink", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n### PyTorch Process Group Shrink API \n\nAuthors: @brchang24 @spotluri @bosilca \n\n#### Summary\n\nThis document outlines proposed API changes to improve fault tolerance and flexibility in PyTorch Process Groups. \n\n#### Motivation\n\n**Fault Tolerance support**\n\nThe API is designed to enhance fault tolerance capabilities in PyTorch by enabling exclusion of faulty ranks. \n\nWith the new API, malfunctioning participants should be excluded to avoid hangings, as their reliability can't be guaranteed. 
This API is not limited to hard faults (where processes disappear due to hardware failures) but allows the application to mold the execution environment as needed (for correctness or performance reasons). \n\n**Performance**\n\nThe existing abort \\+ init method entails a fixed cost for full initialization, as illustrated in the chart below with rank shrink as an example.\n\n![Image](https://github.com/user-attachments/assets/eb264cbf-48a5-4117-8e60-0a179a45d75f)\n\n \nWe also explored alternatives like split, but that approach requires the participation of all ranks, including malfunctioning ones, to form a new group. \n\nSo, both above factors must be considered when designing the API. \n\n#### Proposed API changes \n\nTo address these concerns above, we are proposing the following API changes: \n\nNew API: shrink\\_group() \n\n```python\nshrink_group(ranks_to_exclude: List[int],\n Pg: Optional[ProcessGroup] = None,\n shrink_flags : int = NCCL_SHRINK_DEFAULT)\n\nShrink an existing distributed group. Only group members of the updated ProcessGroup need to enter this function. The excluded ranks do not need to call this function. The scope of this call is therefore collective across the processes that belong to the shrunk distributed group.\n\nArgs:\nranks_exclude (list[int]): List of group ranks to be excluded in the updated ProcessGroup. This list must be consistent across all participating processes.\npg (ProcessGroup, optional): The process group to work on, If None, the default process group will be used.\nshrink_flags (int): NCCL_SHRINK_DEFAULT (default)\n NCCL_SHRINK_ABORT (attempt to terminate ongoing operations in the parent communicator before shrinking. \n```\n\n#### Implementation \n\nPyTorch should directly use the shrink functionality if supported by the backend. \n\n**Use Cases** \n\nRank Shrink \n\nWhen one rank is detected defective and needs to be excluded, the new shrink\\_group() can be invoked as the below example: \n\nExample \n\nremove rank 2 \n**shrink\\_group**(\\[2\\], pg) \n\n\"Image\"\n\nIn the example above, only the original rank 0, 1, and 3 need to invoke shrink\\_group(), not rank 2 to avoid potential hang. After the shrink, the original rank 2 is excluded, so, the original rank 3 will be shifted down to become rank 2 in the updated group, if no parameter key is passed in. \n\nErrors could be reported by the backend or PyTorch. However, it is up to the upper layer to decide whether to exclude a rank from the group. \n\nIdeally, the defective ranks should be excluded from all associated groups. However, it appears to be enforcing a policy on PyTorch. This design suggests delegating the decision to exclude ranks from a group or completely dissolve the group to the Upper Layer (UL). Meanwhile, PyTorch should verify that no subgroups use the rank before it is removed from the default group. \n\n**Things to Consider** \n\n**Group Rank recalculation** \n\nShrink can lead to changes in the group's rank order. It will apply the default method to recalculate group ranks as detailed below. \n\nWhen shrinking, ranks will be shifted to close any gaps left by excluded ranks. \n\n**Metrics**\n\nWhat are the main metrics to measure the value of this feature? \n\n1. When one node/rank goes down completely \n To compare with the existing solution. \n\n \n\n2. Performance comparison with the existing solutions \n1. Shrink \n\n**Drawbacks**\n\nAre there any reasons why we should not do this? Here we aim to evaluate risk and check ourselves. 
\nPlease consider: \n\n* Is it a breaking change? \n This change should be backward compatible. \n* Impact on UX \n No \n* implementation cost, both in terms of code size and complexity \n The assumption is that the pytorch cost should not be high, but the backend support cost might be high, especially for backends that do not have already support for fault management. \n* integration of this feature with other existing and planned features \n PyTorch layer needs to integrate with the backend when it has the support. \n\n**Alternatives**\n\nWhat other designs have been considered? What is the impact of not doing this? \n \nCan be implemented using abort+init method. \nNeed full init method which potentially requires broadcast for the NCCL bootstrap. \n\n**Prior Art**\n\nDiscuss prior art (both good and bad) in relation to this proposal: \n\n", "url": "https://github.com/pytorch/pytorch/issues/164529", "state": "closed", "labels": [ "oncall: distributed" ], "created_at": "2025-10-03T00:26:11Z", "updated_at": "2025-10-17T17:55:06Z", "comments": 0, "user": "brchang24" }, { "repo": "pytorch/torchtitan", "number": 1790, "title": "Distributed training hangs on local error instead of exit", "body": "In our model, we have the following code\n```python\nif x.shape[2:] != y.shape[2:]:\n print(f\"RANK {torch.distributed.get_rank()}: SPATIAL DIM MISMATCH!\")\n raise ValueError(f\"x.shape[2:] != y.shape[2:], {x.shape[2:]=}, {y.shape[2:]=}\")\n\nx = torch.cat([x, y], dim=1)\n```\nHowever, if one rank get mismatch error, it can reach the print point, but not the raise error point, the program hangs forever. \n\nHow to debug this and what's the potential reason? Thanks.", "url": "https://github.com/pytorch/torchtitan/issues/1790", "state": "closed", "labels": [ "question" ], "created_at": "2025-10-02T21:18:54Z", "updated_at": "2025-10-03T02:49:24Z", "user": "yzhao30" }, { "repo": "pytorch/torchtitan", "number": 1781, "title": "How to add supervised finetuning mask in torchtitan?", "body": "How do I implement supervised fine-tuning (SFT) masking in TorchTitan for posttraining using a synthetic dataset?", "url": "https://github.com/pytorch/torchtitan/issues/1781", "state": "open", "labels": [ "post training" ], "created_at": "2025-10-01T23:36:12Z", "updated_at": "2025-12-12T19:37:12Z", "user": "kailashg26" }, { "repo": "pytorch/pytorch", "number": 164360, "title": "Would maintainers be open to a contribution that adds lightweight progress bar support (based on tqdm) in torch.utils?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nFeature request:\nAdd a lightweight progress bar utility (based on tqdm) in torch.utils that users can optionally import to visualize training/validation/test loop progress.\n\nMotivation:\nPyTorch core currently does not provide any built-in progress tracking for long-running loops. While users can integrate tqdm manually, it requires repetitive boilerplate in tutorials and quick scripts. A small utility in torch.utils would lower the barrier for beginners and improve user experience without adding significant complexity to the core library.\n\nPitch:\nThe utility would remain optional, minimal, and import tqdm only if used. 
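As a rough sketch of how small such a helper could be (the name `progress` and its signature are hypothetical, not an existing `torch.utils` API; tqdm is imported lazily only when the helper is actually called):

```python
from typing import Iterable, Optional

def progress(iterable: Iterable, desc: Optional[str] = None, **tqdm_kwargs):
    """Hypothetical helper: wrap an iterable in a tqdm bar if tqdm is installed,
    otherwise return the iterable unchanged."""
    try:
        from tqdm.auto import tqdm  # imported lazily, only if the helper is used
    except ImportError:
        return iterable
    return tqdm(iterable, desc=desc, **tqdm_kwargs)

# usage in a training loop:
# for batch in progress(train_loader, desc="train"):
#     ...
```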
This way, PyTorch maintains its philosophy of flexibility while offering a small but meaningful quality-of-life improvement for users.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/164360", "state": "closed", "labels": [ "triaged", "enhancement" ], "created_at": "2025-10-01T15:22:57Z", "updated_at": "2025-10-06T17:16:15Z", "comments": 2, "user": "wtfPrethiv" }, { "repo": "pytorch/xla", "number": 9662, "title": "XLA mul with bf16\u00d7bf16 upcasts to f32 \u2014 op math type and option to disable?", "body": "## \u2753 Questions and Help\n\nHi folks, I have a question about the XLA mul op.\n\nWhen both inputs are bf16, the generated graph converts to f32, performs the multiply, then converts back to bf16. Two questions:\n\nIn this case, is the op math type effectively f32 (not bf16)?\n\nIf this upcast exists primarily for TPU accuracy/stability, would it be acceptable to gate it behind a flag (e.g., env option) so we can treat that path as a no-op and keep the op in native bf16 when desired?\n\nReference code path:\nhttps://github.com/pytorch/xla/blob/master/torch_xla/csrc/aten_xla_type.cpp#L187-L211\n\nIf there\u2019s a better approach please let me know. Thanks!", "url": "https://github.com/pytorch/xla/issues/9662", "state": "closed", "labels": [ "enhancement", "tracing" ], "created_at": "2025-10-01T14:12:53Z", "updated_at": "2025-10-03T18:22:12Z", "comments": 3, "user": "sshonTT" }, { "repo": "pytorch/pytorch", "number": 164342, "title": "Official support for sm_120 (RTX 50-series / Blackwell) in stable PyTorch builds", "body": "### \ud83d\udc1b Describe the bug\n\nHello PyTorch team, \n\nI would like to kindly request official support for sm_120 (RTX 50-series / Blackwell GPUs, e.g. RTX 5070 Ti) in the stable PyTorch builds. \n\nCurrent situation: \n- CUDA 12.8/12.9 already includes support for Blackwell architectures. \n- PyTorch nightly builds (e.g., 2.10.0.dev + cu12.9) can detect sm_120, but they are not yet fully stable. \n- In my case, I tested the nightly build on Windows 11 with an RTX 5070 Ti. PyTorch itself launches, but DeepLabCut (DLC GUI, which relies heavily on PyTorch) still fails to start properly. \n- Interestingly, Annolid GUI works fine on the same PC with RTX 5070 Ti. This suggests the underlying CUDA/NVIDIA support is there, but stable PyTorch integration is still missing. \n\nProblem: \n- DLC (and many other research tools) depend strictly on stable PyTorch releases. Without official sm_120 support in the stable channel, we cannot run these applications on RTX 50-series GPUs. \n- As a researcher, I purchased RTX 5070 Ti for deep learning workloads, but currently it cannot be used productively with DLC due to this gap. \n\nRequest: \n- Please prioritize adding official sm_120 support into stable PyTorch builds. \n- Even partial support in an upcoming stable release (e.g., wheels with cu12.9) would greatly help researchers and developers adopt RTX 50-series hardware. \n- At minimum, could you provide an ETA or roadmap for when sm_120 will be supported in stable builds? \n\nThank you very much for your efforts and for maintaining this essential framework. 
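(A quick way to confirm whether an installed wheel actually ships sm_120 kernels, using standard `torch.cuda` introspection; sketch only, device index 0 assumed:)

```python
import torch

print("torch:", torch.__version__, "cuda:", torch.version.cuda)
# Architectures the wheel was built for, e.g. ['sm_80', ..., 'sm_120'] if Blackwell is included
print("compiled arch list:", torch.cuda.get_arch_list())
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"device capability: sm_{major}{minor}")  # RTX 50-series reports sm_120
```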
\n\nBest regards, \n\n### Versions\n\nRTX 5070 Ti requires CUDA 12.0+ for full support.\nMultiple rebuilds of environments tested.\nPyTorch, NumPy, and OpenCV work independently.\nFailures appear specific to DLC\u2019s internal module loading mechanism.\n\ncc @seemethere @malfet @atalman @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @ptrblck @msaroufim @eqy @jerryzh168", "url": "https://github.com/pytorch/pytorch/issues/164342", "state": "open", "labels": [ "needs reproduction", "module: windows", "module: cuda", "triaged" ], "created_at": "2025-10-01T07:21:36Z", "updated_at": "2025-11-13T00:29:02Z", "comments": 14, "user": "endvntgf-design" }, { "repo": "pytorch/pytorch", "number": 164247, "title": "Dynamo graph break on flex attention code", "body": "### \ud83d\udc1b Describe the bug\n\n```python\nimport torch\nimport torch.nn as nn\nfrom torch.nn.attention.flex_attention import create_block_mask, flex_attention\n\n\nclass MixedFakeModeModel(nn.Module):\n def __init__(self, dim=64):\n super().__init__()\n self.dim = dim\n self.lin = torch.nn.Linear(64, 64)\n\n def forward(self, x):\n batch_size, seq_len, _ = x.shape\n\n # Process input first - this creates fake tensors in export's fake mode\n processed = self.lin(x)\n\n # Create some computation that depends on processed tensor\n intermediate = processed.sum(dim=-1).detach() # Shape: (batch, seq_len)\n\n def dynamic_mask_function(batch_idx, head_idx, q_idx, kv_idx):\n threshold = intermediate[\n batch_idx, q_idx % seq_len\n ] # Access the captured tensor\n return (kv_idx <= q_idx) & (threshold > 0)\n\n block_mask = create_block_mask(\n mask_mod=dynamic_mask_function,\n B=batch_size,\n H=None,\n Q_LEN=seq_len,\n KV_LEN=seq_len,\n device=x.device,\n _compile=False, # HF sets this to True, which runs into the issue i am talking below\n )\n q = processed.view(batch_size, 1, seq_len, self.dim)\n k = processed.view(batch_size, 1, seq_len, self.dim)\n v = processed.view(batch_size, 1, seq_len, self.dim)\n\n # this doesn't work\n out = torch.compile(flex_attention)(q, k, v, block_mask=block_mask)\n # this works (flex attention internally calls torch.compile(backend=eager) which\n # has special handling similar to torch.cond\n out = flex_attention(q, k, v, block_mask=block_mask)\n\n return out\n\ntorch.compile(MixedFakeModeModel(), fullgraph=True)(torch.randn(2, 128, 64))\n```\n\nWhen we are tracing through create_block_mask, dynamo graph breaks with:\n```\nUnsupported: id() with unsupported args\n Explanation: Dynamo doesn't know how to trace id() call with args (NestedUserFunctionVariable(),)\n Hint: Supported args are Tensors, and functions/nn.Modules/user-defined objects from outside the compiled region.\n Hint: It may be possible to write Dynamo tracing rules for this code. 
Please report an issue to PyTorch if you encounter this graph break often and it is causing performance issues.\n\n Developer debug context: (NestedUserFunctionVariable(),)\n\n For more details about this graph break, please visit: https://meta-pytorch.github.io/compile-graph-break-site/gb/gb0191.html\n\nfrom user code:\n File \"/tmp/ipykernel_2970620/3759915601.py\", line 27, in forward\n block_mask = create_block_mask(\n File \"/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/torch/nn/attention/flex_attention.py\", line 1067, in create_block_mask\n mod_type = _get_mod_type(mask_mod)\n File \"/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/torch/nn/attention/flex_attention.py\", line 244, in _get_mod_type\n for param in inspect.signature(fn).parameters.values()\n File \"/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/runtime/lib/python3.12/inspect.py\", line 3348, in signature\n return Signature.from_callable(obj, follow_wrapped=follow_wrapped,\n File \"/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/runtime/lib/python3.12/inspect.py\", line 3085, in from_callable\n return _signature_from_callable(obj, sigcls=cls,\n File \"/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/runtime/lib/python3.12/inspect.py\", line 2538, in _signature_from_callable\n obj = unwrap(obj, stop=(lambda f: hasattr(f, \"__signature__\")\n File \"/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/runtime/lib/python3.12/inspect.py\", line 773, in unwrap\n memo = {id(f): f}\n\nSet TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS=\"+dynamo\"\n```\n\n### Versions\n\nmain\n\ncc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames @Lucaskabela @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng", "url": "https://github.com/pytorch/pytorch/issues/164247", "state": "closed", "labels": [ "high priority", "triaged", "oncall: pt2", "module: dynamo", "module: graph breaks", "module: higher order operators", "module: pt2-dispatcher", "module: flex attention" ], "created_at": "2025-09-30T15:16:18Z", "updated_at": "2025-10-17T17:44:48Z", "comments": 7, "user": "tugsbayasgalan" }, { "repo": "pytorch/torchtitan", "number": 1773, "title": "Unreachable code in `CheckpointManager`", "body": "Hi! I've noticed that `def maybe_wait_for_staging` basically never does anything as `self.staging` is set to `False` in `__init__` and never modified. 
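\n\nA reduced sketch of the pattern as I read it (simplified and hypothetical, not the actual `CheckpointManager` code):\n\n```python\nclass CheckpointManagerSketch:\n    def __init__(self) -> None:\n        self.staging = False  # set once here and, as far as I can tell, never flipped to True\n\n    def maybe_wait_for_staging(self) -> None:\n        if self.staging:\n            # with the current code this branch can never be taken,\n            # so the method is effectively a no-op\n            pass\n```\n\n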
Is there something wrong or is this code never supposed to run?\n\nhttps://github.com/pytorch/torchtitan/blob/a3104201ba3a0fa19e9c3cc5ba748b0398551410/torchtitan/components/checkpoint.py#L616", "url": "https://github.com/pytorch/torchtitan/issues/1773", "state": "closed", "labels": [], "created_at": "2025-09-30T13:59:22Z", "updated_at": "2025-10-02T16:43:43Z", "comments": 3, "user": "antony-frolov" }, { "repo": "pytorch/torchtitan", "number": 1771, "title": "Posttraining Library", "body": "# Posttraining Library Support\n\n## Summary\nI understand that torchtune is being phased out and the team announced in July 2025 that they are developing a new product in a new repo for end-to-end post-training with scale. It's now been several months since that announcement. Could you share an update on when this new library will be released?\n\n## Motivation\nIn [[Issue #2883](https://github.com/pytorch/torchtune/issues/2883)](https://github.com/pytorch/torchtune/issues/2883), the torchtune team announced plans to develop a new product focused on end-to-end post-training with scale. That announcement was made several months ago in July 2025, and torchtune is now in maintenance mode (receiving only critical bug fixes and security patches during 2025).\n\n## Questions\n- **When will the new post-training library be released?** It's been several months since the July announcement, can you share a timeline or expected release date?\n- **Will the new library be part of torchtitan or a separate repository?** The announcement mentioned a \"new repo,\" but given torchtitan's focus on production-grade training, would it make sense to integrate?\n- **What's the relationship between the new library and torchtitan?** Will they share infrastructure, or are they separate projects?\n- **Which post-training techniques will be prioritized?** (eg SFT, RLHF/DPO, continued pretraining)\n- **Is there a beta or early access program?** Many in the community are eager to start testing and contributing.\n\n## Why I'm asking here (instead of torchtune)\nI'm posting this question in the torchtitan repo rather than torchtune because:\n\n1. **Architectural excellence**: The torchtitan team has demonstrated exceptional work in building a production-grade, PyTorch-native training system with modular composability and scale as a first-class citizen, exactly the qualities mentioned in the torchtune transition announcement.\n\n2. **Natural evolution**: Given that torchtitan already handles pretraining at scale with features like 3D parallelism, distributed checkpointing, and native PyTorch integration, it seems like a natural foundation or model for a post-training library with similar scale requirements.\n\n3. **Team expertise**: The torchtitan team's deep expertise in distributed training, parallelism techniques, and PyTorch internals makes them well-positioned to build or be involved with the successor to torchtune.\n\n4. **Unified vision**: Both the torchtitan philosophy and the announced new post-training library share similar goals: hackable code, minimal abstraction, scale-first design, and native PyTorch.\n\n\n## Additional Context\nWith torchtune entering maintenance mode and no longer accepting new features, many practitioners are in a transitional period waiting for the new post-training solution. 
Understanding the timeline and scope of the new library would help the community plan their training workflows accordingly.\n\nThank you for your excellent work on torchtitan and the broader PyTorch training ecosystem, we're excited to see what's coming!", "url": "https://github.com/pytorch/torchtitan/issues/1771", "state": "open", "labels": [ "post training" ], "created_at": "2025-09-30T09:42:49Z", "updated_at": "2025-10-24T07:58:26Z", "comments": 2, "user": "MarkLiLabs" }, { "repo": "pytorch/pytorch", "number": 164145, "title": "Improvements to profiler for bitwise equivalence use case", "body": "### \ud83d\udc1b Describe the bug\n\nSuppose that you want to verify that eager and aot_eager are numerically equivalent. The profiler can be a good tool for determining why there is a small numerical difference, as one might reasonably expect to get exactly the same kernels between the two. However, the profiler has obviously not been setup to handle this situation. Here are some obvious problems I ran into on the way:\n\n- [ ] No documentation for FunctionEvent at https://docs.pytorch.org/docs/stable/profiler.html . We need to postprocess events() in an unusual way, but because there are no docs it's difficult to tell what the format of events are. In particular, there's a hierarchical structure that we need to know about.\n- [ ] A convenient way to get all events in chronological order at a \"consistent\" level of abstraction, with no overlapping. For example, we might be interested specifically in what at:: kernel the dispatch dispatches to at the CPU/CUDA key. When looking at this, we do NOT want internal redispatches (e.g., an at::empty call to perform an allocation). Similarly, we might something equivalent to the top level first dispatcher invocation. Call it \"list_operators\"\n- [ ] There should be a convenient function for dumping a string trace at the highest level of abstraction, so you can quickly eyeball what code was run (in a similar niche to DebugMode, but \"better\" because it is guaranteed not to interfere with what exactly is executed in eager mode).\n\n### Versions\n\nmain\n\ncc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise", "url": "https://github.com/pytorch/pytorch/issues/164145", "state": "open", "labels": [ "oncall: profiler" ], "created_at": "2025-09-29T15:30:14Z", "updated_at": "2025-10-26T03:18:33Z", "comments": 2, "user": "ezyang" }, { "repo": "pytorch/pytorch", "number": 164133, "title": "Use libtorch export onnx", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHow to export onnx using libtorch after training a model with libtorch \uff1f \n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/164133", "state": "closed", "labels": [], "created_at": "2025-09-29T13:55:19Z", "updated_at": "2025-09-29T14:43:24Z", "comments": 1, "user": "yongxin3344520" }, { "repo": "pytorch/pytorch", "number": 164124, "title": "torch.compile compiles multiple Triton autotune kernels, but uses the wrong ones", "body": "### \ud83d\udc1b Describe the bug\n\nWhen torch.compile autotunes a Triton kernel multiple times for different shapes, it uses the wrong kernel afterwards. Interestingly, this only happens when no torchinductor-cache files exist. 
On next run of the same program, it uses the correct kernels!\n\nHere are the details:\n\nI have adapted your example under \"Advanced Usage\" here, which explains how to use autotune with torch.compile:\nhttps://docs.pytorch.org/tutorials/recipes/torch_compile_user_defined_triton_kernel_tutorial.html\n\nHere is the test case:\n\n[test.py](https://github.com/user-attachments/files/22595171/test.py)\n\nChanges:\n - I have added a time waster to the kernel, that clearly shows autotune which configuration is the inefficient one\n - made the shape of `x` a key for autotuning. The use case for this in reality is using large block sizes for large tensors\n - Call the kernel with different shapes and let it tune\n - Call it normally - it uses the wrong kernel\n\nIt appears that it *has* compiled 2 separate kernels for the 2 shapes, but it consistently uses the wrong one for *both* shapes, as if it intentionally tried to use the wrong one.\nBut only until you run the program a second time. When it reads the kernels from the torchinductor cache, it uses the correct kernels!\n\n\n### Error logs\n\n- Without torch.compile:\n\n```\nTRITON_PRINT_AUTOTUNING=1 TORCH_COMPILE_DISABLE=1 python test.py\n[...]\nbest config selected: BLOCK_SIZE: 2, num_warps: 4, num_ctas: 1, num_stages: 4, maxnreg: None;\n[...]\nbest config selected: BLOCK_SIZE: 4, num_warps: 8, num_ctas: 1, num_stages: 3, maxnreg: None;\n```\nBest config for both shapes is selected, no errors.\n\n\n\n- With torch.compile **the first time**: [make sure your torchinductor cache is deleted]\n\n```\nTORCH_COMPILE_DISABLE= python test.py\npid (2, 0, 0) idx () --- BADLY TUNED KERNEL ---: 2\n```\n\n- With torch.compile, ran **a second time**, with existing torchinductor cache:\n\n```\nTORCH_COMPILE_DISABLE= python test.py\n```\n\nNo errors. 
It uses the correct kernels.\n\n### Versions\n\n```\nPyTorch version: 2.8.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.3 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: Could not collect\nCMake version: version 3.28.3\nLibc version: glibc-2.39\n\nPython version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-6.8.0-36-generic-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Ti SUPER\nNvidia driver version: 580.82.07\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 39 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 12\nOn-line CPU(s) list: 0-11\nVendor ID: GenuineIntel\nModel name: 12th Gen Intel(R) Core(TM) i5-12400F\nCPU family: 6\nModel: 151\nThread(s) per core: 2\nCore(s) per socket: 6\nSocket(s): 1\nStepping: 5\nCPU(s) scaling MHz: 54%\nCPU max MHz: 4400.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4992.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 288 KiB (6 instances)\nL1i cache: 192 KiB (6 instances)\nL2 cache: 7.5 MiB (6 instances)\nL3 cache: 18 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-11\nVulnerability Gather data ", "url": "https://github.com/pytorch/pytorch/issues/164124", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: dynamic shapes", "module: user triton" ], "created_at": "2025-09-29T10:19:35Z", "updated_at": "2025-09-29T16:50:17Z", "comments": 3, "user": "dxqb" }, { "repo": "pytorch/pytorch", "number": 164094, "title": "Failed to change backward stream", "body": "in [pytorch cuda semantic ](https://docs.pytorch.org/docs/stable/notes/cuda.html#stream-semantics-of-backward-passes)\n> Each backward CUDA op runs on the same stream that was used for its corresponding forward op. If your forward pass runs independent ops in parallel on different streams, this helps the backward pass exploit that same parallelism.\n\n\nI currently have a requirement to run the backward pass on a different stream. I implemented an `autograd.Function` node and used `torch.cuda.set_stream()` in its backward method to switch streams, but I observed in the nsys timeline that the backward still runs on the same stream as the forward. 
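\n\nFor reference, the only pattern I know of that reliably moves the backward work is to run the corresponding forward region under the target stream, since (per the note quoted above) each backward op follows its forward op's stream. A minimal sketch of that workaround, which is not what I want because it also changes the forward stream:\n\n```python\nimport torch\n\nmodel = torch.nn.Linear(1024, 1024).cuda()\nx = torch.randn(8, 1024, device='cuda', requires_grad=True)\n\ns = torch.cuda.Stream()\ns.wait_stream(torch.cuda.current_stream())  # x was produced on the default stream\nwith torch.cuda.stream(s):\n    y = model(x)  # forward ops recorded on stream s\n\n# backward ops follow their forward op's stream, so the gradients for\n# `model` should be computed on s as well\ntorch.cuda.current_stream().wait_stream(s)\ny.sum().backward()\n```\n\n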
Is there any way to force PyTorch\u2019s backward to use a different CUDA stream than the forward?\n\n```\nclass BackwardStream(torch.autograd.Function):\n @staticmethod\n def forward(ctx, input_tensor: torch.Tensor, stream: torch.cuda.Stream) -> torch.Tensor:\n ctx.stream = stream\n return input_tensor\n\n @staticmethod\n def backward(ctx, grad_output: torch.Tensor) -> torch.Tensor:\n stream = ctx.stream\n stream.wait_stream(torch.cuda.current_stream())\n torch.cuda.set_stream(stream)\n return grad_output\n \n```\n\ncc @ezyang @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan", "url": "https://github.com/pytorch/pytorch/issues/164094", "state": "closed", "labels": [ "module: autograd", "triaged" ], "created_at": "2025-09-29T02:14:19Z", "updated_at": "2025-10-05T23:40:49Z", "comments": 16, "user": "shadow150519" }, { "repo": "pytorch/pytorch", "number": 164074, "title": "When will the version for ROCM 7 be released?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe homepage still shows version 6.4.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd", "url": "https://github.com/pytorch/pytorch/issues/164074", "state": "closed", "labels": [ "module: rocm", "triaged" ], "created_at": "2025-09-28T16:27:21Z", "updated_at": "2025-09-30T00:40:11Z", "comments": 3, "user": "mihongyu" }, { "repo": "pytorch/pytorch", "number": 164061, "title": "GPU Memory Leak due to distributions", "body": "I am using the [MixStyle](https://arxiv.org/abs/2104.02008) methodology for domain adaptation and it involves using a custom layer which is inserted after every encoder stage. However, it is causing VRAM to grow linearly, which causes OOM error. No memory leak occurs on disabling the layer. Any idea on why this is happening?\n\n```\nclass MixStyle(nn.Module):\n \"\"\"MixStyle.\n Reference:\n Zhou et al. Domain Generalization with MixStyle. 
ICLR 2021.\n \"\"\"\n\n def __init__(self, p=0.5, alpha=0.1, eps=1e-6, mix='random'):\n \"\"\"\n Args:\n p (float): probability of using MixStyle.\n alpha (float): parameter of the Beta distribution.\n eps (float): scaling parameter to avoid numerical issues.\n mix (str): how to mix.\n \"\"\"\n super().__init__()\n self.p = p\n self.beta = torch.distributions.Beta(alpha, alpha)\n self.eps = eps\n self.alpha = alpha\n self.mix = mix\n self._activated = True\n\n def __repr__(self):\n return f'MixStyle(p={self.p}, alpha={self.alpha}, eps={self.eps}, mix={self.mix})'\n\n def set_activation_status(self, status=True):\n self._activated = status\n\n def update_mix_method(self, mix='random'):\n self.mix = mix\n\n def forward(self, x):\n if not self.training or not self._activated:\n return x\n\n if random.random() > self.p:\n return x\n\n B = x.size(0)\n\n mu = x.mean(dim=[2, 3], keepdim=True)\n var = x.var(dim=[2, 3], keepdim=True)\n sig = (var + self.eps).sqrt()\n mu, sig = mu.detach(), sig.detach()\n x_normed = (x-mu) / sig\n\n lmda = self.beta.sample((B, 1, 1, 1))\n lmda = lmda.to(x.device)\n\n if self.mix == 'random':\n # random shuffle\n perm = torch.randperm(B)\n\n elif self.mix == 'crossdomain':\n # split into two halves and swap the order\n perm = torch.arange(B - 1, -1, -1) # inverse index\n perm_b, perm_a = perm.chunk(2)\n perm_b = perm_b[torch.randperm(B // 2)]\n perm_a = perm_a[torch.randperm(B // 2)]\n perm = torch.cat([perm_b, perm_a], 0)\n\n else:\n raise NotImplementedError\n\n mu2, sig2 = mu[perm], sig[perm]\n mu_mix = mu*lmda + mu2 * (1-lmda)\n sig_mix = sig*lmda + sig2 * (1-lmda)\n\n return x_normed*sig_mix + mu_mix\n```\n\n\ncc @fritzo @neerajprad @alicanb @nikitaved", "url": "https://github.com/pytorch/pytorch/issues/164061", "state": "open", "labels": [ "module: distributions", "triaged" ], "created_at": "2025-09-28T05:08:15Z", "updated_at": "2025-09-29T14:54:42Z", "comments": 1, "user": "vedantdalimkar" }, { "repo": "pytorch/pytorch", "number": 163982, "title": "Need to update Magma version in Pytorch", "body": "### \ud83d\udc1b Describe the bug\n\nNeed to look into updating Magma for Pytorch CUDA builds\nNeed to understand what is the perf increase. 
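\n\nFor anyone triaging this, a quick way to check whether a given wheel was built against MAGMA (a sketch; the exact strings in the build-config output can differ between builds):\n\n```python\nimport torch\n\ncfg = torch.__config__.show()\nprint([line for line in cfg.splitlines() if 'MAGMA' in line.upper()])\n```\n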
\nDo we need MAGMA at all ?\n\n### Versions\n\n2.10.0\n\ncc @ptrblck @msaroufim @eqy @jerryzh168", "url": "https://github.com/pytorch/pytorch/issues/163982", "state": "open", "labels": [ "module: cuda", "triaged" ], "created_at": "2025-09-26T19:21:26Z", "updated_at": "2025-09-26T19:23:09Z", "comments": 0, "user": "atalman" }, { "repo": "pytorch/pytorch", "number": 163946, "title": "ModuleNotFoundError: No module named 'importlib_metadata'", "body": "### \ud83d\udc1b Describe the bug\n\nI encountered this error when I used torchrun.\n\nTraceback (most recent call last):\n File \"xxx/bin/torchrun\", line 5, in \n from torch.distributed.run import main\n File \"xxx/lib/python3.9/site-packages/torch/distributed/run.py\", line 381, in \n from torch.distributed.elastic.rendezvous.utils import _parse_rendezvous_config\n File \"xxx/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/__init__.py\", line 142, in \n from .registry import _register_default_handlers, _register_out_of_tree_handlers\n File \"xxx/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/registry.py\", line 19, in \n from importlib_metadata import entry_points\nModuleNotFoundError: No module named 'importlib_metadata'\n\nI saw the following code in the source code\n\nif sys.version_info < (3, 10):\n from importlib_metadata import entry_points\nelse:\n from importlib.metadata import entry_points\n\nSince Python3.8 the importlib_metedata third-party library has been merged into Cpython and became its importlib.metada module, why is it judged here that Python is less than 3.10? Is there any special consideration\uff1f\n\nIf it is necessary, should it be added to the requirements.txt\n\n### Versions\n\ntorch 2.6.0\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci", "url": "https://github.com/pytorch/pytorch/issues/163946", "state": "closed", "labels": [ "needs reproduction", "oncall: distributed" ], "created_at": "2025-09-26T08:26:50Z", "updated_at": "2025-11-06T07:20:57Z", "comments": 6, "user": "yunyiyun" }, { "repo": "pytorch/pytorch", "number": 163900, "title": "[Maintenance] MacOS runners update", "body": "\n## Current Status\n*ongoing*.\n\n## Error looks like\nMacOS jobs might fail with infra errors\n\n## Incident timeline (all times pacific)\n*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*\n\n## User impact\n*How does this affect users of PyTorch CI?*\n\n## Root cause\n*What was the root cause of this issue?*\n\n## Mitigation\n*How did we mitigate the issue?*\n\n## Prevention/followups\n*How do we prevent issues like this in the future?*\n", "url": "https://github.com/pytorch/pytorch/issues/163900", "state": "closed", "labels": [ "ci: sev" ], "created_at": "2025-09-25T22:30:08Z", "updated_at": "2025-09-26T11:27:33Z", "comments": 3, "user": "malfet" }, { "repo": "pytorch/torchx", "number": 1130, "title": "The hosted doc server is not working", "body": "## \ud83d\udcda Documentation\n\n## Link\nWe are now redirected from https://docs.pytorch.org/torchx/main/quickstart.html to https://meta-pytorch.org/torchxmain/quickstart.html\n\n## What does it currently say?\n```\n404\n\nFile not found\n\nThe site configured at this address does not contain the requested file.\n\nIf this is your site, make sure that the filename case matches the URL as well as any file permissions.\nFor root URLs (like http://example.com/) you must provide an index.html file.\n\n[Read the full 
documentation](https://help.github.com/pages/) for more information about using GitHub Pages.\n```\n\nIt should redirect us to https://meta-pytorch.org/torchx/main/quickstart.html or https://meta-pytorch.org/torchx/latest/quickstart.html\n\nLooks like instead of `/torchx/main/` it gets resolved to `/torchxmain/`.\n\n## What should it say?\nShould work as before\n\n## Why?\nHosted docs are very useful\n", "url": "https://github.com/meta-pytorch/torchx/issues/1130", "state": "closed", "labels": [], "created_at": "2025-09-25T16:58:45Z", "updated_at": "2025-09-25T20:14:43Z", "comments": 2, "user": "clumsy" }, { "repo": "pytorch/pytorch", "number": 163801, "title": "[CUDA][Triton][PTXAS] Triton Wheel Missing CUDA13 PTXAS - Breakage exists for the environment where CTK is not present", "body": "### \ud83d\udc1b Describe the bug\n\nBy default triton release/3.5x ships a PTXAS version that is based on CUDA12.8. \n\n** in environments that the latest CTK is NOT installed** \n\nComparing to PTXAS from CUDA13.0, CUDA12.8 ptxas is not capable to handle THOR device (which underwent a renaming, see https://github.com/llvm/llvm-project/issues/156096 for background related issue. Note this llvm issue 156096 has been fixed in triton/3.5.x via https://github.com/triton-lang/llvm-project/pull/2, which can be verified with a CTK 13.0. Referencing here just for the renaming context) and for other newer devices. \n\nUsers on THOR would encounter: \nptxas fatal : Value 'sm_110a' is not defined for option 'gpu-name'\n\nUsers on SM_121 device (https://docs.nvidia.com/cuda/pdf/CUDA_Features_Archive.pdf) would encounter \nptxas fatal : Value 'sm_121a' is not defined for option 'gpu-name'\n\n\nSee also the report https://github.com/llvm/llvm-project/issues/156096#issuecomment-3319410046 from @[mcr-ksh](https://github.com/mcr-ksh) \n\n** in environments that has the latest CTK installed **\nUsers may still need the explicit \"export TRITON_PTXAS_PATH=/usr/local/cuda/bin/ptxas\" to get Triton to pick up the right ptxas. \n\nWe have a few options: \n1. According to @ptrblck, one workaround could be to ship ptxas12 as well as ptxas13 and use the appropriate one using a runtime check for the PyTorch/CUDA version, we did this in the past for Blackwell (using ptxas_blackwell) when ptxas==12.8. \n2. PyTorch cu126/cu128/cu130 shipping a different ptxas, then triton won't need one \n3. we build triton cuda wheels separately for cu126/cu128/cu130. \n\nNo.1 seems to be doable for final v2.9RC. Thoughts? \n\ncc @seemethere @malfet @atalman @ptrblck @eqy @tinglvv @xwang233 @davidberard98 \n\n### Versions\n\nTriton release/3.5.x", "url": "https://github.com/pytorch/pytorch/issues/163801", "state": "closed", "labels": [ "module: binaries", "triaged", "module: third_party", "has workaround", "dependencies" ], "created_at": "2025-09-24T22:21:24Z", "updated_at": "2025-09-30T01:56:15Z", "user": "nWEIdia" }, { "repo": "pytorch/pytorch", "number": 163789, "title": "[docs] instructions to locally build docs are underspecified", "body": "*Note: moving the dependency conflict discussion to #164010.*\n\n### \ud83d\udcda The doc issue\n\nDocstring changes I made in #163120 caused the `linux-jammy-py3_10-gcc11-build` `docs_test` CI to fail. To debug this I had to build the docs locally, and ran into some rough edges:\n\n1. 
There are small discrepancies between the instructions in [`CONTRIBUTING.md`](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md#building-documentation) and [`README.md`](https://github.com/pytorch/pytorch?tab=readme-ov-file#building-the-documentation). \n2. As written, neither set of instructions will work with Python >=3.11 - `.ci/docker/requirements-docs.txt` uses `matplotlib=3.5.3` for Python <3.13, but matplotlib 3.5.3 only has wheels for/supports Python <=3.10. It also uses `matplotlib=3.6.3` for Python >=3.13, but matplotlib 3.6.3 only has wheels for/supports Python <=3.11.\n3. ~Tools like `uv` don't support use of the editable flag `-e` with URLs (line 4 of `docs/requirements.txt`). This is also deprecated in `pip` and will be enforced in `pip 25.3`, which will be released in about a month!~ (Edit: no action required here - `setup.py develop` is being deprecated, not the entire editable mechanism. See [this issue](https://github.com/pypa/pip/issues/11457) and [PEP 660](https://peps.python.org/pep-0660/) for more context.)\n4. I wasn't able to build the docs without installing `numpy<2`. This isn't possible, right?\n\nNits:\n- ~The README docs instructions have a typo - `node@6.13.1` should presumably be `node@16.13.1`.~ (Edit: this was wrong.)\n- The `CONTRIBUTING.md` tip about removing irrelevant `.rst` files is outdated - the example command removes `docs/source/scripts/exportdb/generate_example_rst.py`, which will cause builds to error. This can be fixed with `find . -type f -iname \"*.rst\" | ...`\n\n
\nHow to build the PyTorch nightly docs on MacOS\n\nIf you come across this issue while trying to build the docs, this works as of September 2025:\n\n1. Set up the repo and a Python 3.10 env with pip.\n```bash\ngit clone https://github.com/pytorch/pytorch.git\ncd pytorch\nuv venv -p 3.10 .venv-docs\nsource .venv-docs/bin/activate\nuv pip install -U pip\n```\n\n2. Install torch.\n\nIf you're making small changes in `docs/source`, you can install [the appropriate nightly wheel](https://pytorch.org/get-started/locally/):\n```bash\npython -m pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu\n```\n\nIf you're adding new Python modules or updating Python docstrings in `torch/`, you can use `tools/nightly.py` with the prefix flag:\n```bash\n./tools/nightly.py checkout -b our-branch -p .venv-docs\n```\n\nIf you're doing something more involved you likely have to [build from source](https://github.com/pytorch/pytorch?tab=readme-ov-file#from-source):\n```bash\npip install --group dev\npython -m pip install --no-build-isolation -v -e .\n```\n\n3. Install docs-specific dependencies.\n```bash\nbrew install node\nnpm install -g katex@0.13.18\npip install -r docs/requirements.txt\npip install 'numpy<2'\n```\n\nAfterwards you should be able to `cd docs && make html`.\n\n
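As an optional final step, a small environment sanity check before running the build (illustrative; the package list is just the ones that caused trouble above):\n\n```python\nimport importlib.metadata as md\n\nfor pkg in ('torch', 'numpy', 'matplotlib', 'sphinx'):\n    try:\n        print(pkg, md.version(pkg))\n    except md.PackageNotFoundError:\n        print(pkg, 'not installed')\n```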
\n\n### Suggest a potential alternative/fix\n\nReconcile and update the instructions in `CONTRIBUTING.md` and `README.md`. In particular:\n- Recommend using a separate venv for local docs builds.\n- Explain when/why someone building the docs should install torch from source.\n- Move niche information (e.g. building a PDF of the docs) from the README to `CONTRIBUTING.md`, and add a link.\n\nAnd:\n- Pin `numpy<2` and appropriate matplotlib versions in `.ci/docker/requirements-docs.txt`. If there's some reason we can't do this, let's explicitly note that only Python 3.10 is supported for now (since we can't build torch from source on 3.9).\n- ~Remove the editable flag on `pytorch_sphinx_theme2`, and document a separate flow for those actually working on the theme.~\n\nIf all of this makes sense to reviewers, I can get started on a PR with these fixes.\n\nI can't edit the wiki, but it'd be great if a maintainer could update the [Docstring Guidelines](https://github.com/pytorch/pytorch/wiki/Docstring-Guidelines) to link directly to `https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings` and link to the instructions in `CONTRIBUTING.md`.\n\ncc @svekars @sekyondaMeta @AlannaBurke", "url": "https://github.com/pytorch/pytorch/issues/163789", "state": "open", "labels": [ "module: docs", "triaged", "actionable" ], "created_at": "2025-09-24T20:24:43Z", "updated_at": "2025-09-26T22:34:16Z", "comments": 2, "user": "filipviz" }, { "repo": "pytorch/pytorch", "number": 163785, "title": "Revisit guarding on unbacked inputs !", "body": "We generate guards on unbacked inputs now those are interesting, \n- some we do not need at all because they are side effects of torch.check calls\n- some are actually needed (striding properties that we did assert on ), shall we make them runtime assertions?\n\nThere are some examples in the tests [here](https://github.com/pytorch/pytorch/pull/163705/files), not sure yet what is the right solution. 
but here is some different examples\n\n### Example1 \nFor example u0<6 here should not be a guard.\n```\n\n @torch.compile(fullgraph=True, dynamic=True, backend=cnt)\n def func(a):\n torch._check(a.size()[0] < 6)\n return a * 10\n\n a = torch.rand(4, 10)\n```\n### Example2\nbut what about something like\n```\n @torch.compile(fullgraph=True, dynamic=True, backend=cnt)\n def func(a):\n return a*10\n\n # no reocmpile if we pass 9, 8\n # recompile if we pass 11\n a = torch.rand(1,2,3,4,5)\n torch._dynamo.decorators.mark_unbacked(a, 0)\n torch._dynamo.decorators.mark_unbacked(a, 1)\n torch._dynamo.decorators.mark_unbacked(a, 2)\n torch._dynamo.decorators.mark_unbacked(a, 3)\n torch._dynamo.decorators.mark_unbacked(a, 4)\n func(a)\n```\nshall we guard or runtime assert on striding properties with unabacked\nex:\n L['a'].stride()[0] == L['a'].size()[1]*L['a'].size()[2]*L['a'].size()[3]*L['a'].size()[4]\n\n### Example3\nhere is another example is my expectation of what should happen right in it?\n```\n def func(a):\n # this should generate runtime assertio and no guard.\n torch._check(a.size()[0] == a.size()[1])\n # This should generate guard\n torch._check(a.size()[0] < 10)\n return a * 10\n\n\n a = torch.rand(4,4)\n torch._dynamo.decorators.mark_unbacked(a, 0)\n torch._dynamo.mark_dynamic(a, 1)\n func(a)\n\n #should not no recompile(i think)\n try :\n func(torch.rand(4, 7))\n except:\n pass\n # recompile (should recompile)\n try :\n func(torch.rand(100, 100))\n except:\n pass\n```\nwe recompile for both now.\n\n\n\ncc @chauhang @penguinwu @ezyang @bobrenjc93", "url": "https://github.com/pytorch/pytorch/issues/163785", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: dynamic shapes" ], "created_at": "2025-09-24T19:17:35Z", "updated_at": "2025-10-29T22:58:35Z", "comments": 2, "user": "laithsakka" }, { "repo": "pytorch/pytorch", "number": 163761, "title": "Does device mesh of (N,1) cause all_gather communication in HSDP of FSDP2?", "body": "In HSDP of FSDP2, let's say I have N GPUs, if the shape of device mesh is (N,1) (similar to DDP), will all_gather communication still happen in forward/backward? Or is this device mesh shape illegitimate?\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/163761", "state": "open", "labels": [ "oncall: distributed" ], "created_at": "2025-09-24T13:51:27Z", "updated_at": "2025-09-25T18:59:28Z", "comments": 1, "user": "EquationWalker" }, { "repo": "pytorch/pytorch", "number": 163753, "title": "Invalid __shared__ read of size 16 bytes in torch.conv_transpose3d", "body": "### \ud83d\udc1b Describe the bug\n\nWhen using `torch.nn.ConvTranspose3d` with certain parameters, a CUDA `__shared__` memory read out-of-bounds error occurs. 
\n\n```python\nimport torch\nimport torch.nn as nn\nimport os\n\nos.environ['CUDA_LAUNCH_BLOCKING'] = '1'\n\ndef main():\n if not torch.cuda.is_available() or not torch.backends.cudnn.is_available():\n print(\"This bug requires a CUDA-enabled GPU with cuDNN.\")\n return\n\n device = torch.device(\"cuda\")\n dtype = torch.float32\n\n try:\n in_channels = 24\n \n model = nn.ConvTranspose3d(\n in_channels=in_channels,\n out_channels=1,\n kernel_size=(15, 3, 10),\n stride=(2, 1, 1),\n padding=(23, 0, 1),\n dilation=(1, 3, 3),\n groups=1,\n bias=False\n ).to(device, dtype=dtype)\n \n model.eval()\n\n input_shape = (1, in_channels, 24, 24, 24)\n input_tensor = torch.randn(input_shape, device=device, dtype=dtype)\n\n model(input_tensor)\n except Exception as e:\n print(f\"An unexpected error occurred: {e}\")\n import traceback\n traceback.print_exc()\n\nif __name__ == \"__main__\":\n main()\n```\n\n### How to Reproduce\n\n1. Save the code above as `repro.py`.\n2. Run the script using `compute-sanitizer`. The `Invalid __shared__ read` error will be reported.\n\n```bash\ncompute-sanitizer python repro.py\n```\n\n### Observed Results\n\n```\n========= COMPUTE-SANITIZER\n========= Invalid __shared__ read of size 16 bytes\n========= at void xmma_cudnn_infer::implicit_gemm::strided_dgrad_indexed::kernel_helper_stage_1(T1)+0x2710\n========= by thread (255,0,0) in block (4,0,0)\n========= Address 0x400 is out of bounds\n========= Saved host backtrace up to driver entry point at kernel launch time\n========= Host Frame: [0x2631a7a] in libcudnn_cnn_infer.so.8\n========= Host Frame: [0x268da7a] in libcudnn_cnn_infer.so.8\n========= Host Frame: [0x21853e2] in libcudnn_cnn_infer.so.8\n========= Host Frame: [0x1928b9b] in libcudnn_cnn_infer.so.8\n========= Host Frame: cudnn::cnn::infer::InferNdSubEngine, cask_cudnn_infer::ConvDgradShader>::execute_internal_fprop_impl(cudnnContext*, CUstream_st*, void const*, void const*, void const*, void const*, void const*, void const*, unsigned long, void*, void*, unsigned int) [0x1215689] in libcudnn_cnn_infer.so.8\n========= Host Frame: cudnn::cnn::infer::InferNdSubEngine, cask_cudnn_infer::ConvDgradShader>::execute_internal_impl(cudnn::backend::VariantPack const&, CUstream_st*) [0x1215c7a] in libcudnn_cnn_infer.so.8\n========= Host Frame: cudnn::cnn::EngineInterface::execute(cudnn::backend::VariantPack const&, CUstream_st*) [0xd8eb04] in libcudnn_cnn_infer.so.8\n========= Host Frame: cudnn::cnn::EngineContainer<(cudnnBackendEngineName_t)1051>::execute_internal_impl(cudnn::backend::VariantPack const&, CUstream_st*) [0xdc94cf] in libcudnn_cnn_infer.so.8\n========= Host Frame: cudnn::cnn::EngineInterface::execute(cudnn::backend::VariantPack const&, CUstream_st*) [0xd8eb04] in libcudnn_cnn_infer.so.8\n========= Host Frame: cudnn::cnn::AutoTransformationExecutor::execute_pipeline(cudnn::cnn::EngineInterface&, cudnn::backend::VariantPack const&, CUstream_st*) const [0xf00e7d] in libcudnn_cnn_infer.so.8\n========= Host Frame: cudnn::cnn::BatchPartitionExecutor::operator()(cudnn::cnn::EngineInterface&, cudnn::cnn::EngineInterface*, cudnn::backend::VariantPack const&, CUstream_st*) const [0xf00fc6] in libcudnn_cnn_infer.so.8\n========= Host Frame: cudnn::cnn::GeneralizedConvolutionEngine >::execute_internal_impl(cudnn::backend::VariantPack const&, CUstream_st*) [0xf0f7aa] in libcudnn_cnn_infer.so.8\n========= Host Frame: cudnn::cnn::EngineInterface::execute(cudnn::backend::VariantPack const&, CUstream_st*) [0xd8eb04] in libcudnn_cnn_infer.so.8\n========= Host Frame: 
cudnn::backend::execute(cudnnContext*, cudnn::backend::ExecutionPlan&, cudnn::backend::VariantPack&) [0xda3498] in libcudnn_cnn_infer.so.8\n========= Host Frame: cudnnBackendExecute [0xda383c] in libcudnn_cnn_infer.so.8\n========= Host Frame: at::native::run_conv_plan(cudnnContext*, at::Tensor const&, at::Tensor const&, at::Tensor const&, cudnn_fronten", "url": "https://github.com/pytorch/pytorch/issues/163753", "state": "closed", "labels": [], "created_at": "2025-09-24T11:33:17Z", "updated_at": "2025-09-26T01:22:04Z", "comments": 4, "user": "supermarkli" }, { "repo": "pytorch/torchtitan", "number": 1750, "title": "Inconsistent loss between different TP", "body": "### Bug description\n\nI have encountered different Inconsistent loss between different TP on both llama3 and llama4 moe model.\nThe toml configs are exactly the same except different tensor parallels.\nThe seed is set and deterministic is turned on.\n\ntensorboard:\n## llama4:\ngradnorm:\n\n\"Image\"\n\nloss:\n\"Image\"\n\n## llama3:\ngradnorm:\n\"Image\"\nloss:\n\"Image\"\n\n\n\n\n\n\n\n### Versions\n\ntoml:\n\n```\n# torchtitan Config.toml\n\n[job]\ndump_folder = \"./outputs\"\ndescription = \"Llama 3 debug training\"\nprint_args = false\nuse_for_integration_test = true\n\n[profiling]\nenable_profiling = false\nsave_traces_folder = \"profile_trace\"\nprofile_freq = 10\nenable_memory_snapshot = false\nsave_memory_snapshot_folder = \"memory_snapshot\"\n\n[metrics]\nlog_freq = 1\ndisable_color_printing = false\nenable_tensorboard = true\nsave_tb_folder = \"tb\"\nenable_wandb = false\n\n[model]\nname = \"llama3\"\nflavor = \"debugmodel\"\n# test folder with tokenizer.json, for debug purpose only\nhf_assets_path = \"./tests/assets/tokenizer\"\n# converters = [\"float8\"]\n\n[optimizer]\nname = \"AdamW\"\nlr = 4e-3\neps = 1e-15\n\n[lr_scheduler]\nwarmup_steps = 2 # lr scheduler warm up, normally 20% of the train steps\ndecay_ratio = 0.8 # lr scheduler decay ratio, 80% of the train steps\ndecay_type = \"linear\"\nmin_lr_factor = 0.1\n\n[training]\nlocal_batch_size = 1\nglobal_batch_size = 64\nseq_len = 2048\nmax_norm = 1.0 # grad norm clipping\nsteps = 100000\ndataset_type = \"hf\" # mmap for megatron style\ndataset = \"c4_test\" # supported datasets: c4_test (2K), c4 (177M)\ndataset_path = \"3rdparty/torchtitan/tests/assets/c4_test\"\nseed = 1234\ndeterministic = true\n\n[parallelism]\ndata_parallel_replicate_degree = 1\ndata_parallel_shard_degree = -1\nfsdp_reshard_after_forward = \"default\" # default / never / always\ntensor_parallel_degree = {1 / 4}\nenable_async_tensor_parallel = false\npipeline_parallel_degree = 1\npipeline_parallel_schedule = \"1F1B\"\ncontext_parallel_degree = 1\nexpert_parallel_degree = 1\nexpert_tensor_parallel_degree = 1\n\n[checkpoint]\nenable_checkpoint = false\nfolder = \"checkpoint\"\ninterval = 10\nlast_save_model_only = false\nexport_dtype = \"float32\"\nasync_mode = \"disabled\" # [\"disabled\", \"async\", \"async_with_pinned_mem\"]\n\n[activation_checkpoint]\nmode = \"none\" # [\"none\", \"selective\", \"full\"]\nselective_ac_option = '2' # 'int' = ac every positive int layer or 'op', ac based on ops policy\n\n[compile]\nenable=false\ncomponents = [\"model\", \"loss\"]\n\n[float8]\nenable_fsdp_float8_all_gather = false\nprecompute_float8_dynamic_scale_for_fsdp = false\nfilter_fqns = [\"output\", \"router.gate\"]\nmoe_fqns = [\"experts\"]\n```\n\ntorch version: 2.9.0+main.de744ca4b19.post20250818\ncuda: 12.4", "url": "https://github.com/pytorch/torchtitan/issues/1750", "state": "open", "labels": [ 
"question" ], "created_at": "2025-09-24T03:11:22Z", "updated_at": "2025-10-02T00:25:43Z", "user": "weixuansun" }, { "repo": "pytorch/torchtitan", "number": 1749, "title": "What is the benefit of using torchrun instead of python directly with slurm and other launchers ?", "body": "Is there any difference in the following two commands ? \n\nsrun torchrun --nnodes 4 --nproc_per_node 8 --rdzv_endpoint \"$head_node_ip:29500\" -m torchtitan.train ...\n\nMASTER_ADDR= ip-adress MASTER_PORT=port-number srun --nodes=4 --ntasks-per-node=8 python -m torchtitan.train ", "url": "https://github.com/pytorch/torchtitan/issues/1749", "state": "open", "labels": [], "created_at": "2025-09-23T23:35:08Z", "updated_at": "2025-09-26T18:05:51Z", "user": "githubsgi" }, { "repo": "pytorch/pytorch", "number": 163699, "title": "Should we mark `TestExportOpInfo.test_fake_export` tests as distributed?", "body": "### \ud83d\udc1b Describe the bug\n\n`TestExportOpInfo.test_fake_export` calls `_test_export_helper` \n\nhttps://github.com/pytorch/pytorch/blob/8c8416b021e59a5ec58aceb38eeffc63885a28bc/test/export/test_export_opinfo.py#L125-L133\n\nwhich sends tensor to `cuda:1`\n\nhttps://github.com/pytorch/pytorch/blob/8c8416b021e59a5ec58aceb38eeffc63885a28bc/test/export/test_export_opinfo.py#L80-L90\n\nYou can verify with this command on a machine with single GPU\n\n```\n$ python test/run_test.py -i export/test_export_opinfo --exclude-distributed-tests -- -k test_fake_export___radd___cpu_float32\n\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.12/dist-packages/torch/testing/_internal/common_device_type.py\", line 1135, in test_wrapper\n return test(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/pytorch/pytorch/test/export/test_export_opinfo.py\", line 133, in test_fake_export\n _test_export_helper(self, dtype, op)\n File \"/opt/pytorch/pytorch/test/export/test_export_opinfo.py\", line 116, in _test_export_helper\n ep = torch.export.export(m, args)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/export/__init__.py\", line 311, in export\n raise e\n File \"/usr/local/lib/python3.12/dist-packages/torch/export/__init__.py\", line 277, in export\n return _export(\n ^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py\", line 1177, in wrapper\n raise e\n File \"/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py\", line 1143, in wrapper\n ep = fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/export/exported_program.py\", line 124, in wrapper\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py\", line 2269, in _export\n ep = _export_for_training(\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py\", line 1177, in wrapper\n raise e\n File \"/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py\", line 1143, in wrapper\n ep = fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/export/exported_program.py\", line 124, in wrapper\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py\", line 2085, in _export_for_training\n export_artifact = export_func(\n ^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py\", line 1971, in _non_strict_export\n ) = make_fake_inputs(\n ^^^^^^^^^^^^^^^^^\n File 
\"/usr/local/lib/python3.12/dist-packages/torch/_export/non_strict_utils.py\", line 402, in make_fake_inputs\n fake_args, fake_kwargs = tree_map_with_path(\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/utils/_pytree.py\", line 2056, in tree_map_with_path\n return treespec.unflatten(func(*xs) for xs in zip(*all_keypath_leaves))\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/utils/_pytree.py\", line 1193, in unflatten\n leaves = list(leaves)\n ^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/utils/_pytree.py\", line 2056, in \n return treespec.unflatten(func(*xs) for xs in zip(*all_keypath_leaves))\n ^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_export/non_strict_utils.py\", line 403, in \n lambda kp, val: fakify(\n ^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_export/non_strict_utils.py\", line 232, in fakify\n fake = mode.from_tensor(t, source=source, symbolic_context=symbolic_context)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py\", line 3004, in from_tensor\n return self.fake_tensor_converter.from_real_tensor(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py\", line 404, in from_real_tensor\n out = self.meta_converter(\n ^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_subclasses/meta_utils.py\", line 1922, in __call__\n r = self.meta_tensor(\n ^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_subclasses/meta_utils.py\", line 1698, in meta_tensor\n r = callback(\n ^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py\", line 395, in mk_fake_tensor\n return FakeTensor(\n ^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py\", line 744, in __new__\n init_gpu_context(device)\n File \"/usr/local/lib/python3.12/dist-packages/torch/_subclasses", "url": "https://github.com/pytorch/pytorch/issues/163699", "state": "closed", "labels": [ "module: tests", "oncall: pt2", "oncall: export" ], "created_at": "2025-09-23T22:12:42Z", "updated_at": "2025-09-30T16:12:42Z", "comments": 2, "user": "xwang233" }, { "repo": "pytorch/pytorch", "number": 163690, "title": "Recomputed values for the following tensors have different metadata than during the forward pass.", "body": "### \ud83d\udc1b Describe the bug\n\nhi I have a model with linear layers which i wrap with LoRA layers applied as following \n\n\n\n```\n(attn): Attention(\n (q_proj): LoRALinear(\n (original_layer): Linear(in_features=4096, out_features=4096, bias=False)\n (dropout): Identity()\n )\n (k_proj): LoRALinear(\n (original_layer): Linear(in_features=4096, out_features=4096, bias=False)\n (dropout): Identity()\n )\n (v_proj): LoRALinear(\n (original_layer): Linear(in_features=4096, out_features=4096, bias=False)\n (dropout): Identity()\n )\n (proj): LoRALinear(\n (original_layer): Linear(in_features=4096, out_features=4096, bias=True)\n (dropout): Identity()\n )\n (proj_drop): Dropout(p=0.0, inplace=False)\n )\n```\n```\nclass LoRALinear(nn.Module):\n def __init__(\n self,\n original_layer: nn.Linear,\n rank: int,\n init_lora_weights=\"gaussian\",\n dropout: float = 0.0,\n ):\n super().__init__(\n original_layer=original_layer,\n rank=rank,\n 
init_lora_weights=init_lora_weights,\n dropout=dropout,\n )\n self.reset_weights(init_lora_weights)\n\n def forward(self, x: torch.Tensor) -> torch.Tensor:\n lora_x = self.dropout(x) @ self.lora_A @ self.lora_B\n output = self.original_layer(x) + lora_x\n return output\n```\n\n\ni wrap the model's linear layers with LoRA layers and then wrap the model blocks with FSDP2 and AC. see this error when i call backwards on the model. why would there be a mismatch here?how can i debug/solve this \n\n```\n[rank0]: loss.backward()\n[rank0]: File \"/usr/local/lib/python3.12/site-packages/torch/_tensor.py\", line 648, in backward\n[rank0]: torch.autograd.backward(\n[rank0]: File \"/usr/local/lib/python3.12/site-packages/torch/autograd/__init__.py\", line 353, in backward\n[rank0]: _engine_run_backward(\n[rank0]: File \"/usr/local/lib/python3.12/site-packages/torch/autograd/graph.py\", line 824, in _engine_run_backward\n[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/site-packages/torch/utils/checkpoint.py\", line 1128, in unpack_hook\n[rank0]: frame.check_recomputed_tensors_match(gid)\n[rank0]: File \"/usr/local/lib/python3.12/site-packages/torch/utils/checkpoint.py\", line 902, in check_recomputed_tensors_match\n[rank0]: raise CheckpointError(\n[rank0]: torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.\n[rank0]: tensor at position 58:\n[rank0]: saved metadata: {'shape': torch.Size([4096, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}\n[rank0]: recomputed metadata: {'shape': torch.Size([1, 4096, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}\n```\n\n### Versions\n\n[rank0]: saved metadata: {'shape': torch.Size([4096, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}\n[rank0]: recomputed metadata: {'shape': torch.Size([1, 4096, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}\n\ncc @soulitzer", "url": "https://github.com/pytorch/pytorch/issues/163690", "state": "closed", "labels": [ "needs reproduction", "module: activation checkpointing", "triaged" ], "created_at": "2025-09-23T21:21:49Z", "updated_at": "2025-09-24T01:04:09Z", "comments": 3, "user": "asahni-sc" }, { "repo": "pytorch/pytorch", "number": 163688, "title": "[torch.distributed.pipelining] Gradients are None in first training step with ScheduleGPipe", "body": "## Bug Description\n\nWhen using `torch.distributed.pipelining` with `ScheduleGPipe`, gradients are unexpectedly `None` for parameters _in the first training step only_, and appear correctly in subsequent steps. This occurs despite the forward pass completing and losses computed. \n\nThis is leading to a significant divergence compared to non pipeline-parallel execution, beyond what is explainable by float and slicing numerical error, e.g. stalled and irrecoverable convergence.\n\n## To Reproduce\n\n1. Save the provided script as `repro.py`\n2. Run with 4 GPUs: `torchrun --nproc_per_node=4 repro.py`\n3. Observe the output showing gradients are None in step 0 but present in steps 1-2\n\n### Expected behavior\n\nGradients should be computed and available for all parameters after the backward pass in every training step, including the first one. 
The pipeline schedule should handle gradient accumulation consistently across all steps.\n\n### Actual behavior\n\nStep 0: All parameters have grad=None despite successful forward/backward pass\nStep 1-2: Gradients are properly computed and available (non-None)\n\n#### Example Output\n\n```\nExample output:\nRank 3, step: 0/3, losses list: [tensor(10.3931, device='cuda:3', dtype=torch.bfloat16), ...]\nRank 0 Step 0:\n{'embed_tokens.weight': None, 'layers.0.norm1.bias': None, ...}\n...\nRank 0 Step 1:\n{'embed_tokens.weight': '1.41e+02',\n 'layers.0.norm1.bias': '9.46e-01',\n 'layers.0.norm1.weight': '1.90e+00',\n ...}\n```\n\n**Minimal reproducible example:**\n\nThis example: \n\n1. Uses a simple transformer model split into 4 stages\n2. Each stage has 3 transformer layers\n3. Uses standard ScheduleGPipe with 4 microbatches\n4. Demonstrates the issue clearly with gradient norm printing: **why are first step grads None across all ranks?**\n\n```python\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\nfrom torch.distributed.pipelining import PipelineStage, ScheduleGPipe\nimport os\nimport pprint\n\n\nclass TransformerStage(nn.Module):\n def __init__(self, hidden_size=1536, num_layers=3, vocab_size=32000, stage_index=0, num_stages=4):\n super().__init__()\n self.stage_index = stage_index\n self.num_stages = num_stages\n \n if stage_index == 0:\n self.embed_tokens = nn.Embedding(vocab_size, hidden_size)\n \n self.layers = nn.ModuleList([\n nn.TransformerEncoderLayer(\n hidden_size, \n nhead=12,\n dim_feedforward=5440,\n batch_first=True\n )\n for _ in range(num_layers)\n ])\n \n if stage_index == num_stages - 1:\n self.lm_head = nn.Linear(hidden_size, vocab_size, bias=False)\n \n def forward(self, x):\n if hasattr(self, 'embed_tokens'):\n x = self.embed_tokens(x)\n \n for layer in self.layers:\n x = layer(x)\n \n if hasattr(self, 'lm_head'):\n x = self.lm_head(x)\n \n return x\n\n\ndef create_pipeline_loss_fn():\n def loss_fn(logits, labels):\n logits = logits.float()\n \n labels = nn.functional.pad(labels, (0, 1), value=-100)\n shift_labels = labels[..., 1:].contiguous()\n \n vocab_size = logits.size(-1)\n logits = logits.view(-1, vocab_size)\n shift_labels = shift_labels.view(-1)\n \n return nn.functional.cross_entropy(logits, shift_labels, ignore_index=-100)\n \n return loss_fn\n\n\ndef main():\n torch.manual_seed(0)\n torch.cuda.manual_seed_all(0)\n\n global_rank = int(os.environ['RANK'])\n world_size = int(os.environ['WORLD_SIZE'])\n local_rank = int(os.environ['LOCAL_RANK'])\n \n dist.init_process_group(\n backend=\"nccl\",\n rank=global_rank,\n world_size=world_size,\n device_id=torch.device(f\"cuda:{local_rank}\"),\n )\n \n pp_degree = 4\n assert world_size == pp_degree\n \n device = torch.device(f\"cuda:{local_rank}\")\n torch.cuda.set_device(device)\n \n config = {\n 'batch_size': 4,\n 'micro_batch_size': 1,\n 'sequence_length': 512,\n 'hidden_size': 1536,\n 'vocab_size': 32000,\n 'num_layers_per_stage': 3,\n }\n \n pp_rank = global_rank\n \n stage_model = TransformerStage(\n hidden_size=config['hidden_size'],\n num_layers=config['num_layers_per_stage'],\n vocab_size=config['vocab_size'],\n stage_index=pp_rank,\n num_stages=pp_degree\n ).to(device)\n \n if global_rank == 0:\n print(f\"Pipeline setup: {pp_degree} stages, {config['num_layers_per_stage']} layers per stage\")\n \n pipeline_stage = PipelineStage(\n stage_model,\n stage_index=pp_rank,\n num_stages=pp_degree,\n device=device,\n )\n \n n_microbatches = config['batch_size'] // 
config['micro_batch_size']\n \n pipeline_schedule = ScheduleGPipe(\n stage=pipeline_stage,\n n_microbatches=n_microbatches,\n loss_fn=create_pipeline_loss_fn(),\n scale_gr", "url": "https://github.com/pytorch/pytorch/issues/163688", "state": "open", "labels": [ "oncall: distributed", "has workaround", "module: amp (automated mixed precision)", "module: pipelining" ], "created_at": "2025-09-23T21:03:37Z", "updated_at": "2025-09-26T14:36:14Z", "comments": 2, "user": "tplr-y" }, { "repo": "pytorch/pytorch", "number": 163684, "title": "PyTorch 2.8 + CUDA 12.8 fails to initialize on RTX 5090 (WinError 1114)", "body": "### \ud83d\udc1b Describe the bug\n\nSummary\nAttempting to run a source-built PyTorch 2.8.0 against CUDA 12.8 with explicit sm_120 flags on RTX 5090 results in a DLL initialization failure:\n\nCode\nOSError: [WinError 1114] A dynamic link library (DLL) initialization routine failed.\nError loading \"torch_cpu.dll\" or one of its dependencies.\nSystem Info\nGPU: RTX 5090\n\nCUDA: 12.8 (confirmed installed and functional)\n\nPyTorch: 2.8.0 (source build)\n\nPython: 3.10.11\n\nOS: Windows 11 x64\n\nBuild flags:\n\nTORCH_CUDA_ARCH_LIST=8.6;9.0;12.0\n\nVerified sm_120 kernels are present in ptx and fatbin sections\n\nWhat\u2019s been tried\n\u2705 Verified all DLLs in torch/lib using Dependencies.exe\n\n\u2705 Rebuilt torch_cpu.dll and shm.dll from source\n\n\u2705 Manually validated libomp140.x86_64.dll and other runtime dependencies\n\n\u2705 Renamed crashing DLLs to isolate failure\n\n\u2705 Confirmed failure occurs inside DllMain or static constructor\n\n\u2705 Attempted fallback to nightly builds\u2014same result\n\nObservations\ntorch_cpu.dll loads cleanly in Dependencies.exe but crashes during runtime\n\ntorch_cuda.dll depends on torch_cpu.dll, so exclusion breaks CUDA backend\n\nNo missing dependencies reported\u2014failure is internal to DLL initialization\n\nNo exports visible in torch_cpu.dll, suggesting static init or device registration failure\n\nRequest\nLooking for:\nConfirmation of RTX 5090 support in PyTorch 2.8+\nKnown workarounds or patches for sm_120 initialization\nGuidance on isolating DllMain crash or bypassing CPU backend for CUDA-only workflows\n\n### Versions\n\n(venv) PS D:\\Projects\\python\\pytorch-src> python tools\\collect_env.py\nCollecting environment information...\nPyTorch version: N/A\nIs debug build: N/A\nCUDA used to build PyTorch: N/A\nROCM used to build PyTorch: N/A\n\nOS: Microsoft Windows 11 Pro (10.0.26100 64-bit)\nGCC version: Could not collect\nClang version: Could not collect\nCMake version: version 4.1.0\nLibc version: N/A\n\nPython version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)\nPython platform: Windows-10-10.0.26100-SP0\nIs CUDA available: N/A\nCUDA runtime version: 12.8.61\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 5090\nNvidia driver version: 581.29\ncuDNN version: Could not collect\nIs XPU available: N/A\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: N/A\n\nCPU:\nName: AMD Ryzen 9 9950X 16-Core Processor\nManufacturer: AuthenticAMD\nFamily: 107\nArchitecture: 9\nProcessorType: 3\nDeviceID: CPU0\nCurrentClockSpeed: 4300\nMaxClockSpeed: 4300\nL2CacheSize: 16384\nL2CacheSpeed: None\nRevision: 17408\n\nVersions of relevant libraries:\n[pip3] numpy==2.2.6\n[pip3] optree==0.17.0\n[pip3] torch==2.10.0a0+gitunknown\n[conda] Could not collect\n(venv) PS D:\\Projects\\python\\pytorch-src>", "url": 
"https://github.com/pytorch/pytorch/issues/163684", "state": "closed", "labels": [], "created_at": "2025-09-23T20:31:52Z", "updated_at": "2025-09-23T22:16:59Z", "comments": 2, "user": "tsondo" }, { "repo": "pytorch/pytorch", "number": 163664, "title": "[BE] Add Linux aarch64 CUDA install and test to validation framework", "body": "### \ud83d\udc1b Describe the bug\n\nCurrently https://github.com/pytorch/test-infra/blob/main/.github/workflows/validate-aarch64-linux-binaries.yml only validates Linu aarch64 CPU builds. \nThese workflows are launched via validate-binaries. Here is an example of run: https://github.com/pytorch/test-infra/actions/runs/17628169416\nIn the past aarch64 GPU builds where not validated since we have not had any hardware for aarch64 GPU and these builds where prototype. At the moment we don't have any aarch64 GPU hardware however would be required to validate now.\n\nWe need to validate also aarch64 GPU builds so that at least install works and CPU mode works for these builds.\n\nInstallation is same as Linux x86 builds:\n```\npip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu130\npip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128\npip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu126\n```\n\nBefore running smoke test set: ``MATRIX_GPU_ARCH_TYPE=cpu``\nHere is smoke test for reference\nhttps://github.com/pytorch/pytorch/blob/main/.ci/pytorch/smoke_test/smoke_test.py\n\n### Versions\n\n2.9.0\n\ncc @seemethere @malfet @ptrblck @msaroufim @eqy @jerryzh168", "url": "https://github.com/pytorch/pytorch/issues/163664", "state": "closed", "labels": [ "module: binaries", "module: cuda", "triaged", "better-engineering", "topic: binaries" ], "created_at": "2025-09-23T17:00:27Z", "updated_at": "2025-10-01T14:19:45Z", "comments": 0, "user": "atalman" }, { "repo": "pytorch/pytorch", "number": 163659, "title": "Allow double in native_functions.yaml as a schema type", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nToday, our schemas say \"float\" but that is a lie!! Internally we pass around doubles. I'm okay with this though.\n\nMy ask: can we allow schemas to say \"double\", so for user custom ops they can put \"double\" in the schema and double in their custom kernels and be less confused?\n\nToday, custom ops writers have `double` in their kernels but put `float` in the schema cuz they have to.\n\nTriggered from https://github.com/pytorch/pytorch/pull/163505/files#r2372832280\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @malfet @zou3519 @anjali411 @chauhang @penguinwu @bdhirsh @ezyang ", "url": "https://github.com/pytorch/pytorch/issues/163659", "state": "open", "labels": [ "module: cpp-extensions", "triaged", "module: dispatch", "module: library", "oncall: pt2", "module: pt2-dispatcher" ], "created_at": "2025-09-23T16:27:39Z", "updated_at": "2025-09-24T18:45:19Z", "comments": 2, "user": "janeyx99" }, { "repo": "pytorch/pytorch", "number": 163624, "title": "[aoti] [xpu] [null-pointer-deference] potential npt issue in `sycl_runtime_wrappers.h`", "body": "### \ud83d\udc1b Describe the bug\n\nCode below in `sycl_runtime_wrappers.h` uses malloc to allocate the memory. \nhttps://github.com/pytorch/pytorch/blob/5d749ceb92c2c28bcfbdf918b4ab99b1a91fcb50/torch/csrc/inductor/aoti_runtime/sycl_runtime_wrappers.h#L45-L58\nHowever, there is a potential risk that the memory allocation fails. 
Then, maybe `strLog` is a `nullptr`? A possible fix is to add a `NPD check` here.\n\nTo be honest, I don't know how to draft a test to trigger this case, so I opened an issue to discuss this instead of sending a PR directly.\n\nFeel free to correct me if i am wrong.\n\n### Versions\n\nnone\n\ncc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1", "url": "https://github.com/pytorch/pytorch/issues/163624", "state": "open", "labels": [ "triaged", "oncall: pt2", "oncall: export", "module: aotinductor" ], "created_at": "2025-09-23T08:24:03Z", "updated_at": "2025-09-23T16:02:18Z", "comments": 4, "user": "shaoyuyoung" }, { "repo": "pytorch/pytorch", "number": 163576, "title": "GPU Performance in Modern Computing", "body": "### Release highlight for proposed Feature\n\nCould you please review the PyTorch library and determine if performance evaluation tests would be helpful? https://github.com/pytorch/pytorch/pull/162107\n\nGPU Performance in Modern Computing\n\nIn the realm of artificial intelligence and supercomputing, GPUs play a pivotal role as accelerators driving innovation in hyperscaled data centers. But how does the computational speed of GPUs compare to CPUs? A performance test was conducted comparing the efficiency of CPU versus GPU on an Apple Mac M4, revealing intriguing insights.\n\nKey Insight:\nA significant performance shift occurs around the ~1500x1500 matrix size. An adaptive approach to the device selection could successfully build the model architecture. A hybrid approach of CPU for small operations and GPU for large operations could optimize the performance advantages of the devices.\n\nCPU Superiority (Smaller Matrices):\nFor matrix sizes up to 1000x1000, CPUs outperform GPUs by 2-4 times. This is attributed to the overhead of GPU initialization surpassing its computational benefits. For instance, in a scenario with 500x500 matrices, CPUs performed 3.5 times faster than GPUs.\n\nGPU Dominance (Medium and Larger Matrices):\nAs matrix sizes exceed 2000x2000, GPUs outperform CPUs by 2-2.3 times. The parallel computational advantages of GPUs prove superior, overcoming the initial costs. As an example for 4000x4000 matrices, GPUs were 2.2 times faster than CPUs.\n\nImplications for Business:\n\nCPU Applications: Ideal for small-scale tasks like rapid prototyping, edge devices, and small batch inference because of the cost efficiency and immediate execution without warmup. The smaller data sets cache efficiently without memory transfers.\nGPU Applications: Suited for operations that optimize the performance advantages of the devices such as training, batch processing, and production inference. 
The throughput advantage is 2-3 times speed.\nAdd comprehensive benchmarking tools for matrix operations and neural networks\nInclude device detection for CUDA, MPS, and CPU\nProvide proper GPU synchronization and timing\nAdd complete unit test suite with device-specific tests\nInclude automated test runner script\nAdd detailed documentation and contribution guide\nFeatures:\n\nCross-platform device support (CUDA/MPS/CPU)\nModular, extensible design with type hints\nComprehensive error handling and reporting\nEducational examples for proper GPU benchmarking\nNo breaking changes to existing PyTorch functionality\n\n### Point(s) of contact\n\n_No response_\n\n### Release Mode (pytorch/pytorch features only)\n\nIn-tree\n\n### Out-Of-Tree Repo\n\n_No response_\n\n### Description and value to the user\n\n_No response_\n\n### Link to design doc, GitHub issues, past submissions, etc\n\n_No response_\n\n### What feedback adopters have provided\n\n_No response_\n\n### Plan for documentations / tutorials\n\nTutorial exists\n\n### Additional context for tutorials\n\n_No response_\n\n### Marketing/Blog Coverage\n\nYes\n\n### Are you requesting other marketing assistance with this feature?\n\n_No response_\n\n### Release Version\n\n_No response_\n\n### OS / Platform / Compute Coverage\n\n_No response_\n\n### Testing Support (CI, test cases, etc..)\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/163576", "state": "closed", "labels": [ "triaged" ], "created_at": "2025-09-22T22:21:49Z", "updated_at": "2025-09-29T17:16:02Z", "comments": 7, "user": "alpha-investor" }, { "repo": "pytorch/torchtitan", "number": 1735, "title": "For mixed-precision training, does FSDP2 also need `amp.grad_scaler.GradScaler` ? or is FSDP2 already handled?", "body": "In mixed-precision training of DDP, `amp.grad_scaler.GradScaler` is needed to dynamically scale the loss. I see that torchtitan do not use it to scale loss in FSDP2, so my question is does FSDP2 also need `amp.grad_scaler.GradScaler` ? or is FSDP2 already handled?", "url": "https://github.com/pytorch/torchtitan/issues/1735", "state": "closed", "labels": [ "question" ], "created_at": "2025-09-22T15:05:37Z", "updated_at": "2025-09-24T20:12:20Z", "user": "EquationWalker" }, { "repo": "pytorch/pytorch", "number": 163519, "title": "For mixed-precision training, does FSDP2 also need `amp.grad_scaler.GradScaler` ? or is FSDP2 already handled?", "body": "In mixed-precision training of DDP, `amp.grad_scaler.GradScaler` is needed to dynamically scale the loss, my question is does FSDP2 also need `amp.grad_scaler.GradScaler` ? or is FSDP2 already handled?\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci", "url": "https://github.com/pytorch/pytorch/issues/163519", "state": "closed", "labels": [ "oncall: distributed" ], "created_at": "2025-09-22T15:01:43Z", "updated_at": "2025-09-29T08:19:23Z", "comments": 11, "user": "EquationWalker" }, { "repo": "pytorch/ao", "number": 3040, "title": "IntWeightonly quantized model slower than default model ( x86 machine, A100)", "body": "My Int4WeightOnly quantized model is slower and more inaccurate in OCR as compared to the default model. 
Why is this happening?\nHere is some info to help you guys\n\nModel - Qwen2-VL-7B-Instruct fine-tuned and saved in 16bit using unsloth\nGPU - \n\n\"Image\"\n\n\ncode - \n```\nfrom unsloth import FastVisionModel\nimport torch\nfrom PIL import Image\nimport os\nfrom jiwer import wer, cer\nimport glob\nfrom tqdm import tqdm\nimport pandas as pd\n\n\n# Model paths\nadapter_path = \"qwen2-ocr\"\nckpt_no = adapter_path.split(\"/\")[-1]\n\n# Load both models\nfrom transformers import AutoProcessor, AutoModelForVision2Seq\nfrom transformers import AutoConfig,TorchAoConfig\nfrom torchao.quantization import Int4WeightOnlyConfig\n\nconfig = AutoConfig.from_pretrained(adapter_path)\nquant_config = Int4WeightOnlyConfig(group_size=128)\nquantization_config = TorchAoConfig(quant_type=quant_config)\n\n\nquantized_model = AutoModelForVision2Seq.from_pretrained(\n adapter_path,\n config=config,\n torch_dtype=\"auto\",\n device_map=\"auto\",\n quantization_config=quantization_config \n)\n\nprocessor = AutoProcessor.from_pretrained(adapter_path)\n\n\n# Instructions\ninstruction_ft = \"Perform OCR\"\n# Directory paths\nimage_dir = \"images\"\ngolden_text_dir = \"texts\"\noutput_dir = f\"qwen-torchao-int4weight\"\n\n# Create output directory if it doesn't exist\nos.makedirs(output_dir, exist_ok=True)\n\n# Get all image files\nimage_files = glob.glob(os.path.join(image_dir, \"*.jpg\"))\n\n# Initialize results storage\nresults = []\n\n# Process images in batches\nBATCH_SIZE = 32 # Adjust based on your GPU memory\ntotal_images = len(image_files)\npbar = tqdm(total=total_images, desc=\"Processing images\")\n\nfor i in range(0, len(image_files), BATCH_SIZE):\n batch_size = min(BATCH_SIZE, len(image_files) - i) # Handle the last batch correctly\n batch_image_paths = image_files[i:i+BATCH_SIZE]\n batch_images = []\n batch_image_names = []\n batch_actuals = []\n \n # Prepare batch data\n for image_path in batch_image_paths:\n image_name = os.path.basename(image_path)\n batch_image_names.append(image_name)\n \n try:\n # Load image\n image = Image.open(image_path)\n batch_images.append(image)\n \n # Load ground truth text\n text_file = os.path.join(golden_text_dir, image_name.replace(\".jpg\", \".txt\"))\n try:\n with open(text_file, 'r', encoding='utf-8') as txt_file:\n actual = txt_file.readlines()[0].strip()\n batch_actuals.append(actual)\n except FileNotFoundError:\n print(f\"Warning: Ground truth file not found for {image_name}, skipping evaluation\")\n batch_actuals.append(None)\n except Exception as e:\n print(f\"Error loading {image_name}: {str(e)}\")\n batch_images.append(None)\n batch_actuals.append(None)\n \n # Filter out None values\n valid_indices = [idx for idx, img in enumerate(batch_images) if img is not None]\n if not valid_indices:\n continue\n \n valid_images = [batch_images[idx] for idx in valid_indices]\n valid_image_names = [batch_image_names[idx] for idx in valid_indices]\n valid_actuals = [batch_actuals[idx] for idx in valid_indices]\n \n try:\n # Prepare batch messages\n batch_messages = []\n for image in valid_images:\n messages = [\n {\"role\": \"user\", \"content\": [\n {\"type\": \"image\"},\n {\"type\": \"text\", \"text\": instruction_ft}\n ]}\n ]\n batch_messages.append(messages)\n \n # Process batch using Hugging Face's batched inference approach\n texts = [\n processor.apply_chat_template(msg, add_generation_prompt=True)\n for msg in batch_messages\n ]\n \n # Create batch inputs\n batch_inputs = processor(\n valid_images,\n texts,\n add_special_tokens=False,\n padding=True,\n 
return_tensors=\"pt\",\n ).to(\"cuda\")\n \n # Generate outputs in batch\n batch_outputs = quantized_model.generate(\n **batch_inputs,\n use_cache=True, \n temperature=0.5, \n min_p=0.1,\n max_new_tokens=1024\n )\n \n # Process batch outputs\n input_lengths = [batch_inputs[\"input_ids\"][i].shape[0] for i in range(len(valid_images))]\n \n for idx, (image_name, actual, input_length) in enumerate(zip(valid_image_names, valid_actuals, input_lengths)):\n generated_response = processor.tokenizer.decode(batch_outputs[idx][input_length:], skip_special_tokens=True)\n \n # Calculate metrics if ground truth is available\n if actual is not None:\n ft_word_error = wer(generated_response", "url": "https://github.com/pytorch/ao/issues/3040", "state": "open", "labels": [ "quantize_", "triaged" ], "created_at": "2025-09-22T06:53:54Z", "updated_at": "2025-10-01T11:10:42Z", "comments": 5, "user": "Rakshith12-pixel" }, { "repo": "pytorch/torchtitan", "number": 1733, "title": "Gradient accumulation broken in PP", "body": "### Bug description\n\nUsing gradient accumulation is incompatible with PipleineSchedule(..., scale_grads=True) option, which defaults to True.\n\nWhen this option is set, at each step, all gradients are scaled by the micro-batch size. This works fine for a single gradient accumulation step, but when using multiple steps, this will rescale the total gradient by this factor, not just at the end of gradient accumulation.\n\nThe result is that the accumulated gradient is an exponential moving average, rather than a sum. Overall, the resulting gradients are much smaller than they should be and using gradient accumulation with PP is not equivalent to using it without PP -- the loss curves diverge substantially, as well as the gradient-norms are way off.\n\nA secondary consequence is that at every step, it divides the gradients by n_microbatches, which is computationally expensive when applied to a large model.\n\nI identified the same issue in my own pipeline trainer implementation a week or two ago. 
When checking how Torch Titan addressed the issue, I discovered that Titan probably has the same bug.\n\nI had the time to confirm the presence of the issue today and have submitted https://github.com/pytorch/torchtitan/pull/1732 to resolve the issue.\n\n\n### Versions\n\ntorch 2.10.0.dev20250915+cu126\n\nFor anyone who may be interested, I have added support for Torch Titan to my configuration framework, which is what I used for reproducing the issue.\n\nhttps://github.com/jdinalt/forgather/tree/main/examples/torchtitan", "url": "https://github.com/pytorch/torchtitan/issues/1733", "state": "closed", "labels": [ "high priority", "triage review" ], "created_at": "2025-09-22T05:55:07Z", "updated_at": "2025-09-24T20:13:06Z", "comments": 8, "user": "jdinalt" }, { "repo": "pytorch/pytorch", "number": 163435, "title": "[Fuzzer][Eager/Compile Divergence] a var subtract by itself should equal 0?", "body": "### \ud83d\udc1b Describe the bug\n\n```\nimport torch\nimport sys\ntorch._dynamo.config.capture_scalar_outputs = True\ntorch._dynamo.config.capture_dynamic_output_shape_ops = True\ntorch._inductor.config.emulate_precision_casts = True\n\ndef foo(arg0, arg1, arg2, arg3):\n t0 = arg0 # size=(), stride=(), dtype=float16, device=cuda\n t1 = torch.tanh(t0) # size=(), stride=(), dtype=float16, device=cuda\n t2 = arg1 # size=(), stride=(), dtype=float16, device=cuda\n t3 = arg2 # size=(), stride=(), dtype=float16, device=cuda\n t4 = arg3 # size=(), stride=(), dtype=float16, device=cuda\n t5 = t2 + t0 + t3 + t0 + t4 # size=(), stride=(), dtype=float16, device=cuda\n t6 = t1 * t1 * t5 # size=(), stride=(), dtype=float16, device=cuda\n t7 = (t6) - t6 # size=(), stride=(), dtype=float16, device=cuda\n output = t7 # output tensor\n return output\n\narg0 = torch.rand([], dtype=torch.float16, device='cuda', requires_grad=True) # size=(), stride=(), dtype=float16, device=cuda\narg1 = torch.rand([], dtype=torch.float16, device='cuda', requires_grad=True) # size=(), stride=(), dtype=float16, device=cuda\narg2 = torch.rand([], dtype=torch.float16, device='cuda', requires_grad=True) # size=(), stride=(), dtype=float16, device=cuda\narg3 = torch.rand([], dtype=torch.float16, device='cuda', requires_grad=True) # size=(), stride=(), dtype=float16, device=cuda\nif __name__ == '__main__':\n out_eager = foo(arg0, arg1, arg2, arg3)\n out_eager.sum().backward()\n print('Eager Success! \u2705')\n compiled_foo = torch.compile(foo, fullgraph=True, dynamic=True)\n out_compiled = compiled_foo(arg0, arg1, arg2, arg3)\n out_compiled.sum().backward()\n print('Compile Success! \u2705')\n # Compare outputs (forward)\n out_eager_sum = out_eager.sum()\n out_compiled_sum = out_compiled.sum()\n diff = (out_eager_sum - out_compiled_sum).abs().item()\n rel_diff = diff / (out_eager_sum.abs().item() + 1e-12) * 100\n print(f'Relative diff (sum): {rel_diff:.6f}%')\n if rel_diff > 5:\n print(f'\u274c Forward output sums differ significantly (relative)!')\n print('out_eager_sum:', out_eager_sum.item())\n print('out_compiled_sum:', out_compiled_sum.item())\n print('Absolute diff:', diff)\n print('Relative diff (%):', rel_diff)\n sys.exit(1)\n```\n\n```\n(/home/bobren/local/a/pytorch-env) [22:16] devgpu035:/home/bobren/local/a/pytorch/torchfuzz python /tmp/torchfuzz/fuzz_d9fffb614acbd1dd.py \nEager Success! \u2705\nCompile Success! 
\u2705\nRelative diff (sum): 9441375732.421875%\n\u274c Forward output sums differ significantly (relative)!\nout_eager_sum: 0.0\nout_compiled_sum: -9.441375732421875e-05\nAbsolute diff: 9.441375732421875e-05\nRelative diff (%): 9441375732.421875\n```\n\n### Versions\n\nN/A\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @coconutruben", "url": "https://github.com/pytorch/pytorch/issues/163435", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: inductor", "topic: fuzzer" ], "created_at": "2025-09-21T05:19:32Z", "updated_at": "2025-09-24T17:43:02Z", "comments": 3, "user": "bobrenjc93" }, { "repo": "pytorch/tutorials", "number": 3581, "title": "Feedback about Parametrizations Tutorial", "body": "There is the following issue on this page: https://docs.pytorch.org/tutorials/intermediate/parametrizations.html\n\nParametrization is not a topic known to all. You could add some context to the tutorial about what parametrization is, why the need for it arose, and what it solves, and then go into the examples. Adding references for further reading could be very helpful, especially for beginners. ", "url": "https://github.com/pytorch/tutorials/issues/3581", "state": "open", "labels": [], "created_at": "2025-09-21T00:21:09Z", "updated_at": "2025-09-21T00:21:09Z", "comments": 0, "user": "pentanol2" }, { "repo": "pytorch/pytorch", "number": 163359, "title": "RFC: Support CUDA Stream Protocol", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHello! I am the CUDA Python tech lead and I'm filing this RFC to improve the interoperability between Python GPU libraries.\n\n`cuda.core` is an official CUDA Python project: https://nvidia.github.io/cuda-python/cuda-core/latest/index.html. It offers a pythonic, self-contained, lightweight, and official interface over the CUDA programming model. For new Python projects, we encourage them to just use `cuda.core..Stream`. \n\nFor existing Python projects such as PyTorch, transitioning to `cuda.core` may or may not be immediately feasible. As a result, we encourage projects that already expose a CUDA stream to Python to follow the CUDA Stream protocol:\nhttps://nvidia.github.io/cuda-python/cuda-core/latest/interoperability.html#cuda-stream-protocol\nand add a `__cuda_stream__` method to the stream class, so as to improve interoperability without introducing extra `ExternalStream`-like types. 
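\n\nFor illustration, here is a minimal sketch of what the protocol asks for, based on the linked protocol description (the wrapper class name is made up; `torch.cuda.Stream.cuda_stream` is used as the raw handle):\n\n```python\nimport torch\n\nclass StreamWrapper:\n    # Expose a torch.cuda.Stream through the CUDA stream protocol.\n    def __init__(self, s: torch.cuda.Stream):\n        self._s = s\n\n    def __cuda_stream__(self):\n        # 2-tuple of (protocol version, raw cudaStream_t address as an int)\n        return (0, self._s.cuda_stream)\n```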
\n\nHere is a PyTorch example of how it'd be used interoperably with `cuda.core`, courtesy of @msaroufim \ud83d\ude42:\nhttps://github.com/NVIDIA/cuda-python/blob/c4f4ffe83d246eafb6adf1574e5a7c86bbcef944/cuda_core/examples/pytorch_example.py\n\ncc @ptrblck @msaroufim @eqy @jerryzh168 @kkraus14 @pbielak @aterrel @rparolin for vis\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/163359", "state": "closed", "labels": [ "module: cuda", "triaged", "topic: new features" ], "created_at": "2025-09-19T19:23:41Z", "updated_at": "2025-09-25T19:45:40Z", "comments": 2, "user": "leofang" }, { "repo": "pytorch/pytorch", "number": 163342, "title": "[CD] - Manywheel CUDA builds failing since Sept 18", "body": "### \ud83d\udc1b Describe the bug\n\nThis hasn't been seen in a nightly yet, but i just rebased onto `viable/strict` and i'm getting this error in the `ciflow/binaries_wheel` flow and it's happening in other people's jobs too.\n\nBroken Workflow - https://github.com/pytorch/pytorch/actions/workflows/generated-linux-binary-manywheel-nightly.yml\n\nhttps://github.com/pytorch/pytorch/actions/runs/17856217452/job/50775774205\nand\nhttps://github.com/pytorch/pytorch/actions/runs/17855187279\n\n```\nIn file included from /pytorch/c10/cuda/CUDAException.h:5,\n from /pytorch/third_party/fbgemm/fbgemm_gpu/experimental/gen_ai/src/quantize/common/utils.cpp:12:\n/pytorch/c10/cuda/CUDAMiscFunctions.h:6:10: fatal error: cuda_runtime.h: No such file or directory\n 6 | #include \n```\n\nIt looks like fbgemm was recently updated https://github.com/pytorch/pytorch/pull/162590 @pragupta can we check this error?\n\n### Versions\n\n2.10.0\n\ncc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @atalman @ptrblck @eqy @jerryzh168", "url": "https://github.com/pytorch/pytorch/issues/163342", "state": "closed", "labels": [ "high priority", "triage review", "module: binaries", "module: cuda", "triaged", "module: regression" ], "created_at": "2025-09-19T14:29:24Z", "updated_at": "2025-09-20T12:16:28Z", "comments": 5, "user": "robert-hardwick" }, { "repo": "pytorch/pytorch", "number": 163331, "title": "Support Query Bug !!", "body": "Hey Guys, \nI have been working on an ML project, so i have a GPU server (An ancient one) which is backed by CUDA 3.0 .So what is the minimum version supported for PyTorch ?\n\nThank You :) ", "url": "https://github.com/pytorch/pytorch/issues/163331", "state": "closed", "labels": [], "created_at": "2025-09-19T10:09:06Z", "updated_at": "2025-09-20T14:42:36Z", "comments": 2, "user": "Harishankar14" }, { "repo": "pytorch/pytorch", "number": 163283, "title": "RFC move to Pyrefly for Type Checking", "body": "Currently, mypy is used to typecheck PyTorch, with lint runner and dmypy. We appreciate the community\u2019s work maintaining mypy and type coverage in PyTorch and want to build on that foundation. [Pyrefly](https://pyrefly.org/) is a new standards-compliant Python type checker. The Pyrefly team has been hard at work on building a performant and backwards compatible type checker, that we think can improve the current type checking setup and the development experience for PyTorch users. \n\nFor example, the current setup can make it tricky to know which files are being typechecked and in which mode ([strict](https://github.com/pytorch/pytorch/blob/main/mypy-strict.ini) vs. [default](https://github.com/pytorch/pytorch/blob/main/mypy.ini)). 
Use of `type: ignore` and `# mypy: ignore-errors` to disable checks on certain files, adds to the challenge. We think we can make typechecking simpler, and improve the overall experience.\n\nWe are proposing that we add Pyrefly as a typechecker to Pytorch. \n\nThe benefits to PyTorch will be:\n\n* Whole repo checks in seconds:\n\n - Eliminates inconsistencies created by dmypy between local and CI revisions\n - Fast CI signal\n* A richer IDE experience with types that match hover and diagnostics. We support most major IDEs: https://pyrefly.org/en/docs/IDE/#other-editors \n* Modern type checking features:\n - Container inference, conformance to the typing spec for feature support\n - We\u2019ve found over 200 bugs in PyTorch just by enabling Pyrefly for testing!\n* Clear configuration and ownership:\n - The Pyrefly team will help support typing in PyTorch and respond quickly to issues on our github\n\nHere\u2019s how we would propose to get Pyrefly up and running in PyTorch:\n\nPhase 1: \n* Check in Pyrefly configs along with the suppressions needed for Pyrefly to check cleanly\n* Add a non-blocking CI linter runner job to observe changes and test the integration\n* Gather community feedback on the checker and IDE extension via:\n - Download the VSCode Extension\n - Run pyrefly through lint runner\n\nPhase 2:\n* With community buy in, we\u2019ll swap the checker from mypy to pyrefly over the weekend to be least disruptive\n* After Pyrefly has been enabled smoothly for a few days, we\u2019ll cleanup the unused `type: ignores` that remain in the code. We plan to cleanup approximately 600 unused mypy ignores\n\nPhase 3:\n* Work with the community to help add types where they are useful in PyTorch and answer questions around typing features and usage\n* Set up additional jobs like `pyrefly infer` to ease the process of adding types\n* Work with the community to better export types to consumers of PyTorch\n\n\nWe\u2019d love for you to try Pyrefly, share your experiences, and help us make it (and PyTorch) even better \ud83d\ude42\n\ncc @lolpack @ndmitchell @kinto0 @samwgoldman", "url": "https://github.com/pytorch/pytorch/issues/163283", "state": "closed", "labels": [ "module: typing", "triaged", "needs research" ], "created_at": "2025-09-18T19:52:40Z", "updated_at": "2025-11-24T19:20:08Z", "comments": 3, "user": "maggiemoss" }, { "repo": "pytorch/ao", "number": 3020, "title": "How to use FP8 training with MoE models?", "body": "\n\nI\u2019m trying to train a Mixture of Experts (MoE) model with FP8 precision. However, I couldn\u2019t find any documentation or examples that describe how to enable FP8 training for MoE in torchao.\n\nIs FP8 training for MoE models currently supported?\n\nIf yes, could you point me to a tutorial or usage guide?\n\nIf not, is there a roadmap or prototype feature under development? 
If this feature is still in progress, could you share the current status and future plan?\n\nThanks!", "url": "https://github.com/pytorch/ao/issues/3020", "state": "open", "labels": [ "moe" ], "created_at": "2025-09-17T12:18:14Z", "updated_at": "2025-10-02T18:20:44Z", "user": "BIGBALLON" }, { "repo": "pytorch/torchtitan", "number": 1716, "title": "float8 Grouped MM kernels", "body": "- **Is there any plan to support float8 Grouped MM for llama4 / qwen3 MoE model training?**\n- **Is this the correct way to train a MoE model with FP8?**\n\nCurrently, the available Grouped GEMM kernels only support float16, and they do not work with float8.\n\n``` python\n@expert_parallel\ndef _run_experts_grouped_mm(\n w1: torch.Tensor,\n w2: torch.Tensor,\n w3: torch.Tensor,\n x: torch.Tensor,\n num_tokens_per_expert: torch.Tensor,\n) -> torch.Tensor:\n offsets = torch.cumsum(num_tokens_per_expert, dim=0, dtype=torch.int32)\n # grouped mm between a 2D tensor and a 3D tensor\n assert x.dim() == 2\n\n h = F.silu(\n torch._grouped_mm(x.bfloat16(), w1.bfloat16().transpose(-2, -1), offs=offsets)\n )\n h = h * torch._grouped_mm(\n x.bfloat16(), w3.bfloat16().transpose(-2, -1), offs=offsets\n )\n out = torch._grouped_mm(h, w2.bfloat16().transpose(-2, -1), offs=offsets).type_as(x)\n\n return out\n```\n\nFor training large MoE models such as LLaMA4 and Qwen3, float8 is increasingly important to reduce memory footprint and improve training efficiency. However, the lack of float8 Grouped GEMM support becomes a bottleneck when scaling up MoE training.\n\n\n**Feature request:**\n\nAdd support for float8 Grouped GEMM kernels.\nEnsure compatibility with MoE training workloads (e.g., LLaMA4, Qwen3).\nThis would enable more efficient large-scale MoE training under float8 precision.", "url": "https://github.com/pytorch/torchtitan/issues/1716", "state": "open", "labels": [ "question" ], "created_at": "2025-09-17T09:57:25Z", "updated_at": "2025-09-30T02:54:53Z", "user": "BIGBALLON" }, { "repo": "pytorch/pytorch", "number": 163153, "title": "FSDP2 implicit prefetch does not work", "body": "### \ud83d\udc1b Describe the bug\n\nI'm using official [example of FSDP2](https://github.com/pytorch/examples/blob/acc295dc7b90714f1bf47f06004fc19a7fe235c4/distributed/FSDP2/example.py) with some small modifcations:\n\n\n```python\n# distributed/FSDP2/example.py\nimport argparse\nimport os\n\nimport torch\nfrom checkpoint import Checkpointer\nfrom model import ModelArgs, Transformer\nfrom torch.distributed.fsdp import fully_shard, MixedPrecisionPolicy\nfrom utils import inspect_mixed_precision, inspect_model\nfrom torch.profiler import profile, record_function, ProfilerActivity\n\ndef verify_min_gpu_count(min_gpus: int = 2) -> bool:\n \"\"\" verification that we have at least 2 gpus to run dist examples \"\"\"\n has_gpu = torch.accelerator.is_available()\n gpu_count = torch.accelerator.device_count()\n return has_gpu and gpu_count >= min_gpus\n\ndef set_modules_to_forward_prefetch(model, num_to_forward_prefetch):\n for i, layer in enumerate(model.layers):\n if i >= len(model.layers) - num_to_forward_prefetch:\n break\n layers_to_prefetch = [\n model.layers[i + j] for j in range(1, num_to_forward_prefetch + 1)\n ]\n layer.set_modules_to_forward_prefetch(layers_to_prefetch)\n\n\ndef set_modules_to_backward_prefetch(model, num_to_backward_prefetch):\n for i, layer in enumerate(model.layers):\n if i < num_to_backward_prefetch:\n continue\n layers_to_prefetch = [\n model.layers[i - j] for j in range(1, num_to_backward_prefetch + 1)\n ]\n 
layer.set_modules_to_backward_prefetch(layers_to_prefetch)\n\n\ndef main(args):\n _min_gpu_count = 2\n if not verify_min_gpu_count(min_gpus=_min_gpu_count):\n print(f\"Unable to locate sufficient {_min_gpu_count} gpus to run this example. Exiting.\")\n exit()\n rank = int(os.environ[\"LOCAL_RANK\"])\n if torch.accelerator.is_available():\n device_type = torch.accelerator.current_accelerator()\n device = torch.device(f\"{device_type}:{rank}\")\n torch.accelerator.set_device_index(rank)\n print(f\"Running on rank {rank} on device {device}\")\n else:\n device = torch.device(\"cpu\")\n print(f\"Running on device {device}\")\n\n backend = torch.distributed.get_default_backend_for_device(device)\n torch.distributed.init_process_group(backend=backend, device_id=device)\n\n torch.manual_seed(0)\n vocab_size = 1024\n batch_size = 4\n seq_len = 1024\n model_args = ModelArgs(\n n_layers=10,\n n_heads=8,\n dim=4096,\n vocab_size=vocab_size,\n max_seq_len=seq_len,\n dropout_p=0,\n )\n with torch.device(\"meta\"):\n model = Transformer(model_args)\n fsdp_kwargs = {}\n if args.mixed_precision:\n fsdp_kwargs[\"mp_policy\"] = MixedPrecisionPolicy(\n param_dtype=torch.bfloat16,\n reduce_dtype=torch.float32,\n )\n for layer in model.layers:\n fully_shard(layer, **fsdp_kwargs)\n fully_shard(model, **fsdp_kwargs)\n\n inspect_model(model)\n\n if args.explicit_prefetching:\n set_modules_to_forward_prefetch(model, num_to_forward_prefetch=2)\n set_modules_to_backward_prefetch(model, num_to_backward_prefetch=2)\n\n checkpointer = Checkpointer(\"checkpoints\", dcp_api=args.dcp_api)\n if checkpointer.last_training_time is None:\n model.to_empty(device=device)\n model.reset_parameters()\n else:\n checkpointer.load_model(model)\n\n if args.mixed_precision:\n inspect_mixed_precision(model)\n\n optim = torch.optim.Adam(model.parameters(), lr=1e-2)\n if checkpointer.last_training_time is not None:\n checkpointer.load_optim(model, optim)\n\n with profile(\n activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],\n record_shapes=True,\n with_stack=True,\n ) as prof:\n for _ in range(10):\n if args.explicit_prefetching:\n model.unshard()\n x = torch.randint(0, vocab_size, (batch_size, seq_len), device=device)\n loss = model(x).sum()\n loss.backward()\n torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)\n optim.step()\n optim.zero_grad()\n prof.export_chrome_trace(f\"fsdp2_trace_r{torch.distributed.get_rank()}.json\")\n\n # checkpointer.save(model, optim)\n torch.distributed.destroy_process_group()\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser(description=\"PyTorch FSDP2 example\")\n parser.add_argument(\"--explicit-prefetching\", action=\"store_true\", default=False)\n parser.add_argument(\"--mixed-precision\", action=\"store_true\", default=True)\n parser.add_argument(\"--dcp-api\", action=\"store_true\", default=False)\n args = parser.parse_args()\n\n main(args)\n```\n\n## current behavior\n\nthe profiling result shows that the all-gather kernel is not overlapping with computation.\n\n\"Image\"\n\nFrom the t", "url": "https://github.com/pytorch/pytorch/issues/163153", "state": "closed", "labels": [ "oncall: distributed" ], "created_at": "2025-09-17T09:30:31Z", "updated_at": "2025-09-17T18:04:42Z", "comments": 1, "user": "zhc7" }, { "repo": "pytorch/tutorials", "number": 3569, "title": "Feedback about What is torch.nn really?", "body": "There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/nn_tutorial.html\n\nGithub URL for MNIST archive needs to change 
from:\n```\nURL = \"https://github.com/pytorch/tutorials/raw/main/_static/\"\n```\nto\n\n```\nURL = 'https://github.com/pytorch/tutorials/raw/refs/heads/main/_static/'\n```", "url": "https://github.com/pytorch/tutorials/issues/3569", "state": "open", "labels": [], "created_at": "2025-09-16T20:51:42Z", "updated_at": "2025-09-16T20:51:42Z", "user": "robertbcalhoun" }, { "repo": "pytorch/pytorch", "number": 163071, "title": "Lintrunner not flagging CI issues in PRs", "body": "### \ud83d\udc1b Describe the bug\n\nThe PR https://github.com/pytorch/pytorch/pull/162659 introduced some small changes in the `.github/workflows/pull.yml` workflow, changing the `linux-jammy-py3_10-clang18-asan-build` job.\n\nAfter merging, lintrunner started flagging the change as inconsistency in the workflows (https://github.com/pytorch/pytorch/actions/runs/17750680257/job/50444889451). But it did not flag in the PR itself.\n\nWe should discuss if the rule is valid and how to fix lintrunner so it will always be red in the PR if merging it could cause a lintrunner to be red in trunk.\n\n### Versions\n\nmain (trunk)", "url": "https://github.com/pytorch/pytorch/issues/163071", "state": "closed", "labels": [ "module: lint", "triaged" ], "created_at": "2025-09-16T12:56:57Z", "updated_at": "2025-09-22T15:00:35Z", "comments": 3, "user": "jeanschmidt" }, { "repo": "pytorch/pytorch", "number": 163066, "title": "PyTorch is including internal headers, leading to ODR violations", "body": "### \ud83d\udc1b Describe the bug\n\nIn [functorch/csrc/dim/dim_opcode.c](https://github.com/pytorch/pytorch/blob/e3783a9575b810f9a3f51334270668357463958e/functorch/csrc/dim/dim_opcode.c#L8-L10) and [torch/csrc/dynamo/cpython_defs.c](https://github.com/pytorch/pytorch/blob/e3783a9575b810f9a3f51334270668357463958e/torch/csrc/dynamo/cpython_defs.c#L21-L34), PyTorch is including private headers from CPython and specifically doing so in a way that includes symbols defined in those headers. This causes problems if you ever try to link both PyTorch and CPython together (which we do for static hermetic builds), because the symbols get defined more than once, leading to a violation of the One Definition Rule.\n\nIt is probably true that CPython should use `extern` for these symbols (though I'm told that this is not necessarily possible within CPython itself), but also it is definitely true that PyTorch should not be using the private interface of CPython.\n\nI am not super familiar with the reason that this needs to be done so I am not sure what the right solution is. Is this something that can be accomplished without using the internal headers? 
If not, what is missing from the CPython API that would make it possible to avoid this situation?\n\n### Versions\n\nThis is a more abstract issue about the code itself.\n\ncc @malfet @seemethere @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela", "url": "https://github.com/pytorch/pytorch/issues/163066", "state": "open", "labels": [ "module: build", "triaged", "oncall: pt2", "module: dynamo" ], "created_at": "2025-09-16T10:22:40Z", "updated_at": "2025-09-24T17:41:27Z", "comments": 6, "user": "pganssle-google" }, { "repo": "pytorch/pytorch", "number": 163061, "title": "GIL is not released when calling torch.compile kernels", "body": "### \ud83d\udc1b Describe the bug\n\nIn most cases, PyTorch releases GIL when calling CUDA APIs, but I found the GIL is held when calling torch.compile kernels, is this expected? Is it possible to release GIL when calling torch.compile kernels?\n\nTo reproduce, script `torch_compile.py`:\n```python\nimport torch\nimport triton\nimport triton.language as tl\n\ndef torch_add(x: torch.Tensor, y: torch.Tensor):\n return x + y\n\n@torch.compile\ndef torch_compile_add(x: torch.Tensor, y: torch.Tensor):\n return x + y\n\n@triton.jit\ndef add_kernel(x_ptr,\n y_ptr,\n output_ptr,\n n_elements,\n BLOCK_SIZE: tl.constexpr):\n pid = tl.program_id(axis=0)\n block_start = pid * BLOCK_SIZE\n offsets = block_start + tl.arange(0, BLOCK_SIZE)\n mask = offsets < n_elements\n x = tl.load(x_ptr + offsets, mask=mask)\n y = tl.load(y_ptr + offsets, mask=mask)\n output = x + y\n tl.store(output_ptr + offsets, output, mask=mask)\n\n\ndef triton_add(x: torch.Tensor, y: torch.Tensor):\n output = torch.empty_like(x)\n n_elements = output.numel()\n grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']), )\n add_kernel[grid](x, y, output, n_elements, BLOCK_SIZE=1024)\n return output\n\n\ndef main():\n x = torch.randn(4096, 4096, device='cuda')\n y = torch.randn(4096, 4096, device='cuda')\n for _ in range(10):\n torch_add(x, y)\n torch_compile_add(x, y)\n triton_add(x, y)\n \n\nif __name__ == \"__main__\":\n main()\n```\n\nRun it with \n```bash\nnsys profile -f true --wait primary -t cuda,nvtx,python-gil --cudabacktrace=all --python-backtrace=cuda --python-sampling=true -o torch_compile python torch_compile.py\n```\n\n\"Image\"\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.8.0a0+34c6371d24.nv25.08\nIs debug build: False\nCUDA used to build PyTorch: 13.0\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.2 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: 18.1.3 (1ubuntu1)\nCMake version: version 4.0.3\nLibc version: glibc-2.39\n\nPython version: 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-5.15.0-1086-nvidia-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: 13.0.48\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA H200\nGPU 1: NVIDIA H200\n\nNvidia driver version: 575.57.08\ncuDNN version: Probably one of the 
following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.12.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.12.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.12.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.12.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.12.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.12.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.12.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.12.0\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 224\nOn-line CPU(s) list: 0-223\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8480C\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 56\nSocket(s): 2\nStepping: 8\nCPU(s) scaling MHz: 33%\nCPU max MHz: 3800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4000.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_windo", "url": "https://github.com/pytorch/pytorch/issues/163061", "state": "closed", "labels": [ "module: performance", "triaged", "oncall: pt2", "module: inductor" ], "created_at": "2025-09-16T09:00:22Z", "updated_at": "2025-09-30T06:49:01Z", "comments": 7, "user": "syuoni" }, { "repo": "pytorch/xla", "number": 9646, "title": "Correct behavior of `torch.ops.xla.write_mlir_debuginfo`", "body": "## \u2753 Correct behavior of `torch.ops.xla.write_mlir_debuginfo`\n\nWhat is the correct behavior of `torch.ops.xla.write_mlir_debuginfo`? Seems it adds debug info all upstream operations not just a direct upstream op. 
Is it expected behavior?\n\n```python\nimport torch\nimport torch_xla\nimport torch_xla.experimental.xla_mlir_debuginfo\nfrom torch_xla.stablehlo import (StableHLOExportOptions,\n exported_program_to_stablehlo)\n\nclass SampleModel(torch.nn.Module):\n\n def forward(self, x, y):\n x = x + y\n x = x - y\n x = torch.ops.xla.write_mlir_debuginfo(x, \"MY_SUB\")\n return x\n\nmodel = SampleModel()\nexported_program = torch.export.export(model,\n (torch.rand(10), torch.rand(10)))\nmlir_text = exported_program_to_stablehlo(\n exported_program).get_stablehlo_text()\n\nprint(mlir_text)\n```\n\n```\n#loc1 = \"MY_SUBxla__device_data\"\nmodule @IrToHlo.12 attributes {mhlo.cross_program_prefetches = [], mhlo.input_output_alias = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {\n func.func @main(%arg0: tensor<10xf32> \"MY_SUBxla__device_data\", %arg1: tensor<10xf32> \"MY_SUBxla__device_data\") -> tensor<10xf32> {\n %0 = stablehlo.add %arg1, %arg0 : tensor<10xf32> \"MY_SUBaten__add\"\n %1 = stablehlo.subtract %0, %arg0 : tensor<10xf32> \"MY_SUBaten__sub\"\n return %1 : tensor<10xf32> [unknown]\n } [unknown]\n} [unknown]\n#loc = [unknown]\n#loc2 = \"MY_SUBaten__add\"\n#loc3 = \"MY_SUBaten__sub\"\n```", "url": "https://github.com/pytorch/xla/issues/9646", "state": "open", "labels": [ "question", "stablehlo" ], "created_at": "2025-09-16T00:20:05Z", "updated_at": "2025-09-16T14:01:06Z", "user": "tlsdmstn56" }, { "repo": "pytorch/pytorch", "number": 162971, "title": "[CD] Reasonable time constraint for binary builds", "body": "### \ud83d\udc1b Describe the bug\n\nIt looks like both CUDA+aarch64, Win+XPU and ROCM build are close towards exceeding 6h threshold\n\n- Could we have some sort of a plan on how to deal with those. I.e. can some build dependencies be cached and build ahead of time as part of the docker image?\n- Is there a matrix somewhere on what types of runners are currently used, and should we switch to a bigger ones?\n\n### Versions\n\nCI\n\ncc @seemethere @atalman @pytorch/pytorch-dev-infra", "url": "https://github.com/pytorch/pytorch/issues/162971", "state": "open", "labels": [ "module: binaries", "module: ci", "triaged" ], "created_at": "2025-09-15T16:21:01Z", "updated_at": "2025-09-23T20:23:03Z", "comments": 1, "user": "malfet" }, { "repo": "pytorch/pytorch", "number": 162957, "title": "torch.linalg.eigh uses a large amount of memory in pytorch 2.8.0", "body": "### \ud83d\udc1b Describe the bug\n\nRunning torch.linalg.eigh spikes allocated GPU memory in pytorch 2.8.0. For repeated calls on tensors of different batch dimensions the allocated memory increases successively until reaching a plateau. In 2.7.0 the code below consistently uses ~200 MB, in 2.8.0 2-5 GB were allocated for different runs. 
Memory usage was monitored with nvidia-smi.\n```python\nimport torch\nfor i in range(100):\n N = torch.randint(4000, 4100, (1,)).item()\n cov = torch.randn((N, 3, 3), device=\"cuda\")\n cov = cov @ cov.transpose(-1, -2)\n cov = cov + torch.eye(3, device=\"cuda\")[None, :, :] * 0.01\n\n val, vec = torch.linalg.eigh(cov)\n```\nAmong the system specified in Versions below, I reproduced the same issue on an RTX 6000 Ada.\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.8.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.1 LTS (x86_64)\nGCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0\nClang version: Could not collect\nCMake version: version 3.28.3\nLibc version: glibc-2.39\n\nPython version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: 12.6.77\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA RTX A3000 Laptop GPU\nNvidia driver version: 573.24\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 39 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 16\nOn-line CPU(s) list: 0-15\nVendor ID: GenuineIntel\nModel name: 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz\nCPU family: 6\nModel: 141\nThread(s) per core: 2\nCore(s) per socket: 8\nSocket(s): 1\nStepping: 1\nBogoMIPS: 4992.01\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves vnmi avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities\nVirtualization: VT-x\nHypervisor vendor: Microsoft\nVirtualization type: full\nL1d cache: 384 KiB (8 instances)\nL1i cache: 256 KiB (8 instances)\nL2 cache: 10 MiB (8 instances)\nL3 cache: 24 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-15\nVulnerability Gather data sampling: Unknown: Dependent on hypervisor status\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Mitigation; Enhanced IBRS\nVulnerability 
Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affec", "url": "https://github.com/pytorch/pytorch/issues/162957", "state": "open", "labels": [ "needs reproduction", "module: cuda", "module: memory usage", "triaged", "module: linear algebra" ], "created_at": "2025-09-15T12:18:40Z", "updated_at": "2025-09-16T08:08:01Z", "comments": 2, "user": "fjneumann" }, { "repo": "pytorch/pytorch", "number": 162952, "title": "The FSDPModule.set_requires_gradient_sync should control reduce-scatter sync and all-reduce sync separately", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe current `FSDPModule.set_requires_gradient_sync` implementation controls both `reduce-scatter` and `all-reduce` together. For the multi-node HSDP scenario (replication between nodes, intra-node parameter sharing), in gradient accumulation periods, turning `reduce-scatter` on but `all-reduce` off reduces unnecessary network communication between nodes without increasing GPU peak memory usage. If `reduce-scatter ` is also turned off, `FSDPParam.unsharded_accumulated_grad` maintains a unshared gradient, which causes GPU peak memory to increase.\n\n## Why does disabling all-reduce sync not increase GPU memory usage\nIf `all-reduce` is turn off, `FSDPParamGroup` maintains `_partial_reduce_output` representing all the shared gradient of `FSDPParam.sharded_param` maintained by the current group and `FSDPParam.sharded_param.grad` is None. If `all-reduce` is turn on, the gradient is assigned to each `FSDPParam.sharded_param.grad` . So there is no extra GPU memory footprint.\n\nSee more code detail at [FSDPParamGroup.post_backward](https://github.com/pytorch/pytorch/blob/main/torch/distributed/fsdp/_fully_shard/_fsdp_param_group.py#L478) and [foreach_reduce](https://github.com/pytorch/pytorch/blob/main/torch/distributed/fsdp/_fully_shard/_fsdp_collectives.py#L447).\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci", "url": "https://github.com/pytorch/pytorch/issues/162952", "state": "closed", "labels": [ "oncall: distributed" ], "created_at": "2025-09-15T09:01:29Z", "updated_at": "2025-09-21T03:01:33Z", "comments": 3, "user": "EquationWalker" }, { "repo": "pytorch/pytorch", "number": 162908, "title": "new sparse tensor format implementation: tips", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHi,\nI'm currently working on implementing a new sparse tensor format. I wish to implement a method for the tensor object, such that i can do `A.to_new_format()`, where `A` is a tensor object. 
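\n\nThe most direct approach I can think of is to attach the method from Python. A rough sketch (the body below just stands in with a COO conversion where the real new-format constructor would go):\n\n```python\nimport torch\n\ndef _to_new_format(self):\n    # placeholder body: a real implementation would build the new sparse format here\n    return self.to_sparse()\n\ntorch.Tensor.to_new_format = _to_new_format\n\nA = torch.eye(3)\nprint(A.to_new_format())\n```\n\nHowever, I am not sure this is the right extension point.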
\nCan someone point me on how to implement this kind of feature directly as a method of the tensor object?\n\nThanks\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/162908", "state": "closed", "labels": [], "created_at": "2025-09-14T11:31:49Z", "updated_at": "2025-09-14T22:07:20Z", "comments": 1, "user": "ricvigi" }, { "repo": "pytorch/pytorch", "number": 162898, "title": "Script ./export/unflaten.py has some bugs.", "body": "### \ud83d\udc1b Describe the bug\n\nI'm using torch.distributed.pipelining to implement Pipeline Parallelism for my model, but I'm encountering the following error:\n\n\"Image\"\nAfter reviewing the source code, I found what appears to be a bug in the run_outer() function. The code handles a node.op == \"placeholder\" and immediately calls run_from(). However, inside run_from(), there's an assert node.op != \"placeholder\". If the graph has multiple placeholder nodes, this assertion will definitely cause the program to crash. I believe this is a bug, so I've filed this issue. \n\n\"Image\"\n\nIf my assessment is wrong, I would appreciate any advice from the team on how to resolve my error.\n\nNote: My development environment is using PyTorch version 2.6.0, but I've checked your latest version, 2.8.0, and this section of the code appears to be the same.\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.6.0+cu124\nIs debug build: False\nCUDA used to build PyTorch: 12.4\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.3 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: Could not collect\nCMake version: Could not collect\nLibc version: glibc-2.35\n\nPython version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 12.2.91\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA A100-PCIE-40GB\nGPU 1: NVIDIA A100-PCIE-40GB\nGPU 2: NVIDIA A100-PCIE-40GB\nGPU 3: NVIDIA A100-PCIE-40GB\nGPU 4: NVIDIA A100-PCIE-40GB\nGPU 5: NVIDIA A100-PCIE-40GB\nGPU 6: NVIDIA A100-PCIE-40GB\nGPU 7: NVIDIA A100-PCIE-40GB\n\nNvidia driver version: 535.261.03\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 80\nOn-line CPU(s) list: 0-79\nVendor ID: GenuineIntel\nModel name: Intel Xeon Processor (Skylake, IBRS)\nCPU family: 6\nModel: 85\nThread(s) per core: 2\nCore(s) per socket: 20\nSocket(s): 2\nStepping: 4\nBogoMIPS: 5985.39\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ibrs ibpb fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat\nL1d cache: 2.5 MiB (80 instances)\nL1i cache: 2.5 MiB (80 instances)\nL2 cache: 160 MiB (40 instances)\nL3 cache: 32 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-39\nNUMA node1 CPU(s): 40-79\nVulnerability Gather data sampling: Unknown: 
Dependent on hypervisor status\nVulnerability Itlb multihit: KVM: Mitigation: VMX unsupported\nVulnerability L1tf: Mitigation; PTE Inversion\nVulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\nVulnerability Meltdown: Vulnerable\nVulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\nVulnerability Retbleed: Mitigation; IBRS\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\n\nVersions of relevant libraries:\n[pip3] numpy==1.26.4\n[pip3] nvidia-cublas-cu12==12.4.5.8\n[pip3] nvidia-cuda-cupti-cu12==12.4.127\n[pip3] nvidia-cuda-nvrtc-cu12==12.4.127\n[pip3] nvidia-cuda-runtime-cu12==12.4.127\n[pip3] nvidia-cudnn-cu12==9.1.0.70\n[pip3] nvidia-cufft-cu12==11.2.1.3\n[pip3] nvid", "url": "https://github.com/pytorch/pytorch/issues/162898", "state": "open", "labels": [ "oncall: distributed", "module: pipelining" ], "created_at": "2025-09-14T05:19:54Z", "updated_at": "2025-10-05T13:33:18Z", "comments": 1, "user": "lileicaca" }, { "repo": "pytorch/vision", "number": 9215, "title": "MixUp and CutMix transforms for semantic segmentation", "body": "Is there any way to use the MixUp and CutMix transforms for semantic segmentation masks? I could not find any documentation on it.\n\nIf this functionality does not exist, I will be happy to submit a PR for the same.\n\nMotivation - CutMix is used in SOTA semi-supervised semantic segmentation methods such as [UniMatch](https://arxiv.org/abs/2410.10777) and MixUp is used in knowledge distillation methods such as [\"Knowledge distillation: A good teacher is patient and consistent\"](https://arxiv.org/abs/2106.05237)", "url": "https://github.com/pytorch/vision/issues/9215", "state": "open", "labels": [], "created_at": "2025-09-13T11:23:35Z", "updated_at": "2025-09-19T18:52:48Z", "comments": 1, "user": "vedantdalimkar" }, { "repo": "pytorch/pytorch", "number": 162870, "title": "[RFC] library function with 64+ arguments", "body": "### Custom op support with 64+ arguments\n\nIs there any plan to support 64+ argument? 
I have a custom kernel that takes 64+ arguments.\n\n```python\nimport torch\nfrom torch.library import Library, impl, register_fake\n\nnum_args = 65\n\n# Create a new custom namespace\nmy_lib = Library(\"my_ops\", \"LIB\")\n\n# Define a custom operator with a list of tensors as input\nargs = \", \".join([f\"Tensor t{i}\" for i in range(num_args)])\nmy_lib.define(f\"a_func({args}) -> Tensor\")\n```\n\n```\nTraceback (most recent call last):\n File \"/test/torch_test.py\", line 18, in \n my_lib.define(f\"a_func({args}) -> Tensor\")\n File \"/miniconda3/envs/test-venv/lib/python3.11/site-packages/torch/library.py\", line 172, in define\n result = self.m.define(schema, alias_analysis, tuple(tags))\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nRuntimeError: The function schema has 65 arguments but this PyTorch build only supports 64\n```\n\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @anjali411 @chauhang @penguinwu @zou3519 @bdhirsh", "url": "https://github.com/pytorch/pytorch/issues/162870", "state": "open", "labels": [ "triaged", "module: custom-operators", "module: library" ], "created_at": "2025-09-13T04:03:09Z", "updated_at": "2025-09-15T23:25:57Z", "comments": 1, "user": "tlsdmstn56" }, { "repo": "pytorch/pytorch", "number": 162859, "title": "[RFC] support symmetric memory in torch.compile", "body": "The proposal originally came up in vLLM-compile sync with @ProExpertProg, @Chillee, and @Amir-19 and was also discussed with @ngimel @kwen2501. Recording it here to make sure we're all on the same page.\n\n## Pitch\n\nFor any collective operator (built-in or custom), a user can specify which input must have symmetric memory.\n\ntorch.compile (Inductor) will figure out where the input is coming from and ensure that it is allocated with symmetric memory.\n\nThere are two cases for what type of operator produced the input.\n\n1) built-in operator. Inductor might already preallocate the buffer that is the output of the operator (via memory planning) and it just needs to allocate it with symmetric memory.\n\n```py\nrequires_symmetric_memory(collective, input=0)\n\ndef user_code(x):\n y = x.sin()\n z = y.cos()\n return collective(z)\n\ndef inductor_generated_code(x):\n with symmetric_memory():\n buffer = torch.empty()\n triton_inplace_sin_cos_fused(buffer)\n return collective(buffer)\n```\n\n\n2) custom operator. Inductor just needs to run the custom operator underneath the symmetric memory context manager. The main risk of this is that more buffers than are needed get allocated with symmetric memory (all tensors produced by the custom op get allocated with symmetric memory), but the user can just re-write their custom op to optimize this\n\n```py\nrequires_symmetric_memory(collective, input=0)\n\ndef user_code(x):\n y = custom_op(x)\n return collective(z)\n\ndef inductor_generated_code(x):\n with symmetric_memory():\n y = custom_op(x)\n return collective(y)\n```\n\n## What about eager-mode?\n\nThe API to specify which input needs symmetric memory only applies to torch.compile. So a user would end up writing code that looks like:\n```py\nrequires_symmetric_memory(collective, input=0)\n\ndef user_code(x):\n if torch.compiler.is_compiling():\n with symmetric_memory():\n y = custom_op(x)\n else:\n y = custom_op(x)\n return collective(z)\n```\n\n## What is the API to specify which input needs symmetric memory?\n\n@kwen2501 noted that the choice of which input needs symmetric memory is specific to the collective operator. 
So one design is just during operator registration, specify that the input needs symmetric memory.\n\n1. torch.library.define(\"my_collective(SymmMemTensor x) -> Tensor\")\n2. torch.library.define(\"my_collective(Tensor x) -> Tensor\", symm_mem_hint=\"x\")\n\nAnother design is a torch.compiler API:\n\ntorch.compiler.specify_symmetric_memory(my_collective, \"x\").\n\nIf we think the choice is actually dynamic (or that some collectives may accept both symmetric and non-symmetric memory?) then this could instead be a context manager:\n```py\n@torch.compile\ndef user_code(y):\n x = custom_op(y)\n with torch.compiler.specify_symmetric_memory(my_collective, \"x\"):\n my_collective(x)\n```\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @coconutruben", "url": "https://github.com/pytorch/pytorch/issues/162859", "state": "open", "labels": [ "oncall: distributed", "triaged", "oncall: pt2", "module: inductor", "vllm-compile", "module: vllm", "module: symm_mem" ], "created_at": "2025-09-12T22:27:49Z", "updated_at": "2025-12-16T18:19:59Z", "comments": 26, "user": "zou3519" }, { "repo": "pytorch/pytorch", "number": 162854, "title": "Move test_quantization tests to run weekly", "body": "Currently test_quantization is running on every commit / PR, it's not necessary since we are deprecating the flow: https://docs.pytorch.org/docs/main/quantization.html\n\nAlthough the API is still used, so we want to reduce the cadence the tests are running to weekly.\n\nMain test file: https://github.com/pytorch/pytorch/blob/0dcd9304aa0ea404c2807cb058660e49c9810c20/test/test_quantization.py#L4\n\n1. We need to find how it is called in CI and remove the run, e.g. remove https://github.com/pytorch/pytorch/blob/0dcd9304aa0ea404c2807cb058660e49c9810c20/tools/testing/modulefinder_determinator.py#L43\n2. We need to find how to run weekly jobs, and add test_quantization.py run there\n\nWill likely need dev-infra's help on both of the above.\n \n\ncc @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry", "url": "https://github.com/pytorch/pytorch/issues/162854", "state": "closed", "labels": [ "oncall: quantization", "module: ci", "module: tests" ], "created_at": "2025-09-12T21:56:16Z", "updated_at": "2025-09-24T11:31:14Z", "comments": 1, "user": "jerryzh168" }, { "repo": "pytorch/torchtitan", "number": 1708, "title": "FSDP + compiled autograd", "body": "Hi! I was trying out some debug runs using FSDP with compile enabled and found out that compiled autograd doesn't seem to work well with FSDP. 
(a single gpu run without FSDP seems to work)\n\nIs it possible to make such a setup work or is it just not supported as of now?\n\nLaunching a train run with the arguments below\n```python\ntorchrun \\\n --standalone \\\n --nproc-per-node 2 \\\n --role rank \\\n --tee 3 \\\n --local-ranks-filter 0 \\\n -m torchtitan.train \\\n --job.config_file torchtitan/models/llama3/train_configs/debug_model.toml \\\n --training.compile \\\n --parallelism.enable_compiled_autograd \\\n --activation-checkpoint.mode none\n```\nfails with an error\n```\n loss.backward()\n File \"/usr/local/lib/python3.12/dist-packages/torch/_tensor.py\", line 648, in backward\n torch.autograd.backward(\n File \"/usr/local/lib/python3.12/dist-packages/torch/autograd/__init__.py\", line 354, in backward\n _engine_run_backward(\n File \"/usr/local/lib/python3.12/dist-packages/torch/autograd/graph.py\", line 829, in _engine_run_backward\n return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/compiled_autograd.py\", line 1354, in set_node_origin\n raise RuntimeError(\n RuntimeError: This compiled backward function was saved by AOTAutogradCache, which does not support\n compiled autograd. Please turn off AOTAutogradCache using `TORCHINDUCTOR_AUTOGRAD_CACHE=0`.\n```\n\nSetting `TORCHINDUCTOR_AUTOGRAD_CACHE=0` doesn't seem to help much\n\n```\n loss.backward()\n File \"/usr/local/lib/python3.12/dist-packages/torch/_tensor.py\", line 648, in backward\n torch.autograd.backward(\n File \"/usr/local/lib/python3.12/dist-packages/torch/autograd/__init__.py\", line 354, in backward\n _engine_run_backward(\n File \"/usr/local/lib/python3.12/dist-packages/torch/autograd/graph.py\", line 829, in _engine_run_backward\n return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/compiled_autograd.py\", line 1041, in runtime_wrapper\n out = compiled_fn(\n ^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py\", line 372, in __call__\n return super().__call__(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py\", line 1767, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py\", line 1778, in _call_impl\n return forward_call(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py\", line 699, in compile_wrapper\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py\", line 840, in call_wrapped\n return self._wrapped_call(self, *args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py\", line 416, in __call__\n raise e\n File \"/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py\", line 403, in __call__\n return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File 
\"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py\", line 1767, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py\", line 1778, in _call_impl\n return forward_call(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \".8\", line 4, in forward\n def forward(self, inputs, sizes, scalars, hooks, packed_data):\n File \"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py\", line 893, in _fn\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/utils.py\", line 4381, in wrapper\n return compiled_fn(flat_args)\n ^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py\", line 893, in _fn\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/_functorch/aot_autograd.py\", line 1214, in boxed_forward\n return compiled_fn(flat_args)\n ^^^^^^^^^^^^", "url": "https://github.com/pytorch/torchtitan/issues/1708", "state": "open", "labels": [ "module: fsdp", "module: torch.compile" ], "created_at": "2025-09-12T20:42:31Z", "updated_at": "2025-09-15T16:24:02Z", "comments": 3, "user": "antony-frolov" }, { "repo": "pytorch/pytorch", "number": 162820, "title": "[CI][CUDA][Distributed] test_ring_flex_attention failed on 8xB200 Runner", "body": "### \ud83d\udc1b Describe the bug\n\nTracked in umbrella https://github.com/pytorch/pytorch/issues/162178 \nJob link: https://github.com/pytorch/pytorch/actions/runs/17660052730/job/50193312091 \n\nFailure message: \n\n`2025-09-12T05:47:07.8805304Z expect_out, expect_lse = compiled_flex_attention(\n2025-09-12T05:47:07.8805570Z File \"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 841, in compile_wrapper\n2025-09-12T05:47:07.8805776Z raise e.with_traceback(None) from e.__cause__ # User compiler error\n2025-09-12T05:47:07.8806030Z torch._dynamo.exc.Unsupported: Attempted to call function marked as skipped\n2025-09-12T05:47:07.8806214Z Explanation: Dynamo does not know how to trace the Python builtin `_warnings.warn`.\n2025-09-12T05:47:07.8806549Z Hint: If you are attempting to call a logging function (e.g. `_warnings.warn`), you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.\n2025-09-12T05:47:07.8806723Z Hint: Please file an issue on GitHub so the PyTorch team can add support for it. \n2025-09-12T05:47:07.8806754Z \n2025-09-12T05:47:07.8806953Z Developer debug context: module: _warnings, qualname: warn, skip reason: \n2025-09-12T05:47:07.8806957Z \n2025-09-12T05:47:07.8807256Z For more details about this graph break, please visit: https://meta-pytorch.github.io/compile-graph-break-site/gb/gb0007.html\n2025-09-12T05:47:07.8807260Z \n2025-09-12T05:47:07.8807348Z from user code:\n2025-09-12T05:47:07.8807707Z File \"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py\", line 1613, in flex_attention\n2025-09-12T05:47:07.8807824Z _warn_once(\n2025-09-12T05:47:07.8808100Z File \"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py\", line 65, in _warn_once\n2025-09-12T05:47:07.8808250Z warnings.warn(message, category, stacklevel=2)\n2025-09-12T05:47:07.8808254Z \n2025-09-12T05:47:07.8808676Z Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). 
For even more developer context, set TORCH_LOGS=\"+dynamo\"\n2025-09-12T05:47:07.8808680Z \n2025-09-12T05:47:07.8808683Z \n2025-09-12T05:47:07.8808831Z To execute this test, run the following from the base repo dir:\n2025-09-12T05:47:07.8809086Z python test/distributed/tensor/test_attention.py RingFlexAttentionTest.test_ring_flex_attention\n2025-09-12T05:47:07.8809090Z \n2025-09-12T05:47:07.8809286Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0\n2025-09-12T05:47:07.8809421Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!\n2025-09-12T05:47:07.8809584Z ================== 1 failed, 5 deselected, 2 rerun in 40.40s ===================`\n\n\n\n### Versions\n\nTOT\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng", "url": "https://github.com/pytorch/pytorch/issues/162820", "state": "open", "labels": [ "oncall: distributed", "module: ci", "module: tests", "triaged", "module: higher order operators", "module: pt2-dispatcher", "module: flex attention" ], "created_at": "2025-09-12T16:29:01Z", "updated_at": "2025-09-22T22:23:48Z", "comments": 3, "user": "nWEIdia" }, { "repo": "pytorch/ao", "number": 2989, "title": "Quantized model is slower than original model!", "body": "Hello,\nI have put together this benchmark and I am wondering why the quantised version is so much slower. Is there something that I have missed or is it simply that the model is small and the overhead of quantization is not worth it in this case? \n\nThe results are the following.\n\n```\nBenchmarking: model_fp32.onnx\nWarming up (100 iterations)...\nRunning benchmark (1000 iterations)...\nAverage: 0.014 ms\nMedian: 0.014 ms\nStd Dev: 0.002 ms\nMin/Max: 0.012/0.063 ms\nThroughput: 70994.7 samples/sec\n```\n\n```\nBenchmarking: model_quantized.onnx\nWarming up (100 iterations)...\nRunning benchmark (1000 iterations)...\nAverage: 0.045 ms\nMedian: 0.044 ms\nStd Dev: 0.007 ms\nMin/Max: 0.042/0.144 ms\nThroughput: 22114.3 samples/sec\n```\n\nhere is the code\n\n```\nimport torch\nfrom torchao.quantization import quantize_, Int8DynamicActivationInt4WeightConfig\nfrom torchao.quantization.qat import QATConfig\nfrom torchvision.ops import MLP\nimport onnxruntime as ort\nimport numpy as np\nimport time\nimport statistics\nfrom typing import Dict, Tuple\ninput_dims = 512\ngroup_size = 64\n\ndef get_model():\n return MLP(\n in_channels=input_dims,\n hidden_channels=[256, 128, 64, 1]\n )\n\ndef train_loop(m: torch.nn.Module):\n optimizer = torch.optim.SGD(m.parameters(), lr=0.001, momentum=0.9, weight_decay=1e-5)\n loss_fn = torch.nn.CrossEntropyLoss()\n for i in range(10):\n example = torch.randn(32,input_dims)\n target = torch.randn(32,1)\n output = m(example)\n loss = loss_fn(output, target)\n loss.backward()\n optimizer.step()\n optimizer.zero_grad()\n\ndef benchmark_onnx_inference(\n model_path: str, \n input_shapes: Dict[str, Tuple], \n num_warmup: int = 10,\n num_iterations: int = 100\n) -> Dict:\n \"\"\"Benchmark ONNX model inference speed\"\"\"\n \n print(f\"Benchmarking: {model_path}\")\n \n # Create inference session\n session = ort.InferenceSession(model_path)\n \n # Get input/output info\n input_names = [inp.name for inp in session.get_inputs()]\n output_names = [out.name for out in session.get_outputs()]\n \n # Prepare inputs with exact shapes provided\n inputs = {}\n for 
name, shape in input_shapes.items():\n inputs[name] = np.random.randn(*shape).astype(np.float32)\n \n # Warmup\n print(f\" Warming up ({num_warmup} iterations)...\")\n for _ in range(num_warmup):\n _ = session.run(output_names, inputs)\n \n # Actual benchmark\n print(f\" Running benchmark ({num_iterations} iterations)...\")\n times = []\n \n for i in range(num_iterations):\n start_time = time.perf_counter()\n outputs = session.run(output_names, inputs)\n end_time = time.perf_counter()\n \n times.append((end_time - start_time) * 1000) # Convert to ms\n \n # Calculate statistics\n avg_time = statistics.mean(times)\n median_time = statistics.median(times)\n std_time = statistics.stdev(times) if len(times) > 1 else 0\n min_time = min(times)\n max_time = max(times)\n \n # Determine batch size from first input shape\n batch_size = list(input_shapes.values())[0][0] if input_shapes else 1\n \n results = {\n 'avg_ms': avg_time,\n 'median_ms': median_time,\n 'std_ms': std_time,\n 'min_ms': min_time,\n 'max_ms': max_time,\n 'throughput_samples_per_sec': batch_size * 1000 / avg_time,\n 'all_times': times,\n 'batch_size': batch_size,\n 'input_shapes': input_shapes\n }\n \n print(f\" Average: {avg_time:.3f} ms\")\n print(f\" Median: {median_time:.3f} ms\")\n print(f\" Std Dev: {std_time:.3f} ms\") \n print(f\" Min/Max: {min_time:.3f}/{max_time:.3f} ms\")\n print(f\" Throughput: {batch_size * 1000 / avg_time:.1f} samples/sec\")\n \n return results\n\ndef comprehensive_quantization_test():\n \"\"\"Complete test to verify quantization is working\"\"\"\n print(\"=== Comprehensive Quantization Verification ===\\n\")\n \n # Create models\n model_fp32 = get_model()\n model_quantized = get_model()\n \n # Apply quantization\n base_config = Int8DynamicActivationInt4WeightConfig(group_size=group_size)\n quantize_(model_quantized, QATConfig(base_config, step=\"prepare\"))\n \n # Train quantized model\n train_loop(model_quantized)\n \n quantize_(model_quantized, QATConfig(base_config, step=\"convert\"))\n # save models to onnx\n torch.onnx.export(model_fp32, torch.randn(1, input_dims), \"model_fp32.onnx\", dynamo=True)\n torch.onnx.export(model_quantized, torch.randn(1, input_dims), \"model_quantized.onnx\", dynamo=True)\n \n input_shapes = {\"input\": (1, input_dims)} \n results_fp32 = benchmark_onnx_inference(\n \"model_fp32.onnx\", \n input_shapes,\n num_warmup=100,\n num_iterations=1000\n )\n results_quantized = benchmark_onnx_inference(\n \"model_quantized.onnx\", \n input_shapes,\n num_warmup=100,\n num_iterations=1000\n )\n \ncomprehensive_quantization_test()\n```", "url": "https://github.com/pytorch/ao/issues/2989", "state": "open", "labels": [], "created_at": "2025-09-12T05:00:23Z", "updated_at": "2025-09-12T18:31:28Z", "comments": 8, "user": "timpiperseek" }, { "repo": "pytorch/pytorch", "number": 162782, "title": "Is `torch.nn.functional.gumbel_softmax` going to be deprecated?", "body": "Is this function really going to be deprecated going forward? If so I will write my own version. Thanks! 
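\n\nFor reference, a minimal standalone sketch of what a self-written replacement could look like (it mirrors the documented straight-through behavior; not an official drop-in):\n\n```python\nimport torch\nimport torch.nn.functional as F\n\ndef gumbel_softmax(logits, tau=1.0, hard=False, dim=-1):\n    # Sample Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1)\n    gumbels = -torch.empty_like(logits).exponential_().log()\n    y_soft = F.softmax((logits + gumbels) / tau, dim=dim)\n    if not hard:\n        return y_soft\n    # Straight-through estimator: one-hot forward pass, soft gradients backward\n    index = y_soft.argmax(dim, keepdim=True)\n    y_hard = torch.zeros_like(logits).scatter_(dim, index, 1.0)\n    return y_hard - y_soft.detach() + y_soft\n```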
\n\nThere is the following issue on this page: https://docs.pytorch.org/docs/stable/generated/torch.nn.functional.gumbel_softmax.html\n\ncc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki", "url": "https://github.com/pytorch/pytorch/issues/162782", "state": "open", "labels": [ "module: nn", "triaged", "module: deprecation" ], "created_at": "2025-09-12T00:48:11Z", "updated_at": "2025-09-19T17:36:22Z", "comments": 1, "user": "michaelfortunato" }, { "repo": "pytorch/pytorch", "number": 162719, "title": "linalg.eig does not get parallelized on CPU", "body": "### \ud83d\udc1b Describe the bug\n\nI have a lengthy calculation that relies on eigendecomposition of non-Hermitian matrices in one place. The reason I picked PyTorch is the straightforward parallel nature of its ops, however that does not seem to be the case with `eig`. While I know it calls a BLAS routine under the hood, I am actually calculating batches of matrices, so there is potential for a speedup there. However, looking at the code below:\n```python\nimport torch\nimport timeit\nimport psutil\nimport matplotlib.pyplot as plt\nimport numba\nimport numpy as np\n\nstmt = \"torch.linalg.eig(e)\"\nruntimes = []\nthreads = [1] + [t for t in range(2, 30, 2)]\nfor t in threads:\n torch.set_num_threads(t)\n try:\n numba.set_num_threads(t)\n except ValueError:\n pass\n\n r = timeit.timeit(\n setup=\"e = torch.randn(200, 25, 25, dtype=torch.cdouble)\",\n stmt=stmt,\n number=100,\n globals=globals(),\n )\n runtimes.append(r)\n\nplt.plot(threads, runtimes)\nplt.xlabel(\"Number of Threads\")\nplt.ylabel(\"Runtime (seconds)\")\nplt.title(stmt)\nnum_cores = psutil.cpu_count(logical=False)\nnum_threads = psutil.cpu_count()\nif num_threads is not None and num_cores is not None:\n plt.axvline(x=num_cores, color='g', linestyle='--', label='Physical Cores')\n plt.axvline(x=num_threads, color='r', linestyle='--', label='Logical Cores')\n plt.legend()\nplt.grid()\nplt.show()\n```\n\nI get the following relation:\n\n\"Image\"\n\nSo not only is there no speedup, there is even a slowdown caused by threads!\n\nSince BLAS routines may utilise multiple threads, I compared it with a custom numba based torch op:\n\n```python\n@numba.jit(nopython=True, parallel=True, cache=True)\ndef batch_eig(batch):\n shape = batch.shape\n batch_dims = shape[:-2] # All dimensions except the last two (matrix dimensions)\n n = shape[-1]\n\n total_matrices = 1\n for dim in batch_dims:\n total_matrices *= dim\n \n flat_batch = batch.reshape(total_matrices, n, n)\n\n eigvecs = np.zeros_like(flat_batch)\n eigvals = np.zeros((total_matrices, n), dtype=np.complex128)\n\n for i in numba.prange(total_matrices):\n eigvals[i], eigvecs[i] = np.linalg.eig(flat_batch[i])\n\n return eigvals.reshape(batch_dims + (n,)), eigvecs.reshape(batch_dims + (n, n))\n\n@torch.library.custom_op(\"mylib::eig\", mutates_args=())\ndef eig(pic: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:\n E, U = batch_eig(pic.numpy())\n return torch.from_numpy(E), torch.from_numpy(U)\n\n@eig.register_fake\ndef _(pic):\n eigvals = torch.empty(pic.shape[:-1], dtype=torch.cdouble)\n eigvecs = torch.empty(pic.shape, dtype=torch.cdouble)\n return eigvals, eigvecs\n```\n\nand the result can be seen below:\n\"Image\"\n\nUnfortunately, I'm no expert in torch internals, but I also need autograd and don't want to rely on my `numba` based implementation. 
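\n\nOne possible stopgap that keeps autograd and avoids numba (untested here; whether it actually helps depends on the Python binding releasing the GIL and on LAPACK not oversubscribing threads) would be to chunk the batch and call `torch.linalg.eig` from a thread pool:\n\n```python\nfrom concurrent.futures import ThreadPoolExecutor\n\nimport torch\n\ndef threaded_eig(batch, num_workers=8):\n    # Split the leading batch dimension and decompose the chunks concurrently\n    chunks = batch.chunk(num_workers)\n    with ThreadPoolExecutor(max_workers=num_workers) as pool:\n        results = list(pool.map(torch.linalg.eig, chunks))\n    eigvals = torch.cat([r.eigenvalues for r in results])\n    eigvecs = torch.cat([r.eigenvectors for r in results])\n    return eigvals, eigvecs\n\n# e = torch.randn(200, 25, 25, dtype=torch.cdouble)\n# vals, vecs = threaded_eig(e)\n```\n\n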
Perhaps it is connected to the fact that `eig` does not even compile: #159445\n\n### Versions\n\nPyTorch version: 2.8.0+cpu\nIs debug build: False\nCUDA used to build PyTorch: Could not collect\nROCM used to build PyTorch: N/A\n\nOS: Microsoft Windows 11 Pro (10.0.26100 64-bit)\nGCC version: (MinGW-W64 x86_64-ucrt-posix-seh, built by Brecht Sanders, r2) 14.2.0\nClang version: 19.1.1\nCMake version: version 3.30.4\nLibc version: N/A\n\nPython version: 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] (64-bit runtime)\nPython platform: Windows-10-10.0.26100-SP0\nIs CUDA available: False\nCUDA runtime version: 12.8.61\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060\nNvidia driver version: 576.80\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nName: 13th Gen Intel(R) Core(TM) i5-13600KF\nManufacturer: GenuineIntel\nFamily: 205\nArchitecture: 9\nProcessorType: 3\nDeviceID: CPU0\nCurrentClockSpeed: 3500\nMaxClockSpeed: 3500\nL2CacheSize: 20480\nL2CacheSpeed: None\nRevision: None\n\nVersions of relevant libraries:\n[pip3] numpy==2.2.6\n[pip3] torch==2.8.0+cpu\n[conda] Could not collect\n\ncc @jerryzh168 @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano", "url": "https://github.com/pytorch/pytorch/issues/162719", "state": "open", "labels": [ "module: performance", "module: cpu", "triaged", "module: linear algebra" ], "created_at": "2025-09-11T12:09:57Z", "updated_at": "2025-10-02T12:03:49Z", "comments": 5, "user": "krokosik" }, { "repo": "pytorch/pytorch", "number": 162638, "title": "Gradient Clipping in Pipeline Parallelism Schedules", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe current PP schedules like `Schedule1F1B` don't seem to have built-in gradient clipping support. \nIs there a recommended approach for implementing gradient clipping in pipeline parallelism, and what would be the most efficient way to compute global gradient norms across sharded parameters? \nWould it be possible to add gradient clipping as a built-in feature to the PP schedule classes with parameters like `grad_clip_norm` and `clip_interval`? \nThis would be really helpful for users who need gradient clipping in their PP training workflows, especially for scenarios where training stability is important.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan", "url": "https://github.com/pytorch/pytorch/issues/162638", "state": "open", "labels": [ "oncall: distributed", "module: autograd" ], "created_at": "2025-09-10T20:48:20Z", "updated_at": "2025-09-11T15:12:36Z", "comments": 0, "user": "nvlas" }, { "repo": "pytorch/pytorch", "number": 162630, "title": "[RFC] Intrusive Caching DLPack for Fast Conversion", "body": "Currently DLPack is being used for Tensor data exchange. This conversion, which involves populating metadata such as shape, data pointer, and strides, can introduce a small but non-negligible overhead, typically in the range of 40-80 nanoseconds on the C++ side. While this latency is already quite low, frequent tensor exchanges\u2014such as those involving model weights or intermediate values used multiple times\u2014can accumulate this overhead. 
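\n\nAs a rough illustration of the repeated-conversion pattern (measured from Python, so the absolute number includes binding overhead and over-states the pure C++ cost quoted above):\n\n```python\nimport time\n\nimport torch\nfrom torch.utils.dlpack import to_dlpack\n\nx = torch.randn(1024)\nn = 100_000\nstart = time.perf_counter()\nfor _ in range(n):\n    # Today a fresh DLManagedTensor is allocated and populated on every call;\n    # under this proposal, repeat calls on the same tensor would hit a cache.\n    capsule = to_dlpack(x)\nelapsed = time.perf_counter() - start\nprint(f'{elapsed / n * 1e9:.0f} ns per to_dlpack call')\n```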
\n\nThis RFC addresses the question of whether this overhead can be further reduced, particularly in scenarios where the same tensor is converted to DLPack multiple times during its lifetime. \n\nIt does involve change to the c10::TensorImpl data structure, so likely needs to be done with care. This post first put the high-level idea out to the community to seek feedbacks.\n\n## Proposal\n\nWe propose an approach that integrates a caching mechanism directly into the framework's tensor object. The high-level concept is as follows:\n\n- **Cache Storage**: A std::unique_ptr will be added as a member field to the framework's tensor object (e.g., TensorImpl). This modification requires a change to the framework's internal tensor structure.\n- **On-Demand Population**: When the ToDLPack conversion method is called for the first time on a given tensor, the DLManagedTensorVersioned object will be created and populated. The framework's internal metadata will be transferred, and the manager_ctx of the DLManagedTensorVersioned will be set to point back to the TensorImpl itself. The deleter will also be configured at this time.\n- **Ref counting integration** \n - To prevent the TensorImpl from being deallocated while a DLPack consumer holds a reference, a new reference will be added to the TensorImpl intrusive reference counter each time a DLManagedTensorVersioned is returned. \n - The DLManagedTensorVersioned's deleter function will be configured to simply decrement the TensorObj's reference counter. This ensures that the TensorImpl and its cached DLManagedTensorVersioned are not deallocated until all DLPack and internal references are released.\n- **Cache Reuse**: For subsequent calls to ToDLPack on the same tensor object, the cached DLManagedTensorVersioned will be directly returned. The only overhead will be a pointer lookup and a reference count increment, which is an extremely fast operation, measured to be **as low as 3.8 nanosecond** in preliminary benchmarks.\n\n## Thread Safety\nIn C++ environment, different thread may concurrent write to the cached field and it is important to consider thread-safety, so only one cached value is written and returned to the user. 
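Concretely, the compare-and-swap publication described by the high-level steps below could look roughly like this (names such as `CachedDLPackSlot` and `publish_or_reuse` are illustrative only; the sketch uses the C++ `std::atomic` member API rather than the C-style `atomic_compare_exchange_strong_explicit`):\n\n```c++\n#include <atomic>\n// DLManagedTensorVersioned comes from dlpack.h.\n\nstruct CachedDLPackSlot {\n  std::atomic<DLManagedTensorVersioned*> cached{nullptr};\n\n  // Install `fresh` if this thread wins the race; losers delete their copy\n  // and reuse the winner's cached object.\n  DLManagedTensorVersioned* publish_or_reuse(DLManagedTensorVersioned* fresh) {\n    DLManagedTensorVersioned* expected = nullptr;\n    if (cached.compare_exchange_strong(\n            expected, fresh,\n            std::memory_order_acq_rel,    // success: publish the fully built object\n            std::memory_order_acquire)) { // failure: observe the winner's object\n      return fresh;\n    }\n    delete fresh;     // another thread won the race\n    return expected;  // hand back the cached winner\n  }\n};\n```\n\n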
Here is an updated version, at high-level:\n\n- Different thread can race to create their own DLManagedTensorVersioned when they find cached field is nullptr\n- Use atomic_compare_exchange_strong_explicit to ensure one of the value get stored and only store it when the cached field is nullptr\n- Always return the stored value, and if the value is created by another thread, delete the current one and return the value created by another thread\n\n\n## Expected Benefits and Tradeoffs\n\n- **Significant Performance Improvement**: This caching strategy can reduce the DLPack conversion overhead from 40-80ns to a mere 1ns for repeated conversions.\n- **Reduced Redundancy**: Avoids repeated allocation and population of DLManagedTensorVersioned objects for the same tensor.\n- **Minimal Cost**: The overhead of this approach is limited to one extra pointer field per tensor object, which is negligible given the typical size of tensor metadata and data.\n\n## Example Implementation\n\nThe following C++ code snippet illustrates the proposed mechanism within a hypothetical TensorImpl class that uses intrusive reference counting .\n\n```c++\n#include \n\n// TensorImpl is a target of an intrusive ptr that contains a reference counter.\n// in the context of PyTorch, based on my understanding,\n// it would be c10::TensorImpl or something c10::TensorImpl holds like ExtraMeta\nclass TensorImpl : public intrusive_ptr_target {\n public:\n ~TensorImpl() {\n // deleting the cached dl managed tensor versioned\n // We need to acquire the value in case it is released by another thread\n // However, because this destructor is triggered as part of the intrusive pointer deletion\n // there is already a memory fence in intrusive pointer deleter triggering to ensure\n // all fields of the TensorImpl are visible here, so we do not have to do acquire, actually \n // we can even do a non-atomic load here\n DLManagedTensorVersioned* cached = cached_dl_managed_tensor_.load(\n std::memory_order_relaxed);\n if (cached != nullptr) {\n delete cached;\n }\n }\n /*!\n * \\brief Converts the current Tensor to a DLPack Tensor.\n * \\return The converted DLManagedTensorVersioned pointer.\n */\n DLManagedTensorVersioned* ToDLPack() const {\n // this function holds a strong reference to the TensorImpl\n TensorImpl* self =", "url": "https://github.com/pytorch/pytorch/issues/162630", "state": "closed", "labels": [ "triaged", "enhancement", "module: dlpack" ], "created_at": "2025-09-10T20:00:53Z", "updated_at": "2025-09-12T20:26:48Z", "comments": 15, "user": "tqchen" }, { "repo": "pytorch/pytorch", "number": 162606, "title": "Tensorpipe - ROCm support", "body": "Raising this issue to discuss on the path forward to enable tensorpipe feature on ROCm.\n\nWhy it is required\n- UT gap, currently tensorpipe related UTs are skipped on ROCm but executed for CUDA.\n\nTensorpipe repo was archived few year back and no changes were accepted. 
Recently https://github.com/pytorch/tensorpipe/commits/main/ it was open back.\n\nAs far as I know discussing with @atalman, CI for tensorpipe is removed.\nSo we want to discuss how to push changes to support it on ROCm.\n\nOld PR, which tried to enable it for ROCm, but got dropped for different reasons.\n- https://github.com/pytorch/tensorpipe/pull/398\n- https://github.com/pytorch/tensorpipe/pull/401\n\ncc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @osalpekar @jiayisuse @lw @beauby @pritamdamania87 @mrshenli @jjlilley @gqchen @malfet @atalman @pragupta @dwiddows", "url": "https://github.com/pytorch/pytorch/issues/162606", "state": "open", "labels": [ "module: rocm", "triaged", "module: tensorpipe", "rocm" ], "created_at": "2025-09-10T16:03:52Z", "updated_at": "2025-12-17T02:56:09Z", "comments": 8, "user": "pruthvistony" }, { "repo": "pytorch/ao", "number": 2967, "title": "Deprecation for IntxWeightOnlyConfig/Int8DynamicActivationIntxWeightConfig (version 1) and the models", "body": "This issue is tracking the deprecation of the (1) configs (2) model checkpoints quantized with these configs.\n\nWhat is deprecated:\n* IntxWeightOnlyConfig/Int8DynamicActivationIntxWeightConfig with version=1 is now deprecated. Please use version=2 (current default).\n* Quantized checkpoints quantized with version 1 config previously are deprecated as well, and we plan to remove the support to load these checkpoints after pytorch 2.11 release (around 9 months from now)\n\nTimeline:\n0.14.0: annouce deprecation for version 1 config\nafter all tensors are migrated: remove support for version 1 config\nafter pytorch 2.11 release: remove support for version 1 checkpoints", "url": "https://github.com/pytorch/ao/issues/2967", "state": "open", "labels": [], "created_at": "2025-09-09T20:35:13Z", "updated_at": "2025-10-02T20:50:10Z", "comments": 0, "user": "metascroy" }, { "repo": "pytorch/pytorch", "number": 162512, "title": "Default Google Search to Off in docs", "body": "\"Image\"\n\nTwo comments on the search bar in the new UI:\n1. It is inconvenient that the search bar is not on the same screen as the search results, so I cannot see both at the same time.\n2. I searched \"quantile\", which in the new search bar yields no obvious results. Looking a little harder encourages me to click on the .diag result, which then is one more sidebar click away from what I'm actually looking for. Contrast this to the old experience, which directly suggested the right page. \n\n\"Image\"\n\nI'm slowly realizing that maybe this poor experience is just cuz the toggle in the search bar that says \"Search Google\" is on, and turning it off has been better. 
Should we turn Google Search off by default then?\n\ncc @svekars @sekyondaMeta @AlannaBurke", "url": "https://github.com/pytorch/pytorch/issues/162512", "state": "open", "labels": [ "module: docs", "triaged" ], "created_at": "2025-09-09T18:14:06Z", "updated_at": "2025-09-09T18:24:50Z", "comments": 1, "user": "janeyx99" }, { "repo": "pytorch/pytorch", "number": 162481, "title": "Incosistent tracking of device activities when calling profiler.step() in torch profiler", "body": "### \ud83d\udc1b Describe the bug\n\nHere is a simple example of using profiler's scheduling functionality:\n\n```python\nimport torch\n\ndef bench_kineto(fn, num_tests: int):\n flush_l2_size = int(8e9 // 4)\n\n schedule = torch.profiler.schedule(wait=0, warmup=1, active=1, repeat=1)\n profiler = torch.profiler.profile(activities=[torch.profiler.ProfilerActivity.CUDA], schedule=schedule)\n with profiler:\n for i in range(2):\n for _ in range(num_tests):\n torch.empty(flush_l2_size, dtype=torch.int, device='cuda').zero_()\n fn()\n profiler.step()\n\n print(num_tests)\n print(profiler.key_averages().table(sort_by='cuda_time_total', max_name_column_width=50))\n\n@torch.inference_mode()\ndef main():\n torch.set_default_device(\"cuda\")\n torch.set_default_dtype(torch.bfloat16)\n\n a = torch.randn(1024, 1024)\n b = torch.randn(1024, 1024)\n\n def func():\n return a @ b\n \n bench_kineto(func, 10)\n bench_kineto(func, 10)\n\nif __name__ == \"__main__\":\n main()\n```\n\nThe output is:\n\n```text\n10\n-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ \n Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls \n-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ \nvoid at::native::vectorized_elementwise_kernel<... 0.00% 0.000us 0.00% 0.000us 0.000us 45.480ms 99.71% 45.480ms 606.404us 75 \n nvjet_tst_128x64_64x8_1x2_h_bz_NNT 0.00% 0.000us 0.00% 0.000us 0.000us 130.304us 0.29% 130.304us 6.858us 19 \n cudaLaunchKernel 0.26% 118.601us 0.26% 118.601us 2.965us 0.000us 0.00% 0.000us 0.000us 40 \n cuLaunchKernelEx 0.08% 35.264us 0.08% 35.264us 3.526us 0.000us 0.00% 0.000us 0.000us 10 \n cudaDeviceSynchronize 99.66% 45.347ms 99.66% 45.347ms 45.347ms 0.000us 0.00% 0.000us 0.000us 1 \n-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ \nSelf CPU time total: 45.501ms\nSelf CUDA time total: 45.611ms\n\n10\n-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ \n Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls \n-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ \nvoid at::native::vectorized_elementwise_kernel<... 
0.00% 0.000us 0.00% 0.000us 0.000us 47.279ms 99.71% 47.279ms 606.140us 78 \n nvjet_tst_128x64_64x8_1x2_h_bz_NNT 0.00% 0.000us 0.00% 0.000us 0.000us 137.343us 0.29% 137.343us 6.867us 20 \n cudaLaunchKernel 0.26% 121.269us 0.26% 121.269us 3.032us 0.000us 0.00% 0.000us 0.000us 40 \n cuLaunchKernelEx 0.08% 36.090us 0.08% 36.090us 3.609us 0.000us 0.00% 0.000us 0.000us 10 \n cudaDeviceSynchronize 99.67% 47.243ms 99.67% 47.243ms 47.243ms 0.000us 0.00% 0.000us 0.000us 1 \n-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ \nSelf CPU time total: 47.401ms\nSelf CUDA time total: 47.416ms\n```\n\nWhat's problematic:\n\n`nvjet_tst_128x64_64x8_1x2_h_bz_NNT` kernel is recorded 19 times or 20 times, while it should be 10 times by design.\n\nFurther analysis shows that, if I add `torch.cuda.synchronize()` before calling `profiler.step()`, it works as expected.\n\nNote: the code is adapted from https://docs.pytorch.org/tutorials/reci", "url": "https://github.com/pytorch/pytorch/issues/162481", "state": "open", "labels": [ "oncall: profiler" ], "created_at": "2025-09-09T11:58:11Z", "updated_at": "2025-12-01T18:41:45Z", "comments": 5, "user": "youkaichao" }, { "repo": "pytorch/ao", "number": 2948, "title": "Deprecation for Int4WeightOnlyConfig (version 1) and the models", "body": "This issue is tracking the deprecation of the (1) configs (2) model checkpoints quantized with these configs.\n\nWhat is deprecated:\n* We added version 2 Int4WeightOnlyConfig in various PRs in https://github.com/pytorch/ao/issues/2752 and switched the default version to 2 in https://github.com/pytorch/ao/pull/2949, the version 1 config is now deprecated, please use version 2 config to quantize the model\n* the quantized checkpoints quantized with version 1 config previously is deprecated as well, and we plan to remove the support to load these checkpoints after pytorch 2.11 release (around 9 months from now)\n\nTimeline:\n0.14.0: annouce deprecation for version 1 config\nafter all tensors are migrated: remove support for version 1 config\nafter pytorch 2.11 release: remove support for version 1 checkpoints", "url": "https://github.com/pytorch/ao/issues/2948", "state": "open", "labels": [ "tracker" ], "created_at": "2025-09-05T23:31:36Z", "updated_at": "2025-10-02T20:49:44Z", "comments": 0, "user": "jerryzh168" }, { "repo": "pytorch/torchtitan", "number": 1680, "title": "How is SDPA TP parallelized ?", "body": "In llama3, the TransformerBlock is TP parallelized [here](https://github.com/pytorch/torchtitan/blob/21799393c3e6dc710e694ef1a65852f2136ba58d/torchtitan/models/llama3/infra/parallelize.py#L204 ). However, I do not see any specific TP parallelization for scaled_dot_product . How is SDPA TP parallelized then ? ", "url": "https://github.com/pytorch/torchtitan/issues/1680", "state": "open", "labels": [], "created_at": "2025-09-04T03:23:27Z", "updated_at": "2025-09-04T22:11:08Z", "comments": 2, "user": "githubsgi" }, { "repo": "pytorch/vision", "number": 9202, "title": "torch thread yield after launch nccl kernel", "body": "### \ud83d\udc1b Describe the bug\n\nI'm using torch to benchmark nccl performance. The default nccl version that torch uses is 2.21.5. With default setting, the performance looks normal. 
\nThen I use LD_PRELOAD to use the latest nccl version 2.27.7 instead, and the performance degrades drastically.\nnsys shows that with nccl 2.27.7, the thread yield after every nccl call, very close to kernel launch. The yield of torch thread induces launch skew, which causes performance to drop.\n\n\"Image\"\n\nbut with nccl 2.21.5 the thread won't yield, and the benchmark performance looks normal.\n\n\"Image\"\n\nI've check the torch source code in ProcessGroupNCCL.cpp and distributed_c10d.py, but I haven't find a clue.\nHow can I get the right benchmark performance with nccl 2.27.7?\n\nsource code: \n\n[bench.py](https://github.com/user-attachments/files/22094686/bench.py)\ncommand:\n```\n# bench.py\n/usr/local/bin/mpirun --allow-run-as-root -np 8 \\\n\t-x LD_PRELOAD=/root/nccl/build/lib/libnccl.so.2.27.7 \\\n\tpython3 ./bench.py -b 8k -e 1024m -f 2 -n 100 -w 5 --op all_reduce\n\n# nccl-tests\n/usr/local/bin/mpirun --allow-run-as-root -np 8 \\\n\t-x LD_PRELOAD=/root/nccl/build/lib/libnccl.so.2.27.7 \\\n\t/root/nccl-tests/build/all_reduce_perf -b 8k -e 1024m -f 2 -n 100 -w 5 \n```\n\n\n### Versions\n\n\nVersions\n```\nPyTorch version: 2.6.0+cu126\nIs debug build: False\nCUDA used to build PyTorch: 12.6\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.5 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: Could not collect\nCMake version: version 3.28.6\nLibc version: glibc-2.35\n\nPython version: 3.10.12 (main, May 27 2025, 17:12:29) [GCC 11.4.0] (64-bit runtime)\nPython platform: Linux-5.14.0-3.0.3.kwai.x86_64-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 12.6.85\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration:\nGPU 0: NVIDIA H800\nGPU 1: NVIDIA H800\nGPU 2: NVIDIA H800\nGPU 3: NVIDIA H800\nGPU 4: NVIDIA H800\nGPU 5: NVIDIA H800\nGPU 6: NVIDIA H800\nGPU 7: NVIDIA H800\n\nNvidia driver version: 535.129.03\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 192\nOn-line CPU(s) list: 0-191\nVendor ID: GenuineIntel\nBIOS Vendor ID: Intel\nModel name: Intel(R) Xeon(R) Platinum 8468\nBIOS Model name: Intel(R) Xeon(R) Platinum 8468\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 48\nSocket(s): 2\nStepping: 8\nBogoMIPS: 4200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid 
bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 4.5 MiB (96 instances)\nL1i cache: 3 MiB (96 instances)\nL2 cache: 192 MiB (96 instances)\nL3 cache: 210 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,148,150,152,154,156,158,160,162,164,166", "url": "https://github.com/pytorch/vision/issues/9202", "state": "closed", "labels": [], "created_at": "2025-09-02T13:09:26Z", "updated_at": "2025-09-02T13:44:52Z", "comments": 1, "user": "tobi1031" }, { "repo": "pytorch/TensorRT", "number": 3803, "title": "Performance Issue when using tools/llm", "body": "## \u2753 Question\n\n\n\n## What you have already tried\n\n\n\n## Environment\n\n> Build information about Torch-TensorRT can be found by turning on debug messages\n\n - PyTorch Version (e.g., 1.0): 2.8.0\n - CPU Architecture: amd\n - OS (e.g., Linux): ubuntu 22.04\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\n - Build command you used (if compiling from source): NO\n - Are you using local sources or building from archives: NO\n - Python version: 3.10\n - CUDA version: 12.8\n - GPU models and configuration: NVIDIA\n - Any other relevant information: directly use torch-tensorrt 2.8.0 wheel with github 2.8.0 tag to run tools/llm\n\n## Additional context\n\nHi there, I tried to use tools/llm with static_cache_v2 to run qwen2.5 model, and I use such script to run:\n\npython run_llm.py --model Qwen/Qwen2.5-0.5B-Instruct --prompt \"What is parallel programming?\" --precision FP16 --num_tokens 128 --cache static_v2 --benchmark\n\nwhen i use nsight system to profiling, I found that using static_cache_v2 would bring launch overhead to tensorrt engine in each prefill / decode block, do you have this problem too? thought this overhead is too much, almost make torch-tensorrt the same speed compared to just enable torch.compile\n\nhere is the nsys profiling result: the red line shows there is approximately 1.7ms overhead and no gpu activities at all (when disabling static_cache_v2 there is no such bubbles, thought maybe because shape copy or other operators with static_cache_v2?)\n\n\"Image\"\n\nlooking forward to your reply, thanks a lot!\n", "url": "https://github.com/pytorch/TensorRT/issues/3803", "state": "open", "labels": [ "question" ], "created_at": "2025-09-01T17:10:38Z", "updated_at": "2025-09-04T08:43:24Z", "user": "ChiikawaSama" }, { "repo": "pytorch/audio", "number": 4076, "title": "[STABLE ABI] Porting rir/rir.cpp rir/ray_tracing.cpp", "body": "This issue collects tasks that block porting [rir/rir.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/rir/rir.cpp) and [rir/ray_tracing.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/rir/ray_tracing.cpp) to use torch stable ABI.\n\n- [ ] implement `mutable_data_ptr()` and `const_data_ptr()` in torch/csrc/stable/tensor_struct.h. For instance, this simplifies porting of expressions like `tensor.data_ptr()`. Currently, one needs to rewrite this as `reinterpret_cast(tensor.data_ptr())` where tensor is a `torch::stable::Tensor`. 
Not really a blocker but would be nice to have.\n Fix available: https://github.com/pytorch/pytorch/pull/161891\n- [ ] import `arange` as a stable/ops.h factory function\n- [ ] implement `torch::fft::fftshift` and `torch::fft::irfft` as a stable/ops.h operation\n Resolution: delete rir/ray_tracing.cpp as unused\n- [ ] implement `index` as a `torch::stable::Tensor` method. Can we use torch::indexing::Slice() in torch stable ABI code?\n- [ ] expose `AT_DISPATCH_FLOATING_TYPES_AND_HALF` and `AT_DISPATCH_FLOATING_TYPES` to stable ABI. Not really a blocker but would be nice to have.\n For a workaround, see https://github.com/pytorch/audio/issues/4078\n- [ ] implement `zeros` and `full` as a `stable/ops.h` factory functions. Currently, one can use `new_empty` and `fill_` to mimic these functions. Not really a blocker but would be nice to have.\n- [ ] implement `tensor` as a `stable/ops.h` factory function. Currently, one can use `new_empty` but it is really clumsy to mimic `tensor`, especially for CUDA tensors.\n- [ ] implement `dot`, `norm`, and `max` as a `torch::stable::Tensor` method or a `stable/ops.h` operation\n- [ ] implement `item()` as a `torch::stable::Tensor` template method\n For a workaround, see https://github.com/pytorch/audio/issues/4078\n\n^ @NicolasHug @scotts @janeyx99", "url": "https://github.com/pytorch/audio/issues/4076", "state": "closed", "labels": [], "created_at": "2025-08-30T19:46:50Z", "updated_at": "2025-11-04T11:34:21Z", "comments": 2, "user": "pearu" }, { "repo": "pytorch/audio", "number": 4075, "title": "[STABLE ABI] Porting overdrive.cpp", "body": "This issue collects tasks that block porting [overdrive.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/overdrive.cpp) to use torch stable ABI.\n\n- [x] implement `accessor` template as a `torch::stable::Tensor` template method\n Fix available: https://github.com/pytorch/pytorch/pull/161967\n- [x] can we use `at::parallel_for` in torch stable ABI code?\n- [x] expose `AT_DISPATCH_FLOATING_TYPES` to stable ABI, currently one need to implement the dispatch logic using `switch` block. Not a blocker but would nice to have.\n For a workaround, see https://github.com/pytorch/audio/issues/4078\n\n^ @NicolasHug @scotts @janeyx99", "url": "https://github.com/pytorch/audio/issues/4075", "state": "closed", "labels": [], "created_at": "2025-08-30T19:23:39Z", "updated_at": "2025-11-20T14:17:04Z", "comments": 0, "user": "pearu" }, { "repo": "pytorch/audio", "number": 4074, "title": "[STABLE ABI] Porting lfilter.cpp", "body": "This issue collects tasks that block porting [lfilter.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/lfilter.cpp) to use torch stable ABI.\n\n- [x] implement `mutable_data_ptr()` and `const_data_ptr()` in torch/csrc/stable/tensor_struct.h. For instance, this simplifies porting of expressions like `tensor.data_ptr()`. Currently, one needs to rewrite this as `reinterpret_cast(tensor.data_ptr())` where `tensor` is a `torch::stable::Tensor`.\n Fix available: https://github.com/pytorch/pytorch/pull/161891\n- [x] can we use `at::parallel_for` in torch stable ABI code?\n- [x] implement `unsqueeze` as a `stable/ops.h` operation\n- [x] implement `select` as a `stable/ops.h` operation\n- [x] implement `at::matmul` as a `stable/ops.h` operation\n- [x] implement `index_put_` as `torch::stable::Tensor` method or a `stable/ops.h` operation. 
Can we use `torch::indexing::Slice()` in torch stable ABI code?\n\n\n^ @NicolasHug @scotts @janeyx99", "url": "https://github.com/pytorch/audio/issues/4074", "state": "closed", "labels": [], "created_at": "2025-08-30T19:13:55Z", "updated_at": "2025-12-01T09:41:54Z", "comments": 4, "user": "pearu" }, { "repo": "pytorch/ao", "number": 2914, "title": "Support for LR-QAT", "body": "Qualcomm research proposed a technique LR-QAT in their paper \"Low-Rank Quantization-Aware Training for LLMs\".\n\nThe core idea is that the low-rank weights are placed within the quantization grid of the model's weights using a custom downcasting operator.\n\nThe unique advantage of this is that it allows for a low rank adapter to control for the impact of quantization while still being absorbed into the main weights at inference, meaning that there's no inference overhead of the technique (and no lossy upcast to merge a LoRA adapter with, for example, NF4 weights in something like QLoRA).\n\nOnce a language model has been optimized under this framework, it's still suitable for further fine tuning, meaning that if one self distills a single target model using LR-QAT, it can be trained for a variety of downstream applications.\n\nThe memory use is quite favorable (relatively comparable to Q-LoRA), but has a variety of advantages to downstream inference usage.\n\nNow, the good things out of the way, there's a few problems:\n\n- The upcasting operator is a bit of development overhead. It requires a completely bespoke LoRA implementation that's not, to my eye, suitable for integration with existing tools.\n\n- While a lot of logic is shared, I don't think the quantization grid logic will cleanly map into existing code.\n\n- There's also some extra fixed point operators that are going to be a bit of a migraine to deal with.\n\n- While memory-cheap, there is some computational overhead to the technique. I still think it's interesting, and has a lot of really favorable properties, but it's worth bearing in mind.\n\nSo, is there any possibility of or interest in adopting this technique within TorchAO? It's a fairly accessible recipe (particularly for end developers) and its inclusion could mean a fairly rich library of accessible model checkpoints up to about 24B parameters in size (as that's around the limit of what I think most developers will be able to optimize on GPUs at home), and my intuition is that even up to around 70B dense models should be accessible on fairly cheap GPU instances, as well.\n\nSo far as MoE, I think it'd take a lot of consideration because there's already a lot of ecosystem growing pains surrounding them (see: ongoing issues with expert dispatch in the Huggingface Transformers ecosystem which has been inherited by most finetuning frameworks with the notable exception of Torchtune), and many existing implementations have poor support / prospects for funky operators (particularly LoRA, etc).", "url": "https://github.com/pytorch/ao/issues/2914", "state": "open", "labels": [], "created_at": "2025-08-30T18:16:10Z", "updated_at": "2025-09-04T01:14:35Z", "comments": 1, "user": "Juahyori" }, { "repo": "pytorch/torchtitan", "number": 1661, "title": "CPU Mode Request", "body": "Hi all, just getting in to using Torch Titan and have really loved it! One thing I personally would find useful is the ability to do small day to day development on my laptop in a CPU mode. 
I realize that TorchTitan is a distributed training repo, but I think a lot of researchers would still find a CPU dev/debug mode useful (VMs are expensive for just tracking down my latest brand of random bugs that I introduce to my code base \ud83d\ude05 ) \n\nWould there be an appetite for cpu only compatibility? Happy to make a PR as I will be doing this for my own fork. \n\nThanks,\n\nDonal ", "url": "https://github.com/pytorch/torchtitan/issues/1661", "state": "open", "labels": [ "question" ], "created_at": "2025-08-29T07:48:31Z", "updated_at": "2025-08-29T17:32:46Z", "user": "djbyrne" }, { "repo": "pytorch/executorch", "number": 13787, "title": "How to enable XNN_ENABLE_SPARSE in Executorch", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI would like to ask if there is any plan to support XNN_ENABLE_SPARSE in Executorch.\n\nI am working on a model that contains a significant amount of sparse operations, and I believe enabling XNN_ENABLE_SPARSE could lead to a substantial performance improvement.\n\nIs this feature currently supported? If not, are there any plans to add this in the future roadmap? Any guidance on how to enable it or potential workarounds would be greatly appreciated.\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_\n\ncc @digantdesai @mcr229 @cbilgin", "url": "https://github.com/pytorch/executorch/issues/13787", "state": "open", "labels": [ "module: xnnpack" ], "created_at": "2025-08-29T04:04:39Z", "updated_at": "2025-09-08T16:32:36Z", "user": "HKLee2040" }, { "repo": "pytorch/torchtitan", "number": 1653, "title": "Interleaved 1F1B weight-gradient computation decoupling", "body": "Hi torchtitan team,\n\nThe kimi K2 reports apparently do not use dualpipe, and instead use interleaved 1F1B and \"decouple the weight-gradient computation from each micro-batch\u2019s backward pass and execute it in parallel with the corresponding PP communication\" to mitigate the PP communication overhead. I am curious how hard it is to implement this with torchtitan.\n\n\n\"Image\"\n\n\nI tried out interleaved 1F1B in the other thread, but there appear to be significant bubbles:\n\n\"Image\"\n\nSee https://drive.google.com/drive/folders/1F-d-ETeHbRbkAtuTkgApaWiOoGYotSXj?usp=sharing.\n\nNot sure if it's possible to try out kimi K2 style interleaved 1F1B with DeepSeek v3.\n\nThanks!", "url": "https://github.com/pytorch/torchtitan/issues/1653", "state": "open", "labels": [ "question", "module: pipelining" ], "created_at": "2025-08-28T18:21:15Z", "updated_at": "2025-09-05T20:19:24Z", "user": "vwxyzjn" }, { "repo": "pytorch/ao", "number": 2896, "title": "[CPU][FP8][Inductor] How to support fp8 quant for inductor on CPU", "body": "What we want to do is to enable FP8 quantization in PyTorch. Similar to INT8 quantization, this requires inserting quantize and dequantize operations into the computational graph. 
In order to reuse pattern matching logic of int8, we need register FP8 quant and dequant.\n\nTo address this, we attempted to register quant in [#2379](https://github.com/pytorch/ao/pull/2379), but the PR was reverted in [#2672](https://github.com/pytorch/ao/pull/2672) because it caused performance regression on H100 GPUs.\n\nIt will take a lot of effort to find the root cause of GPU regression.\nMaybe we can register quant specifically for CPU, but this requires defining and registering a separate function for CPU.\n@jerryzh168 @vkuzo Do you have some suggestions about it?\ncc @Xia-Weiwen \n\nI create following test to show the issue.\n```python\nimport os\n\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\nos.environ[\"TORCHINDUCTOR_FREEZING\"] = \"1\"\nos.environ[\"TORCH_COMPILE_DEBUG\"] = \"1\"\nos.environ[\"TORCHDYNAMO_PRINT_GUARD_FAILS\"] = \"1\"\n\nimport torch\nimport torchao\n\ndtype = torch.float\nqtype = torch.float8_e4m3fn\n\ndef dequantize_per_tensor(\n tensor: torch.Tensor,\n scale: float,\n output_dtype: torch.dtype\n) -> torch.Tensor:\n res = torchao.quantization.quant_primitives._dequantize_affine_float8(\n tensor=tensor,\n scale=torch.tensor([scale]),\n output_dtype=torch.float\n )\n return res\n\ndef quantize_per_tensor(\n tensor: torch.Tensor,\n scale: float,\n) -> torch.Tensor:\n return torchao.quantization.quant_primitives._quantize_affine_float8(\n tensor=tensor,\n scale=torch.tensor([scale]),\n float8_dtype=torch.float8_e4m3fn,\n )\n\n\nclass FP8QDQLinear(torch.nn.Module):\n def __init__(self, in_features, out_features):\n super().__init__()\n self.weight = torch.randn((out_features, in_features),).to(qtype)\n self.weight_scale = 1.0\n self.scale = 1.0\n self.bias = None\n\n def forward(self, input):\n weight = dequantize_per_tensor(\n self.weight.data,\n self.weight_scale,\n dtype,\n )\n q_input = quantize_per_tensor(\n input,\n self.scale,\n )\n\n dq_input = dequantize_per_tensor(\n q_input,\n self.scale,\n dtype\n )\n out = torch.nn.functional.linear(dq_input, weight, self.bias)\n\n return out\n\nfrom torch._inductor import config as inductor_config\nfrom torch._dynamo import config\n\nconfig.error_on_recompile = True\n#inductor_config.cpp_wrapper = True\ninductor_config.max_autotune = False\ninductor_config.freezing = True\n\ninductor_config.aot_inductor.debug_compile = False\n\n\nmodel = FP8QDQLinear(13, 16)\nexample_inputs = (torch.randn(128, 13),)\n\nwith torch.no_grad():\n refe = model(*example_inputs)\n test_eager = model(*example_inputs)\n model = torch.compile(model)\n model(*example_inputs)\n test = model(*example_inputs)\n```\nOutputting log on [freezing_patterns.py](https://github.com/pytorch/pytorch/blob/a7c949089af218f71daf3ad25f409f75794e6830/torch/_inductor/fx_passes/freezing_patterns.py#L70) shows that the quant has been decomposed to clamp_min, clamp_max and convert_element_type.\n```python\n# print(gm)\n()\n\n\n\ndef forward(self, arg1_1):\n arg0_1 = self._frozen_param0\n full_default = torch.ops.aten.full.default([1], 1.0, dtype = torch.float32, layout = torch.strided, device = device(type='cpu'), pin_memory = False)\n dequantize_affine_float8 = torch.ops.torchao.dequantize_affine_float8.default(arg0_1, full_default); arg0_1 = None\n clamp_min = torch.ops.aten.clamp_min.default(arg1_1, -448.0); arg1_1 = None\n clamp_max = torch.ops.aten.clamp_max.default(clamp_min, 448.0); clamp_min = None\n convert_element_type = torch.ops.prims.convert_element_type.default(clamp_max, torch.float8_e4m3fn); clamp_max = None\n dequantize_affine_float8_1 = 
torch.ops.torchao.dequantize_affine_float8.default(convert_element_type, full_default); convert_element_type = full_default = None\n permute = torch.ops.aten.permute.default(dequantize_affine_float8, [1, 0]); dequantize_affine_float8 = None\n mm = torch.ops.aten.mm.default(dequantize_affine_float8_1, permute); dequantize_affine_float8_1 = permute = None\n return (mm,)\n```\n\nFor comparison, here are the results of int8. Quant will be used as a separate operator(torch.ops.quantized_decomposed.quantize_per_tensor.default).\n```python\ndef forward(self, arg4_1):\n arg0_1 = self._frozen_param0\n arg1_1 = self._frozen_param1\n arg2_1 = self._frozen_param2\n arg3_1 = self._frozen_param3\n dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(arg3_1, arg1_1, arg2_1, 0, -128, 127, torch.int8); arg3_1 = arg1_1 = arg2_1 = None\n quantize_per_tensor = torch.ops.quantized_decomposed.quantize_per_tensor.default(arg4_1, 0.027873406186699867, 128, 0, 255, torch.uint8); arg4_1 = None\n dequantize_per_tensor = torch.ops.quantized_decomposed.dequant", "url": "https://github.com/pytorch/ao/issues/2896", "state": "closed", "labels": [], "created_at": "2025-08-28T06:07:47Z", "updated_at": "2025-09-21T09:53:16Z", "user": "shiyang-weng" }, { "repo": "pytorch/vision", "number": 9196, "title": "Why am I getting a discrepency between SSDLite Scores and the Full Probability Vector?", "body": "I am noticitng a slight discrepency between the scores output by the SSDLite model and the Full Probability Vector you get from feeding the features extracted from the backbone through the model head. While the difference is slight, around .004, I find the behavior peculiar and cant find an explanation. Please see the code below:\n\n```\nimport torch\nfrom torchvision.models.detection import ssdlite320_mobilenet_v3_large\nfrom torchvision.transforms import functional as F\nfrom PIL import Image\nimport requests \n\nmodel = ssdlite320_mobilenet_v3_large(weights=True)\nmodel.eval();\n\nmodel_categories_url = 'https://raw.githubusercontent.com/levan92/coco-classes-mapping/refs/heads/master/coco91.names'\nmodel_categories = requests.get(model_categories_url).text.split('\\n')\nmodel_categories.insert(0, '')\n\nimage_url = 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg'\nimage_label = 'dog'\nimg = Image.open(requests.get(image_url, stream = True).raw)\nimg_tensor = F.to_tensor(img)\nimg_tensor = F.resize(img_tensor, [320, 320])\nimg_tensor = img_tensor.unsqueeze(0)\n\nwith torch.no_grad():\n # 1. Pass through backbone and head\n backbone_output = model.backbone(img_tensor)\n \n #2 Convert OrderedDict to list of feature maps\n features = list(backbone_output.values())\n head_outputs = model.head(features)\n\n # 3. Compute class logits (before NMS)\n class_logits = head_outputs['cls_logits'] # shape: [batch_size, num_anchors, num_classes]\n bbox_regression = head_outputs['bbox_regression']\n\n # 4. 
Apply softmax to get probabilities\n class_probs = torch.softmax(class_logits, dim=-1) # shape: [1, num_anchors, num_classes]\n\nclass_index = model_categories.index(image_label)\nclass_detections = class_probs[0, (class_probs[0].argmax(dim = 1) == class_index)]\nsorted_indices = torch.argsort(class_detections[:, class_index], descending = True)\nprint(f\"Full Probability Vector Max Value: {class_detections[sorted_indices][0].max(): .4f}\")\n\nimg_tensor = img_tensor.detach()\nmodel_output = model(img_tensor)\nprint(f\"Model Output Max Sore: {model_output[0]['scores'][0]: .4f}\")\n\n>>> Full Probability Vector Max Value: 0.9851\n>>> Model Output Max Sore: 0.9890\n```", "url": "https://github.com/pytorch/vision/issues/9196", "state": "closed", "labels": [ "question" ], "created_at": "2025-08-28T04:24:56Z", "updated_at": "2025-09-06T14:58:19Z", "user": "Aneesh-Sandhir" }, { "repo": "pytorch/ao", "number": 2862, "title": "Duplicated tests in test_mx_tensor.py and test_nvfp4_tensor.py?", "body": "seems like there are some duplicated tests, e.g. https://github.com/pytorch/ao/blob/27f4d7581f8fc6bab4ef37d54b09b6fa76c1ffe6/test/prototype/mx_formats/test_mx_tensor.py#L610 and https://github.com/pytorch/ao/blob/27f4d7581f8fc6bab4ef37d54b09b6fa76c1ffe6/test/prototype/mx_formats/test_nvfp4_tensor.py#L47", "url": "https://github.com/pytorch/ao/issues/2862", "state": "open", "labels": [], "created_at": "2025-08-23T03:26:13Z", "updated_at": "2025-08-23T03:26:25Z", "comments": 0, "user": "jerryzh168" }, { "repo": "pytorch/executorch", "number": 13607, "title": "\"How to Support a Custom Model in HTP Backend\" example code is out of date", "body": "### \ud83d\udcda The doc issue\n\nIn the \"How to Support a Custom Model in HTP Backend\" section of the QNN backend docs, there are a few imports that do not work. It looks like they might have moved in code, but missed in the docs. Specifically, the imports under `executorch.backends.qualcomm.compiler` and for `to_edge_transform_and_lower_to_qnn` need to be updated in the example code.\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @mergennachin @byjlw @cccclai @cbilgin", "url": "https://github.com/pytorch/executorch/issues/13607", "state": "closed", "labels": [ "module: doc", "module: qnn" ], "created_at": "2025-08-22T20:53:38Z", "updated_at": "2025-09-30T22:34:54Z", "user": "GregoryComer" }, { "repo": "pytorch/xla", "number": 9578, "title": "API for disabling SPMD?", "body": "The side effects of use_spmd() do not seem reversible through any obvious APIs. \n\nhttps://github.com/pytorch/xla/blob/6b6ef5c7d757f955565b2083c48d936bfd758dcd/torch_xla/runtime.py#L191-L231\n\nIs there some mechanism to do this? 
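For the SSDLite score-discrepancy question above, one easily overlooked difference is that the end-to-end `model(...)` call runs its own preprocessing (`model.transform`, which resizes and normalizes) before the backbone, while the manual pipeline feeds a hand-resized tensor straight in. A hedged sketch of probing the head on the exact tensor the model itself would see, assuming the standard torchvision detection layout where the model exposes `transform`, `backbone`, and `head`:

```python
import torch

with torch.no_grad():
    # Let the model's own transform perform the resize/normalization it uses internally.
    image_list, _ = model.transform([img_tensor.squeeze(0)], None)
    features = list(model.backbone(image_list.tensors).values())
    head_outputs = model.head(features)
    class_probs = torch.softmax(head_outputs["cls_logits"], dim=-1)
```

Comparing `image_list.tensors` against the manually resized tensor should show whether preprocessing accounts for the ~0.004 gap.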
\n\n", "url": "https://github.com/pytorch/xla/issues/9578", "state": "open", "labels": [ "enhancement", "distributed" ], "created_at": "2025-08-22T19:28:18Z", "updated_at": "2025-08-23T13:49:31Z", "comments": 1, "user": "jameszianxuTT" }, { "repo": "pytorch/torchtitan", "number": 1612, "title": "PP doesn't work with FlexAttention", "body": "Today PP doesn't work with FlexAttention block causal masking, because PP can't receive `eos_id` as a non-Tensor input (nor can it receive a mask function).\nhttps://github.com/pytorch/torchtitan/blob/main/torchtitan/train.py#L433\n\nThis regression is coming from a recent refactor https://github.com/pytorch/torchtitan/pull/1424 to move `eos_id` out of `ModelArgs`, to remove dependency from model to tokenizer.\n\nThis is blocking optimizations from https://github.com/pytorch/torchtitan/pull/1610.", "url": "https://github.com/pytorch/torchtitan/issues/1612", "state": "closed", "labels": [ "module: pipelining", "high priority", "module: flex attention", "triage review" ], "created_at": "2025-08-21T07:25:15Z", "updated_at": "2025-08-22T15:35:06Z", "comments": 0, "user": "tianyu-l" }, { "repo": "pytorch/ao", "number": 2828, "title": "[fp8 blockwise training] add benchmarking scripts comparing triton quantization kernels vs torch.compile", "body": "## Summary\n- We currently have benchmarking scripts comparing bf16 GEMMs vs Triton fp8 groupwise/blockwise GEMMs vs torch.compile generated fp8 groupwise/blockwise GEMMs [here](https://github.com/pytorch/ao/tree/main/benchmarks/prototype/blockwise_fp8_training)\n- However, we have no benchmarks mentioning the quantization kernels and doing memory bandwidth calculations on them. \n- We need isolated perf benchmarking for these, in order to (1) evaluate options, such as torch.compile vs handwritten kernels, and (2) measure perf improvements/regeressions from changes\n\n## Example\n- An example of a benchmarking script for quantization kernel (with mem bw calcs) can be found [here](https://github.com/pytorch/ao/blob/main/benchmarks/prototype/moe_training/benchmark_rowwise_3d_quant_kernels.py). This can be used as a starting point. 
For consistency with other benchmarking tooling, please use the same generic infra (`ExperimentConfig`, `ExperimentResult` etc).\n\n## Kernels to benchmark\n- [fp8_blockwise_act_quant_lhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L307C5-L307C32)\n- [fp8_blockwise_act_quant_rhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L387C5-L387C32)\n- [fp8_blockwise_act_quant_transposed_lhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L486)\n- [fp8_blockwise_weight_quant_rhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L571)\n- [fp8_blockwise_weight_quant_transposed_rhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L672)\n- [torch_blockwise_scale_act_quant_lhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L713C5-L713C40) (pytorch reference implementation, bench with torch.compile)\n- [torch_blockwise_scale_act_quant_rhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L744C5-L744C40) (pytorch reference implementation, bench with torch.compile)\n- [torch_blockwise_scale_weight_quant](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L803) (pytorch reference implementation, bench with torch.compile)", "url": "https://github.com/pytorch/ao/issues/2828", "state": "open", "labels": [], "created_at": "2025-08-21T00:35:55Z", "updated_at": "2025-08-21T00:37:13Z", "comments": 0, "user": "danielvegamyhre" }, { "repo": "pytorch/torchtitan", "number": 1605, "title": "How could I run the DeepSpeed-Megatron gpt_model in TorchTitan ?", "body": "Here is the model I would like to run with TorchTitan\nhttps://github.com/deepspeedai/Megatron-DeepSpeed/blob/main/megatron/model/gpt_model.py#L188 .\n\nAny recommendation will be appreciated. \n\n\n", "url": "https://github.com/pytorch/torchtitan/issues/1605", "state": "closed", "labels": [ "question" ], "created_at": "2025-08-20T18:55:50Z", "updated_at": "2025-08-21T02:34:22Z", "user": "githubsgi" }, { "repo": "pytorch/pytorch", "number": 161060, "title": "[Question] How to robustly prevent operator fusion in Inductor to workaround a compilation bug?", "body": "### \ud83d\udc1b Describe the bug\n\nI've encountered a Triton compilation failure when using torch.compile with the AOT Inductor backend. The issue appears in a model that uses a computation pattern similar to Rotary Position Embeddings (RoPE).\n\nI'm opening this issue in advance while I work on creating a minimal, self-contained reproducer for a compiler bug, as that process may take some time. 
My immediate goal is to seek advice on how to effectively workaround the issue.\n\nSource code just belike:\n```Python\n _to_copy_default_17 = torch.ops.aten._to_copy.default(detach_default_8, dtype = torch.int64, layout = torch.strided, device = device(type='cuda', index=0))\n unsqueeze_default_86 = torch.ops.aten.unsqueeze.default(_to_copy_default_17, 1); _to_copy_default_17 = None\n unsqueeze_default_87 = torch.ops.aten.unsqueeze.default(unsqueeze_default_86, -1); unsqueeze_default_86 = None\n _tensor_constant12 = self._tensor_constant12\n mul_tensor_5 = torch.ops.aten.mul.Tensor(unsqueeze_default_87, _tensor_constant12); unsqueeze_default_87 = _tensor_constant12 = None\n cos_default_1 = torch.ops.aten.cos.default(mul_tensor_5)\n sin_default_1 = torch.ops.aten.sin.default(mul_tensor_5); mul_tensor_5 = None\n split_tensor_1 = torch.ops.aten.split.Tensor(transpose_int_1, 64, -1); transpose_int_1 = None\n getitem_6 = split_tensor_1[0]\n getitem_7 = split_tensor_1[1]; split_tensor_1 = None\n mul_tensor_6 = torch.ops.aten.mul.Tensor(getitem_6, cos_default_1)\n mul_tensor_7 = torch.ops.aten.mul.Tensor(getitem_7, sin_default_1)\n return mul_tensor_7 \n```\n\nAnd my demo code is:\n```\n with torch.inference_mode():\n with torch.amp.autocast(\n device_type=\"cuda\", enabled=True, dtype=torch.float16\n ):\n exported_model = torch.export.export(\n mod = model,\n args = (),\n kwargs = inputs_dict,\n dynamic_shapes = {k: {0:torch.export.Dim.STATIC} for k in inputs_dict.keys()}\n )\n \n inductor_configs = {\n \"max_autotune\": False, \n }\n aoti_package_path = torch._inductor.aoti_compile_and_package(\n exported_model, \n package_path=os.path.join(os.path.dirname(__file__), \"wenqi_ele_0820.pt2\"),\n inductor_configs=inductor_configs\n )\n```\n\nThe compiler attempts to create a large fused kernel, but the generated Triton code is invalid, leading to a NameError: 'zuf0' is not defined during compilation. 
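On the workaround question, two Inductor knobs that exist in recent releases can discourage the large fusions that seem to trigger this failure; treat the exact names and their effect on this particular graph as assumptions to verify against the installed version rather than a guaranteed fix.

```python
import torch._inductor.config as inductor_config

# Discourage Inductor from building very large fused Triton kernels.
inductor_config.max_fusion_size = 1      # cap the number of nodes fused together
inductor_config.epilogue_fusion = False  # keep GEMM epilogues as separate kernels
```

If those do not help, the heavier-weight fallback is wrapping the offending subgraph in a custom op so it stays opaque to the compiler.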
I am working on creating a minimal, self-contained reproducer and will provide it as soon as it's ready.\n\n```\nE0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] module = src.make_ir(options, codegen_fns, module_map, context)\nE0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nE0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] File \"/home/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/triton/compiler/compiler.py\", line 81, in make_ir\nE0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns,\nE0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nE0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] triton.compiler.errors.CompilationError: at 21:12:\nE0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] tmp3 = tl.load(in_ptr1 + (0))\nE0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] tmp4 = tl.broadcast_to(tmp3, [XBLOCK])\nE0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] tmp16 = tl.load(in_ptr2 + (x1), xmask, eviction_policy='evict_last')\nE0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] tmp22 = tl.load(in_ptr3 + (x0), xmask, eviction_policy='evict_last'", "url": "https://github.com/pytorch/pytorch/issues/161060", "state": "closed", "labels": [ "oncall: pt2" ], "created_at": "2025-08-20T15:44:48Z", "updated_at": "2025-08-21T10:10:23Z", "user": "sujuyu" }, { "repo": "pytorch/torchrec", "number": 3298, "title": "apply 2d parallel but how to save and restore weights", "body": "how to save and restore weights when applying 2d parallel ?", "url": "https://github.com/meta-pytorch/torchrec/issues/3298", "state": "closed", "labels": [], "created_at": "2025-08-20T10:42:19Z", "updated_at": "2025-08-21T01:21:42Z", "comments": 0, "user": "zxr888" }, { "repo": "pytorch/ao", "number": 2811, "title": "NVFP4Tensor to_copy is wrong?", "body": "```\n>>> from torchao.prototype.mx_formats.nvfp4_tensor import NVFP4Tensor\n>>> import torch\n>>> torch.ops.aten._to_copy(NVFP4Tensor.to_nvfp4(torch.randn((32, 128))), dtype=torch.bfloat16)\n\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"/home/andrewor/local/pytorch/torch/_ops.py\", line 1254, in __call__\n return self._op(*args, **kwargs)\n File \"/home/andrewor/local/ao/torchao/prototype/mx_formats/nvfp4_tensor.py\", line 137, in __torch_dispatch__\n return 
NVFP4_OPS_TABLE[func](func, types, args, kwargs)\n File \"/home/andrewor/local/ao/torchao/prototype/mx_formats/nvfp4_tensor.py\", line 316, in nvfp4_to_copy\n tensor._data,\nAttributeError: 'NVFP4Tensor' object has no attribute '_data'\n```\n\nSeems like this should be `tensor.qdata`, and also it should be the [first argument](https://github.com/pytorch/ao/blob/083361bc3f7addc505a0f994a923f4ae9f54388e/torchao/prototype/mx_formats/nvfp4_tensor.py#L93)?\n\nhttps://github.com/pytorch/ao/blob/083361bc3f7addc505a0f994a923f4ae9f54388e/torchao/prototype/mx_formats/nvfp4_tensor.py#L311-L322", "url": "https://github.com/pytorch/ao/issues/2811", "state": "closed", "labels": [], "created_at": "2025-08-19T23:39:31Z", "updated_at": "2025-08-22T21:59:15Z", "comments": 0, "user": "andrewor14" }, { "repo": "pytorch/xla", "number": 9569, "title": "Remove excessive warn message in maybe_get_jax as it creates too many log lines during training", "body": "## \ud83d\udc1b Bug\n\nThe maybe_get_jax() function in torch_xla/_internal/jax_workarounds.py merged in #9521 currently emits a warning message when JAX is not installed. While informative, this warning results in an excessive number of log lines during training workloads, cluttering the logs and making it difficult to spot genuinely important debug messages.\n\n## To Reproduce\n\nSteps to reproduce the behavior:\n\n1. Create Python Virtual Environment (python3 -m venv ptxla_28) on Ubuntu 22.04\n2. pip install torch==2.8.0 torchvision; pip install torch_xla==2.8.0\n3. Create small python script(let's call it trigger_warning.py) \n``` \nimport sys\nsys.path.insert(0, 'ptxla_28/lib/python3.10/site-packages')\nfrom torch_xla._internal.jax_workarounds import maybe_get_jax\nmaybe_get_jax() \n```\n5. execute the script `bash -c \"source ptxla_28/bin/activate && python trigger_warning.py\"`\n6. You should be able to see the warning message like below\n\n```\nWARNING:root:Defaulting to PJRT_DEVICE=CPU\nWARNING:root:You are trying to use a feature that requires jax/pallas.You can install Jax/Pallas via pip install torch_xla[pallas]\n```\n\n## Expected behavior\n\nRemove or suppress this warning message, or limit it to display only once per process/session instead of for every invocation.\n\n## Environment\n\n - Reproducible on XLA backend [CPU/TPU/CUDA]: CPU\n - torch_xla version: 2.8.0\n - Relevant Code:\nhttps://github.com/pytorch/xla/blob/0f56dec9a33a993d4c14cb755bdd25490cabba21/torch_xla/_internal/jax_workarounds.py#L61\n\n\n## Additional context\n\nThe current behavior results in thousands of lines of repeated warnings when running workloads that do not require JAX, negatively impacting developer experience. Reducing or removing this warning will significantly clean up logs for users running long or large-scale training jobs, improving usability without sacrificing relevant error reporting.\n", "url": "https://github.com/pytorch/xla/issues/9569", "state": "open", "labels": [ "performance", "usability", "2.8 release" ], "created_at": "2025-08-19T20:27:24Z", "updated_at": "2025-10-11T02:52:17Z", "comments": 10, "user": "rajkthakur" }, { "repo": "pytorch/TensorRT", "number": 3786, "title": "How to convert a AMP trained model to get best performance and speed?", "body": "According to the doc: https://docs.pytorch.org/TensorRT/user_guide/mixed_precision.html We can convert model with this project where the param precision are explicitly said in the code. 
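For the `maybe_get_jax` log-spam report above, the "warn only once per process" option it suggests is small to implement; a generic sketch (not torch_xla's actual code):

```python
import functools
import logging

@functools.lru_cache(maxsize=None)
def _warn_once(message: str) -> None:
    # lru_cache makes repeated calls with the same message a no-op after the first.
    logging.warning(message)
```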
But when I train a model with torch AMP GradScaler where no value precision tagged in model code, Can we use this method to get a conerted chackpoint with best performance and inference speedup?\n\n\nIn fect, we had tried the torch pt->onnx-> tensorrt fp16 pipeline to convert pytorch AMP trained checkpoint into trt model format, but the inference results are noisey. while pt->onnx-> tensorrt fp32 pipeline will get a trt fp32 model the inference slower then what we need. ", "url": "https://github.com/pytorch/TensorRT/issues/3786", "state": "open", "labels": [], "created_at": "2025-08-19T07:30:31Z", "updated_at": "2025-10-23T00:20:02Z", "user": "JohnHerry" }, { "repo": "pytorch/pytorch", "number": 160833, "title": "How to address the bug 'unwaited collective calls' when using DTensor?", "body": "### \ud83d\udc1b Describe the bug\n\n\nI have called .wait() like this:\n\n```\ndef custom_wait(_dtensor):\n _local_t = _dtensor.to_local()\n if isinstance(_local_t, AsyncCollectiveTensor):\n _local_t.wait()\n```\n\nBut it still has a BUG:\n\n```\n[W817 11:39:12.975673267 ProcessGroup.cpp:266] Warning: At the time of process termination, there are still 348 unwaited collective calls. Please review your program to ensure that:\n1. c10d_functional.wait_tensor() is invoked on all tensors returned from c10d_functional collective,\n2. c10d_functional.wait_tensor() is invoked on all output tensors of async_op=True torch.distributed collective called under `with allow_inflight_collective_as_graph_input_ctx():`,\nbefore the output tensors of the collective are used. (function ~WorkRegistry)\n\n```\n\n\nSince Dtensor does not have a method like `DTensor.wait()`, I have no idea how to handle it or how to safely use or delete it.\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.8.0+cu126\nIs debug build: False\nCUDA used to build PyTorch: 12.6\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.1 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: Could not collect\nCMake version: Could not collect\nLibc version: glibc-2.39\n\nPython version: 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] (64-bit runtime)\nPython platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: 12.0.140\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti\nNvidia driver version: 550.127.05\ncuDNN version: Could not collect\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\n\u67b6\u6784\uff1a x86_64\nCPU \u8fd0\u884c\u6a21\u5f0f\uff1a 32-bit, 64-bit\nAddress sizes: 39 bits physical, 48 bits virtual\n\u5b57\u8282\u5e8f\uff1a Little Endian\nCPU: 8\n\u5728\u7ebf CPU \u5217\u8868\uff1a 0-7\n\u5382\u5546 ID\uff1a GenuineIntel\n\u578b\u53f7\u540d\u79f0\uff1a Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz\nCPU \u7cfb\u5217\uff1a 6\n\u578b\u53f7\uff1a 158\n\u6bcf\u4e2a\u6838\u7684\u7ebf\u7a0b\u6570\uff1a 1\n\u6bcf\u4e2a\u5ea7\u7684\u6838\u6570\uff1a 8\n\u5ea7\uff1a 1\n\u6b65\u8fdb\uff1a 13\nCPU(s) scaling MHz: 93%\nCPU \u6700\u5927 MHz\uff1a 4900.0000\nCPU \u6700\u5c0f MHz\uff1a 800.0000\nBogoMIPS\uff1a 7200.00\n\u6807\u8bb0\uff1a fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma 
cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities\n\u865a\u62df\u5316\uff1a VT-x\nL1d \u7f13\u5b58\uff1a 256 KiB (8 instances)\nL1i \u7f13\u5b58\uff1a 256 KiB (8 instances)\nL2 \u7f13\u5b58\uff1a 2 MiB (8 instances)\nL3 \u7f13\u5b58\uff1a 12 MiB (1 instance)\nNUMA \u8282\u70b9\uff1a 1\nNUMA \u8282\u70b90 CPU\uff1a 0-7\nVulnerability Gather data sampling: Mitigation; Microcode\nVulnerability Itlb multihit: KVM: Mitigation: VMX disabled\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Mitigation; Enhanced IBRS\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop\nVulnerability Srbds: Mitigation; Microcode\nVulnerability Tsx async abort: Mitigation; TSX disabled\n\nVersions of relevant libraries:\n[pip3] numpy==2.1.3\n[pip3] nvidia-cublas-cu12==12.6.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.6.80\n[pip3] nvidia-cuda-nvrtc-cu12==12.6.77\n[pip3] nvidia-cuda-runtime-cu12==12.6.77\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cufft-cu12==11.3.0.4\n[pip3] nvidia-curand-cu12==10.3.7.77\n[p", "url": "https://github.com/pytorch/pytorch/issues/160833", "state": "open", "labels": [ "high priority", "triage review", "needs reproduction", "oncall: distributed" ], "created_at": "2025-08-17T03:54:50Z", "updated_at": "2026-01-03T06:31:42Z", "user": "arminzhu" }, { "repo": "pytorch/data", "number": 1506, "title": "v0.12.0 (or 0.11.1?) release timeline", "body": "Hi!\n\nIs there a timeline for the next stable release?", "url": "https://github.com/meta-pytorch/data/issues/1506", "state": "open", "labels": [], "created_at": "2025-08-16T21:39:05Z", "updated_at": "2026-01-02T22:27:59Z", "comments": 3, "user": "mirceamironenco" }, { "repo": "pytorch/torchtitan", "number": 1576, "title": "API for custom metric reporting?", "body": "It would be nice if it were easier to report custom metrics for particular models more easily, but currently this seems to require changing `train.py` and/or modifying `MetricsProcessor` in some invasive way.\n\nCould we introduce an easier mechanism for reporting additional metrics for specific models? A specific use case is to log EP token routing metrics, like [shown here](https://github.com/pytorch/torchtitan/issues/1467#issuecomment-3130249678) and whose custom implementation is [here](https://github.com/rakkit/torchtitan/blob/95732cac15e3c48983328961210b9e0b61e02b1d/torchtitan/train.py?plain=1#L581-L585). 
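On the unwaited-collectives warning above (pytorch #160833): `AsyncCollectiveTensor.wait()` returns the resolved tensor, so a helper along these lines should keep and consume that return value, and it needs to be applied to every DTensor whose redistribution issued an async collective. Whether this clears the 348 pending works in that report is not verified here; the sketch only shows the pattern.

```python
import torch
from torch.distributed._functional_collectives import AsyncCollectiveTensor
from torch.distributed.tensor import DTensor

def wait_local(dtensor: DTensor) -> torch.Tensor:
    # Resolve any pending async collective behind the DTensor's local shard and
    # return the concrete tensor so downstream code consumes the waited value.
    local = dtensor.to_local()
    if isinstance(local, AsyncCollectiveTensor):
        local = local.wait()
    return local
```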
I also sometimes want to track activation magnitudes at various layers.\n\nOne idea I had is to leverage the `extra_metrics` arg of the `MetricsProcessor.log` method, which is currently only used to log the lr and number of toks seen:\nhttps://github.com/pytorch/torchtitan/blob/6fc499f6f5b32151a799188be2208cfb09faed30/torchtitan/train.py?plain=1#L517-L527\n\nWe could do something like give `ModelProtocol` a `get_extra_metrics` method:\n```py\nclass ModelProtocol(Protocol):\n [...]\n def get_extra_metrics(self) -> None | dict:\n return None\n```\n\nand modify the reporting code to something like:\n```py\nextra_metrics = {\n \"n_tokens_seen\": global_ntokens_seen,\n \"lr\": lr,\n}\ncustom_metrics = [mp.get_custom_metrics() for mp in self.model_parts]\nfor cm in custom_metrics:\n if cm is not None:\n extra_metrics.update(custom_metrics)\nself.metrics_processor.log(\n self.step,\n global_avg_loss,\n global_max_loss,\n grad_norm.item(),\n extra_metrics=extra_metrics,\n)\n```\nThis can get a bit confusing in complex PP cases, but it's a start. \n\nThoughts? CC @tianyu-l @rakkit", "url": "https://github.com/pytorch/torchtitan/issues/1576", "state": "open", "labels": [], "created_at": "2025-08-15T01:30:37Z", "updated_at": "2025-08-16T00:32:18Z", "comments": 4, "user": "garrett361" }, { "repo": "pytorch/torchtitan", "number": 1574, "title": "Will Dinov3 be included as a model in torchtitan?", "body": "Newly released models from Meta dropped for Dino, will this be included for torchtitan?\n\nhttps://github.com/facebookresearch/dinov3", "url": "https://github.com/pytorch/torchtitan/issues/1574", "state": "open", "labels": [], "created_at": "2025-08-14T21:08:05Z", "updated_at": "2025-08-21T03:23:59Z", "comments": 1, "user": "kmccaffr2023" }, { "repo": "pytorch/TensorRT", "number": 3779, "title": "Announcement: PyTorch org (and TensorRt) will be offered to PyTorch Foundation", "body": "Hey folks, heads up that as part of PyTorch [moving to the PyTorch Foundation](https://pytorch.org/blog/PyTorchfoundation/). Meta will be handing ownership of the PyTorch github organization over to the PyTorch Foundation, along with all the repos in it. \n\n**What's the impact?** \nTechnical ownership of the repos given (roadmap, dev work, etc) will continue to be driven by the same people doing it today, and business ownership (marketing efforts, trademark protection, etc) will be given to the foundation.\n\nMeta will be moving out any repos that Meta or LF doesn\u2019t think are a good fit for the foundation custodianship (based largely around the [foundation requirements](https://github.com/pytorch-fdn/foundation-hosted/blob/main/governance/foundation-hosted-project-process.md#eligibility-criteria)) and placing them in the [meta-pytorch](https://github.com/meta-pytorch) github org (previously called `pytorch-labs`)\n\n**What will happen to this repo?**\nAs a community project, we\u2019ll be letting `pytorch/TensorRt` go to the PyTorch Foundation (so it\u2019ll stay at github.com/pytorch/TensorRt) as [foundation project](https://pytorch.org/blog/pt-foundation-expands/). 
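For the custom-metrics request above, activation statistics in particular need no trainer changes at all; a hedged sketch of collecting per-layer activation RMS with forward hooks into a dict that could then be merged into whatever `extra_metrics`-style logging the trainer exposes. The hook math is plain PyTorch, not a TorchTitan API.

```python
import torch
import torch.nn as nn

def attach_activation_rms_hooks(model: nn.Module, metrics: dict[str, float]):
    handles = []
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            def hook(mod, inputs, output, name=name):
                with torch.no_grad():
                    # .item() syncs the device; gate this on logging steps in practice.
                    metrics[f"act_rms/{name}"] = output.detach().float().pow(2).mean().sqrt().item()
            handles.append(module.register_forward_hook(hook))
    return handles  # call .remove() on each handle to detach
```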
If the PyTorch Foundation decides not to accept this repo (for not meeting the [foundation requirements](https://github.com/pytorch-fdn/foundation-hosted/blob/main/governance/foundation-hosted-project-process.md#eligibility-criteria)) then we'll default to moving this repo to [meta-pytorch](https://github.com/meta-pytorch).\n\nPlease let me know if you have any concerns", "url": "https://github.com/pytorch/TensorRT/issues/3779", "state": "open", "labels": [ "question" ], "created_at": "2025-08-14T17:15:12Z", "updated_at": "2025-08-14T19:53:22Z", "user": "ZainRizvi" }, { "repo": "pytorch/pytorch", "number": 160648, "title": "How to Use Pipeline Parallelism in Multi-input Models", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI am developing a multimodal model and would like to use the pipeline feature of torch. However, I found that the samples in the introductory docs are rather simple, and they all have only single-input, single-output scenarios. I would like to know how to use the pipeline function for multi-input, single-output models. How to cut the model, can you help me to provide a complete sample or related documents.\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta", "url": "https://github.com/pytorch/pytorch/issues/160648", "state": "open", "labels": [ "oncall: distributed", "module: pipelining" ], "created_at": "2025-08-14T15:41:44Z", "updated_at": "2025-08-20T03:05:32Z", "user": "Bin1024" }, { "repo": "pytorch/tutorials", "number": 3518, "title": "[BUG] - ", "body": "### Add Link\n\nnone ...\n\n### Describe the bug\n\ni use a same .pt model, test is in a same computer, but libtorch is slower than pytorch 30~40%.\nin python, 30 times inference only 18 ms AVG , but in C++ libtorch needs 24ms AVG.\ni am using CUDA 12.8 \uff0c CUDNN 9.5.1 and libtorch 2.8\nmy codes are below..\n\n`\n\n#include \n#include \n#include \n#include \n#include \nint main() {\n // 1. choose device..\n torch::Device device = torch::kCPU;\n if (torch::cuda::is_available()) {\n device = torch::kCUDA;\n std::cout << \"CUDA is available! Using GPU.\" << std::endl;\n\n if (torch::cuda::cudnn_is_available()) {\n std::cout << \"\u2705 cuDNN is available and will be used.\" << std::endl;\n } else {\n std::cout << \"\u274c cuDNN is NOT available. Performance may be suboptimal.\" << std::endl;\n }\n }\n\n // 2. load model\n torch::jit::Module module;\n try {\n module = torch::jit::load(\"/home/bingyu/profile_model/rf202508011_74.pt\", device);\n module.eval();\n std::cout << \"Model loaded successfully.\" << std::endl;\n } catch (const c10::Error& e) {\n std::cerr << \"Error loading model: \" << e.what() << std::endl;\n return -1;\n }\n\n // 3. defination shapes\n const int64_t BATCH_SIZE = 1;\n const int64_t JOINT_NUM = 14;\n const int64_t STATES_HORIZON = 12;\n const int64_t SEQ_LEN = 50;\n const int64_t NUM_CAMERAS = 4;\n const int64_t IMG_C = 3;\n const int64_t IMG_H = 480;\n const int64_t IMG_W = 640;\n\n // 4. create input tensor\n auto qpos = torch::randn({BATCH_SIZE, STATES_HORIZON, JOINT_NUM}, device);\n auto image = torch::randn({BATCH_SIZE, NUM_CAMERAS, IMG_C, IMG_H, IMG_W}, device);\n auto noise = torch::randn({BATCH_SIZE, SEQ_LEN, JOINT_NUM}, device);\n std::vector inputs = {qpos, image, noise};\n\n // 5. 
warm up ...\n std::cout << \"\\nWarming up model...\" << std::endl;\n for (int i = 0; i < 5; ++i) {\n torch::NoGradGuard no_grad;\n module.forward(inputs);\n }\n std::cout << \"Warm-up completed.\" << std::endl;\n\n // 6. testing..\n const int total_times = 10;\n double total_elapsed = 0.0;\n\n std::cout << \"\\nRunning inference...\" << std::endl;\n for (int i = 0; i < total_times; ++i) {\n torch::NoGradGuard no_grad; // \u5728\u8fd9\u4e2a\u4f5c\u7528\u57df\u5185\uff0c\u4e0d\u8ba1\u7b97\u68af\u5ea6\n\n auto start = std::chrono::high_resolution_clock::now();\n\n auto output = module.forward(inputs).toTensor();\n\n auto end = std::chrono::high_resolution_clock::now();\n\n auto duration = std::chrono::duration_cast(end - start);\n total_elapsed += duration.count();\n\n std::cout << \"Inference \" << i << \" time: \" << duration.count() << \" \u03bcs\" << std::endl;\n }\n\n double avg_time = total_elapsed / total_times;\n std::cout << \"\\nAverage inference time: \" << avg_time << \" \u03bcs (\"\n << avg_time / 1000.0 << \" ms)\" << std::endl;\n\n return 0;\n}\n`\n\n\n\npython code is below..\n`\n\nimport torch\nimport os\nimport time\n\n\n\nMODEL_PATH = \"/home/bingyu/profile_model/rf202508011_74.pt\"\n\nINPUT_SHAPES = [\n (1, 12, 14), # qpos\n (1, 4, 3, 480, 640), # image\n (1, 50, 14) # noise\n]\n\nWARMUP_ITER = 10\nINFERENCE_RUNS = 10\n\n\ndef main():\n\n if not os.path.exists(MODEL_PATH):\n return\n\n device_str = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n device = torch.device(device_str)\n print(f\"\ud83d\ude80 usisng device: {device_str.upper()}\")\n print(f\"\ud83d\udcc2 loading model : {MODEL_PATH}\")\n\n try:\n model = torch.jit.load(MODEL_PATH)\n model.to(device)\n model.eval()\n print(\"\u2705 model success!\")\n except Exception as e:\n print(f\"\u274c model load failed\u3002\\n {e}\")\n return\n\n try:\n inputs = [torch.randn(shape, device=device) for shape in INPUT_SHAPES]\n except Exception as e:\n print(f\"\u274c error INPUT_SHAPES\u3002\\n {e}\")\n return\n\n\n with torch.no_grad():\n for _ in range(WARMUP_ITER):\n model(*inputs)\n # ==================================================================\n\n if device.type == 'cuda':\n torch.cuda.synchronize()\n\n\n timings_ms = []\n\n with torch.no_grad():\n for i in range(INFERENCE_RUNS):\n if device.type == 'cuda':\n start_event = torch.cuda.Event(enable_timing=True)\n end_event = torch.cuda.Event(enable_timing=True)\n\n start_event.record()\n model(*inputs)\n end_event.record()\n\n torch.cuda.synchronize()\n\n elapsed_time = start_event.elapsed_time(end_event)\n timings_ms.append(elapsed_time)\n\n else:\n start_time = time.perf_counter()\n model(*inputs)\n end_time = time.perf_counter()\n\n elapsed_time = (end_time - start_time) * 1000\n timings_ms.append(elapsed_time)\n # =========================================================", "url": "https://github.com/pytorch/tutorials/issues/3518", "state": "closed", "labels": [ "bug", "question" ], "created_at": "2025-08-13T13:25:44Z", "updated_at": "2025-09-03T21:32:30Z", "user": "Sukidesyo" }, { "repo": "pytorch/xla", "number": 9558, "title": "Performance of Torchax", "body": "## \u2753 Questions and Help\n\nHello Community,\n\nWill using torchAx be slower than native PyTorch? Is there any transformation layer of tensors which makes it slower ? 
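For the multi-input pipeline-parallelism question above (pytorch #160648), the tracer frontend accepts multiple example micro-batch inputs, which is usually the first thing to try for a multimodal model. A hedged sketch against the public `torch.distributed.pipelining` API; `model`, `stage_index`, `device`, and the split-point name are stand-ins for your module and rank-local values.

```python
import torch
from torch.distributed.pipelining import SplitPoint, pipeline

# Two example micro-batch inputs (e.g. text tokens and image features).
example_text = torch.randint(0, 32000, (1, 128))
example_image = torch.randn(1, 3, 224, 224)

pipe = pipeline(
    module=model,                            # your multimodal nn.Module
    mb_args=(example_text, example_image),   # multiple positional inputs are allowed
    split_spec={"decoder.layers.8": SplitPoint.BEGINNING},  # hypothetical split point
)
stage = pipe.build_stage(stage_index, device)  # then drive it with a schedule as usual
```

Only the first stage receives the original inputs; later stages receive the previous stage's activations, as in the single-input case.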
", "url": "https://github.com/pytorch/xla/issues/9558", "state": "open", "labels": [ "question", "torchxla2" ], "created_at": "2025-08-13T10:05:08Z", "updated_at": "2025-08-15T21:29:25Z", "user": "yuanfz98" }, { "repo": "pytorch/pytorch", "number": 160405, "title": "[Expandable block] how to get the best-fit free block", "body": "To get free expandable block, the algorithm selects a locally optimal solution instead of the globally best-fit block, since the expandable sizes are not sorted. The best-fit block is the block that meets the requirements and has the smallest expandable size. The original code is\n\n```\n auto expandable_size = [](Block* b) {\n return b->size + (b->next && !b->next->mapped ? b->next->size : 0);\n };\n auto next = it;\n next++;\n while ((*it)->expandable_segment_ && next != pool.blocks.end() &&\n (*next)->stream == p.stream() &&\n expandable_size(*next) < expandable_size(*it)) {\n it = next++;\n }\n```\n\nI have a proposition regarding that\n\n```\n auto expandable_size = [](Block* b) {\n return b->size + (b->next && !b->next->mapped ? b->next->size : 0);\n };\n auto min_expandable_block = it;\n auto min_expandable_size = expandable_size(*it);\n while ((*it)->expandable_segment_ && it != pool.blocks.end() &&\n (*it)->stream == p.stream() &&\n expandable_size(*it) != (*it)->size) {\n if ((*it)->size < min_expandable_size) {\n min_expandable_block = it;\n min_expandable_size = expandable_size(*it);\n }\n it++;\n }\n // it: the first non-expandable block or the last block of given stream\n // min_expandable_block: the expandable block with the smallest\n // expandable size or the first block found\n if ((*it)->size > min_expandable_size) {\n it = min_expandable_block;\n }\n```\n\nCompare the size of the first non-expandable block (if it exists) with the smallest expandable size before the first non-expandable block to determine the best-fit block.", "url": "https://github.com/pytorch/pytorch/issues/160405", "state": "open", "labels": [ "triaged", "module: CUDACachingAllocator" ], "created_at": "2025-08-12T08:38:38Z", "updated_at": "2025-08-14T05:24:01Z", "user": "HU-qingqing" }, { "repo": "pytorch/torchtitan", "number": 1554, "title": "The ordering of fsdp, ac, tp, pp and complie etc.", "body": "Based on the code, the ordering of parallelization and optimization appears to be: PP \u2192 TP \u2192 AC \u2192 Compile \u2192 FSDP/DDP. \nIs it possible to modify this ordering? If not, could you explain the rationale for this specific sequence?", "url": "https://github.com/pytorch/torchtitan/issues/1554", "state": "open", "labels": [ "documentation", "question" ], "created_at": "2025-08-12T04:35:02Z", "updated_at": "2025-12-12T10:56:00Z", "user": "aoyulong" }, { "repo": "pytorch/torchtitan", "number": 1553, "title": "Inquiry about torchtitan v0.1.0 compatibility with CUDA 12.3", "body": "Hello,\n\nI would like to inquire about the compatibility of torchtitan with CUDA 12.3.\n\nI am trying to use torchtitan v0.1.0, but I am facing some challenges due to my environment constraints. 
My computing resources are equipped with CUDA 12.3, and I am unable to upgrade the CUDA version at this moment.\n\nWhen I attempted to install torchtitan v0.1.0 following the official instructions, I noticed that the required dependencies are built for CUDA 12.6:\n```\ntorch version: torch-2.8.0.dev20250617+cu126\ntorchao version: torchao-0.12.0.dev20250617+cu126\n```\nThis leads to an incompatibility with my current setup.\nFurthermore, I tried using torch v2.7+cu118 to see if it would resolve the issue, but this resulted in import errors.\nCould you please provide guidance on how I can successfully install and use torchtitan v0.1.0 in an environment with CUDA 12.3?\n\nThank you for your time and assistance.", "url": "https://github.com/pytorch/torchtitan/issues/1553", "state": "closed", "labels": [ "question" ], "created_at": "2025-08-12T04:17:26Z", "updated_at": "2025-08-15T14:34:55Z", "user": "Sun2018421" }, { "repo": "pytorch/torchtitan", "number": 1552, "title": "Any example for vpp scheduler for Deepseek/llama", "body": "I'm learning VPP 1F1B recently and want to figure out different implementation between tortitan and megatron, but i don't know how to build Vpp-1f1b schedule thus i cannot figure out how it works in titan. Is there any example to helpl me build vpp-1f1b example ?", "url": "https://github.com/pytorch/torchtitan/issues/1552", "state": "closed", "labels": [ "question" ], "created_at": "2025-08-12T01:40:12Z", "updated_at": "2025-08-28T22:33:58Z", "user": "YingLaiLin" }, { "repo": "pytorch/TensorRT", "number": 3766, "title": "\u2753 [Question] C++ Windows runtime error", "body": "## \u2753 Question\nHow can I fix this error?\n```\nUnknown type name '__torch__.torch.classes.tensorrt.Engine':\n File \"code/__torch__/torch_tensorrt/dynamo/runtime/_TorchTensorRTModule.py\", line 6\n training : bool\n _is_full_backward_hook : Optional[bool]\n engine : __torch__.torch.classes.tensorrt.Engine\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\n def forward(self: __torch__.torch_tensorrt.dynamo.runtime._TorchTensorRTModule.TorchTensorRTModule,\n x: Tensor) -> Tensor:\n```\n\nRun script\n```\ntorch::jit::Module trt_ts_mod;\ntry {\n // Deserialize the ScriptModule from a file using torch::jit::load().\n\tstd::cout << \"Loading TRT engine from: \" << trt_ts_module_path << std::endl;\n trt_ts_mod = torch::jit::load(trt_ts_module_path);\n\tstd::cout << \"TRT engine loaded successfully.\" << std::endl;\n}\ncatch (const c10::Error& e) {\n std::cerr << \"c10::Error loading the model from : \" << trt_ts_module_path << std::endl;\n return -1;\n}\ncatch (const std::exception& e) {\n std::cerr << \"std::exception occurred while loading the model: \" << e.what() << std::endl;\n return -1;\n}\n```\n\n\n## Environment\n\nCMakeListst.txt\n```\ncmake_minimum_required(VERSION 3.17)\nproject(torchtrt_runtime_example LANGUAGES CXX)\n\nfind_package(Torch REQUIRED)\nfind_package(torchtrt REQUIRED)\n\nset(SRCS\n main.cpp\n)\n\ninclude_directories(\"${PRJ_ROOT}/TensorRT/out/install/x64-Release/include\")\n\nadd_executable(${CMAKE_PROJECT_NAME} ${SRCS})\ntarget_link_libraries(${CMAKE_PROJECT_NAME} PRIVATE torch \"-Wl,--no-as-needed\" torchtrt_runtime \"-Wl,--as-needed\")\ntarget_compile_features(${CMAKE_PROJECT_NAME} PRIVATE cxx_std_17)\n```\nI build self TensorRT and Torch-TensorRT both\n\n - PyTorch Version (e.g., 1.0): libtorch-win-shared-with-deps-2.8.0+cu126\n - CPU Architecture: ryzen 2700\n - OS (e.g., Linux): Windows 11\n - Python version: 3.12\n - CUDA version: 12.6\n - GPU models and 
configuration: RTX 3070", "url": "https://github.com/pytorch/TensorRT/issues/3766", "state": "open", "labels": [ "question" ], "created_at": "2025-08-08T07:56:17Z", "updated_at": "2025-08-15T14:32:30Z", "user": "zsef123" }, { "repo": "pytorch/ao", "number": 2713, "title": "[fp8 blockwise training] try using torch._scaled_mm instead of Triton kernels for fp8 gemms", "body": "We have an initial prototype of DeepSeekV3 style fp8 blockwise training done [here](https://github.com/pytorch/ao/blob/main/torchao/prototype/blockwise_fp8_training/linear.py). Numerics are accurate but performance has not been optimized yet.\n\nInitial tests with a local torchtitan integration on my H100 devgpu show the blockwise GEMM kernels are slower than expected. NCU analysis shows uncoalesced global accesses causing major slowdowns, but rather than optimize these kernels, it's probably a better idea to use `torch._scaled_mm` instead, which recently added support for DSV3 style fp8 GEMMs using a CUTLASS kernel which is likely much more performant than the Triton kernels. This will also be more consistent with our other float8 tensorwise and rowwise training recipes, which use torch._scaled_mm.\n\nWe should do the following:\n\n1. Add benchmarking script(s) that compares runtime of:\n - Performance of [blockwise_fp8_gemm_1x128_128x128](https://github.com/pytorch/ao/blob/143c3a60451727f9fba56289b6fa74cfdb04b440/torchao/prototype/blockwise_fp8_training/kernels.py#L106) vs torch._scaled_mm\n - Performance of [blockwise_fp8_gemm_1x128_128x1](https://github.com/pytorch/ao/blob/143c3a60451727f9fba56289b6fa74cfdb04b440/torchao/prototype/blockwise_fp8_training/kernels.py#L214C5-L214C35) vs torch._scaled_mm\n - (see [here](https://github.com/pytorch/ao/blob/143c3a60451727f9fba56289b6fa74cfdb04b440/torchao/prototype/blockwise_fp8_training/linear.py#L26) for context on how/where these gemms are used for context)\n - Here is an [example](https://github.com/pytorch/ao/blob/main/benchmarks/float8/bench_grouped_mm.py) benchmark script for something similar that can be used as a starting point.\n2. If microbenchmarks show torch._scaled_mm is faster, update the blockwise fp8 linear to use this gemm. \n\nNote torch._scaled_mm has some slightly different stride/mem layout requirements for the inputs. You will see this in the error message that it throws if you try to directly swap it out with the triton gemms.\n", "url": "https://github.com/pytorch/ao/issues/2713", "state": "open", "labels": [ "good first issue", "float8" ], "created_at": "2025-08-07T20:15:10Z", "updated_at": "2025-08-07T20:26:11Z", "comments": 0, "user": "danielvegamyhre" }, { "repo": "pytorch/torchtitan", "number": 1543, "title": "Minimum number of GPUs needed to pretrain llama4_17bx16e - 8 ?", "body": "Going by the config files it would be 8 H100 class GPUs, Is 8 a reasonable number ? 
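For the `torch._scaled_mm` comparison proposed above, a bare-bones timing harness like the following is often enough for a first signal. Tensorwise scales are used to keep the sketch short; the actual 1x128 / 128x128 blockwise-scaled variant needs a sufficiently recent PyTorch build and differently shaped scale tensors, so treat this purely as scaffolding.

```python
import torch

M = K = N = 4096
a_bf16 = torch.randn(M, K, device="cuda", dtype=torch.bfloat16)
b_bf16 = torch.randn(K, N, device="cuda", dtype=torch.bfloat16)

a_fp8 = a_bf16.to(torch.float8_e4m3fn)
b_fp8 = b_bf16.t().contiguous().t().to(torch.float8_e4m3fn)  # _scaled_mm wants column-major B
one = torch.ones((), device="cuda", dtype=torch.float32)

def bench(fn, iters: int = 50) -> float:
    for _ in range(10):  # warmup
        fn()
    torch.cuda.synchronize()
    start, end = torch.cuda.Event(enable_timing=True), torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

bf16_ms = bench(lambda: a_bf16 @ b_bf16)
fp8_ms = bench(lambda: torch._scaled_mm(a_fp8, b_fp8, scale_a=one, scale_b=one, out_dtype=torch.bfloat16))
print(f"bf16 {bf16_ms:.3f} ms | fp8 _scaled_mm {fp8_ms:.3f} ms")
```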
", "url": "https://github.com/pytorch/torchtitan/issues/1543", "state": "closed", "labels": [], "created_at": "2025-08-06T23:54:05Z", "updated_at": "2025-08-07T20:32:35Z", "comments": 3, "user": "githubsgi" }, { "repo": "pytorch/tutorials", "number": 3512, "title": "Redirect for prototype/ -> unstable/", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\n\nWhen I search [\"flight recorder pytorch\" on Google](https://www.google.com/search?q=pytorch+flight+recorder&sca_esv=56a8724cb68766c6&ei=_7yTaKLqN4ra5NoP38nhqAg&oq=pytorch+flight+recorder&gs_lp=Egxnd3Mtd2l6LXNlcnAiF3B5dG9yY2ggZmxpZ2h0IHJlY29yZGVyKgIIADIIEAAYgAQYsAMyCRAAGLADGAgYHjILEAAYsAMYCBgKGB4yDhAAGIAEGLADGIYDGIoFMg4QABiABBiwAxiGAxiKBTIOEAAYgAQYsAMYhgMYigUyCBAAGLADGO8FMggQABiwAxjvBTILEAAYgAQYsAMYogRIngtQAFgAcAF4AJABAJgBAKABAKoBALgBAcgBAJgCAaACA5gDAIgGAZAGCZIHATGgBwCyBwC4BwDCBwMyLTHIBwM&sclient=gws-wiz-serp)\n\nThe top link is https://docs.pytorch.org/tutorials/prototype/flight_recorder_tutorial.html\n\nwhereas now these tutorials are under https://docs.pytorch.org/tutorials/unstable/flight_recorder_tutorial.html\n\nCan we add a redirect? I knew that the tutorial should be there since i actively work on PyTorch, but for new users this will be confusing and also when `prototype/` is in URL path, the site doesn't have any CSS and is scary-looking\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/tutorials/issues/3512", "state": "closed", "labels": [], "created_at": "2025-08-06T20:40:07Z", "updated_at": "2025-08-07T18:07:35Z", "comments": 2, "user": "H-Huang" }, { "repo": "pytorch/torchtitan", "number": 1527, "title": "Any model fp8 training", "body": "### Bug description\n\nDo you have a further plan to extend training models from llama and deepseek to any model from huggingface transformers library? I've seen an issue where a user asked about qwen but in recent days other companies have announced their excellent MOE models with weights and configs on huggingface, and it would be great to train them using torchtitan\n\n### Versions\n\nLatest versions", "url": "https://github.com/pytorch/torchtitan/issues/1527", "state": "closed", "labels": [ "question" ], "created_at": "2025-08-05T00:21:01Z", "updated_at": "2025-08-05T22:46:15Z", "user": "pizzaball" }, { "repo": "pytorch/torchtitan", "number": 1525, "title": "Transformer is running with float32 instead of bfloat16 !", "body": "### Bug description\n\nModified the Llama3 modle.py to print dtype as follows and ran just 1 rank. The \n\n```\n def forward(\n self,\n tokens: torch.Tensor,\n eos_id: int | None = None,\n input_batch: torch.Tensor | None = None,\n ):\n \"\"\"\n Perform a forward pass through the Transformer model.\n\n Args:\n tokens (torch.Tensor): Input token indices if pipeline parallelism is not enabled.\n If pipeline parallelism is enabled, this will be the input token indices\n for the ranks on the first pipeline stage. 
This will be the activation of the\n previous pipeline stage if the current rank is not on the first stage.\n input_batch (torch.Tensor): The input batch read from the dataloader.\n This will always be the input batch regardless of the pipeline stage.\n This field is required for non-first PP stages to perform document\n masking attention (to analyze the boundary of the document).\n\n Returns:\n torch.Tensor: Output logits after applying the Transformer model.\n\n \"\"\"\n if self.model_args.use_flex_attn:\n init_attention_mask(\n input_batch if input_batch is not None else tokens, eos_id=eos_id\n )\n\n print (f\"tokens.dtype {tokens.dtype}\")\n # passthrough for nonexistent layers, allows easy configuration of pipeline parallel stages\n h = self.tok_embeddings(tokens) if self.tok_embeddings else tokens\n print (f\"h.dtype {h.dtype}\")\n\n for layer in self.layers.values():\n h = layer(h, self.freqs_cis)\n print (f\"h.dtype {h.dtype}\")\n\n h = self.norm(h) if self.norm else h\n print (f\"h.dtype {h.dtype}\")\n output = self.output(h) if self.output else h\n print (f\"output.dtype {h.dtype}\")\n return output\n\n```\nSeeing only float32 datatypes as follows.\n\n```\ntokens.dtype torch.int64\nh.dtype torch.float32\nh.dtype torch.float32\nh.dtype torch.float32\nh.dtype torch.float32\nh.dtype torch.float32\nh.dtype torch.float32\nh.dtype torch.float32\nh.dtype torch.float32\noutput.dtype torch.float32\n\n```\n\nThe config is:\n\n`model.toml', 'dump_folder': './outputs', 'description': 'Llama 3 debug training', 'use_for_integration_test': True, 'print_args': True}, 'profiling': {'enable_profiling': False, 'save_traces_folder': 'profile_trace', 'profile_freq': 10, 'enable_memory_snapshot': False, 'save_memory_snapshot_folder': 'memory_snapshot'}, 'metrics': {'log_freq': 1, 'enable_tensorboard': False, 'disable_color_printing': False, 'save_tb_folder': 'tb', 'save_for_all_ranks': False, 'enable_wandb': False}, 'model': {'name': 'llama3', 'flavor': 'debugmodel', 'tokenizer_path': './tests/assets/tokenizer', 'converters': [], 'print_after_conversion': False}, 'optimizer': {'name': 'AdamW', 'lr': 0.0008, 'beta1': 0.9, 'beta2': 0.95, 'eps': 1e-08, 'weight_decay': 0.1, 'implementation': 'fused', 'early_step_in_backward': False}, 'lr_scheduler': {'warmup_steps': 2, 'decay_ratio': 0.8, 'decay_type': 'linear', 'min_lr_factor': 0.0}, 'training': {'dataset': 'c4_test', 'dataset_path': None, 'local_batch_size': 8, 'global_batch_size': -1, 'seq_len': 2048, 'max_norm': 1.0, 'steps': 10, 'enable_cpu_offload': False, 'mixed_precision_param': 'bfloat16', 'mixed_precision_reduce': 'float32', 'compile': False, 'gc_freq': 50, 'gc_debug': False, 'seed': None, 'deterministic': False}, 'parallelism': {'data_parallel_replicate_degree': 1, 'enable_compiled_autograd': False, 'data_parallel_shard_degree': -1, 'fsdp_reshard_after_forward': 'default', 'tensor_parallel_degree': 1, 'disable_loss_parallel': False, 'enable_async_tensor_parallel': False, 'pipeline_parallel_degree': 1, 'pipeline_parallel_split_points': [], 'module_fqns_per_model_part': None, 'pipeline_parallel_first_stage_less_layers': 1, 'pipeline_parallel_last_stage_less_layers': 1, 'pipeline_parallel_layers_per_stage': None, 'pipeline_parallel_schedule': '1F1B', 'pipeline_parallel_schedule_csv': '', 'pipeline_parallel_microbatch_size': 1, 'context_parallel_degree': 1, 'context_parallel_rotate_method': 'allgather', 'expert_parallel_degree': 1}, 'checkpoint': {'enable_checkpoint': False, 'folder': 'checkpoint', 'interval': 10, 'initial_load_path': None, 
'initial_load_model_only': True, 'initial_load_in_hf': False, 'last_save_model_only': False, 'last_save_in_hf': False, 'export_dtype': 'float32', 'async_mode': 'disabled', 'keep_latest_k': 10, 'load_step': -1, 'exclude_from_loading': [], 'enable_first_step_checkpoint': False, 'create_seed_checkpoint': False}, 'activation_checkpoint': {'mode': 'selective', 'selective_ac_option': '2', 'per_op_sac_force_recompute_mm_shapes_by_fqns': ['moe.router.gate']}, 'float8': {'enable_fsdp_float8_all_gather': False, 'precompute_float8_dynamic_scale_for_fsdp': False, 'recipe_name': None, 'filter_fqns': ['output'], 'emulate': False, 'moe_fqns_prototype': []}, 'mx': {'mxfp8_dim1_cast_kernel_choice': '", "url": "https://github.com/pytorch/torchtitan/issues/1525", "state": "open", "labels": [ "question" ], "created_at": "2025-08-04T22:37:20Z", "updated_at": "2025-08-14T21:25:04Z", "user": "githubsgi" }, { "repo": "pytorch/tutorials", "number": 3507, "title": "Feedback about Optimizing Model Parameters Page", "body": "There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/basics/optimization_tutorial.html\n\nWithin the section [Full implementation](https://docs.pytorch.org/tutorials/beginner/basics/optimization_tutorial.html#full-implementation), the loop does not contain the `zero_grad` function on top of the backward propagation block as is recommended in the paragraph preceding this section.\n\nActual code:\n```python\n# Backpropagation\nloss.backward()\noptimizer.step()\noptimizer.zero_grad()\n```\nRecommended code:\n```python\noptimizer.zero_grad()\nloss.backward()\noptimizer.step()\n```\n\nIf you could instruct me how to make this change on the documentation, I would be glad to do that.", "url": "https://github.com/pytorch/tutorials/issues/3507", "state": "open", "labels": [], "created_at": "2025-08-04T14:50:13Z", "updated_at": "2025-08-04T14:50:13Z", "comments": 0, "user": "madhaven" }, { "repo": "pytorch/xla", "number": 9537, "title": "What are some large model use cases for torch-xla\uff1f", "body": "## \u2753 Questions and Help\nI\u2019ve observed that torch-xla has been actively developed for GPU support recently. Are there any benchmark comparisons between torch-xla and standard PyTorch, particularly for large-scale model training? 
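Regarding the float32-vs-bfloat16 report above (torchtitan #1525): with FSDP2-style mixed precision, parameters are kept in fp32 and only the unsharded copies used in forward/backward are cast to `param_dtype`, so the dtypes printed inside the model depend on whether the module in question was actually wrapped. A minimal sketch of the mechanism, assuming the `torch.distributed.fsdp` import location of recent releases (earlier versions expose it under `torch.distributed._composable.fsdp`):

```python
import torch
from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard

mp_policy = MixedPrecisionPolicy(
    param_dtype=torch.bfloat16,   # unsharded params / activations in forward+backward
    reduce_dtype=torch.float32,   # gradient reduction stays in fp32
)
# Assuming a ModuleDict of transformer blocks, as in the Llama model quoted above.
for block in model.layers.values():
    fully_shard(block, mp_policy=mp_policy)
fully_shard(model, mp_policy=mp_policy)
# model.parameters() still report torch.float32; the bf16 copies exist only transiently.
```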
Additionally, regarding frameworks such as Megatron-LM, is there any plan for official support within torch-xla moving forward?", "url": "https://github.com/pytorch/xla/issues/9537", "state": "closed", "labels": [ "question", "xla:gpu" ], "created_at": "2025-08-04T09:04:32Z", "updated_at": "2025-08-06T08:24:30Z", "user": "south-ocean" }, { "repo": "pytorch/tutorials", "number": 3506, "title": "Feedback about \u5728 Google Colab \u4e2d\u8fd0\u884c\u6559\u7a0b", "body": "There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/colab.html\nThe content in this page clearly shows how to upload or download your dataset into your Google Drive or your Desktop", "url": "https://github.com/pytorch/tutorials/issues/3506", "state": "open", "labels": [], "created_at": "2025-08-03T03:51:11Z", "updated_at": "2025-12-09T19:11:27Z", "comments": 1, "user": "KevinAllen66" }, { "repo": "pytorch/tutorials", "number": 3505, "title": "Why am I 2:4 sparse slower than dense in the decode stage of LLaMA2\u20117B?", "body": "## Description\nHi\n\n\"Image\"\n\nAs shown in the figure, during the decoding phase, the 2:4 sparsity model is about 12% slower than the dense model, the questions are as follows:\n\n- Is the decode phase dominated by GEMV / small\u2011N GEMM operations, which therefore cannot trigger the 2:4 sparse Tensor Core path?\n\n- Even so, why is the 2:4 sparsity model slower than the dense model?\n\n- If we increase N>1 (e.g., batch multiple requests or generate multiple tokens at once so it becomes a GEMM), can we observe measurable 2:4 sparsity speed\u2011up?\n\n- Are there any sparse kernels or recommended practices for GEMV (matrix\u2011vector) that can take advantage of 2:4 sparsity?\n\n## Environment\nNVIDIA GeForce RTX 4090, 8.9, P2`\n\n=== Python / OS ===\n3.11.13 Linux-6.5.0-18-generic-x86_64-with-glibc2.35\n\n=== PyTorch / CUDA / cuDNN ===\ntorch: 2.2.2+cu121\ncuda: 12.1\ncudnn: 8902\ndevice: NVIDIA GeForce RTX 4090\nsm capability: (8, 9)\n\n=== cuBLASLt ===\ncuBLASLt version: 0\n\n=== TensorRT ===\nTensorRT not installed\n\n\n[2to4_sparsity.zip](https://github.com/user-attachments/files/21557839/2to4_sparsity.zip)\n\nThanks!", "url": "https://github.com/pytorch/tutorials/issues/3505", "state": "closed", "labels": [ "question" ], "created_at": "2025-08-02T03:44:06Z", "updated_at": "2025-08-09T03:14:49Z", "user": "wang-qitong" }, { "repo": "pytorch/torchtitan", "number": 1515, "title": "MiCS (Mixture of Communicators for Scaling)", "body": "Wondering if MiCS (Mixture of Communicators for Scaling) has been considered as a feature in TorchTitan. Would appreciate thoughts on the topic. ", "url": "https://github.com/pytorch/torchtitan/issues/1515", "state": "closed", "labels": [ "question" ], "created_at": "2025-08-01T22:15:13Z", "updated_at": "2025-08-05T19:55:36Z", "user": "githubsgi" }, { "repo": "pytorch/ao", "number": 2649, "title": "Deprecation for Float8DynamicActivationFloat8WeightConfig (version 1) and Float8WeightOnlyConfig (version 1) and the models", "body": "This issue is tracking the deprecation of the (1) configs (2) model checkpoints quantized with these configs.\n\nWhat is deprecated:\n1. We added version 2 config in https://github.com/pytorch/ao/pull/2463, and switched the default version to 2 in https://github.com/pytorch/ao/pull/2650, the version 1 config is now deprecated, please use version 2 config to quantize the model\n2. 
the quantized checkpoints quantized with version 1 config previously is deprecated as well, and we plan to remove the support to load these checkpoints after pytorch 2.11 release (around 9 months from now)\n\nTimeline:\n0.13.0: annouce deprecation for version 1 config\nafter we migrated all tensor subclasses: remove support for version 1 config\nafter pytorch 2.11 release: remove support for version 1 checkpoints\n", "url": "https://github.com/pytorch/ao/issues/2649", "state": "open", "labels": [ "tracker" ], "created_at": "2025-07-31T22:45:07Z", "updated_at": "2025-10-02T20:48:54Z", "comments": 0, "user": "jerryzh168" }, { "repo": "pytorch/torchtitan", "number": 1506, "title": "Correct MoE auxiliary-loss-free load balancing?", "body": "A very small question: why is the second `expert_bias_delta` assignment used here?\n\nhttps://github.com/pytorch/torchtitan/blob/cf30b2902718790cbe91900414c3201b6d7680b0/torchtitan/experiments/llama4/optimizer.py#L39-L43\n\nThis looks different than Algorithm 1 of https://arxiv.org/pdf/2408.15664, which would instead just be (IIUC):\n```py\nexpert_bias_delta = moe.load_balance_coeff * torch.sign(\n moe.tokens_per_expert.mean() - moe.tokens_per_expert\n)\nmoe.expert_bias.add_(expert_bias_delta)\n```\n\nCC @tianyu-l , who implemented this, I think. ", "url": "https://github.com/pytorch/torchtitan/issues/1506", "state": "closed", "labels": [], "created_at": "2025-07-31T20:24:47Z", "updated_at": "2025-08-01T15:34:42Z", "comments": 2, "user": "garrett361" }, { "repo": "pytorch/ao", "number": 2631, "title": "What is the intention of \"NF4WeightOnlyConfig\" ?", "body": "Hi guys, I confuse about how this class is structured in project.\n\n1. Why \"NF4WeightOnlyConfig\" does not work the same way like others config? Such as:\n```python\nfrom torchao.dtypes._nf4tensor_api import NF4WeightOnlyConfig\nfrom torchao import quantize_\nconfig = NF4WeightOnlyConfig()\nquantize_(model,config)\n```\n I actually can do that, but why you place it in [private module](https://github.com/pytorch/ao/blob/4b119edb6d1e04b7d2cf98856a5366e28f75d6f7/torchao/dtypes/_nf4tensor_api.py#L15) ?\n\n\n2. Why it's not `dataclass` ? So it means the default ` block_size: int = 64` and `scaler_block_size: int = 256` should be fixed ?\n\n3. I want to train QLora with native pytorch model. And it would be great if i can use NF4. But the structure make me confuse, so what is the correct way/ best practice to use this?\n\nThank you.", "url": "https://github.com/pytorch/ao/issues/2631", "state": "closed", "labels": [], "created_at": "2025-07-30T13:57:44Z", "updated_at": "2025-07-31T16:05:50Z", "user": "hieubnt235" }, { "repo": "pytorch/xla", "number": 9519, "title": "Behaviour of xm.all_gather() in SPMD mode", "body": "## \u2753 Questions and Help\nI would like to confirm whether my MLIR compiler's handling of `xm.all_gather()` when running Torch-XLA in SPMD mode is correct.\n\nSay I have the following:\n- A tensor `t` with shape [8192, 784]\n- A 2D named mesh `(batch, model)` of 8 devices in a [2, 4] configuration:\n```\nDevice Mesh:\n0 1 2 3\n4 5 6 7\n```\nNow I do the following steps:\n1. Move the tensor to the XLA device: `t = t.to(torch_xla.device())`\n2. Shard dim 0 of t across the batch dimension and replicate dim 1: `xs.mark_sharding(t, mesh, (\"batch\", None))`\n3. 
Perform an all-gather operation across dim 0:\n```python\n# Pair devices across batch rows\ngroups = [[0, 4], [1, 5], [2, 6], [3, 7]]\ny = xm.all_gather(t, 0, groups=groups, pin_layout=False)\ny = y.to(\"cpu\")\n```\nThe shape of the final `y` tensor is [16384, 784] where `y[:8192] == y[8192:] == t`. Is this the correct behaviour?", "url": "https://github.com/pytorch/xla/issues/9519", "state": "open", "labels": [ "question", "distributed" ], "created_at": "2025-07-29T19:05:08Z", "updated_at": "2025-07-30T17:40:04Z", "user": "hshahTT" }, { "repo": "pytorch/torchtitan", "number": 1482, "title": "Is there documentation on what exactly are 'dp_shard_mod_ep' and 'dp_shard_in_ep'] ?", "body": "Wondering where I can find detail on 'dp_shard_mod_ep', 'dp_shard_in_ep'] ? \nhttps://github.com/pytorch/torchtitan/blob/5bab356c29dfababd8f16ab7d8e3d50cba6326e5/torchtitan/distributed/parallel_dims.py#L70\n", "url": "https://github.com/pytorch/torchtitan/issues/1482", "state": "open", "labels": [ "documentation" ], "created_at": "2025-07-29T06:48:43Z", "updated_at": "2025-08-21T03:24:48Z", "user": "githubsgi" }, { "repo": "pytorch/helion", "number": 392, "title": "ImportError: cannot import name 'triton_key' from 'torch._inductor.runtime.triton_compat'", "body": "Does Helion require nightly PyTorch? (I'm using 2.7.1)", "url": "https://github.com/pytorch/helion/issues/392", "state": "closed", "labels": [ "question" ], "created_at": "2025-07-29T04:34:20Z", "updated_at": "2025-08-25T21:20:54Z", "user": "HanGuo97" }, { "repo": "pytorch/torchtitan", "number": 1478, "title": "Is FSDP+TP+EP supported for Llama4 ?", "body": "Wondering if FSDP+TP+EP is supported for pre-training LLama4 ? ", "url": "https://github.com/pytorch/torchtitan/issues/1478", "state": "closed", "labels": [ "question" ], "created_at": "2025-07-28T22:55:43Z", "updated_at": "2025-08-21T02:36:59Z", "user": "githubsgi" }, { "repo": "pytorch/pytorch", "number": 159295, "title": "Invalid onnx model is exported for model where data is assigned using a mask and index", "body": "### \ud83d\udc1b Describe the bug\n\nExporting a model to onnx which assigns data with a mask and index produces a model which does not work.\n\nExporting the model:\n```python\nimport torch\nimport torch.nn as nn\n\n\nclass TestModel(nn.Module):\n def __init__(self):\n super().__init__()\n\n def forward(self, R):\n B = R.shape[0]\n r = torch.zeros((B, 2), dtype=R.dtype, device=R.device)\n mask = R > 0\n r[mask, 0] = R[mask]\n return r\n\n\ndevice = torch.device(\"cpu\")\nmodel = TestModel()\n\ndummy_input = torch.ones((2,)).to(device)\n\ntorch.onnx.export(\n model,\n dummy_input,\n \"test_model.onnx\",\n export_params=True,\n opset_version=11,\n do_constant_folding=True,\n input_names=['input'],\n output_names=['output'],\n)\n```\n\nUsing the model:\n```python\nimport onnxruntime as ort\nimport numpy as np\n\nwith open(\"test_model.onnx\", \"rb\") as f:\n session = ort.InferenceSession(f.read(), providers=[\"CPUExecutionProvider\"])\n_ = session.run(None, {\"input\": np.array([0, 1], dtype=np.float32)})\n```\nYou will get an error\n```\n2025-07-28 15:31:24.7808412 [E:onnxruntime:, sequential_executor.cc:572 onnxruntime::ExecuteKernel] Non-zero status code returned while running Reshape node. Name:'/Reshape' Status Message: D:\\a\\_work\\1\\s\\onnxruntime\\core\\providers\\cpu\\tensor\\reshape_helper.h:47 onnxruntime::ReshapeHelper::ReshapeHelper input_shape_size == size was false. The input tensor cannot be reshaped to the requested shape. 
Input shape:{1}, requested shape:{2,1}\n```\n\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.7.1+cpu\nIs debug build: False\nCUDA used to build PyTorch: Could not collect\nROCM used to build PyTorch: N/A\n\nOS: Microsoft Windows 11 Enterprise (10.0.22631 64-bit)\nGCC version: Could not collect\nClang version: Could not collect\nCMake version: version 3.30.2\nLibc version: N/A\n\nPython version: 3.12.5 (tags/v3.12.5:ff3bc82, Aug 6 2024, 20:45:27) [MSC v.1940 64 bit (AMD64)] (64-bit runtime)\nPython platform: Windows-11-10.0.22631-SP0\nIs CUDA available: False\nCUDA runtime version: 12.9.86\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070\nNvidia driver version: 576.88\ncuDNN version: C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.9\\bin\\cudnn_ops64_9.dll\nIs XPU available: False\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nName: Intel(R) Xeon(R) W-2255 CPU @ 3.70GHz\nManufacturer: GenuineIntel\nFamily: 179\nArchitecture: 9\nProcessorType: 3\nDeviceID: CPU0\nCurrentClockSpeed: 3696\nMaxClockSpeed: 3696\nL2CacheSize: 10240\nL2CacheSpeed: None\nRevision: 21767\n\nVersions of relevant libraries:\n[pip3] numpy==1.26.4\n[pip3] onnx==1.16.2\n[pip3] onnxruntime==1.22.1\n[pip3] torch==2.7.1\n[pip3] torchaudio==2.6.0+cu126\n[pip3] torchvision==0.21.0+cu126\n\ncc @justinchuby", "url": "https://github.com/pytorch/pytorch/issues/159295", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2025-07-28T20:54:13Z", "updated_at": "2025-09-03T20:13:32Z", "user": "cgaudreau-ubisoft" }, { "repo": "pytorch/TensorRT", "number": 3722, "title": "\u2753 [Question] Exporting models using FlashAttention package", "body": "I'd love to export a PyTorch model to TensorRT. In this model I use flash-attn package to speed-up attention. Is this supported? ", "url": "https://github.com/pytorch/TensorRT/issues/3722", "state": "open", "labels": [ "question" ], "created_at": "2025-07-28T11:54:07Z", "updated_at": "2025-07-28T17:42:23Z", "user": "s1ddok" }, { "repo": "pytorch/pytorch", "number": 159249, "title": "[ONNX] How to export RMS Norm", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI'm converting a Pytorch model to ONNX format, but I got this error:\n```\ntorch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::rms_norm' to ONNX opset version 20 is not supported\n```\n\n### Alternatives\n\nI have read the ONNX documentation. They said that this operator is only supported by the opset version >= 23\n\n### Additional context\n\n_No response_\n\ncc @justinchuby", "url": "https://github.com/pytorch/pytorch/issues/159249", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2025-07-28T09:34:51Z", "updated_at": "2025-07-30T14:22:46Z", "user": "HuynhNguyenPhuc" }, { "repo": "pytorch/tutorials", "number": 3488, "title": "\ud83d\udca1 [REQUEST] - tutorial on torchrl LLM API", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\n\nI\u2019d like to write a tutorial about TorchRL LLM post-training API including data formatting for RL, multi-turn conversation handling, tool usage etc\n\n@svekars what\u2019s the policy on open-source models usage? 
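For the masked-assignment ONNX export failure above, one workaround (it does not fix the exporter itself) is to express the update with `torch.where` and `torch.stack` instead of boolean-mask index assignment, which removes the data-dependent Reshape the exported graph fails on. A minimal, equivalent rewrite of the repro model:

```python
import torch
import torch.nn as nn

class TestModelNoMaskAssign(nn.Module):
    """Same output as the original TestModel, without r[mask, 0] = R[mask]."""

    def forward(self, R):
        zeros = torch.zeros_like(R)
        # column 0 keeps R wherever R > 0, column 1 stays zero
        return torch.stack([torch.where(R > 0, R, zeros), zeros], dim=1)

model = TestModelNoMaskAssign()
dummy_input = torch.ones((2,))
torch.onnx.export(
    model,
    dummy_input,
    "test_model_where.onnx",
    opset_version=11,
    input_names=["input"],
    output_names=["output"],
)
```

The same rewrite applies whenever a masked write only ever fills a fixed column of a freshly zero-initialized tensor; truly general scatter-style updates still need the indexing path.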
Can I load and use a small model (0.5B) freely?\n\n### Existing tutorials on this topic\n\nI don\u2019t think there are any\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/tutorials/issues/3488", "state": "open", "labels": [ "tutorial-proposal" ], "created_at": "2025-07-23T21:23:25Z", "updated_at": "2025-07-23T22:26:43Z", "comments": 3, "user": "vmoens" }, { "repo": "pytorch/executorch", "number": 12756, "title": "How to get ExecuTorch version in C++?", "body": "I using ExecuTorch in my C++ application and I want to get ExecuTorch version at compile time or runtime. \nBut I haven't found some `#define` or `const std::string` like `EXECUTORCH_VERSION` or function like `get_version()`.\n\nFor example, PyTorch has [`TORCH_VERSION`](https://github.com/pytorch/pytorch/blob/fe8f556006b3397b7bdf844ba9a6cf329c0c1846/torch/csrc/api/include/torch/version.h.in#L16) and TFLite has [`TFLITE_VERSION_STRING`](https://github.com/tensorflow/tensorflow/blob/56a01a65e8055a234cd2198eefaef1ef4f7b087f/tensorflow/lite/version.h#L27). Is something like this available in ExecuTorch?\n\n\n\ncc @larryliu0820 @JacobSzwejbka @lucylq @mergennachin @byjlw", "url": "https://github.com/pytorch/executorch/issues/12756", "state": "open", "labels": [ "module: runtime", "module: user experience" ], "created_at": "2025-07-23T19:17:40Z", "updated_at": "2025-09-16T21:45:11Z", "user": "eltimen" }, { "repo": "pytorch/executorch", "number": 12749, "title": "How to run a executorch model directly from memory instead of saving it as a disk file", "body": "### \ud83d\udcda The doc issue\n\nHi, \n\nI wanted to know if there is any ExecuTorch runtime API that can accept a *.pte model available in the memory (in some sort of a buffer format) and use it to do load and infer?\n\nSo far, I could only find a few which require the model to be passed as a disk file.\n\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/executorch/issues/12749", "state": "open", "labels": [ "module: extension" ], "created_at": "2025-07-23T11:09:04Z", "updated_at": "2025-09-02T06:22:00Z", "user": "vikasbalaga" }, { "repo": "pytorch/torchtitan", "number": 1439, "title": "Duplicate definition of vocab_size?", "body": "Hi @wwwjn @H-Huang @tianyu-l thanks for the amazing work on deepseek v3\n\nHave a minor question: why is there a definition of vocab size here\n\nhttps://github.com/pytorch/torchtitan/blob/4e73af3e2c5f99ad3cb5a21612e615a64b0b75e7/torchtitan/models/deepseek_v3/__init__.py#L50-L51C9\n\nwhich then gets overridden by the tokenizer's vocab size here?\n\nhttps://github.com/pytorch/torchtitan/blob/4e73af3e2c5f99ad3cb5a21612e615a64b0b75e7/torchtitan/models/deepseek_v3/model/args.py#L96-L100", "url": "https://github.com/pytorch/torchtitan/issues/1439", "state": "closed", "labels": [], "created_at": "2025-07-21T22:59:11Z", "updated_at": "2025-07-23T04:09:56Z", "comments": 1, "user": "vwxyzjn" }, { "repo": "pytorch/tutorials", "number": 3481, "title": "[BUG] - Broken links of PyTorch Libraries(torchao, torchrec etc) on the right side of the tutorial index page", "body": "### Add Link\n\nhttps://docs.pytorch.org/tutorials/index.html\n\n### Describe the bug\n\n\nThose links to the \"PyTorch Libraries\" section on the side bar are broken, they should pointed to `https://docs.pytorch.org/ao` instead of `https://docs.ppytorch.org/ao`, same for other libraries. I searched the codebase and seems these broken links come from cppdocs auto compilation. 
Is there a pointer to how I can start to get a fix PR? Thank you!\n\n\"Image\"\n\n\n[cppdocs repo:]( https://github.com/search?q=repo%3Apytorch%2Fcppdocs%20ppytorch&type=code)\n\n\"Image\"\n\n### Describe your environment\n\nMacOS, \nGoogle Chrome\n\ncc @svekars @sekyondaMeta @AlannaBurke", "url": "https://github.com/pytorch/tutorials/issues/3481", "state": "closed", "labels": [ "bug", "website" ], "created_at": "2025-07-21T06:56:26Z", "updated_at": "2025-07-22T15:46:47Z", "comments": 2, "user": "sniper35" }, { "repo": "pytorch/torchtitan", "number": 1422, "title": "[Gemma3] Support?", "body": "Hi Authors,\n\nIs there a plan for Gemme3 series?\n\nBest,\nPeter", "url": "https://github.com/pytorch/torchtitan/issues/1422", "state": "open", "labels": [], "created_at": "2025-07-20T03:22:02Z", "updated_at": "2025-08-21T03:25:09Z", "comments": 1, "user": "YHPeter" }, { "repo": "pytorch/executorch", "number": 12659, "title": "Fix bug in export recipe logic where quantization output is not being forwarded and reexport if quantized.", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI've found couple of issues with the original export recipes logic has incomplete functionality:\n1. The output of quantize stage is not getting propagated to next stages\n2. When quantize stage is run, we should re-export the model before we lower to edge.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_\n\ncc @JacobSzwejbka @angelayi", "url": "https://github.com/pytorch/executorch/issues/12659", "state": "closed", "labels": [ "module: exir", "triaged" ], "created_at": "2025-07-19T03:22:30Z", "updated_at": "2025-07-23T21:48:14Z", "user": "abhinaykukkadapu" }, { "repo": "pytorch/tutorials", "number": 3473, "title": "\ud83d\udca1trace images are too small to see anything", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\n\nThe trace images in https://docs.pytorch.org/tutorials/intermediate/pinmem_nonblock.html are not quite readable because they are massively scaled down. Is it possible to make them clickable/zoom-able?\n\n\"Image\"\n\nI was able to view them via browser's open image in a new tab feature and then zoom, but this is very cumbersome.\n\nThis probably applies to some other tutorials as well if they contains trace snapshots.\n\nthanks.\n\n### Existing tutorials on this topic\n\nhttps://docs.pytorch.org/tutorials/intermediate/pinmem_nonblock.html \n\n### Additional context\n\n_No response_\n\ncc @svekars @sekyondaMeta @AlannaBurke", "url": "https://github.com/pytorch/tutorials/issues/3473", "state": "open", "labels": [ "website" ], "created_at": "2025-07-18T22:18:37Z", "updated_at": "2025-07-18T22:34:43Z", "comments": 0, "user": "stas00" }, { "repo": "pytorch/torchtitan", "number": 1415, "title": "[Feature request] Use omegaconf or hydra for the config system", "body": "Is there a plan to use Omegaconf or Hydra for the configuration system?\n\nThe current .toml-based configuration system is simple but verbose: it does not support configuration inheritance or composition, which prevents config reuse. 
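To make the composition point in the torchtitan config request above concrete, here is a minimal sketch of what an OmegaConf-based flow could look like; the file names and keys are purely illustrative and this is not an existing torchtitan API:

```python
from omegaconf import OmegaConf

# shared defaults, an experiment-specific override file, and CLI overrides
# (e.g. `training.seq_len=32768`), merged in order with later values winning
base = OmegaConf.load("configs/llama3_base.yaml")
experiment = OmegaConf.load("configs/llama3_8b_long_ctx.yaml")
cli = OmegaConf.from_cli()

cfg = OmegaConf.merge(base, experiment, cli)
print(OmegaConf.to_yaml(cfg))
print(cfg.training.seq_len)  # attribute-style access; interpolation is also supported
```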
\n\nIf this is needed, I am interested in contributing an alternative configuration solution based on Omegaconf.", "url": "https://github.com/pytorch/torchtitan/issues/1415", "state": "open", "labels": [], "created_at": "2025-07-18T18:28:34Z", "updated_at": "2025-07-19T00:49:55Z", "comments": 3, "user": "yzhao30" }, { "repo": "pytorch/executorch", "number": 12627, "title": "How to build executorch for Cortex-A cpu", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI wan to run executorch in Cortex-A cpu devices;\nHow can i do?\nThank you very much\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_", "url": "https://github.com/pytorch/executorch/issues/12627", "state": "closed", "labels": [ "need-user-input", "triaged" ], "created_at": "2025-07-18T01:34:51Z", "updated_at": "2025-07-21T12:28:34Z", "user": "barbecacov" }, { "repo": "pytorch/ao", "number": 2566, "title": "FP8 PerRow quantization (CUDA capability>=9.0)", "body": "I found a description as below:\n--------------------------------------------------------------------------------------------------------------\nA8W8 Float8 Dynamic Quantization with Rowwise Scaling\n# for torch 2.5+\nfrom torchao.quantization import quantize_, PerRow, Float8DynamicActivationFloat8WeightConfig\nquantize_(model, Float8DynamicActivationFloat8WeightConfig(granularity=PerRow()))\nPer-row scaling is only supported for bfloat16 weight and activation. This API is only tested on H100. Hardware with CUDA compute capability 8.9 or greater is required.\n----------------------------------------------------------------------------------------------------------------\nwhich said \"CUDA compute capability 8.9 or greater is required.\".But actually, I found that PerRow() needs CUDA compute capability >=9.0, as in the code \n-------------------------------------------------------------------------------------------------------------\nFile \"/opt/conda/lib/python3.11/site-packages/torchao/quantization/quant_api.py\", line 1475, in _normalize_granularity\n assert is_sm_at_least_90() or is_MI300(), (\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nAssertionError: PerRow quantization only works for CUDA>=9.0 and MI300+\n----------------------------------------------------------------------------------------------------------\nI use torchao==0.11.0, so is there a typo mistake or the code was wrong?", "url": "https://github.com/pytorch/ao/issues/2566", "state": "open", "labels": [], "created_at": "2025-07-17T04:04:24Z", "updated_at": "2025-07-17T18:26:55Z", "comments": 2, "user": "zzlin-0629" }, { "repo": "pytorch/TensorRT", "number": 3691, "title": "\u2753 [Question] How to understand the value of this project", "body": "## \u2753 Question\n\nI am sorry for I did not use this tool before. but since there is a `tensorrt` released in Nvidia tensorrt lib, and this project depends on the Nvidia tensorrt lib, so what is the value of this project? Is it more safe to use this tool to convert pytorch checkpoints to tensorrt engine file directly, then that with pytorch->onnx -> tensorrt pipeline? I had tired to convert my checkpoint to onnx, AMP trained, and no error on onnx fp32, then I use trtexec to convert the onnx to tensorrt engine file, fp16, an bug-in trt file generated and can not be used for inferece. Can I use this package to directly convert checkpoint to trt file, without the inner bugs? or if there is inner-bug, the conversion will report witch line of my pytorch model code triggered this bug? 
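For the torchao PerRow capability question above: as reported, the docs mention SM 8.9+ while the runtime assert in that release requires SM 9.0 (or MI300+), so on an SM 8.9 part the per-tensor granularity is the configuration that currently passes the check. A hedged sketch of selecting the granularity from the detected capability (assumes `model` is already a bfloat16 CUDA model):

```python
import torch
from torchao.quantization import (
    Float8DynamicActivationFloat8WeightConfig,
    PerRow,
    PerTensor,
    quantize_,
)

# per-row float8 currently asserts SM >= 9.0 (or MI300+); fall back to per-tensor
# on SM 8.9 cards such as the RTX 4090
major, minor = torch.cuda.get_device_capability()
granularity = PerRow() if (major, minor) >= (9, 0) else PerTensor()

quantize_(model, Float8DynamicActivationFloat8WeightConfig(granularity=granularity))
```

Whether the documentation or the assert reflects the intended behaviour is exactly the open question in that issue; the guard above just avoids the hard failure either way.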
\nThe report of polygraphy is too hard to find back which line is the bad code.\n\n## What you have already tried\n\n\n\n## Environment\n\n> Build information about Torch-TensorRT can be found by turning on debug messages\n\n - PyTorch Version (e.g., 1.0):\n - CPU Architecture:\n - OS (e.g., Linux):\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\n - Build command you used (if compiling from source):\n - Are you using local sources or building from archives:\n - Python version:\n - CUDA version:\n - GPU models and configuration:\n - Any other relevant information:\n\n## Additional context\n\n\n", "url": "https://github.com/pytorch/TensorRT/issues/3691", "state": "closed", "labels": [ "question" ], "created_at": "2025-07-17T03:47:19Z", "updated_at": "2025-08-19T07:22:22Z", "user": "JohnHerry" }, { "repo": "pytorch/TensorRT", "number": 3683, "title": "\u2753 [Question] HELP:dynamic shape of offset and input is not supported in aten_ops_embedding_bag converter", "body": "## offset and input with dynamic shape is not supported \nIts failed When using tensorrt to compile embedding bag module with dynamic shape in aot mode,\nWhat confuses me is whether the aten_ops_embedding_bag converter supports dynamic shapes for the offset and indices parameters. \nThe official test demo only covers the scenario where the weight has a dynamic shape.\nHowever, during my tests, I found that an negative dimensions error occurs when offset and input is set to a dynamic shape.\n\n## Test Code Demo \n```\n\nclass EmbeddingBagModel(nn.Module):\n def __init__(self, num_embeddings, embedding_dim, hidden_dim=128, mode='mean'):\n super().__init__()\n self.embedding_bag = nn.EmbeddingBag(\n num_embeddings=num_embeddings,\n embedding_dim=embedding_dim,\n mode=mode,\n sparse=False\n )\n nn.init.uniform_(self.embedding_bag.weight, -0.1, 0.1)\n\n self.mlp = nn.Sequential(\n nn.Linear(embedding_dim, hidden_dim),\n nn.ReLU(),\n #nn.BatchNorm1d(hidden_dim),\n nn.Linear(hidden_dim, 1)\n )\n self.sigmoid = nn.Sigmoid()\n def forward(self, input, offsets):\n embedded = self.embedding_bag(input, offsets)\n embedded = embedded.reshape(-1,1,embedding_dim)\n hidden = self.mlp(embedded)\n output = self.sigmoid(hidden)\n return output\n# main\nnum_embeddings = 10000\nembedding_dim = 64\nhidden_dim = 128\nbatch_size = 8\nseq_length = 4\nmodel = EmbeddingBagModel(num_embeddings, embedding_dim, hidden_dim).cuda()\ninput_tensor = torch.randint(0, num_embeddings, (batch_size * seq_length,), dtype=torch.int32).cuda()\noffsets_tensor = torch.arange(0, batch_size * seq_length, seq_length, dtype=torch.int32).cuda()\ninputs=(input_tensor, offsets_tensor)\ndynamic_shapes={\n \"input\": { 0: torch.export.Dim(\"dyn_dim_in\", min=2, max=32),},\n \"offsets\": { 0: torch.export.Dim(\"dyn_dim_off\", min=2, max=32),},\n }\n fx_model = torch.export.export(model, inputs, dynamic_shapes=dynamic_shapes)\n trt_model= torch_tensorrt.dynamo.compile(\n fx_model,\n inputs=inputs,\n enable_precisions=torch.float32,\n min_block_size=1\n )\n\n```\n \n## Error log\n\n```\n File \"/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/_compiler.py\", line 288, in compile\n trt_gm = compile_module(\n ^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/_compiler.py\", line 462, in compile_module\n trt_module = convert_module(\n ^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_conversion.py\", line 142, in convert_module\n interpreter_result = 
interpret_module_to_result(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_conversion.py\", line 121, in interpret_module_to_result\n interpreter_result = interpreter.run()\n ^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 610, in run\n self._construct_trt_network_def()\n File \"/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 347, in _construct_trt_network_def\n super().run()\n File \"/usr/local/lib/python3.12/dist-packages/torch/fx/interpreter.py\", line 146, in run\n self.env[node] = self.run_node(node)\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 676, in run_node\n trt_node: torch.fx.Node = super().run_node(n)\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch/fx/interpreter.py\", line 203, in run_node\n return getattr(self, n.op)(n.target, args, kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 785, in call_function\n return converter(self.ctx, target, args, kwargs, self._cur_node_name)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/converter_utils.py\", line 526, in convert_with_type_enforcement\n return func(ctx, target, new_args, new_kwargs, name)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/aten_ops_converters.py\", line 313, in aten_ops_embedding_bag\n return impl.embedding.embedding_bag(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/impl/embedding.py\", line 401, in embedding_bag\n return embedding_bag_with_ITensor_offsets(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages", "url": "https://github.com/pytorch/TensorRT/issues/3683", "state": "closed", "labels": [ "question" ], "created_at": "2025-07-15T09:01:39Z", "updated_at": "2025-09-09T20:44:07Z", "user": "theflyfish" }, { "repo": "pytorch/helion", "number": 303, "title": "RuntimeError: Tile(0) is not tracked with proxy for", "body": "Hi, I noticed the following when a tile is used in a function:\n\nCode:\n```python\nimport ast\nimport torch\nimport helion\nimport helion.language as hl\nfrom helion.language import _decorators\nfrom helion._compiler.inductor_lowering import CodegenState\n\n@_decorators.api()\ndef func(\n tensor: torch.Tensor,\n tile: tuple[int, ...]\n) -> torch.Tensor:\n raise NotInsideKernel\n\n\n@_decorators.register_fake(func)\ndef _(\n tensor: torch.Tensor,\n tile: tuple[int, ...]\n) -> torch.Tensor:\n return tensor\n\n\n@_decorators.codegen(func)\ndef _(state: CodegenState) -> ast.AST:\n tensor = state.ast_arg(0)\n assert isinstance(tensor, ast.AST)\n return tensor\n\n\n\n@helion.kernel(static_shapes=True)\ndef helion_func(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:\n\n for tile_m, tile_n in hl.tile((x.shape[0], x.shape[1])):\n x_tile = func(x, (tile_m, tile_n))\n\nx = torch.randn(16, 16)\ny = torch.randn(16, 16)\nhelion_func(x, y)\n```\n\nThe above will print:\n\n```python\nInternalError: RuntimeError: Tile(0) (140637436543456)is not tracked with proxy for \n```\n\nAny chance you know how to fix 
this? Thanks again!", "url": "https://github.com/pytorch/helion/issues/303", "state": "closed", "labels": [ "question" ], "created_at": "2025-07-13T07:45:20Z", "updated_at": "2025-08-25T21:25:22Z", "user": "HanGuo97" }, { "repo": "pytorch/vision", "number": 9146, "title": "https://github.com/pytorch/vision/blob/b818d320a14a2e6d9d9f28853e9e7beae703e52e/torchvision/io/video.py#L274", "body": "### \ud83d\udc1b Describe the bug\n\nhttps://github.com/pytorch/vision/blob/b818d320a14a2e6d9d9f28853e9e7beae703e52e/torchvision/io/video.py#L274\n\nthis function warning infinite.\n\nand we don't know how to find the equalent code in torchcodec as well/......\n\n### Versions\n\ndsf", "url": "https://github.com/pytorch/vision/issues/9146", "state": "open", "labels": [], "created_at": "2025-07-11T14:46:36Z", "updated_at": "2025-08-07T14:22:22Z", "comments": 2, "user": "OpenJarvisAI" }, { "repo": "pytorch/torchtitan", "number": 1369, "title": "Puzzling collectives in TP ( SP to be exact)", "body": "### Bug description\n\nOn running 1 step of a modified Llama3 debug_model ( n_layer=1) on 2 ranks with TP=2 , noticed 12 alleduce's ( reduce_scatter+allgather) of expected size , 8 * 2048 * 256 / 2 = 2097152 . There should be 8 allreduce's altogether, right ? One each for SelfAttention and FFN/MLP in the forward and backward for each rank. In total 4 for each rank. \n\nBut from the collectives it looks like what is called TP is actually SP ! In that case, there should have been 16 collectives. 8 for each rank. 2 allgather and 2 reduce-scatter each in forward and backward . \n\n```\ndebug_model.toml:\n\n[training]\nlocal_batch_size = 8\nseq_len = 2048\nmax_norm = 1.0 # grad norm clipping\nsteps = 1\ncompile = false\ndataset = \"c4_test\" # supported datasets: c4_test (2K), c4 (177M)\n\n[parallelism]\ndata_parallel_replicate_degree = 1\ndata_parallel_shard_degree = -1\nfsdp_reshard_after_forward = \"default\" # default / never / always\ntensor_parallel_degree = 2\nenable_async_tensor_parallel = false\npipeline_parallel_degree = 1\ncontext_parallel_degree = 1\n\n\n__init__.py:\n \"debugmodel\": TransformerModelArgs(\n dim=256, n_layers=1, n_heads=16, rope_theta=500000\n ),\n\n\n```\n\nAlso wondering what the other 3 allreduce are for ( count 1, 256 and 2048) ? 
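On the `read_video` question above: the torchcodec counterpart is `VideoDecoder`, which decodes lazily and supports frame-index slicing. A rough sketch, assuming a recent torchcodec release (APIs have shifted between versions, so treat the names as indicative rather than exact):

```python
# pip install torchcodec
from torchcodec.decoders import VideoDecoder

decoder = VideoDecoder("video.mp4")   # lazy: nothing is decoded yet
print(decoder.metadata)               # duration, average fps, frame count, ...

frames = decoder[0:100]               # first 100 frames as a uint8 (N, C, H, W) tensor
last = decoder[-1]                    # single-frame indexing also works
```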
\n\n\n```\n[titan] 2025-07-07 20:12:54,779 - root - INFO - Training starts at step 1.\nhopper01:191370:191370 [1] NCCL INFO ReduceScatter: opCount 1 sendbuff 0x7fa915800000 recvbuff 0x7fa916800000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0\nhopper01:191369:191369 [0] NCCL INFO AllGather: opCount 2 sendbuff 0x7fdf55800000 recvbuff 0x7fdf54434000 count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430\nhopper01:191370:191370 [1] NCCL INFO AllGather: opCount 2 sendbuff 0x7fa915800000 recvbuff 0x7fa914434000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0\nhopper01:191369:191369 [0] NCCL INFO ReduceScatter: opCount 3 sendbuff 0x7fdf61200000 recvbuff 0x7fdf56000000 count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430\nhopper01:191369:191369 [0] NCCL INFO AllGather: opCount 4 sendbuff 0x7fdf60200000 recvbuff 0x7fdf61200000 count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430\nhopper01:191370:191370 [1] NCCL INFO ReduceScatter: opCount 3 sendbuff 0x7fa921200000 recvbuff 0x7fa916000000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0\nhopper01:191370:191370 [1] NCCL INFO AllGather: opCount 4 sendbuff 0x7fa920200000 recvbuff 0x7fa921200000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0\nhopper01:191369:191369 [0] NCCL INFO ReduceScatter: opCount 5 sendbuff 0x7fdf69400000 recvbuff 0x7fdf60a00000 count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430\nhopper01:191369:191369 [0] NCCL INFO AllGather: opCount 6 sendbuff 0x7fdf61200000 recvbuff 0x7fdf69400000 count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430\nhopper01:191370:191370 [1] NCCL INFO ReduceScatter: opCount 5 sendbuff 0x7fa929400000 recvbuff 0x7fa920a00000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0\nhopper01:191370:191370 [1] NCCL INFO AllGather: opCount 6 sendbuff 0x7fa921200000 recvbuff 0x7fa929400000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0\nhopper01:191369:191369 [0] NCCL INFO AllReduce: opCount 7 sendbuff 0x7fdf556b8000 recvbuff 0x7fdf556b8000 count 16384 datatype 7 op 2 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430\nhopper01:191370:191370 [1] NCCL INFO AllReduce: opCount 7 sendbuff 0x7fa9156b8000 recvbuff 0x7fa9156b8000 count 16384 datatype 7 op 2 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0\nhopper01:191369:191369 [0] NCCL INFO AllReduce: opCount 8 sendbuff 0x7fdf556c8000 recvbuff 0x7fdf556c8000 count 16384 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430\nhopper01:191370:191370 [1] NCCL INFO AllReduce: opCount 8 sendbuff 0x7fa9156c8000 recvbuff 0x7fa9156c8000 count 16384 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0\nhopper01:191369:191369 [0] NCCL INFO AllReduce: opCount 9 sendbuff 0x7fdf556d8000 recvbuff 0x7fdf556d8000 count 16384 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430\nhopper01:191370:191370 [1] NCCL INFO AllReduce: opCount 9 sendbuff 0x7fa9156d8000 recvbuff 0x7fa9156d8000 count 16384 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0\nhopper01:191369:191420 [0] NCCL INFO ReduceScatter: opCount a sendbuff 0x7fdf69400000 recvbuff 0x7fdf6a400000 
count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430\nhopper01:191370:191425 [1] NCCL INFO ReduceScatter: opCount a sendbuff 0x7fa929400000 recvbuff 0x7fa92a400000 c", "url": "https://github.com/pytorch/torchtitan/issues/1369", "state": "open", "labels": [ "question" ], "created_at": "2025-07-07T22:12:46Z", "updated_at": "2025-07-10T01:28:07Z", "user": "githubsgi" }, { "repo": "pytorch/tutorials", "number": 3429, "title": "[BUG] - Broken link in intro of 'Learn the Basics' tutorial", "body": "### Add Link\n\nhttps://docs.pytorch.org/tutorials/beginner/basics/intro.html\n\n\n### Describe the bug\n\nIn the 'How to Use This Guide' section, the text reads:\n\n```\nIf you\u2019re new to deep learning frameworks, head right into the first section of our step-by-step guide: [1. Tensors](https://docs.pytorch.org/tutorials/beginner/basics/tensor_tutorial.html).\n```\n\nThat link at the end is broken, because tensor_tutorial.html does not exist. The link should point to tensorqs_tutorial.html instead.\n\nThe result is that clicking on this link results in a 404 error, when it should actually go to the Tensors section\n\n### Describe your environment\n\nMacOs + Google Chrome", "url": "https://github.com/pytorch/tutorials/issues/3429", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-07T19:52:10Z", "updated_at": "2025-07-07T22:18:19Z", "comments": 0, "user": "pankajkakkar" }, { "repo": "pytorch/xla", "number": 9447, "title": "[RFC] Controller for SPMD+MPMD", "body": "# [RFC] Controller for SPMD+MPMD\n\n## Background\nCurrent work is being done to design a solution for making `mark_sharding` first trace the model before it is loaded into devices (https://github.com/pytorch/xla/issues/9341). Together with [Local SPMD](https://github.com/pytorch/xla/issues/9181), this should enable us to achieve [SPMD+MPMD as per its RFC](https://github.com/pytorch/xla/issues/9019). One leftover question is which controller to leverage and how to do it. This RFC aims to provide an approach, and two examples of how SPMD+MPMD.\n\n## API Discussion\nBefore thinking about the specifics on the controller, I think it is important to quickly discuss the user interaction experience with SPMD+MPMD. Specifically, how to handle pipeline parallelism in the context of also doing gSPMD. I see two different approaches: (1) to hide some of that process behind a newly created API, or a new level of abstraction; (2) to leverage existing pipeline parallelism tooling.\n\nI think there is a temptation to create something behind a new API to try to simplify the process as much as possible, and create an easy user experience. However, PyTorch already has strong tooling around pipeline parallelism. These tools see external use, and they themselves ease the process of handling multiple processes running different parts of the pipeline.\n\nRather than creating a new API standard, it is likely better to approach this from a pytorch angle from a \u201cthis is a pytorch backend, how do I do pipeline parallelism with pytorch\u201d. 
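Back to the TP collectives question above: what torchtitan enables under `tensor_parallel_degree` is tensor parallelism combined with sequence parallelism on the norm regions, so each block's forward all-gathers the sequence-sharded activations before the column-parallel projections and reduce-scatters after the row-parallel ones; those pairs take the place of the all-reduces a plain Megatron-style TP trace would show. A rough sketch of the kind of per-block plan involved, using the public DTensor parallel styles (module names are illustrative, not torchtitan's exact plan, and `transformer_block` is assumed to exist with matching submodule names):

```python
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    SequenceParallel,
    parallelize_module,
)

# assumes torchrun with 2 ranks
tp_mesh = init_device_mesh("cuda", (2,), mesh_dim_names=("tp",))

plan = {
    "attention_norm": SequenceParallel(),   # activations sharded along the sequence dim
    "attention.wq": ColwiseParallel(),
    "attention.wk": ColwiseParallel(),
    "attention.wv": ColwiseParallel(),
    "attention.wo": RowwiseParallel(),      # output reduce-scattered back to sequence shards
    "ffn_norm": SequenceParallel(),
    "feed_forward.w1": ColwiseParallel(),
    "feed_forward.w3": ColwiseParallel(),
    "feed_forward.w2": RowwiseParallel(),
}
parallelize_module(transformer_block, tp_mesh, plan)
```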
Looking at that angle, it is better to support SPMD+MPMD in these pipeline parallelism APIs rather than to create a new API.\n\n## Approach\nThe general approach will be to:\n1) Trace model without loading it to devices\n2) Split model into individually executing modules\n3) Create processes to execute on split modules\n4) Have modules be executed by process that will be responsible for executing gSPMD\n\nFrom an implementation perspective, the idea is that by allowing Local SPMD, and latent model initialization, APIs created to specialize on pipeline parallelism should be able to manage their individual processes.\n\n## PiPPy\n[PiPPy](https://github.com/pytorch/PiPPy/tree/main) is the pipeline parallelism library created by pytorch. It has an overall tool set that might be convenient. For PiPPy, pipeline parallelism usually will usually take:\n1) Initializing a model without loading it to devices\n2) Creating a pipe through pipeline\n a. At this step, a `GraphModule` is created which contain the modules for each process to execute later\n3) Initializing a process group ([`dist.init_process_group`](https://docs.pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group))\n4) Creating `PipelineStage`s based on the pipe\n5) Executing each pipeline stage\n\nYou can see a step by step in [PiPPy\u2019s read me](https://github.com/pytorch/PiPPy/tree/main), or a llama model example [here](https://github.com/pytorch/PiPPy/blob/main/examples/llama/pippy_llama.py).\n\nEither way, this lets PiPPy to admin individual processes while each process executes gSPMD for the specific modules it was created with.\n\n## Ray\n[Ray](https://github.com/ray-project/ray) is a cluster controller for python that has a lot of utility for scaling large applications, including AI. Ray does not have an explicit pipeline parallelism API, but it can achieve it by leveraging its [actors](https://docs.ray.io/en/latest/ray-core/actors.html).\n\n1) Leverage PiPPy pipeline to create a `GraphModule`\n2) Leverage \u201cGraphModule\u201d to identify module splits\n3) Create Ray actors based on these graph modules\n4) Launch Ray actors, and wait for them to resolve\n\nRay will administer the different actor pod while each pod executes gSPMD for the specific modules it was created with.\n\n## A tale of two pipeline parallelism approaches\nCurrently PyTorchXLA does have a pipeline parallelism approach documented in https://github.com/pytorch/xla/tree/r2.7?tab=readme-ov-file. In its existing approach, each device is associated with a process. As the original [SPMD+MPMD RFC highlighted](https://github.com/pytorch/xla/issues/9019), this is a flawed approach as we are unable to apply gSPMD when using pipeline parallelism. 
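For reference, the PiPPy flow sketched above now lives upstream as `torch.distributed.pipelining`; a condensed, hedged sketch of the trace-split-build-run steps (the split point, stage count and surrounding variables such as `model`, `rank`, `device` and the batches are illustrative, and a process group with one rank per stage is assumed):

```python
from torch.distributed.pipelining import ScheduleGPipe, SplitPoint, pipeline

# 1) trace and split the model without materializing it on devices
#    (`model` would typically be constructed on the meta device)
pipe = pipeline(
    model,
    mb_args=(example_microbatch,),
    split_spec={"layers.16": SplitPoint.BEGINNING},  # illustrative split point
)

# 2) each rank builds only its own stage and drives it with a schedule
stage = pipe.build_stage(stage_index=rank, device=device)
schedule = ScheduleGPipe(stage, n_microbatches=4)

if rank == 0:
    schedule.step(full_batch)   # first stage feeds the real inputs
else:
    output = schedule.step()    # later stages receive activations; the last returns the output
```

Each stage's submodule could then be sharded locally with gSPMD/`mark_sharding`, which is the combination this RFC is after.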
The endeavor here to allow gSPMD to run in pipeline parallel through PiPPy, Ray, and other APIs might cause some confusion as a duplication of functionality.\n\nGiven that, it is worth noting that after the SPMD+MPMD effort, we should reassess our existing pipeline parallelism methodology, and see if it is possible to deduplicate to the more pytorch approach suggested in the RFC.\n", "url": "https://github.com/pytorch/xla/issues/9447", "state": "open", "labels": [ "distributed", "RFC" ], "created_at": "2025-07-07T05:22:59Z", "updated_at": "2025-07-09T02:01:27Z", "comments": 2, "user": "pgmoka" }, { "repo": "pytorch/ao", "number": 2496, "title": "[Feature Req] Can you add *args and **kwargs to improve extensibility ?", "body": "**Description:**\n\nThe current class implementations have not _*args_ and _**kwargs_ and this reduces extensibility.\n\n**Example:**\n\n> Current\n\n```python\nclass AdamW4bit(_AdamBase):\n def __init__(\n self,\n params,\n lr=1e-3,\n betas=(0.9, 0.999),\n eps=1e-8,\n weight_decay=1e-2,\n amsgrad=False,\n *,\n block_size=128,\n bf16_stochastic_round=False,\n ) -> None:\n super().__init__(\n params,\n lr,\n betas,\n eps,\n weight_decay,\n amsgrad,\n block_size=block_size,\n bf16_stochastic_round=bf16_stochastic_round,\n is_adamw=True,\n )\n\n @staticmethod\n def _subclass_zeros(p: Tensor, signed: bool, block_size: int):\n return OptimState4bit.zeros(p.shape, signed, block_size, p.device)\n```\n\n> Suggested \n\n```python\n\nclass AdamW4bit(_AdamBase):\n def __init__(\n self,\n params,\n lr=1e-3,\n betas=(0.9, 0.999),\n eps=1e-8,\n weight_decay=1e-2,\n amsgrad=False,\n *,\n block_size=128,\n bf16_stochastic_round=False,**kwargs #NOTE: <------- Here\n ) -> None:\n super().__init__(\n params,\n lr,\n betas,\n eps,\n weight_decay,\n amsgrad,\n block_size=block_size,\n bf16_stochastic_round=bf16_stochastic_round,\n is_adamw=True,**kwargs #NOTE: <------- Here\n )\n\n @staticmethod\n def _subclass_zeros(p: Tensor, signed: bool, block_size: int):\n return OptimState4bit.zeros(p.shape, signed, block_size, p.device)\n\n```", "url": "https://github.com/pytorch/ao/issues/2496", "state": "open", "labels": [ "triaged" ], "created_at": "2025-07-06T17:29:19Z", "updated_at": "2025-08-01T02:52:20Z", "comments": 3, "user": "Musa-Sina-Ertugrul" }, { "repo": "pytorch/executorch", "number": 12221, "title": "How to build executorch with ANDROID_ABI=armeabi-v7a", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nhttps://github.com/pytorch/executorch/blob/main/tools/cmake/Utils.cmake#L89\nhere, there is no \"ANDROID_ABI=armeabi-v7a\" option, so if i want to build executorch for ANDROID_ABI=armeabi-v7a, how to do?\nthank you very much\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_\n\ncc @larryliu0820 @jathu", "url": "https://github.com/pytorch/executorch/issues/12221", "state": "open", "labels": [ "module: build/install", "triaged" ], "created_at": "2025-07-04T02:22:51Z", "updated_at": "2025-12-01T07:52:13Z", "user": "barbecacov" }, { "repo": "pytorch/ao", "number": 2477, "title": "Support running multi-device tests in CI", "body": "For float8 training, the test_everything.sh script requires multiple GPUs for FSDP/TP tests, so we currently don't run in CI as it's not configured for multi-device jobs. We should figure out how to run these multi-device tests in CI. 
This would also be useful for some of our new MoE training parallelism tests.", "url": "https://github.com/pytorch/ao/issues/2477", "state": "closed", "labels": [ "ci", "float8" ], "created_at": "2025-07-02T16:29:47Z", "updated_at": "2025-07-16T16:31:06Z", "comments": 2, "user": "danielvegamyhre" }, { "repo": "pytorch/pytorch", "number": 157393, "title": "How to compose HSDP with CP?", "body": "### \ud83d\udc1b Describe the bug\n\nWe're trying to compose HSDP with CP following the [torchtitan blog post](https://discuss.pytorch.org/t/distributed-w-torchtitan-breaking-barriers-training-long-context-llms-with-1m-sequence-length-in-pytorch-using-context-parallel/215082) but are running into some issues and it's unclear to us why.\n\nSuppose we have a device mesh with dimensions `[\"dp\", \"cp\", \"ep\"]` where `ep` corresponds to expert parallelism. What we want to do is FSDP on `dp+cp` shards for the expert parameters and HSDP (replicate on `dp+cp`, shard on `ep`) for the non-expert parameters.\n\nOur code looks like the following:\n\n```\nmesh = DeviceMesh(..., mesh_dim_names=[\"dp\", \"cp\", \"ep\"])\nfsdp_mesh = mesh[\"dp\", \"cp\"]._flatten(mesh_dim_name=\"dp_cp\")\nhsdp_mesh = mesh[\"dp_cp\", \"ep\"]\n```\n\nLine 3 above fails because \"ep\" somehow does not exist in the mesh after \"dp_cp\". I'm not sure if this is a bug or the intended way for DeviceMesh to behave. If the latter, is there any way to use a flattend mesh as the replication dim for HSDP?\n\n### Versions\n\n\nCollecting environment information...\nPyTorch version: 2.7.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 20.04.6 LTS (x86_64)\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\nClang version: Could not collect\nCMake version: version 3.31.6\nLibc version: glibc-2.31\n\nPython version: 3.12.8 | packaged by Anaconda, Inc. 
| (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-5.15.0-1081-aws-x86_64-with-glibc2.31\nIs CUDA available: False\nCUDA runtime version: 12.1.105\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: Could not collect\nNvidia driver version: Could not collect\ncuDNN version: Could not collect\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nAddress sizes: 46 bits physical, 48 bits virtual\nCPU(s): 48\nOn-line CPU(s) list: 0-47\nThread(s) per core: 1\nCore(s) per socket: 24\nSocket(s): 2\nNUMA node(s): 2\nVendor ID: GenuineIntel\nCPU family: 6\nModel: 85\nModel name: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz\nStepping: 4\nCPU MHz: 2499.994\nBogoMIPS: 4999.98\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 1.5 MiB\nL1i cache: 1.5 MiB\nL2 cache: 48 MiB\nL3 cache: 66 MiB\nNUMA node0 CPU(s): 0-23\nNUMA node1 CPU(s): 24-47\nVulnerability Gather data sampling: Unknown: Dependent on hypervisor status\nVulnerability Itlb multihit: KVM: Mitigation: VMX unsupported\nVulnerability L1tf: Mitigation; PTE Inversion\nVulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\nVulnerability Meltdown: Mitigation; PTI\nVulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Vulnerable\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke\n\nVersions of relevant libraries:\n[pip3] numpy==2.2.5\n[pip3] nvidia-cublas-cu12==12.8.3.14\n[pip3] nvidia-cuda-cupti-cu12==12.8.57\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.61\n[pip3] nvidia-cuda-runtime-cu12==12.8.57\n[pip3] nvidia-cudnn-cu12==9.7.1.26\n[pip3] nvidia-cufft-cu12==11.3.3.41\n[pip3] nvidia-curand-cu12==10.3.9.55\n[pip", "url": "https://github.com/pytorch/pytorch/issues/157393", "state": "closed", "labels": [ "oncall: distributed", "triaged" ], "created_at": "2025-07-01T20:45:27Z", "updated_at": "2025-07-09T00:10:23Z", "user": "EugenHotaj" }, { "repo": "pytorch/pytorch", "number": 157352, "title": "[aot_compile]Explanation: Dynamo does not know how to trace the builtin `time.time.`", "body": "### \ud83d\udc1b Describe the bug\n\nGraph break error happened when I compile yolov5 with torch._export.aot_compile interface. I also try with torch.compile and graph breaks also happened. but it compile normally. 
I am not sure whether this is dynamo bug and how can I resolve this issue.\n\n### Error logs\n\n# code example:\n```\nclass MyYoulo(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.youlo = torch.hub.load('ultralytics/yolov5', 'yolov5s')\n\n def forward(self, x):\n return self.youlo(x)\n\nwith torch.no_grad():\n torch.manual_seed(0)\n torch._dynamo.config.suppress_errors = True\n input_cpu = torch.rand([1, 3, 640, 640])\n model_cpu = MyYoulo()\n model_cpu.eval()\n output_cpu = model_cpu(input_cpu)\n\n device = \"cuda\"\n model = model_cpu.to(device=device)\n x = input_cpu.cuda()\n example_inputs = (x,)\n batch_dim = torch.export.Dim(\"batch\", min=1, max=1024)\n model_so_path = torch._export.aot_compile(\n model,\n example_inputs,\n #dynamic_shapes={\"x\": {0: batch_dim}},\n options={\"aot_inductor.output_path\": os.path.join(os.getcwd(), \"libyolo.so\")},\n )\n```\n\n# backtrace\n\n```\n File \"/root/workspace/youlo/youlo.py\", line 35, in \n model_so_path = torch._export.aot_compile(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/conda/lib/python3.11/site-packages/torch/_export/__init__.py\", line 133, in aot_compile\n gm = _export_to_torch_ir(\n ^^^^^^^^^^^^^^^^^^^^\n File \"/opt/conda/lib/python3.11/site-packages/torch/export/_trace.py\", line 739, in _export_to_torch_ir\n gm_torch_level, _ = torch._dynamo.export(\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py\", line 1677, in inner\n result_traced = opt_f(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1762, in _call_impl\n return forward_call(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py\", line 659, in _fn\n raise e.with_traceback(None) from None\ntorch._dynamo.exc.Unsupported: Attempted to call function marked as skipped\n Explanation: Dynamo does not know how to trace the builtin `time.time.` This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind).\n Hint: If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround.\n Hint: If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use `torch.compiler.allow_in_graph`.\n```\n\n### Versions\n\nversion: torch-2.7.0+cu128\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4", "url": "https://github.com/pytorch/pytorch/issues/157352", "state": "closed", "labels": [ "oncall: pt2", "module: dynamo", "oncall: export" ], "created_at": "2025-07-01T05:58:10Z", "updated_at": "2025-07-04T06:23:42Z", "user": "duanmu0228" }, { "repo": "pytorch/examples", "number": 1362, "title": "Resnet50 on single node with 8 GPUs, all the parameters are default. 
why the result is different ?", "body": "Hello, I use the command \"python main.py -a resnet50 --dist-url 'tcp://127.0.0.1:60000/' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0 /my_data_dir/\" train and test resnet50 on a single node with 8 GPUs. But I got Acc@1 75.694 Acc@5 92.704, this is different from the result presented on https://github.com/facebookarchive/fb.resnet.torch/blob/master/pretrained/README.md (ResNet-50 error rate TOP1:24.01\tTOP5:7.02). All the parameters are default. why the result is different ?", "url": "https://github.com/pytorch/examples/issues/1362", "state": "open", "labels": [], "created_at": "2025-07-01T04:37:58Z", "updated_at": "2025-07-01T04:37:58Z", "comments": 0, "user": "sdwhzh" }, { "repo": "pytorch/TensorRT", "number": 3637, "title": "\u2753 [Question] Why is `torch.bfloat16` excluded from the `allowed_casts` set ?", "body": "https://github.com/pytorch/TensorRT/blob/a66241158dc33a96138ac768a9e1facf0cae3594/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py#L1030-L1037\n\n\nIs there a specific reason why `torch.bfloat16` is not included in the `allowed_casts` set within the `to_copy_dtype_validator` function?\n\nPlus, this causes graph partitioning when performing a `aten.ops._to_copy` operation to `torch.bfloat16`. I'm wondering if this could potentially impact performance.", "url": "https://github.com/pytorch/TensorRT/issues/3637", "state": "closed", "labels": [ "question" ], "created_at": "2025-06-30T02:24:47Z", "updated_at": "2025-07-04T00:01:16Z", "user": "junstar92" }, { "repo": "pytorch/torchtitan", "number": 1355, "title": "Llama4 TP bug: DTensor local tensor dtype does not match DTensorSpec tensor meta dtype, causing meta registration error", "body": "### Bug description\n\nWhen I apply FSDP+TP to the Llama4 debug model using plain eager bf16 training, the MoE routed experts weights are DTensors. The local tensor dtype is bf16, but the Dtensor spec tensor meta dtype (`self.w1._spec.tensor_meta.dtype`) is fp32. This mismatch seems to cause the meta registration error below.\n\n### Repro command\n```\nNGPU=4 CONFIG_FILE=\"./torchtitan/experiments/llama4/train_configs/debug_model.toml\" ./run_train.sh --training.steps=100 --parallelism.tensor_parallel_degree=2 \n```\n\n### Meta registration error\n```\n File \"/home/danvm/.conda/envs/torchtitan/lib/python3.13/site-packages/torch/_meta_registrations.py\", line 7527, in _meta_grouped_mm_common\n torch._check(\n ~~~~~~~~~~~~^\n mat_a.dtype == torch.bfloat16 and mat_b.dtype == torch.bfloat16,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n lambda: f\"Expected inputs of BF16 type but got mat_a.dtype={mat_a.dtype} and mat_b.dtype={mat_b.dtype}.\",\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n )\n ^\n File \"/home/danvm/.conda/envs/torchtitan/lib/python3.13/site-packages/torch/__init__.py\", line 1702, in _check\n _check_with(RuntimeError, cond, message)\n ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/danvm/.conda/envs/torchtitan/lib/python3.13/site-packages/torch/__init__.py\", line 1684, in _check_with\n raise error_type(message_evaluated)\n RuntimeError: Expected inputs of BF16 type but got mat_a.dtype=torch.bfloat16 and mat_b.dtype=torch.float32.\n\n```\n\n\n### PDB log\n\nThe following pdb commands/log show inspection of `self.w1` in the MoE layer, confirming the DTensor's local tensor dtype is bf16, yet the DTensorSpec has tensor meta dtype of fp32. 
This seems to be what is causing the meta registration error mismatch.\n\n```\n[rank0]: 86 -> torch.distributed.breakpoint()\n[rank0]: 87 h = F.silu(torch._grouped_mm(x, self.w1, offs=offsets))\n[rank0]: 88 h = h * torch._grouped_mm(x, self.w3, offs=offsets)\n[rank0]: 89 out = torch._grouped_mm(h, self.w2, offs=offsets)\n[rank0]: 90 \n[rank0]: 91 return out\n\nself.w1\n[rank0]:(Pdb) [rank0]:DTensor(local_tensor=tensor([[[-0.0050, -0.0244, 0.0243, ..., 0.0317, 0.0069, -0.0222],\n[rank0]: [-0.0125, 0.0201, -0.0250, ..., 0.0376, 0.0055, -0.0094],\n[rank0]: [-0.0045, -0.0300, -0.0115, ..., -0.0493, -0.0259, 0.0117],\n[rank0]: ...,\n[rank0]: [-0.0112, -0.0012, -0.0051, ..., -0.0104, 0.0087, -0.0325],\n[rank0]: [ 0.0209, 0.0086, 0.0109, ..., -0.0430, -0.0036, 0.0359],\n[rank0]: [ 0.0110, -0.0234, -0.0066, ..., -0.0238, 0.0148, -0.0304]],\n[rank0]:\n[rank0]: [[-0.0168, -0.0038, 0.0179, ..., 0.0076, -0.0461, -0.0182],\n[rank0]: [-0.0109, -0.0120, 0.0427, ..., -0.0027, -0.0048, -0.0131],\n[rank0]: [-0.0156, 0.0018, -0.0083, ..., 0.0189, 0.0309, 0.0066],\n[rank0]: ...,\n[rank0]: [-0.0021, -0.0231, 0.0132, ..., -0.0095, -0.0050, -0.0168],\n[rank0]: [-0.0422, 0.0035, 0.0017, ..., 0.0339, 0.0195, 0.0003],\n[rank0]: [ 0.0183, 0.0415, 0.0552, ..., 0.0084, 0.0159, 0.0229]],\n[rank0]:\n[rank0]: [[ 0.0036, -0.0337, 0.0398, ..., 0.0027, -0.0219, 0.0043],\n[rank0]: [-0.0107, -0.0270, 0.0166, ..., 0.0044, -0.0030, 0.0432],\n[rank0]: [ 0.0233, 0.0203, 0.0106, ..., -0.0018, -0.0118, -0.0060],\n[rank0]: ...,\n[rank0]: [-0.0247, -0.0038, -0.0322, ..., 0.0172, 0.0156, -0.0047],\n[rank0]: [-0.0225, 0.0289, 0.0299, ..., 0.0025, -0.0221, 0.0134],\n[rank0]: [ 0.0093, 0.0255, -0.0039, ..., 0.0045, -0.0226, -0.0170]],\n[rank0]:\n[rank0]: ...,\n[rank0]:\n[rank0]: [[-0.0120, -0.0054, -0.0262, ..., 0.0086, -0.0012, -0.0043],\n[rank0]: [-0.0192, -0.0245, 0.0143, ..., -0.0083, 0.0111, 0.0067],\n[rank0]: [ 0.0220, -0.0182, 0.0442, ..., 0.0008, 0.0240, 0.0167],\n[rank0]: ...,\n[rank0]: [ 0.0165, -0.0152, 0.0175, ..., 0.0027, 0.0120, 0.0100],\n[rank0]: [ 0.0050, -0.0135, 0.0160, ..., 0.0311, 0.0106, 0.0571],\n[rank0]: [ 0.0199, -0.0073, 0.0215, ..., 0.0131, 0.0327, 0.0097]],\n[rank0]:\n[rank0]: [[ 0.0113, 0.0044, -0.0234, ..., 0.0009, 0.0026, -0.0031],\n[rank0]: [ 0.0059, -0.0195, -0.0089, ..., 0.0269, -0.0195, 0.0033],\n[rank0]: [ 0.0366, 0.0199, 0.0055, ..., -0.0400, -0.0101, -0.0386],\n[rank0]: ...,\n[rank0]: [-0.0040, -0.0228, -0.0114, ..., -0.0342, -0.0032, -0.0157],\n[rank0]: [ 0.0277, -0.0120, -0.0300, ..., 0.0079, 0.0038, 0.0342],\n[rank0]: [-0.0057, 0.0148, -0.0048, ..., -0.0192, -0.0291, 0.0187]],\n[rank0]:\n[rank0]: [[-0.0291, -0.0271, 0.0058, ..., 0.0035, 0.0095, 0.0045],\n[rank0]: [ 0.0508, 0.0175, -0.0264, ..., 0.0070, -0.0014, -0.0064],\n[rank0]: [", "url": "https://github.com/pytorch/torchtitan/issues/1355", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-28T05:31:22Z", "updated_at": "2025-08-21T03:23:49Z", "comments": 2, "user": "danielvegamyhre" }, { "repo": "pytorch/ao", "number": 2456, "title": "How to not decompose the choose_qparams_affine call_func", "body": "Hi,\nIn the current v0.11.0, after torch.export.export() I have the graph below:\n```\n(Pdb) print(ep.graph)\ngraph():\n %linear1_weight : [num_users=1] = get_attr[target=linear1.weight]\n %x : [num_users=2] = placeholder[target=x]\n %choose_qparams_affine : [num_users=2] = call_function[target=torch.ops.torchao.choose_qparams_affine.default](args = (%x, SYMMETRIC, [2, 32], torch.float8_e4m3fn, -448, 448, 1.1920928955078125e-07, 
torch.float32, None, True, NONE), kwargs = {})\n %getitem : [num_users=2] = call_function[target=operator.getitem](args = (%choose_qparams_affine, 0), kwargs = {})\n %getitem_1 : [num_users=0] = call_function[target=operator.getitem](args = (%choose_qparams_affine, 1), kwargs = {})\n %quantize_affine : [num_users=1] = call_function[target=torch.ops.torchao.quantize_affine.default](args = (%x, [2, 32], %getitem, None, torch.float8_e4m3fn, -448, 448, NONE), kwargs = {})\n %reshape : [num_users=1] = call_function[target=torch.ops.aten.reshape.default](args = (%quantize_affine, [-1, 32]), kwargs = {})\n %numpy_t : [num_users=1] = call_function[target=torch.ops.aten.numpy_T.default](args = (%access_subclass_inner_tensor_default_72,), kwargs = {})\n %_scaled_mm : [num_users=1] = call_function[target=torch.ops.aten._scaled_mm.default](args = (%reshape, %numpy_t, %getitem, %access_subclass_inner_tensor_default_73, None, None, torch.float32, True), kwargs = {})\n %reshape_1 : [num_users=1] = call_function[target=torch.ops.aten.reshape.default](args = (%_scaled_mm, [2, 16]), kwargs = {})\n return (reshape_1,)\n```\n\nHowever if I use the latest torchao nightly, I found that choose_qparams_affine call_func being decomposed to a set of aten ops:\nwhich is probablly introduced by \nhttps://github.com/pytorch/ao/commit/8940aa72b182afe70f95e33500f01fc270c9f7cd#diff-d2a11602a79e83305208472f1abe6a4106f02ce62a7f9524007181813863fcf6\n\nIs there a way to avoid decompose the choose_qparams_affine call_func?\nOr from the decomposed ep.graph, how can I get the undecomposed nodes?\n\n\nexample code:\n\n```\nimport torch\nfrom torchao.quantization.quant_api import (\n quantize_,\n Float8DynamicActivationFloat8WeightConfig\n)\n\nclass SimpleNetwork(torch.nn.Module):\n def __init__(self):\n super(SimpleNetwork, self).__init__()\n self.linear = torch.nn.Linear(in_features=32, out_features=16, bias=False)\n\n def forward(self, x):\n return self.linear(x)\n\nmodel= SimpleNetwork().eval().cuda()\ninput = torch.randn(2, 32).cuda()\nconfig = Float8DynamicActivationFloat8WeightConfig()\nquantize_(model, config)\n\nep = torch.export.export(model, (input,), strict=False) \n```", "url": "https://github.com/pytorch/ao/issues/2456", "state": "open", "labels": [], "created_at": "2025-06-27T22:23:33Z", "updated_at": "2025-07-25T18:26:32Z", "user": "lanluo-nvidia" }, { "repo": "pytorch/torchtitan", "number": 1344, "title": "Issue reproducing Float8 performance benchmark", "body": "### Bug description\n\nI'm looking at https://github.com/pytorch/torchtitan/blob/main/benchmarks/llama3_h100_202412_torchtitan.md. Specifically, this table:\n\n\"Image\"\n\nI'm not certain what the repro command for this. From https://github.com/pytorch/torchtitan/blob/main/docs/float8.md, I went ahead with `CONFIG_FILE=\"./torchtitan/models/llama3/train_configs/llama3_8b.toml\" ./run_train.sh --model.converters=\"float8\" --float8.enable_fsdp_float8_all_gather --float8.precompute_float8_dynamic_scale_for_fsdp --float8.force_recompute_fp8_weight_in_bwd --training.compile`. \n\nMade the following changes to my llama3 toml: https://gist.github.com/xmfan/53fca4ed56cf7e713a282ce6e1922e9e\n- seq_len = 32768\n- data_parallel_shard_degree = 8 (for 8 gpu fsdp)\n- activation_checkpoint.mode = \"full\"\n- steps = 400 (just for a shorter run)\n\nBut my peak memory of the run seems way lower than the one quoted in the perf benchmarks, which makes me think I did something wrong. 
@tianyu-l tried these settings, and got a hang instead.\n\nAre these the correct settings for this benchmark?\n\nhttps://gist.github.com/xmfan/5a6b6daa0968aed7499ef364dae61420\n\n### Versions\n\nlatest torchao (`USE_CPP=0 python -m pip install git+https://github.com/pytorch/ao.git`), pytorch 06/25 nightly, torchtitan main", "url": "https://github.com/pytorch/torchtitan/issues/1344", "state": "open", "labels": [ "documentation" ], "created_at": "2025-06-26T04:22:28Z", "updated_at": "2025-07-10T01:53:47Z", "comments": 6, "user": "xmfan" }, { "repo": "pytorch/xla", "number": 9405, "title": "Cannot mark sharding or print values of a SPMD tensor in a scanned function", "body": "## \ud83d\udc1b Bug\n\nCannot mark sharding or print values of a SPMD tensor in a scanned function\n\n## To Reproduce\n\n```python\nimport torch_xla.core.xla_model as xm\nimport torch_xla.runtime as xr\nimport torch_xla.distributed.spmd as xs\nfrom torch_xla.experimental.scan import scan\n\nimport torch\nfrom torch import nn\n\nimport numpy as np\n\nclass ModelWithOnlyScan(nn.Module):\n def __init__(self, size: int, num_layers: int):\n super().__init__()\n self.linear_weight = nn.Parameter(torch.randn(num_layers, size, size))\n\n @staticmethod\n def scan_fn(carry, w):\n x, y = carry\n xs.mark_sharding(y, xs.get_global_mesh(), (None, None)) # !! exception here\n # or\n print(y) # !! exception here\n x = x * torch.nn.functional.gelu(x @ w.T, approximate=\"tanh\") * (y @ w.T)\n return (x, y), None\n\n def forward(self, x, y):\n state = (x, y)\n return scan(self.scan_fn, init=state, xs=self.linear_weight)[0]\n\ndef init_spmd() -> xs.Mesh:\n n_dev = xr.global_runtime_device_count()\n mesh_shape = (n_dev,)\n dev_id = np.array(range(n_dev))\n xr.use_spmd()\n\n mesh = xs.Mesh(dev_id, mesh_shape, (\"fsdp\", ))\n xs.set_global_mesh(mesh)\n\n return mesh\n\ndef test_scan_spmd():\n init_spmd()\n mesh = xs.get_global_mesh()\n\n size = 32\n num_layers = 4\n model = ModelWithOnlyScan(size, num_layers).to(\"xla\")\n xs.mark_sharding(model.linear_weight, mesh, (None, \"fsdp\", None))\n\n input_x = torch.randn(4, size).to(\"xla\")\n input_y = torch.randn(4, size).to(\"xla\")\n xs.mark_sharding(input_x, mesh, (\"fsdp\", None))\n xs.mark_sharding(input_y, mesh, (\"fsdp\", None))\n \n output = model(input_x, input_y)\n\n xm.mark_step()\n print(output)\n\nif __name__ == \"__main__\":\n test_scan_spmd()\n```\n\nSample stack trace:\n```\nTraceback (most recent call last):\n File \"/root/my-repo/./repro_spmd.py\", line 60, in \n test_scan_spmd()\n File \"/root/my-repo/./repro_spmd.py\", line 54, in test_scan_spmd\n output = model(input_x, input_y)\n File \"/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File \"/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1762, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/root/my-repo/./repro_spmd.py\", line 27, in forward\n return scan(self.scan_fn, init=state, xs=self.linear_weight)[0]\n File \"/usr/local/lib/python3.10/site-packages/torch_xla/experimental/scan.py\", line 158, in scan\n forward, alias_input, backward = value_and_grad_partitioned(\n File \"/usr/local/lib/python3.10/site-packages/torch_xla/experimental/scan.py\", line 255, in value_and_grad_partitioned\n out = fn_compiled(fake_carry_pytree, fake_x_pytree)\n File \"/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py\", line 929, in returned_function\n compiled_fn, _ = 
create_aot_dispatcher_function(\n File \"/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py\", line 570, in create_aot_dispatcher_function\n return _create_aot_dispatcher_function(\n File \"/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py\", line 671, in _create_aot_dispatcher_function\n fw_metadata = run_functionalized_fw_and_collect_metadata(\n File \"/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/collect_metadata_analysis.py\", line 197, in inner\n flat_f_outs = f(*flat_f_args)\n File \"/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py\", line 184, in flat_fn\n tree_out = fn(*args, **kwargs)\n File \"/usr/local/lib/python3.10/site-packages/torch_xla/experimental/scan.py\", line 244, in fn_no_output_aliasing\n return tree_map(lambda v: v.clone() if v in inputs else v, fn(*args))\n File \"/root/my-repo/./repro_spmd.py\", line 19, in scan_fn\n xs.mark_sharding(y, xs.get_global_mesh(), (None, None))\n File \"/usr/local/lib/python3.10/site-packages/torch_xla/distributed/spmd/xla_sharding.py\", line 563, in mark_sharding\n annotate_func(unwrap_sharded_tensor(t), op_sharding)\nRuntimeError: torch_xla/csrc/aten_xla_bridge.cpp:110 : Check failed: xtensor \n*** Begin stack trace ***\n tsl::CurrentStackTrace[abi:cxx11]()\n torch_xla::bridge::GetXlaTensor(at::Tensor const&)\n torch_xla::ShardingUtil::XlaMarkSharding(at::Tensor const&, xla::OpSharding)\n\n\n\n\n _PyObject_MakeTpCall\n _PyEval_EvalFrameDefault\n\n _PyEval_EvalFrameDefault\n\n _PyEval_EvalFrameDefault\n\n _PyEval_EvalFrameDefault\n\n _PyEval_EvalFrameDefault\n\n _PyEval_EvalFrameDefault\n\n _PyEval_EvalFrameDefault\n\n _PyEval_EvalFrameDefault\n\n _PyEval_EvalFrameDefault\n\n _PyEval_EvalFrameDefault\n\n _PyEval_EvalFrameDefault\n\n\n _PyEval_EvalFrameDefault\n\n\n _PyEval_EvalFrameDefault\n\n _PyObject_FastCallDictTstate\n _PyObject_Call_Prepend\n\n _PyObject_MakeTpCall\n _PyEval_EvalFrameDefau", "url": "https://github.com/pytorch/xla/issues/9405", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-25T10:36:50Z", "updated_at": "2025-06-27T12:31:38Z", "comments": 3, "user": "Topologized" }, { "repo": "pytorch/pytorch", "number": 156797, "title": "How to use compile cache?", "body": "According to the documentation at https://docs.pytorch.org/tutorials/recipes/torch_compile_caching_tutorial.html, we can use torch.compiler.save_cache_artifacts() and torch.compiler.load_cache_artifacts() to reduce compilation time. \n\nHowever, when exactly should we save the cache, and when should we load it? Is there a clear example or recommended practice for this?\n\ncc @svekars @sekyondaMeta @AlannaBurke @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/156797", "state": "closed", "labels": [ "module: docs", "oncall: pt2" ], "created_at": "2025-06-25T06:15:38Z", "updated_at": "2025-06-30T03:32:22Z", "user": "jhl13" }, { "repo": "pytorch/torchtitan", "number": 1334, "title": "[Low-bit Optimizers] Do torchtitan plan to integrate AdamW8bit or AdamWFP8 from TorchAO", "body": "Currently, using low-bit optimizers from [TorchAO](https://github.com/pytorch/ao) such as AdamW8bit and AdamWFP8 is not supported in this repo. Low-bit optimizers could significantly reduce memory usage and improve training efficiency. It would be a great enhancement to support them natively.\n\nIs there any plan to support them in torchtitan? 
Would love to hear thoughts on potential integration or any known workarounds!\n\n", "url": "https://github.com/pytorch/torchtitan/issues/1334", "state": "open", "labels": [], "created_at": "2025-06-24T21:28:20Z", "updated_at": "2025-06-25T03:13:35Z", "comments": 4, "user": "haochengxi" }, { "repo": "pytorch/pytorch", "number": 156673, "title": "[Onnx] How to do torch-dynamo based onnx exports for SAM-like models with optional inputs?", "body": "### \ud83d\udc1b Describe the bug\n\nI like to generate an onnx model with torch-dynamo for SAM. How can I work with conditional inputs, like so:\n```\nfrom typing import Optional\nimport torch\nfrom torch import Tensor\n\n\nclass Model(torch.nn.Module):\n\n def __init__(self):\n super().__init__()\n\n def foward(self, image, points: Optional[Tensor], bb: Optional[Tensor]):\n if points is not None:\n return torch.ones(1, 1, image.shape[2], image.shape[3])\n elif bb is not None:\n return torch.rand((1, 1, image.shape[2], image.shape[3]))\n return torch.zeros(1, 1, image.shape[2], image.shape[3])\n```\nThe original code is [here:](https://github.com/facebookresearch/segment-anything/blob/main/segment_anything/predictor.py#L138)\n \nI guess, I can deal with the branch by using `torch.cond`. But I wonder how to trace both paths? How should I specify the function arguments in [torch.onnx.dyanmo_export](https://docs.pytorch.org/docs/stable/onnx_dynamo.html#torch.onnx.dynamo_export)\n\nThere is [documentation](https://docs.pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/torch_export_sam2.html ) about ONNX Export for SAM, but that specializes on label inputs: \n\n\n### Error logs\n\n_No response_\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.8.0.dev20250512+cu118\nIs debug build: False\nCUDA used to build PyTorch: 11.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.2 LTS (x86_64)\nGCC version: (Ubuntu 14.2.0-4ubuntu2~24.04) 14.2.0\nClang version: 19.1.1 (1ubuntu1~24.04.2)\nCMake version: version 3.28.3\nLibc version: glibc-2.39\n\nPython version: 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-6.11.0-26-generic-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: 12.0.140\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA RTX 500 Ada Generation Laptop GPU\nNvidia driver version: 550.144.03\ncuDNN version: Could not collect\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 22\nOn-line CPU(s) list: 0-21\nVendor ID: GenuineIntel\nModel name: Intel(R) Core(TM) Ultra 7 155H\nCPU family: 6\nModel: 170\nThread(s) per core: 2\nCore(s) per socket: 16\nSocket(s): 1\nStepping: 4\nCPU(s) scaling MHz: 29%\nCPU max MHz: 4800.0000\nCPU min MHz: 400.0000\nBogoMIPS: 5990.40\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 
erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 544 KiB (14 instances)\nL1i cache: 896 KiB (14 instances)\nL2 cache: 18 MiB (9 instances)\nL3 cache: 24 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-21\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBR", "url": "https://github.com/pytorch/pytorch/issues/156673", "state": "closed", "labels": [ "module: onnx", "oncall: pt2", "oncall: export" ], "created_at": "2025-06-24T04:20:24Z", "updated_at": "2025-09-11T04:37:42Z", "user": "FabianSchuetze" }, { "repo": "pytorch/torchtitan", "number": 1329, "title": "OOM recovery under multi-node FSDP/HSDP", "body": "### Bug description\n\nDoes torchtitan provide any recipes of how to implement batch skipping / OOM recovery in multi-node FSDP setup?\n\nIn RL/GRPO training this is very pertinent (where we don't know response seqlens a-priori to do packing / clipping):\n- https://github.com/volcengine/verl/issues/2159\n\nOne thing I could think of:\n- some sort of micro-batching for backward pass\n- some generic batch skipping\n\nSome sort of memory operation tracing would also be very useful to better know what is the reason of OOM (fragmentation):\n- https://github.com/pytorch/pytorch/issues/91692#issuecomment-2996838221\n\n### Versions\n\nN/A", "url": "https://github.com/pytorch/torchtitan/issues/1329", "state": "open", "labels": [ "question", "post training" ], "created_at": "2025-06-23T16:22:58Z", "updated_at": "2025-10-02T02:33:20Z", "user": "vadimkantorov" }, { "repo": "pytorch/ao", "number": 2419, "title": "Benefits of Using QAT Before GGUF Quantization?", "body": "Hi,\nthank you for the amazing project.\n\nI have a question regarding quantization workflows. Does applying QAT before convering to GGUF format (e.g. using `Q4, Q4_K_M`) result in better quality fompared to directy quantizing with GGUF alone?\n\nI'm planning to serve my model using llama.cpp, so converting to GGUF is required. I\u2019ve noticed a noticeable quality drop when using methods provided by llama.cpp, so I\u2019m considering trying QAT to mitigate this.\n\nHas anyone experimented with this approach or have any insights to share?\n\nThanks.", "url": "https://github.com/pytorch/ao/issues/2419", "state": "closed", "labels": [], "created_at": "2025-06-21T01:22:49Z", "updated_at": "2025-06-25T11:56:11Z", "comments": 5, "user": "kiyoonyoo" }, { "repo": "pytorch/torchtitan", "number": 1323, "title": "Why `preserve_rng_state=False` in activation checkpointing", "body": "Why does torchtitan set `preserve_rng_state=False` for activation checkpointing? 
E.g.:\nhttps://github.com/pytorch/torchtitan/blob/f4048f8e1b36827156c4dc861c9680333a8542f9/torchtitan/models/llama3/infra/parallelize.py#L238", "url": "https://github.com/pytorch/torchtitan/issues/1323", "state": "open", "labels": [ "question", "high priority", "triage review", "module: activation checkpointing" ], "created_at": "2025-06-20T20:22:42Z", "updated_at": "2025-08-25T04:58:04Z", "user": "awgu" }, { "repo": "pytorch/torchtitan", "number": 1322, "title": "How to adapt HuggingFace or other models for TorchTitan", "body": "Is there any thought on how to adapt HuggingFace or other models for pre-training with TorchTitan?", "url": "https://github.com/pytorch/torchtitan/issues/1322", "state": "open", "labels": [ "duplicate" ], "created_at": "2025-06-20T19:39:54Z", "updated_at": "2025-08-21T03:22:37Z", "user": "githubsgi" }, { "repo": "pytorch/torchrec", "number": 3114, "title": "Which lightning strategy to use with torchrec optimizers?", "body": "Hi, thank you for this great work. I would like to know which [distributed strategy](https://github.com/Lightning-AI/pytorch-lightning/blob/76d3d22c5997398ffb5296cf500c723a176c0a06/src/lightning/pytorch/trainer/trainer.py#L95) to use with the Lightning trainer. I see two potential avenues:\n1. DDP strategy: [following this example](https://github.com/pytorch/torchrec/blob/ab1cbe13833f51ace06f5075653ca1e16d937038/examples/bert4rec/bert4rec_main.py#L512-L524), I verified that the updates are not sparse, i.e., embeddings not used to compute the loss for the current batch were still updated when using Adam (due to momentum/weight decay)\n2. Custom strategy for DMP: [when using DMP](https://github.com/pytorch/torchrec/blob/ab1cbe13833f51ace06f5075653ca1e16d937038/examples/bert4rec/bert4rec_main.py#L491-L507), I've verified the updates are sparse. However, AFAIK, there is no DMP strategy for Lightning, and so I would need to define a custom strategy; a rough sketch is given below. 
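\n\nA rough, untested sketch of what I imagine such a custom strategy could look like. It assumes pytorch_lightning's `DDPStrategy` still exposes the private `_setup_model` hook (not a stable public API), and a real version would probably also need to adjust other DDP-specific hooks (gradient-sync toggles, checkpoint IO), so please treat it only as a starting point:\n\n```\nimport pytorch_lightning as pl\nfrom pytorch_lightning.strategies import DDPStrategy\nfrom torchrec.distributed.model_parallel import DistributedModelParallel as DMP\n\n\nclass DMPStrategy(DDPStrategy):\n    # Wrap the LightningModule in DMP instead of DistributedDataParallel so the\n    # EmbeddingBagCollection gets sharded and keeps its sparse update semantics.\n    def _setup_model(self, model):\n        return DMP(module=model, device=self.root_device)\n\n\n# trainer = pl.Trainer(accelerator=\"gpu\", devices=8, strategy=DMPStrategy())\n```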
\n\nIs it possible to make DDP work for sparse opt, and if not, is a custom strategy the best option?\n\nMWE:\n\n```\n\nimport argparse\nimport os\nimport sys\nfrom typing import Any, cast, Dict, List, Union\n\nfrom fbgemm_gpu.split_embedding_configs import EmbOptimType\n\nimport torch\nfrom torch import distributed as dist, nn, optim \nimport torch.utils.data as data_utils\nfrom torch.nn.parallel import DistributedDataParallel as DDP\n\nimport torchrec\nfrom torchrec.distributed.embeddingbag import EmbeddingBagCollectionSharder\nfrom torchrec.distributed.model_parallel import DistributedModelParallel as DMP\nfrom torchrec.distributed.types import ModuleSharder\nfrom torchrec.optim.keyed import CombinedOptimizer, KeyedOptimizerWrapper\nfrom torchrec.optim.optimizers import in_backward_optimizer_filter\nfrom torchrec.sparse.jagged_tensor import KeyedJaggedTensor\nfrom torchrec.modules.embedding_configs import ShardingType\nfrom torchrec import EmbeddingBagCollection, EmbeddingBagConfig, PoolingType\n\nclass DataSet(torch.utils.data.IterableDataset):\n def __init__(\n self,\n max_id: int,\n max_seq_len: int\n ) -> None:\n self.max_seq_len = max_seq_len\n self.max_id = max_id\n\n def __iter__(self):\n while True:\n len_ = torch.randint(1, self.max_seq_len + 1, (1, )).item()\n yield torch.randint(0, self.max_id, (len_,))\n\n\nclass Model(torch.nn.Module):\n\n def __init__(\n self,\n max_id: int,\n emb_dim: int,\n ) -> None:\n super().__init__()\n self.emb_dim = emb_dim\n\n item_embedding_config = EmbeddingBagConfig(\n name=\"item_embedding\",\n embedding_dim=emb_dim,\n num_embeddings=max_id,\n feature_names=[\"item\"],\n weight_init_max=1.0,\n weight_init_min=-1.0,\n pooling=PoolingType.MEAN,\n )\n self.ebc = EmbeddingBagCollection(\n tables=[item_embedding_config],\n )\n self.head = nn.Linear(emb_dim, 1)\n\n def forward(self, x: KeyedJaggedTensor) -> torch.Tensor:\n out = self.ebc(x)[\"item\"].to_dense()\n return self.head(out)\n\ndef parse_args(argv: List[str]) -> argparse.Namespace:\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--mode\",\n type=str,\n default=\"ddp\",\n help=\"dmp (distributed model parallel) or ddp (distributed data parallel)\",\n )\n return parser.parse_args(argv)\n\n\ndef _to_kjt(seqs: torch.LongTensor, device: torch.device) -> KeyedJaggedTensor:\n seqs_list = list(seqs)\n lengths = torch.IntTensor([value.size(0) for value in seqs_list])\n values = torch.cat(seqs_list, dim=0)\n\n kjt = KeyedJaggedTensor.from_lengths_sync(\n keys=[\"item\"], values=values, lengths=lengths\n ).to(device)\n return kjt\n\ndef get_embedding_weights(model: Union[DDP, DMP], x: List[torch.Tensor]):\n emb_weights = [v.data.clone() for k, v in model.named_parameters() if \"embedding\" in k]\n assert len(emb_weights) == 1\n emb_weights = emb_weights[0]\n x = torch.cat(x)\n ids = torch.arange(len(emb_weights)).type_as(x)\n used_mask = torch.isin(ids, x)\n return emb_weights[used_mask], emb_weights[~used_mask]\n\ndef _train_one_epoch(\n model: Union[DDP, DMP],\n loader: data_utils.DataLoader,\n device: torch.device,\n optimizer: optim.Adam,\n) -> None:\n model.train()\n if torch.cuda.is_available():\n torch.cuda.set_device(dist.get_rank())\n i = 0\n NUM_ITER = 5\n for batch in loader:\n i += 1\n batch = [x.to(device) for x in batch]\n optimizer.zero_grad()\n kjt = _to_kjt(batch, device)\n loss = model(kjt).norm()\n used_embs_pre, unused_embs_pre = get_embedding_weights(model, batch)\n loss.backward()\n optimizer.step()\n used_embs_post, unused_embs_post = 
get_embedding_weights(model, batch)\n\n diffs_used = torch.norm(used_embs_post - used_embs_pre).item()\n diffs_unused = torch.norm(unused_embs_post - unused_embs_pre).item()\n\n print(f\"Iter {i", "url": "https://github.com/meta-pytorch/torchrec/issues/3114", "state": "open", "labels": [], "created_at": "2025-06-18T19:25:10Z", "updated_at": "2025-06-19T06:04:22Z", "comments": 0, "user": "JacobHelwig" }, { "repo": "pytorch/vision", "number": 9110, "title": "RoIHeads.postprocess_detections boxes slicing error occurs when removing predictions with the background label", "body": "### \ud83d\udc1b Describe the bug\n\n**Bug Report: Incorrect Box Slicing in Faster R-CNN's postprocess_detections**\n\n### Minimal Reproduction Code\n```python\nimport torch\nimport torchvision\n\ndetector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)\ndata = torch.zeros((1, 3, 1080, 1920), dtype=torch.float32)\ndetections = detector(data)\n```\n\n### Description\nThe bug occurs in [`roi_heads.py` (line 701)](https://github.com/pytorch/vision/blob/main/torchvision/models/detection/roi_heads.py#L701) in the `postprocess_detections` function of `RoIHeads` when processing Faster R-CNN outputs. The current implementation incorrectly handles box dimension slicing when removing background class predictions.\n\n### Problem Location\nThe problematic code segment:\n```python\nfor boxes, scores, image_shape in zip(pred_boxes_list, pred_scores_list, image_shapes):\n ...\n # remove predictions with the background label\n boxes = boxes[:, 1:] # Incorrect slicing\n scores = scores[:, 1:]\n labels = labels[:, 1:]\n ...\n```\n\n### Root Cause\n1. The boxes tensor has shape `[N, num_classes * 4]` (where each class has 4 coordinate values)\n2. The current slicing `boxes[:, 1:]` incorrectly operates on the last dimension (class*coordinates) instead of just the class dimension\n3. This causes misalignment between boxes, scores, and labels since they're being sliced differently\n\n![Image](https://github.com/user-attachments/assets/d1c0b97d-c873-469e-9cc6-5eb0a80e6765)\n\n### Expected Behavior\nThe boxes tensor should first be reshaped to `[N, num_classes, 4]` before slicing to properly separate class and coordinate dimensions.\n\n### Proposed Fix\n```python\nfor boxes, scores, image_shape in zip(pred_boxes_list, pred_scores_list, image_shapes):\n ...\n # remove predictions with the background label\n boxes = boxes.reshape(-1, num_classes, 4) # Proper dimension separation\n boxes = boxes[:, 1:, :] # Correct class dimension slicing\n scores = scores[:, 1:]\n labels = labels[:, 1:]\n ...\n```\n\n### Impact\nThe current implementation leads to:\n1. Misaligned boxes and their corresponding scores/labels\n2. Potentially incorrect final detection results\n3. Silent failure without explicit errors\n\n### Versions\n\nbranch: 6473b779bdb8ba02bab0fc9e0f4ef4661ebb632a", "url": "https://github.com/pytorch/vision/issues/9110", "state": "closed", "labels": [ "bug", "question" ], "created_at": "2025-06-18T08:55:33Z", "updated_at": "2025-09-04T14:52:39Z", "user": "FeiFanMoKe" }, { "repo": "pytorch/pytorch", "number": 156191, "title": "Dynamo does not know how to trace method `__len__` of class `` with torch.logging calls", "body": "### \ud83d\udc1b Describe the bug\n\nWhenever we use any logging function, there is a graph break due to calling `__len__` on an unkown type. 
I dug into the logging source code and set a breakpoint, and the `root.handlers` object is defintiely a standard list but torch.compile isn't able to parse that.\n\nI know that there is there is this change https://github.com/pytorch/pytorch/pull/139403 that allows us to ignore certain logging functions but calling `logging.info` still forces us through this code path.\n\nWe use a ton of logging throughout our large training script and the graph breaks kill our performance. Any help resolving this would be great! Note: we don't actually care about seeing the logs in a torch.compiled graph, we already log everything once eagerly before compiling.\n\n```python\nimport torch\nimport logging\nimport triton\n\ntorch._logging.set_logs(graph_breaks=True)\n\n_NUM_ITERATIONS = 20\n\n@torch.compile\ndef _logging_fn(x, y):\n result = x\n for _ in range(_NUM_ITERATIONS):\n logging.info(\"Hello\")\n result += (x * y)\n return result\n\n# Benchmark\nDEVICE = \"cuda\"\ntest_x = torch.randn(1000).to(DEVICE).to(torch.float32)\ntest_y = torch.randn(1000).to(DEVICE).to(torch.float32)\nprint(f\"logging_fn: {triton.testing.do_bench(lambda: _logging_fn(test_x, test_y))}\")\n\n\n```\n\n### Error logs\n\n```\n[__graph_breaks] Graph break in user code at /home/aboubezari/.conda/envs/torch-env2/lib/python3.10/logging/__init__.py:2127\n[__graph_breaks] Graph Break Reason: Unsupported method call\n[__graph_breaks] Explanation: Dynamo does not know how to trace method `__len__` of class ``\n[__graph_breaks] Hint: Avoid calling `.__len__` in your code.\n[__graph_breaks] Hint: Please report an issue to PyTorch.\n```\n\n### Versions\n\nPyTorch version: 2.7.1+cu126\nIs debug build: False\nCUDA used to build PyTorch: 12.6\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 20.04.6 LTS (x86_64)\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\nClang version: Could not collect\nCMake version: version 3.16.3\nLibc version: glibc-2.31\n\nPython version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)\nPython platform: Linux-5.15.0-1083-gcp-x86_64-with-glibc2.31\nIs CUDA available: True\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB\nNvidia driver version: 535.247.01\ncuDNN version: Could not collect\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nAddress sizes: 46 bits physical, 48 bits virtual\nCPU(s): 12\nOn-line CPU(s) list: 0-11\nThread(s) per core: 2\nCore(s) per socket: 6\nSocket(s): 1\nNUMA node(s): 1\nVendor ID: GenuineIntel\nCPU family: 6\nModel: 85\nModel name: Intel(R) Xeon(R) CPU @ 2.20GHz\nStepping: 7\nCPU MHz: 2200.166\nBogoMIPS: 4400.33\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 192 KiB\nL1i cache: 192 KiB\nL2 cache: 6 MiB\nL3 cache: 38.5 MiB\nNUMA node0 CPU(s): 0-11\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Mitigation; Enhanced IBRS\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; 
usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe ", "url": "https://github.com/pytorch/pytorch/issues/156191", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: dynamo" ], "created_at": "2025-06-17T16:54:31Z", "updated_at": "2025-06-17T19:27:01Z", "user": "aboubezari" }, { "repo": "pytorch/xla", "number": 9371, "title": "Failing `torch_xla._XLAC._xla_custom_call()` with `RuntimeError: Bad StatusOr access: UNIMPLEMENTED: No registered implementation for custom call to my_lib.my_op.default for platform CUDA`", "body": "## \u2753 Questions and Help\n\nDuring execution of `torch_xla.stablehlo.exported_program_to_stablehlo()`, it fails with `RuntimeError: Bad StatusOr access: UNIMPLEMENTED: No registered implementation for custom call to my_lib.my_op.default for platform CUDA`. For more context, `my_op` is registered under a custom library as follows\n\n```python\nfrom torch.library import Library, impl\nfrom torch.library import impl_abstract\n\nMY_LIB = Library(\"my_lib\", \"DEF\")\n\nMY_LIB.define(\"my_op(Tensor t) -> Tensor\")\n\n\n@impl(f\"{MY_LIB.ns}::my_op\", \"default\")\ndef my_op(t):\n return t\n\n\n@impl_abstract(f\"{MY_LIB.ns}::my_op\")\ndef my_op_meta(t):\n return torch.empty_like(t)\n```\n\nI am able to get the torch ExportedProgram and the `MY_LIB` namespace is allowed in the stablehlo graph as a custom op by specifying \n```\nStableHLOExportOptions(\n custom_ops_allowed_in_graph={MY_LIB.ns}\n)\n```\n\nIt **seems** to me that if XLA does not attempt to execute the graph then the error is not thrown. I have a few questions here:\n\n1. How can I get around this `RuntimeError`? \n2. Does registering a custom op under torch library (the way I did in the first code snippet) not expose the implementation to XLA?", "url": "https://github.com/pytorch/xla/issues/9371", "state": "open", "labels": [ "bug", "stablehlo" ], "created_at": "2025-06-16T21:01:05Z", "updated_at": "2025-06-24T18:55:50Z", "comments": 4, "user": "hsjts0u" }, { "repo": "pytorch/xla", "number": 9366, "title": "PyTorch/XLA custom Triton kernel export to StableHLO", "body": "I'd like to export a model to StableHLO with a simple custom Triton kernel. Following the [guide here](https://docs.pytorch.org/xla/master/features/triton.html) on Pytorch/XLA with custom GPU kernels. However, I am encountering errors with the [torch.export](https://docs.pytorch.org/xla/master/features/stablehlo.html) where it seems like it is unable to run tracing due to the existence of the custom operations. 
How can I properly export my model with custom GPU kernel to StableHLO?\n\nError:\n```\nTraceback (most recent call last):\n File \"/root/test_code.py\", line 73, in \n exported = export(model, (x,y))\n File \"/root/testing/lib64/python3.9/site-packages/torch/export/__init__.py\", line 270, in export\n return _export(\n File \"/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py\", line 1017, in wrapper\n raise e\n File \"/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py\", line 990, in wrapper\n ep = fn(*args, **kwargs)\n File \"/root/testing/lib64/python3.9/site-packages/torch/export/exported_program.py\", line 114, in wrapper\n return fn(*args, **kwargs)\n File \"/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py\", line 1880, in _export\n export_artifact = export_func( # type: ignore[operator]\n File \"/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py\", line 1224, in _strict_export\n return _strict_export_lower_to_aten_ir(\n File \"/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py\", line 1252, in _strict_export_lower_to_aten_ir\n gm_torch_level = _export_to_torch_ir(\n File \"/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py\", line 560, in _export_to_torch_ir\n gm_torch_level, _ = torch._dynamo.export(\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/eval_frame.py\", line 1432, in inner\n result_traced = opt_f(*args, **kwargs)\n File \"/root/testing/lib64/python3.9/site-packages/torch/nn/modules/module.py\", line 1736, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File \"/root/testing/lib64/python3.9/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/eval_frame.py\", line 465, in _fn\n return fn(*args, **kwargs)\n File \"/root/testing/lib64/python3.9/site-packages/torch/nn/modules/module.py\", line 1736, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File \"/root/testing/lib64/python3.9/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py\", line 1269, in __call__\n return self._torchdynamo_orig_callable(\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py\", line 526, in __call__\n return _compile(\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py\", line 924, in _compile\n guarded_code = compile_inner(code, one_graph, hooks, transform)\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py\", line 666, in compile_inner\n return _compile_inner(code, one_graph, hooks, transform)\n File \"/root/testing/lib64/python3.9/site-packages/torch/_utils_internal.py\", line 87, in wrapper_function\n return function(*args, **kwargs)\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py\", line 699, in _compile_inner\n out_code = transform_code_object(code, transform)\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py\", line 1322, in transform_code_object\n transformations(instructions, code_options)\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py\", line 219, in _fn\n return fn(*args, **kwargs)\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py\", 
line 634, in transform\n tracer.run()\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py\", line 2796, in run\n super().run()\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py\", line 983, in run\n while self.step():\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py\", line 895, in step\n self.dispatch_table[inst.opcode](self, inst)\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py\", line 582, in wrapper\n return inner_fn(self, inst)\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py\", line 1692, in CALL_FUNCTION_KW\n self.call_function(fn, args, kwargs)\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py\", line 830, in call_function\n self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]\n File \"/root/testing/lib64/python3.9/site-packages/torch/_dynamo/variables/functions.py\",", "url": "https://github.com/pytorch/xla/issues/9366", "state": "open", "labels": [ "enhancement", "xla:gpu", "Triton" ], "created_at": "2025-06-16T18:28:42Z", "updated_at": "2025-06-23T19:55:53Z", "comments": 4, "user": "annabellej" }, { "repo": "pytorch/torchtitan", "number": 1301, "title": "Slow checkpoint saving time (6 mins to save an 8B model checkpoint in sync mode)", "body": "It takes ~6 minutes to save a checkpoint using non async mode. Is this expected? \n\n### Sync mode\n\n```\n[rank0]:[titan] 2025-06-15 21:31:48,968 - root - INFO - TensorBoard logging enabled. Logs will be saved at ./outputs/tb/20250615-2131 \n[rank0]:[titan] 2025-06-15 21:31:48,969 - root - INFO - CUDA capacity: NVIDIA H100 80GB HBM3 with 79.10GiB memory \n[rank0]:[titan] 2025-06-15 21:31:49,083 - root - INFO - Model llama3 8B size: 8,030,261,248 total parameters \n[rank0]:[titan] 2025-06-15 21:31:49,084 - root - INFO - Applied full activation checkpointing to the model \n[rank0]:[titan] 2025-06-15 21:31:49,164 - root - INFO - Applied FSDP to the model \n[rank0]:[titan] 2025-06-15 21:31:49,505 - root - INFO - Peak FLOPS used for computing MFU: 9.890e+14 \n[rank0]:[titan] 2025-06-15 21:31:49,505 - root - INFO - CUDA memory usage for model: 3.95GiB(4.99%) \n[rank0]:[titan] 2025-06-15 21:31:49,535 - root - INFO - Checkpointing active. Checkpoints will be loaded from and saved to ./outputs/c\nheckpoint \n[rank0]:[titan] 2025-06-15 21:31:49,535 - root - INFO - Trainer is initialized with local batch size 1, global batch size 64, gradient\n accumulation steps 8, sequence length 8192, total steps 1000 (warmup 40). \n[rank0]:[titan] 2025-06-15 21:31:49,535 - root - INFO - Loading the checkpoint from assets/models/dcp/llama3.1-8B. \n[rank0]:[titan] 2025-06-15 21:32:02,935 - root - INFO - [GC] GC collection for checkpoint loading. 0.01 seconds. \n[rank0]:[titan] 2025-06-15 21:32:02,935 - root - INFO - Finished loading the checkpoint in 13.40 seconds. \n[rank0]:[titan] 2025-06-15 21:32:02,935 - root - INFO - Training starts at step 1. \n[rank0]:[titan] 2025-06-15 21:32:15,816 - root - INFO - step: 1 loss: 2.4292 memory: 29.18GiB(36.90%) tps: 2,452 tflops: 141.98 \n mfu: 14.36% \n[rank0]:[titan] 2025-06-15 21:32:15,816 - root - INFO - Saving the checkpoint (or staging if async is enabled). \n[rank0]:[titan] 2025-06-15 21:38:31,430 - root - INFO - [GC] GC collection invoked by checkpointer. 0.04 seconds. 
\n[rank0]:[titan] 2025-06-15 21:38:31,431 - root - INFO - Finished saving the checkpoint (or staging if async is enabled)in 375.61 secon\nds. \n[rank0]:[titan] 2025-06-15 21:38:31,431 - root - INFO - Synchronizing and adjusting timeout for all ProcessGroups to 0:01:40 \n[rank0]:[titan] 2025-06-15 21:40:09,439 - root - INFO - step: 10 loss: 2.3602 memory: 36.65GiB(46.33%) tps: 1,245 tflops: 72.12 \nmfu: 7.29% \n```\n\n## Async mode:\n\n```\nrank0]:[titan] 2025-06-15 21:44:35,889 - root - INFO - step: 1 loss: 2.4292 memory: 29.18GiB(36.90%) tps: 2,327 tflops: 134.74 mfu: 13.62%\n[rank0]:[titan] 2025-06-15 21:44:35,890 - root - INFO - Saving the checkpoint (or staging if async is enabled).\n[rank0]:[titan] 2025-06-15 21:44:35,898 - root - INFO - [GC] GC collection invoked by checkpointer. 0.01 seconds.\n[rank0]:[titan] 2025-06-15 21:44:47,661 - root - INFO - [GC] GC collection invoked by checkpointer. 0.00 seconds.\n[rank0]:[titan] 2025-06-15 21:44:47,672 - root - INFO - Finished saving the checkpoint (or staging if async is enabled)in 11.78 seconds.\n[rank0]:[titan] 2025-06-15 21:44:47,672 - root - INFO - Synchronizing and adjusting timeout for all ProcessGroups to 0:01:40\n[rank0]:/home/ubuntu/code/thirdparty/torchtitan/.venv/lib/python3.13/site-packages/torch/distributed/checkpoint/filesystem.py:111: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\n[rank0]: if tensor.storage().size() != tensor.numel():\n[rank0]:[titan] 2025-06-15 21:46:26,319 - root - INFO - step: 10 loss: 2.3601 memory: 36.64GiB(46.33%) tps: 5,341 tflops: 309.34 mfu: 31.28%\n```\n\n\n\nReproduction: check out https://github.com/pytorch/torchtitan/pull/1300 and run\n\n```\nCONFIG_FILE=\"./torchtitan/models/llama3/train_configs/llama3_8b.toml\" uv run ./run_train.sh \\\n --model.tokenizer_path assets/tokenizer/Meta-Llama-3.1-8B-tokenizer.model \\\n --training.max_seq_len 131072 \\\n --checkpoint.initial_load_path \"assets/models/dcp/llama3.1-8B\" \\\n --profiling.no_enable", "url": "https://github.com/pytorch/torchtitan/issues/1301", "state": "closed", "labels": [ "question", "module: checkpoint" ], "created_at": "2025-06-15T21:42:47Z", "updated_at": "2025-06-23T16:34:52Z", "user": "vwxyzjn" }, { "repo": "pytorch/examples", "number": 1355, "title": "`language_translation` has typo which make loaded tgt tensor invalid", "body": "for `_yield_token` implementation in `src/data.py`, the third argument `src` expected to be `True` or `False`\n```\n# Turns an iterable into a generator\ndef _yield_tokens(iterable_data, tokenizer, src):\n\n # Iterable data stores the samples as (src, tgt) so this will help us select just one language or the other\n index = 0 if src else 1\n\n for data in iterable_data:\n yield tokenizer(data[index])\n```\n\nBut the actual used argument is `str` (e.g. 
'de' or 'en'), which will make `_yield_tokens` always construct `tgt` vocab from `src` tokens, so the loaded tgt tensor was wrong\n```\n tgt_vocab = build_vocab_from_iterator(\n _yield_tokens(train_iterator, tgt_tokenizer, tgt_lang), <-- tgt_lang is 'de' or 'en'\n min_freq=1,\n specials=list(special_symbols.keys()),\n special_first=True\n```\n\nexample of wrong tgt tensor, too much `0` values (which means `unknown`)\n```\ntensor([[ 2, 2, 2, 2, 2, 2, 2, 2],\n [ 0, 0, 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 7, 0, 7, 0, 0],\n [ 0, 0, 0, 0, 3425, 0, 0, 0],\n [ 0, 0, 7, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 0, 28, 0],\n [ 7, 5, 0, 0, 0, 15, 5, 0],\n [ 0, 3, 0, 0, 0, 0, 3, 0],\n [ 0, 1, 5, 0, 5, 0, 1, 0],\n [ 0, 1, 3, 0, 3, 0, 1, 5315],\n [ 0, 1, 1, 0, 1, 0, 1, 0],\n [ 5, 1, 1, 0, 1, 0, 1, 0],\n [ 3, 1, 1, 0, 1, 0, 1, 5],\n [ 1, 1, 1, 5, 1, 0, 1, 3],\n [ 1, 1, 1, 3, 1, 5, 1, 1],\n [ 1, 1, 1, 1, 1, 3, 1, 1]], device='cuda:0')\n```", "url": "https://github.com/pytorch/examples/issues/1355", "state": "closed", "labels": [], "created_at": "2025-06-14T12:13:35Z", "updated_at": "2025-06-16T13:55:52Z", "comments": 0, "user": "zwzmzd" }, { "repo": "pytorch/xla", "number": 9356, "title": "Transition torch_xla::ShardingSec to torch_xla::OpSharding", "body": "This is primarily for the sake of documentation and consistency.", "url": "https://github.com/pytorch/xla/issues/9356", "state": "open", "labels": [ "distributed", "documentation" ], "created_at": "2025-06-13T23:07:34Z", "updated_at": "2025-06-13T23:07:34Z", "comments": 0, "user": "pgmoka" }, { "repo": "pytorch/TensorRT", "number": 3571, "title": "\u2753 [Question] Can I export a serialized engine from Torch-TensorRT targeting TensorRT 10.3.0.26?", "body": "## \u2753 Question\n\nHello, I am attempting to export a serialized engine from Torch-TRT. I require TensorRT version 10.3.0.26, as I am planning to use this engine with a Nvidia DeepStream container that requires that TensorRT version. I attempted to use torch-tensorrt==2.5.0, but this version is listed as using builtin TensorRT version 10.3.0, and did not work with the container. How would you recommend generating this .engine for this specific TensorRT version? Unfortunately, I cannot just use trtexec as the outputs of the trtexec model are incorrect.\n\nI am assuming probably building from source, but the documentation at https://docs.pytorch.org/TensorRT/getting_started/installation.html appears a bit outdated, as there is no longer any WORKSPACE file as referenced in that install guide. Please advise, thank you!\n\n## Environment\n\nContainer to be used on: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/deepstream (deepstream-7.1-multiarch)\n\n - PyTorch Version (e.g., 1.0): Any. 
I have tested 2.4/2.5/2.5.1.\n - CPU Architecture: x86\n - OS (e.g., Linux): Ubuntu 22.04\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\n - Build command you used (if compiling from source): NA\n - Are you using local sources or building from archives: NA\n - Python version: 3.10\n - CUDA version: 12.6\n - GPU models and configuration: A sample model can be found here: https://drive.google.com/file/d/1NukSOFFQwVGhZh6VrasjMBiKnLL8CHM9/view?usp=sharing\n - Any other relevant information:\n\n## Additional context\n\n\n", "url": "https://github.com/pytorch/TensorRT/issues/3571", "state": "closed", "labels": [ "question" ], "created_at": "2025-06-13T16:44:40Z", "updated_at": "2025-06-16T20:06:15Z", "user": "geiche735" }, { "repo": "pytorch/torchtitan", "number": 1291, "title": "Using official HuggingFace script to convert DCP weights to HF format\uff0cthe outputs are not human-readable", "body": "DCP -> torch (in PyTorch, see https://github.com/pytorch/torchtitan/blob/main/docs/checkpoint.md)\ntorch -> HF (from [HF](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py), although missing params.json if saved from DCP)\n\n![Image](https://github.com/user-attachments/assets/7c480739-4248-4be3-ac7f-69c9d098937d)\n\n\nDoes anyone have a working conversion script or know how to fix this issue?", "url": "https://github.com/pytorch/torchtitan/issues/1291", "state": "closed", "labels": [ "module: checkpoint" ], "created_at": "2025-06-13T03:11:23Z", "updated_at": "2025-07-02T07:18:01Z", "comments": 7, "user": "guang11644331" }, { "repo": "pytorch/torchtitan", "number": 1283, "title": "KV Replication for context parallel (Ring attention)", "body": "Hi,\n\nFor the llama3-8b model (which has GQA, with num_kv_heads=8, num_heads=32), I see the KV replication being done inside the Attention module in model.py\n\nWill this lead to additional communication volume for ring attention (with passKV) wherein we'll be circulating 32 heads instead of 8?\n\nAFAIK flash attention kernels support GQA internally (i.e., they accept QKV with num_kv_heads < num_q_heads), so can we omit the KV replication in the attention module?\n\nThanks!", "url": "https://github.com/pytorch/torchtitan/issues/1283", "state": "open", "labels": [ "question", "module: context parallel" ], "created_at": "2025-06-11T22:18:04Z", "updated_at": "2025-06-12T16:08:59Z", "user": "rghadia" }, { "repo": "pytorch/examples", "number": 1353, "title": "tensor_parallel_example.py and sequence_parallel_example.py", "body": "The primary difference between the two files is as follows. In the TP case, I only see 1 allreduce per iteration - is that what is expected? It seems to be the same as DDP! In the SP case, I see 1 allgather and 1 reduce-scatter per iteration. \n\n```\n# Custom parallelization plan for the model\nsp_model = parallelize_module(\n module=model,\n device_mesh=device_mesh,\n parallelize_plan={\n \"in_proj\": ColwiseParallel(input_layouts=Shard(0)),\n \"out_proj\": RowwiseParallel(output_layouts=Shard(0)),\n },\n)\n\n# Custom parallelization plan for the model\ntp_model = parallelize_module(\n module=tp_model,\n device_mesh=device_mesh,\n parallelize_plan={\n \"in_proj\": ColwiseParallel(),\n \"out_proj\": RowwiseParallel(),\n },\n)\n```\n\nCommDebugMode also appears to show 1 allreduce in the forward pass and no allreduce in the backward pass. 
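\n\nFor reference, a minimal sketch of how such counts can be collected with CommDebugMode (the import path has moved between releases: torch.distributed._tensor.debug in older builds, torch.distributed.tensor.debug in newer ones; `tp_model` / `inp` here stand for the parallelized model and a sample input, which are assumed to come from the snippet above):\n\n```\nfrom torch.distributed.tensor.debug import CommDebugMode\n\ncomm_mode = CommDebugMode()\nwith comm_mode:\n    out = tp_model(inp)\n    out.sum().backward()\n\nprint(comm_mode.get_total_counts())  # total number of collectives traced\nprint(comm_mode.get_comm_counts())   # per-collective counts, e.g. all_reduce -> 1\n# a per-module breakdown like the one below can also be produced with\n# comm_mode.generate_comm_debug_tracing_table(noise_level=2)\n```\n\nThe full per-module trace follows: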
\n\n```\n FORWARD PASS [12/1864]\n *c10d_functional.all_reduce: 1\n BACKWARD PASS\n ToyModel\n *module type: class '__main__.ToyModel'\n FORWARD PASS\n *c10d_functional.all_reduce: 1\n ToyModel.in_proj\n *module type: class 'torch.nn.modules.linear.Linear'\n *Parameter List\n *weight: (Shard(dim=0),)\n *bias: (Shard(dim=0),)\n FORWARD PASS\n **aten.addmm.default\n shape: [torch.Size([32]), torch.Size([4, 10]), torch.Size([10, 32])]\n sharding: [(Shard(dim=0),), (Replicate(),), (Shard(dim=1),)]\n device mesh: DeviceMesh('cuda', [0, 1, 2, 3])\n BACKWARD PASS\n **aten.mm.default\n shape: [torch.Size([32, 4]), torch.Size([4, 10])]\n sharding: [(Shard(dim=0),), (Replicate(),)]\n device mesh: DeviceMesh('cuda', [0, 1, 2, 3])\n **aten.sum.dim_IntList\n shape: [torch.Size([4, 32])]\n sharding: [(Shard(dim=1),)]\n device mesh: DeviceMesh('cuda', [0, 1, 2, 3])\n **aten.add_.Tensor\n shape: [torch.Size([32]), torch.Size([32])]\n sharding: [(Shard(dim=0),), (Shard(dim=0),)]\n device mesh: DeviceMesh('cuda', [0, 1, 2, 3])\n **aten.add_.Tensor\n shape: [torch.Size([32, 10]), torch.Size([32, 10])]\n sharding: [(Shard(dim=0),), (Shard(dim=0),)]\n device mesh: DeviceMesh('cuda', [0, 1, 2, 3])\n ToyModel.relu\n *module type: class 'torch.nn.modules.activation.ReLU'\n FORWARD PASS\n BACKWARD PASS\n ToyModel.out_proj\n *module type: class 'torch.nn.modules.linear.Linear'\n *Parameter List\n *weight: (Shard(dim=1),)\n *bias: (Replicate(),)\n FORWARD PASS\n *c10d_functional.all_reduce: 1\n **aten.addmm.default\n shape: [torch.Size([5]), torch.Size([4, 32]), torch.Size([32, 5])]\n sharding: [(Replicate(),), (Shard(dim=1),), (Shard(dim=0),)]\n device mesh: DeviceMesh('cuda', [0, 1, 2, 3])\n BACKWARD PASS\n **aten.mm.default\n shape: [torch.Size([4, 5]), torch.Size([5, 32])]\n sharding: [(Replicate(),), (Shard(dim=1),)]\n device mesh: DeviceMesh('cuda', [0, 1, 2, 3])\n **aten.mm.default\n shape: [torch.Size([5, 4]), torch.Size([4, 32])]\n sharding: [(Replicate(),), (Shard(dim=1),)]\n device mesh: DeviceMesh('cuda', [0, 1, 2, 3])\n **aten.sum.dim_IntList\n shape: [torch.Size([4, 5])]\n sharding: [(Replicate(),)]\n device mesh: DeviceMesh('cuda', [0, 1, 2, 3])\n **aten.add_.Tensor\n shape: [torch.Size([5]), torch.Size([5])]\n sharding: [(Replicate(),), (Replicate(),)]\n device mesh: DeviceMesh('cuda', [0, 1, 2, 3])\n **aten.add_.Tensor\n shape: [torch.Size([5, 32]), torch.Size([5, 32])]\n sharding: [(Shard(dim=1),), (Shard(dim=1),)]\n device mesh: DeviceMesh('cuda', [0, 1, 2, 3])\n\n```", "url": "https://github.com/pytorch/examples/issues/1353", "state": "open", "labels": [], "created_at": "2025-06-11T01:10:08Z", "updated_at": "2025-10-30T09:12:25Z", "comments": 2, "user": "githubsgi" }, { "repo": "pytorch/torchtitan", "number": 1278, "title": "[Qes] Is `torch.float32` as the default dtype when training?", "body": "I ran the example config, and found the parameter dtype of model is `torch.float32`. I don't understand why we use this as the default dtype, why not half precision? 
And I found the only way to change it to half precision is enabling fsdp and set mix dtype to half.", "url": "https://github.com/pytorch/torchtitan/issues/1278", "state": "closed", "labels": [], "created_at": "2025-06-10T09:30:35Z", "updated_at": "2025-06-12T06:13:43Z", "comments": 2, "user": "foreverlms" }, { "repo": "pytorch/pytorch", "number": 155391, "title": "how to save the fx graph with output tensor shapes ?", "body": "### \ud83d\udc1b Describe the bug\n\n# When I use **f.write** to save the fx graph, it doesn't have output tensor shapes \n> refer to https://www.doubao.com/chat/7948299479012098\n```\nwith open(\"fx_graph.py\", \"w\") as f:\n f.write(graph_module.code)\n```\n* its dump is similar to \n```\ndef forward(self, inputs_1, labels_1):\n view = torch.ops.aten.view.default(inputs_1, [32, -1]); inputs_1 = None\n _param_constant0 = self._param_constant0\n t = torch.ops.aten.t.default(_param_constant0); _param_constant0 = None\n _param_constant1 = self._param_constant1\n ...\n```\n\n\n# Compare to **print(joint_graph._graph.python_code(root_module=\"self\", verbose=True).src)**, we can see that there is output tensor shapes \n* its dump is similar to \n```\ndef forward(self, inputs_1: f32[32, 1, 784], labels_1: i64[32]):\n # No stacktrace found for following nodes\n view: f32[32, 784] = torch.ops.aten.view.default(inputs_1, [32, -1]); inputs_1 = None\n _param_constant0 = self._param_constant0\n t: f32[784, 64] = torch.ops.aten.t.default(_param_constant0); _param_constant0 = None\n _param_constant1 = self._param_constant1\n ...\n```\n\n\n### Versions\n\nPython 3.10.14\ntorch 2.1.0\ntorch-npu 2.1.0.post6.dev20240716\ntorchaudio 2.1.0\ntorchvision 0.16.0 ", "url": "https://github.com/pytorch/pytorch/issues/155391", "state": "closed", "labels": [], "created_at": "2025-06-07T02:35:38Z", "updated_at": "2025-06-07T02:58:25Z", "user": "vfdff" }, { "repo": "pytorch/xla", "number": 9303, "title": "Runtime is already initialized. Do not use the XLA ' RuntimeError: Runtime is already initialized. Do not use the XLA device before calling xmp.spawn.", "body": "## \ud83d\udc1b Bug\n-- Block 13 ALT: Direct xmp.spawn (Consolidated) ---\ntorch_xla and xmp imported for Block 13.\nDefining hyperparameters for training function...\nHyperparameters for training function defined.\nSetting XLA/TPU specific environment variables for xmp.spawn...\nXRT_TPU_CONFIG already set: localservice;0;localhost:51011\nEnvironment variables set.\nArguments tuple for xmp.spawn's target function prepared.\nSet TPU_NUM_DEVICES = 8\nUsing nprocs = None (None = use all available devices) for xmp.spawn.\n\n\ud83d\ude80 Launching TPU training directly via xmp.spawn with nprocs=None (auto-detect devices)...\n\u274c\u274c\u274c xmp.spawn FAILED: Runtime ALREADY initialized.\n/tmp/ipykernel_10/3843059188.py:91: UserWarning: tpu_cores not found or invalid from Block 0/1. Defaulting to 8 for TPU v3-8.\n warnings.warn(\"tpu_cores not found or invalid from Block 0/1. 
Defaulting to 8 for TPU v3-8.\")\nTraceback (most recent call last):\n File \"/tmp/ipykernel_10/3843059188.py\", line 103, in \n xmp.spawn(\n File \"/usr/local/lib/python3.10/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 39, in spawn\n return pjrt.spawn(fn, nprocs, start_method, args)\n File \"/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py\", line 213, in spawn\n run_multiprocess(spawn_fn, start_method=start_method)\n File \"/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py\", line 145, in run_multiprocess\n raise RuntimeError('Runtime is already initialized. Do not use the XLA '\nRuntimeError: Runtime is already initialized. Do not use the XLA device before calling xmp.spawn.\nEnsuring WandB run is finished...\nSynced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)\n\u2705 Block 13 ALT Completed (Direct xmp.spawn Attempted).\n\n\n## To Reproduce\nI have working on this problem for the past two weeks and l can't get my head over it, i really don't know what am doing wrong. \nMy question if you are using tpu vm v3-8 in kaggle, does it mean you can't \"!pip install \"torch~=2.6.0\" \"torchvision~=0.21.0\" \"torch_xla[tpu]~=2.6.0\" -f https://storage.googleapis.com/libtpu-releases/index.html --quiet\" in your kaggle notebook\nprint(\"PyTorch/XLA installation attempt complete.\\n\")\nit any unique install pytorch/xla? I initially started with notebook_launcher, accelerator from huggingface. \n\n\nSteps to reproduce the behavior:\n\n1.\n2.\n3.\n\n\n\n## Expected behavior\n\n\n\n## Environment\n\n - Torch: 2.6.0+cu124\n - TorchXLA: 2.6.0+libtpu\n\n\n## Additional context\n\n\n", "url": "https://github.com/pytorch/xla/issues/9303", "state": "open", "labels": [ "question" ], "created_at": "2025-06-06T01:12:22Z", "updated_at": "2025-06-10T23:04:30Z", "user": "pojoba02" }, { "repo": "pytorch/pytorch", "number": 155242, "title": "Partitioner loses Inplace ops where source is constant", "body": "### \ud83d\udc1b Describe the bug\n\nIf backward contains some constant compute, e.g. result of joint constant propagation:\n```\nPOST_JOINT_CONST_FOLDING:graph():\n237 %primals_1 : [num_users=1] = placeholder[target=primals_1]\n238 %primals_2 : [num_users=2] = placeholder[target=primals_2]\n239 %tangents_1 : [num_users=1] = placeholder[target=tangents_1]\n240 %clone : [num_users=1] = call_function[target=torch.ops.aten.clone.default](args = (%primals_1,), kwargs = {})\n241 %full_default : [num_users=1] = call_function[target=torch.ops.aten.full.default](args = ([2], 0.0), kwargs = {dtype: torch.float32, layout: torch.strided, device: cuda:0, pin_memory: False})\n242 %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%full_default, 1), kwargs = {})\n243 %mul_1 : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%add, 1), kwargs = {})\n244 %add_1 : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%primals_2, %mul_1), kwargs = {})\n245 %copy_ : [num_users=0] = call_function[target=torch.ops.aten.copy_.default](args = (%primals_2, %add_1), kwargs = {})\n246 return [clone, tangents_1, None]\n```\nAnd this add_1 will be counted as \"Invalid\" for backward in partitioner and copy_ will not be captured at all. 
\n\nRepro:\n```\nimport torch\nclass Func(torch.autograd.Function):\n @staticmethod\n def forward(ctx, dummy, inplace_tensor, attach_gradient):\n ctx.attach_gradient = attach_gradient\n ctx.inplace_tensor = inplace_tensor\n return dummy.clone()\n @staticmethod\n def backward(ctx, grad_output):\n inplace_tensor = ctx.inplace_tensor\n attach_gradient = ctx.attach_gradient\n gradient_attachment = (grad_output * 0 + 1)\n inplace_tensor.add_(1 * gradient_attachment)\n return grad_output, None, None\ndef call(dummy, inplace_tensor, attach_gradient):\n return Func.apply(dummy, inplace_tensor, attach_gradient)\ncompiled_call = torch.compile(call)\ndummy = torch.randn((2,), requires_grad=True).to('cuda')\ninplace_tensor = torch.zeros((2,), requires_grad=False).to('cuda')\nprint(f'Uncompiled')\nloss = call(dummy, inplace_tensor, True).sum()\nprint(f'Pre backward inplace: {inplace_tensor}')\nloss.backward()\nprint(f'Post backward inplace: {inplace_tensor}\\n')\ninplace_tensor.zero_()\nprint(f'Compiled no gradient attachment')\nloss = compiled_call(dummy, inplace_tensor, True).sum()\nprint(f'COMPILED Pre backward inplace: {inplace_tensor}')\nloss.backward()\nprint(f'COMPILED Post backward inplace: {inplace_tensor}\\n')\ninplace_tensor.zero_()\n```\n\nResult:\n```\n ===== Joint graph 0 =====\n /data/users/ivankobzarev/b/pytorch/torch/fx/_lazy_graph_module.py class joint_helper(torch.nn.Module):\n def forward(self, primals, tangents):\n primals_1: \"f32[2][1]cuda:0\"; primals_2: \"f32[2][1]cuda:0\"; tangents_1: \"f32[2][1]cuda:0\"; \n \n primals_1, primals_2, tangents_1, = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)\n # File: /home/ivankobzarev/task-inplace/r.py:23 in call, code: return Func.apply(dummy, inplace_tensor, attach_gradient)\n clone: \"f32[2][1]cuda:0\" = torch.ops.aten.clone.default(primals_1); primals_1 = None\n mul: \"f32[2][1]cuda:0\" = torch.ops.aten.mul.Tensor(tangents_1, 0)\n add: \"f32[2][1]cuda:0\" = torch.ops.aten.add.Tensor(mul, 1); mul = None\n mul_1: \"f32[2][1]cuda:0\" = torch.ops.aten.mul.Tensor(add, 1); add = None\n add_1: \"f32[2][1]cuda:0\" = torch.ops.aten.add.Tensor(primals_2, mul_1); mul_1 = None\n \n # No stacktrace found for following nodes\n copy_: \"f32[2][1]cuda:0\" = torch.ops.aten.copy_.default(primals_2, add_1); primals_2 = add_1 = copy_ = None\n return pytree.tree_unflatten([clone, tangents_1, None], self._out_spec)\n \nINFO: aot_config id: 0, fw_metadata=ViewAndMutationMeta(input_info=[InputAliasInfo(is_leaf=False, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=True), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=False, keep_input_mutations=True)], output_info=[OutputAliasInfo(output_type=, raw_type=, base_idx=None, dynamic_dims=set(), requires_grad=True, functional_tensor=None)], num_intermediate_bases=0, keep_input_mutations=True, traced_tangents=[FakeTensor(..., device='cuda:0', size=(2,))], subclass_inp_meta=[PlainTensorMeta(unwrapped_idx=0, memory_format=None), PlainTensorMeta(unwrapped_idx=1, memory_format=None)], subclass_fw_graph_out_meta=[PlainTensorMeta(unwrapped_idx=0, memory_format=None)], subclass_tangent_", "url": 
"https://github.com/pytorch/pytorch/issues/155242", "state": "closed", "labels": [ "triaged", "module: correctness (silent)", "module: aotdispatch" ], "created_at": "2025-06-05T17:34:38Z", "updated_at": "2025-06-11T12:50:03Z", "user": "IvanKobzarev" }, { "repo": "pytorch/ao", "number": 2310, "title": "[Question] Combining QAT and Sparsity Training", "body": "First of all, thank you for all the time and effort invested in this project to make (large) models more accessible.\nI am fairly new to optimizing my models using sparsity, and therefore, wanted to ask if my understanding of this library is correct.\nIn general, I would like to train my model using sparsity and QAT.\n\nFor QAT, I would follow this [guide](https://github.com/pytorch/ao/blob/main/torchao/quantization/qat/README.md#quantize_-api-recommended).\nNow I am curious how to correctly use this together with sparsity.\nI assume this `swap_linear_with_semi_sparse_linear(model, sparse_config)` is the correct snippet ([guide](https://github.com/pytorch/ao/tree/main/torchao/sparsity/training#quickstart)).\nIf I want to combine these two optimizations, what is the correct way to do so?\n\n1. Train baseline\n2. Train a sparse model\n3. Train a sparse and quantization-aware model\n\nAdditionally, I found this statement\n\n> A fully sparse 2:4 trained model exhibited a -0.5 pp accuracy drop; we were able to further reduce the accuracy loss to -0.1 pp by first training with 2:4 sparsity enabled and then switching over to normal dense training.\n\nDoes this mean adding a step?\n\n4. Revert sparsity using `swap_semi_sparse_linear_with_linear(model)` and train\n\nLastly, the sparsity `sparsify_ ` and quantization `quantize_` need to be 'applied'.\n\nI would greatly appreciate your input on this.", "url": "https://github.com/pytorch/ao/issues/2310", "state": "closed", "labels": [ "question" ], "created_at": "2025-06-05T13:03:12Z", "updated_at": "2025-06-20T12:37:47Z", "user": "CaptainDario" }, { "repo": "pytorch/torchtitan", "number": 1262, "title": "Checkpointer Feature Enhancements", "body": "This document tracks and describes the essential checkpointing features still to be added to TorchTitan.\n\n- [ ] **Full `state_dict` saving** \n - Support exporting the complete (unsharded) model `state_dict`; many existing formats only handle full `state_dict`.\n - https://github.com/pytorch/torchtitan/pull/1219 is WIP to support this.\n - Need removing FP8 tensor subclass from the `state_dict`.\n\n- [x] **Model `state_dict` mapping** \n - Provide an interface for users/developers to plug in custom converters between TorchTitan\u2019s `state_dict`/model definitions and other model definitions (e.g., Hugging Face models).\n\n- [x] **Hugging Face format saving** \n - Depends on full `state_dict` export \n - Optionally leverages the `model state_dict mapping` interface for users who require conversion\n - Uses the Hugging Face API for saving\n\n- [x] **Hugging Face format loading** \n - Depends on the `model state_dict interface` as most use cases require conversion from other model definitions \n - DCP already supports HF loading but needs tighter API integration and performance tuning (collaboration with DCP)\n\n- [ ] **Enhanced checkpoint debugging & comparison tools** \n - Provide APIs (e.g., per-tensor checksums or diff reports) to pinpoint mismatches in model state, optimizer state, etc. 
\n - Streamline root-cause analysis when loaded checkpoints lead to unexpected accuracy changes\n\n- [x] **Complete unit tests**\n - Checkpointer has a lot of logic and branches. We can verify Checkpointer through Mock without using GPUs.\n\n- [ ] **Decouple `state_dict` staging from checkpointing/DCP calls** \n - Allow staging of the `state_dict` to CPU (or other targets) independently of DCP \n - Enables downstream workflows (e.g., RL trainers or parameter servers) to consume staged state without invoking DCP\n\n- [ ] **Remove the call to get_model_state_dict and get_optimizer_state_dict**\n - While this originally is viewed as a BE project to demonstrate how to directly get model and optimizer state_dict with canonical FQNs, https://github.com/pytorch/torchtitan/pull/1280 actually depends on this enhancement. \n", "url": "https://github.com/pytorch/torchtitan/issues/1262", "state": "open", "labels": [ "enhancement", "better engineering", "module: checkpoint" ], "created_at": "2025-06-04T20:44:27Z", "updated_at": "2025-08-21T03:20:05Z", "comments": 3, "user": "fegin" }, { "repo": "pytorch/torchtitan", "number": 1257, "title": "Question about fixed std=0.02 initialization of `w1` in `moe.py`", "body": "Hi torchtitan team,\n\nThanks for the great work on this project! I had a question regarding a detail in the code at moe.py#L92\n\nhttps://github.com/pytorch/torchtitan/blob/768cde131105bde624160029d808e94649faf0f4/torchtitan/experiments/llama4/model/moe.py#L92\n\nI noticed that `w1` is initialized with a fixed standard deviation of 0.02, whereas `w2` and `w3` are initialized using a configurable `init_std` parameter. I\u2019m wondering if this discrepancy is intentional, and if so, what the reasoning is behind using a hardcoded value for `w1`.\n\nWould greatly appreciate any insights you could share!\n\nThanks again!\n", "url": "https://github.com/pytorch/torchtitan/issues/1257", "state": "open", "labels": [ "question", "triage review" ], "created_at": "2025-06-03T04:06:53Z", "updated_at": "2025-08-21T07:03:44Z", "user": "trestad" }, { "repo": "pytorch/tutorials", "number": 3373, "title": "[BUG] Running `make html-noplot` yields errors.", "body": "### Add Link\n\nI ran the following command about 10 hours ago, around 12:20:00 utc and it gave me errors. (I am being specific about the time, because I was unable to find a release that I could point to).\n`git clone --depth 1 https://github.com/pytorch/tutorials.git`\n\n### Describe the bug\n\n## What errors did you encounter?\n```\ngenerating gallery for beginner... [ 6%] saving_loading_models.py\nExtension error (sphinx_gallery.gen_gallery):\nHandler for event 'builder-inited' threw an exception (exception: Can't pickle : attribute lookup call_fn on __main__ failed)\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"C:\\Python\\Python310\\lib\\multiprocessing\\spawn.py\", line 107, in spawn_main\n new_handle = reduction.duplicate(pipe_handle,\n File \"C:\\Python\\Python310\\lib\\multiprocessing\\reduction.py\", line 79, in duplicate\n return _winapi.DuplicateHandle(\nOSError: [WinError 6] The handle is invalid\nmake: *** [html-noplot] Error 2\n```\n\n## What did you expect to happen?\nAs stated in the README.md file, I expected a basic html version of the tutorial to be built at `_build/html`\n\n## Steps to Reproduce the error\n1. Run the git command below\n `git clone --depth 1 https://github.com/pytorch/tutorials.git`\n\n2. Run `pip install -r .ci/docker/requirements.txt`. 
I am aware the instruction was to `pip install -r requirements.txt`. But \n I keep encountering the errors below, so I improvised.\n\n```\nERROR: Invalid requirement: '.ci/docker/requirements.txt': Expected package name at the start of dependency specifier\n .ci/docker/requirements.txt\n ^ (from line 1 of requirements.txt)\n```\n3. Run `make html-noplot`. For this one, I gnuWin32 make. This is what is available on Windows.\n\n I noticed that this error is similar to that found when I run re.compile('\\\\c'). I am familiar with this scenario and so I looked further and traced the error to the code [here](https://github.com/pytorch/tutorials/blob/20bf27e027d35a455d24469098f6d685547ff11d/.jenkins/get_sphinx_filenames.py#L13). I was able to move on from this error by modifying my local version of the code to \n `SPHINX_SHOULD_RUN = \"|\".join(get_files_for_sphinx()).replace('\\\\', '\\\\\\\\')`\n I want to note that I do not feel confident in that action because I notice that code was last modified 2 years ago, unless I have a wrong interpretation of what the \"2 years ago\" that I see around there means. It was last modified 2 years ago! That means that working tutorials have been built with that piece of code. This makes me feel very strongly that something is wrong with my setup. But I resisted raising any issues because I considered that it might not be worth it to distract the attention of our dear conscientious developers whose efforts to maintain this codebase does not go unnoticed.\n\n4. Run `make html-noplot` once more.\n The error [above](##what-errors-did-you-encounter?) appears. I look at the error and I see `multiprocessing.py` there. I do not know how to do anything with code that runs on more than one thread or process. I would appreciate knowing what I have done wrong in my environment because surely the code in this repository works as it has been tested as required.\n\n\n\n### Describe your environment\n\n## Environment\n* Python 3.10.5\n* pip 25.1.1\n* All commands were run in the top directory of the cloned repository\n* All *pip-installing* was done in a fresh virtual environment created using **venv** and located in the top directory of the cloned repository. The command used for that was `python -m venv doc-env`.\n* GPU (not cuda): Intel Iris Xe (Not sure this is relevant) ", "url": "https://github.com/pytorch/tutorials/issues/3373", "state": "open", "labels": [ "bug", "build issue" ], "created_at": "2025-06-02T23:37:26Z", "updated_at": "2025-06-03T20:01:28Z", "comments": 5, "user": "phonokoye" }, { "repo": "pytorch/xla", "number": 9272, "title": "Improve documentation for running benchmark unit tests", "body": "## \ud83d\udcda Documentation\n\nCurrently, in the `benchmarks/` directory, the `README.md` file only specified to use `make -C ...` to run the unit tests for the benchmarking code. 
The python tests like `test_benchmark_model.py` is not run.\n\nWe need better instructions on how to run the python unit tests.\n\nCurrently, I have to add the `benchmarks/` dir to the `$PYTHONPATH` to have the tests discover the python packages needed for the tests and run the tests by `python test/benchmarks/test_benchmark_model.py`.\n\n@ysiraichi may know a better way.", "url": "https://github.com/pytorch/xla/issues/9272", "state": "open", "labels": [ "documentation", "benchmarking" ], "created_at": "2025-05-30T22:01:17Z", "updated_at": "2025-06-04T12:10:55Z", "comments": 1, "user": "haifeng-jin" }, { "repo": "pytorch/xla", "number": 9269, "title": "Torch model parameters as HLO constants", "body": "## \u2753 Questions and Help\nHello, I am wondering if there is a way to bake model parameters into the produced HLO model as constants. For Torch-XLA it seems like model parameters are treated as additional input args which makes it difficult to port this into openxla/xla for execution in cpp. The HLO produced from Jax already has the model parameters as constants within the model. Is there a way to do something closer to Jax where I can save an HLO/StableHLO model from Torch with the model parameters being a part of the HLO and the only arguments being the true model inputs? Thanks!", "url": "https://github.com/pytorch/xla/issues/9269", "state": "open", "labels": [ "question" ], "created_at": "2025-05-30T20:18:01Z", "updated_at": "2025-06-13T04:35:59Z", "user": "drewjenks01" }, { "repo": "pytorch/torchtitan", "number": 1237, "title": "[Bug] Potential bugs in \"_grouped_mm\" in Llama4 MoE codes", "body": "### Bug description\n\n### Descriptions for Bugs.\n\nI encountered NaN loss values when running Llama 4 MoE experimental codes.\nThe errors come from [here](https://github.com/pytorch/torchtitan/blob/ed2bbc07dda35ce26187bb0d743115381e884b35/torchtitan/experiments/llama4/model/moe.py#L85-L87). \n\nAfaik `offsets` are defined as `torch.cumsum(num_local_tokens_per_expert)` and `x` (`routed_input`) is permuted with the shape of `original_shape + num_experts * ALIGN_SIZE_M`.\nThus, there was a difference between `x.shape[0]` and `offsets[-1]`. \n\nI'm not sure which expert will be allocated for those redundant tensors in x in `grouped_mm`.\nI believe the expected behavior would be the outputs from them should always be 0, because they are filled with 0 values.\nBut `_grouped_mm` sometimes results in large values, which first index of outputs gets `inf` elements ([here](https://github.com/pytorch/torchtitan/blob/ed2bbc07dda35ce26187bb0d743115381e884b35/torchtitan/experiments/llama4/model/moe.py#L322)).\n\n### How to Reproduce?\n\n1. I used [Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) tokenizer.\n2. I used `debug_model.toml`, but with different batch size and seq_len in 1 H200 GPU. Here is the running script:\n```\ntorchrun --nnodes 1 --nproc_per_node 1 ./torchtitan/train.py \\\n--job.config_file ./torchtitan/experiments/llama4/train_configs/debug_model.toml --job.dump_folder ./outputs/250528_grouped_mm_debug \\\n--profiling.save_traces_folder profile_trace --comm.trace_buf_size 0 --checkpoint.folder ./checkpoints/250528_grouped_mm_debug --checkpoint.interval 13000 \\\n--training.steps 114440 --training.batch_size 1 --training.seq_len 2048 \\\n--metrics.log_freq 100 --lr_scheduler.warmup_steps 1000 --optimizer.lr 6e-4 \\\n--parallelism.data_parallel_shard_degree 1 --parallelism.tensor_parallel_degree 1\n```\n3. 
Add `x = x.to(torch.bfloat16)` and `..., dtype=torch.bfloat16)` for `self.w1`, `self.w2`, and `self.w3`, since 1 GPU will automatically use torch.float32 in the code and `_grouped_mm` requires tensors are in GPU.\n4. I used `pdb` to get intermediate outputs one by one.\n\n\n### Results and Expected Behaviors.\n\nRouted outputs sometimes show the following results (at the first step or a few steps later):\n```\noffsets : tensor([ 176, 416, 736, 992, 1296, 1584, 1840, 2096], device='cuda:0', dtype=torch.int32)\n\nx.shape : torch.Size([2176, 256])\n\nh = F.silu(torch._grouped_mm(x, self.w1, offs=offsets)) :\ntensor([[ 3.7598e-02, -9.3262e-02, 1.3965e-01, ..., -1.7822e-02,\n -2.2949e-02, 2.0020e-02],\n [ 1.1572e-01, 2.2461e-01, 3.1641e-01, ..., 8.6060e-03,\n -5.3711e-02, -2.7100e-02],\n [ 1.4551e-01, 2.1973e-02, 1.3086e-01, ..., -2.5269e-02,\n 3.7354e-02, -1.5503e-02],\n ...,\n [-0.0000e+00, 2.9297e-02, -0.0000e+00, ..., 5.2246e-02,\n 7.7462e+18, -1.8066e-02],\n [ 2.8531e+26, 5.1025e-02, -0.0000e+00, ..., 1.1670e-01,\n 3.2028e-28, 1.5076e-02],\n [ 6.3348e+26, 3.8818e-02, 4.0250e+01, ..., -2.8229e-03,\n 2.4844e-32, -8.6670e-03]], device='cuda:0', dtype=torch.bfloat16,\n grad_fn=)\n\nh = h * torch._grouped_mm(x, self.w3, offs=offsets)\ntensor([[-1.8692e-03, -2.8992e-03, 1.6327e-03, ..., -1.5564e-03,\n -1.0681e-02, 5.1022e-05],\n [-5.5237e-03, 6.0425e-03, 1.0864e-02, ..., 9.8419e-04,\n 3.0396e-02, -4.2152e-04],\n [-1.6785e-03, -4.5776e-04, -2.0142e-03, ..., 1.0193e-02,\n -4.6082e-03, -1.3733e-04],\n ...,\n [ 0.0000e+00, 1.2054e-03, -0.0000e+00, ..., -2.5177e-03,\n 3.5863e+11, -1.7548e-03],\n [ -inf, 6.3705e-04, 0.0000e+00, ..., 9.5825e-03,\n -2.9000e+02, 3.2234e-04],\n [ 8.4410e+07, 4.0588e-03, -1.0379e+31, ..., 3.7432e-05,\n 1.2387e-07, -1.3733e-03]], device='cuda:0', dtype=torch.bfloat16,\n grad_fn=)\n\nout = torch._grouped_mm(h, self.w2, offs=offsets)\ntensor([[ 6.3782e-03, 4.0894e-03, -1.3672e-02, ..., -8.4839e-03,\n -2.8229e-03, -3.9978e-03],\n [-1.9379e-03, -4.6387e-03, 8.5449e-03, ..., -4.8523e-03,\n -4.4861e-03, -1.4114e-03],\n [-3.1128e-03, -2.5177e-03, -3.4332e-03, ..., 1.3062e-02,\n -6.7139e-03, -7.6904e-03],\n ...,\n [-1.6251e-03, -1.3279e-10, -7.3787e+19, ..., -5.1659e-10,\n -3.8780e+34, -3.5834e-10],\n [ 4.7055e+34, -1.6735e-09, 6.0889e+18, ..., -1.1205e-09,\n 7.1024e+24, 3.1287e-10],\n [-2.4087e-21, -2.1682e-09, 3.0898e+20, ..., 2.9831e-09,\n 2.4898e-30, 5.5297e-10]], device='cuda:0', dtype=torch.bfloat16,\n grad_fn=)\n```\n\nWe expect that tensors, where the sequence positions are from 2096 to 2176, should be always zero.\nThis causes to hidden states to have nan values, and nan values of loss eventually.\n\n### Versions\n\nPython 3.13 with the following packages:\n\n```\nabsl-py==2.2.2\naiohappyeyeballs==2.6.1\naiohttp==3.11.18\naiosignal==1.3.2\nannotated-types==0.7.0\nast", "url": "https://github.com/pytorch/torchtitan/issues/1237", "state": "closed", "labels": [], "created_at": "2025-05-29T00:07:09Z", "updated_at": "2025-07-08T04:54:37Z", "comments": 8, "user": "raymin0223" }, { "repo": "pytorch/xla", "number": 9259, "title": "need an incremental build script", "body": "## \ud83d\ude80 Feature\n\nAfter making a small change to the source code, we should be able to do an incremental build that only rebuilds the affected targets. We need to document how to do that. It may require writing a script that can be easily invoked.\n\n## Motivation\n\nCurrently we recommend developers to run https://github.com/pytorch/xla/blob/master/scripts/build_developer.sh to rebuild after a change. 
However, this script doesn't a full rebuild (even though it may benefit from build caching), making it unnecessarily slow.\n\nWe should have a smart build script (e.g. based on make and/or bazel) that skips the rebuilding of things that haven't changed).", "url": "https://github.com/pytorch/xla/issues/9259", "state": "closed", "labels": [ "tech debt", "build" ], "created_at": "2025-05-28T23:15:38Z", "updated_at": "2025-05-30T01:30:56Z", "comments": 4, "user": "zhanyong-wan" }, { "repo": "pytorch/xla", "number": 9256, "title": "Docs build issues errors / warnings on duplicate labels (anchors)", "body": "Docs build indicates that the docs have duplicate labels (aka anchors). These predate the recent changes to myst but now that we have standardized on the same tooling as upstream PT, we should now start fixing these. Here is an output. Note that you have to manually clean by deleting the build directory to force a full rebuild.\n\n(nightly311) yho_google_com@t1v-n-50ea3a23-w-0:/mnt/disks/yho/pytorch/xla/docs$ ./docs_build.sh \nObtaining pytorch_sphinx_theme from git+https://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme (from -r requirements.txt (line 4))\n Updating ./src/pytorch-sphinx-theme clone\n Running command git fetch -q --tags\n Running command git reset --hard -q 4125c834e1aa0945fde6ef58ff2f77f7abedc460\n Installing build dependencies ... done\n Checking if build backend supports build_editable ... done\n Getting requirements to build editable ... done\n Preparing editable metadata (pyproject.toml) ... done\nRequirement already satisfied: sphinx==5.0.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from -r requirements.txt (line 3)) (5.0.0)\nRequirement already satisfied: sphinxcontrib.katex==0.8.6 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from -r requirements.txt (line 8)) (0.8.6)\nRequirement already satisfied: sphinx-copybutton==0.5.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from -r requirements.txt (line 13)) (0.5.0)\nRequirement already satisfied: myst-parser==0.18.1 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from -r requirements.txt (line 15)) (0.18.1)\nRequirement already satisfied: myst-nb==0.16 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from -r requirements.txt (line 18)) (0.16.0)\nRequirement already satisfied: sphinxcontrib-applehelp in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.0.0)\nRequirement already satisfied: sphinxcontrib-devhelp in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.0.0)\nRequirement already satisfied: sphinxcontrib-jsmath in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (1.0.1)\nRequirement already satisfied: sphinxcontrib-htmlhelp>=2.0.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.1.0)\nRequirement already satisfied: sphinxcontrib-serializinghtml>=1.1.5 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.0.0)\nRequirement already satisfied: sphinxcontrib-qthelp in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) 
(2.0.0)\nRequirement already satisfied: Jinja2>=2.3 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (3.1.6)\nRequirement already satisfied: Pygments>=2.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.19.1)\nRequirement already satisfied: docutils<0.19,>=0.14 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (0.18.1)\nRequirement already satisfied: snowballstemmer>=1.1 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (3.0.1)\nRequirement already satisfied: babel>=1.3 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.17.0)\nRequirement already satisfied: alabaster<0.8,>=0.7 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (0.7.16)\nRequirement already satisfied: imagesize in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (1.4.1)\nRequirement already satisfied: requests>=2.5.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.32.3)\nRequirement already satisfied: packaging in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (25.0)\nRequirement already satisfied: markdown-it-py<3.0.0,>=1.0.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from myst-parser==0.18.1->-r requirements.txt (line 15)) (2.2.0)\nRequirement already satisfied: mdit-py-plugins~=0.3.1 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from myst-parser==0.18.1->-r requirements.txt (line 15)) (0.3.5)\nRequirement already satisfied: pyyaml in /mnt/disks/yho/miniconda/envs/nig", "url": "https://github.com/pytorch/xla/issues/9256", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-05-28T19:02:14Z", "updated_at": "2025-07-16T22:48:17Z", "comments": 1, "user": "yaoshiang" }, { "repo": "pytorch/TensorRT", "number": 3536, "title": "\u2753 [Question] Do you have any plan to release v2.6.1 ?", "body": "## \u2753 Question\n\nHello, Torch-TensorRT team,\n\nI'd like to ask if there are any plans to release a patch version, such as v2.6.1.\n\nThe current release (v2.6.0) includes a `breakpoint()` call left in [the code](https://github.com/pytorch/TensorRT/blob/v2.6.0-rc3/py/torch_tensorrt/dynamo/conversion/custom_ops_converters.py#L57), which halts execution and makes the release unusable in production environments unless modified manually or installed from the source. 
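(In case it helps anyone hit by this before a patch lands: Python's built-in `breakpoint()` honors the `PYTHONBREAKPOINT` environment variable, so a possible stopgap — assuming the stray call is the plain builtin — is to disable it at runtime rather than editing the installed package:)

```python
import os

# Make every breakpoint() call a no-op for this process. The default
# sys.breakpointhook() consults PYTHONBREAKPOINT on each call, so setting it
# before the conversion runs is enough. Equivalently, launch the script with:
#   PYTHONBREAKPOINT=0 python your_script.py
os.environ["PYTHONBREAKPOINT"] = "0"

import torch_tensorrt  # import after the variable is set, to be safe
```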
Since `Torch-TensorRT` tightly couples with a specific TensorRT version and PyTorch version, there's currently no alternative.\n\nA quick patch release would be greatly appreciated.\nThanks for your great work.\n", "url": "https://github.com/pytorch/TensorRT/issues/3536", "state": "closed", "labels": [ "question" ], "created_at": "2025-05-28T08:37:18Z", "updated_at": "2025-06-03T04:50:48Z", "user": "junstar92" }, { "repo": "pytorch/tutorials", "number": 3367, "title": "\ud83d\udca1 [REQUEST] - Proposal: Add Tutorial on Differentiable Decision Forests (DNDF-style)", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\n\n### Proposal: Add a Tutorial/Documentation Example on Differentiable Decision Forests\n\n**Overview**\n\nThis is a proposal to add a well-documented example or tutorial demonstrating a *Differentiable Decision Forest* model in PyTorch \u2014 inspired by the Deep Neural Decision Forests paper (Kontschieder et al., ICCV 2015).\n\nThe goal is not to introduce a new `torch.nn` module, but rather to show how such a model can be implemented using native PyTorch operations in a transparent and educational way.\n\n**Why This?**\n\n- Combines the interpretability of decision trees with the feature learning power of neural networks.\n- Uses soft routing (sigmoid decisions) and learnable leaf distributions (softmax) to allow end-to-end backpropagation.\n- Offers an alternative to traditional ensembles or black-box classifiers, especially for tabular and hybrid domains.\n\n**What the Tutorial Would Include**\n\n- Overview of the model structure (CNN \u2192 decision trees)\n- How to implement soft decisions and routing probabilities (\u03bc) with PyTorch ops like `sigmoid`, `softmax`, `einsum`, `gather`, etc.\n- Joint optimization of routing and leaf distributions\n- Training on MNIST or tabular datasets\n- Emphasis on \"Simple over Easy\" \u2014 no custom abstractions\n\n**Reference**\n\n- [Kontschieder et al., Deep Neural Decision Forests, ICCV 2015](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/ICCV15_DeepNDF_main.pdf)\n\n**Final Note**\n\nThis is not a request to add this as a built-in PyTorch module \u2014 in fact, that might go against PyTorch's *Simple over Easy* philosophy. 
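To make the "soft routing + learnable leaf distributions" idea mentioned above concrete, here is a rough, self-contained sketch of a single soft decision tree in plain PyTorch (my own illustrative code, not taken from the paper; a forest would simply average several such trees):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftDecisionTree(nn.Module):
    """One differentiable tree: sigmoid gates route each sample softly to every leaf."""

    def __init__(self, in_features: int, num_classes: int, depth: int = 3):
        super().__init__()
        self.depth = depth
        self.num_inner = 2 ** depth - 1            # decision (inner) nodes
        self.num_leaves = 2 ** depth
        self.gates = nn.Linear(in_features, self.num_inner)
        self.leaf_logits = nn.Parameter(torch.zeros(self.num_leaves, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = torch.sigmoid(self.gates(x))           # (B, num_inner): prob of taking the right branch
        mu = x.new_ones(x.size(0), 1)              # routing probability, starting at the root
        for level in range(self.depth):
            start = 2 ** level - 1                 # first node index at this level (heap layout)
            g = d[:, start:start + 2 ** level]     # (B, nodes at this level)
            # split each node's probability mass between its left (1 - g) and right (g) child
            mu = torch.stack([mu * (1 - g), mu * g], dim=-1).flatten(1)
        leaf_dist = F.softmax(self.leaf_logits, dim=-1)   # (num_leaves, C) learnable leaf distributions
        return mu @ leaf_dist                              # (B, C) soft mixture over leaves

tree = SoftDecisionTree(in_features=16, num_classes=10)
probs = tree(torch.randn(4, 16))
print(probs.shape)                                 # torch.Size([4, 10]); each row sums to ~1
```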
\nInstead, this would be best suited as a community-contributed tutorial or example in the official [PyTorch Tutorials](https://github.com/pytorch/tutorials) repository or documentation site.\n\nExtended Note\nI'm currently in the middle of university exams and may not be able to actively contribute for a few weeks \u2014 but I\u2019d be very interested in helping develop the tutorial afterwards.\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/tutorials/issues/3367", "state": "open", "labels": [ "tutorial-proposal" ], "created_at": "2025-05-27T10:01:23Z", "updated_at": "2025-07-02T15:00:18Z", "comments": 6, "user": "Tunahanyrd" }, { "repo": "pytorch/torchtitan", "number": 1223, "title": "How to pretrain from scratch a Qwen 2.5 7B-base model using Torchtitan?", "body": "HI team,\n\nThank you for the excellent work!\n\nCould you please tell me where to find example scripts/templates for pretraining from scratch a Qwen 2.5 7B-base model using Torchtitan?\n\nThanks again!", "url": "https://github.com/pytorch/torchtitan/issues/1223", "state": "closed", "labels": [], "created_at": "2025-05-25T00:42:15Z", "updated_at": "2025-08-21T03:18:41Z", "user": "tjoymeed" }, { "repo": "pytorch/audio", "number": 3918, "title": "`io.UnsupportedOperation: seek` when using `torchaudio.io.StreamWriter` with a File-like object", "body": "### \ud83d\udc1b Describe the bug\n\nIn [the tutorial for `StreamWriter`](https://docs.pytorch.org/audio/stable/tutorials/streamwriter_basic_tutorial.html#file-like-objects), it is clearly stated that `StreamWriter` works with File-like object that implements `io.RawIOBase.write`. However, when I used `StreamWriter` with the [Google Cloud Storage `BlobWriter`](https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.fileio.BlobWriter) object that implements `write` but not `seek`, an error is thrown on calling `StreamWriter.close()`:\n\n```python\nfrom google.cloud.storage import Blob\nfrom torch.io import StreamWriter\n\nblob = Blob(name=..., bucket=...)\n\nwith blob.open(\"wb\") as f:\n writer = StreamWriter(dst=f)\n with writer.open():\n ...\n```\n```\nself = \n\n def close(self):\n \"\"\"Close the output\n \n :py:class:`StreamingMediaEncoder` is also a context manager and therefore supports the\n ``with`` statement.\n It is recommended to use context manager, as the file is closed automatically\n when exiting from ``with`` clause.\n \n See :py:meth:`StreamingMediaEncoder.open` for more detail.\n \"\"\"\n if self._is_open:\n> self._s.close()\nE io.UnsupportedOperation: seek\n\n.venv/lib/python3.11/site-packages/torio/io/_streaming_media_encoder.py:451: UnsupportedOperation\n```\n\nClearly `seek` is called in `close()`, which causes this error. For now, can I get around this issue by not calling `close` on the `writer` object but do call `close` the `blob` object?\n\n### Versions\n\ngoogle-cloud-storage 3.1.0\ntorchaudio 2.6.0", "url": "https://github.com/pytorch/audio/issues/3918", "state": "open", "labels": [], "created_at": "2025-05-23T15:24:45Z", "updated_at": "2025-05-23T15:40:48Z", "comments": 0, "user": "digicosmos86" }, { "repo": "pytorch/ao", "number": 2249, "title": "int4_weight_only get plain weight are padded", "body": "I try to quantize a model with int4_weight_only, and want to get the plained weight, but found the weight has been padded. 
To reproduce it, run the following script:\n```python\nimport torch\nfrom transformers import TorchAoConfig, AutoModelForCausalLM\n \nmodel_name = \"JackFram/llama-68m\"\nquantization_config = TorchAoConfig(\"int4_weight_only\")\nquantized_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map=\"cuda:0\", quantization_config=quantization_config)\nprint(quantized_model.model.layers[0].self_attn.q_proj.weight.tensor_impl.get_plain()[0].shape)\nprint(quantized_model.model.layers[0].self_attn.q_proj.weight.tensor_impl.get_plain()[0])\n```\noutput\n```\n(768, 1024)\ntensor([[11, 12, 8, ..., 0, 0, 0],\n [ 5, 6, 5, ..., 0, 0, 0],\n [ 5, 7, 7, ..., 0, 0, 0],\n ...,\n [ 7, 5, 2, ..., 0, 0, 0],\n [ 6, 1, 7, ..., 0, 0, 0],\n [ 8, 11, 9, ..., 0, 0, 0]], device='cuda:0', dtype=torch.int32)\n```\nThe original shape should be `(768, 768)`, but the plained weight shape is `(768, 1024)`. Can we have a remove padding process in `get_plain()` function?", "url": "https://github.com/pytorch/ao/issues/2249", "state": "open", "labels": [ "question", "quantize_" ], "created_at": "2025-05-23T07:17:20Z", "updated_at": "2025-06-24T20:14:53Z", "user": "jiqing-feng" }, { "repo": "pytorch/xla", "number": 9236, "title": "make README work for people using python 3.12/13", "body": "## \ud83d\udcda Documentation\n\nThe installation instructions in README fail if the user has python 3.12 or 3.13 as the default. (Currently pytorch-xla only works with python 3.8-3.11.)\n\nWe should:\n\n- document the requirement for the python version.\n- add workaround instructions for people whose default python version is not 3.8-3.11.", "url": "https://github.com/pytorch/xla/issues/9236", "state": "open", "labels": [ "documentation" ], "created_at": "2025-05-22T00:33:29Z", "updated_at": "2025-05-22T16:09:41Z", "comments": 4, "user": "zhanyong-wan" }, { "repo": "pytorch/pytorch", "number": 154027, "title": "How to add custom attributes to torch tensor?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHow can I add custom attributes like device_local or host_local to a PyTorch tensor without affecting TensorImpl or StorageImpl? I have a use case where I need to convert an external tensor into a PyTorch tensor while preserving such properties\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/154027", "state": "closed", "labels": [], "created_at": "2025-05-21T09:13:16Z", "updated_at": "2025-05-21T13:42:11Z", "user": "bailuan" }, { "repo": "pytorch/vision", "number": 9079, "title": "Build pytorch trunk from source and build vision from source makes `import torchvision;` fail", "body": "### \ud83d\udc1b Describe the bug\n\nIf I build pytorch from turnk (2.8+1478d0185c29) and build vision from source, I can't run `import torchvision;`.\n\n```\nimport torchvision\n```\n\nwill report: `RuntimeError: operator torchvision::nms does not exist`.\n\nIt will succeed if I replace the version of pytorch from trunk to branch `release/2.7`. 
(Build from source still).\n\nHow can I build vision with pytorch from source?\n\n### Versions\n\ntrunk d02b1845a2fabea1eb8f9d09310369a5cbb5514f", "url": "https://github.com/pytorch/vision/issues/9079", "state": "open", "labels": [], "created_at": "2025-05-21T03:25:05Z", "updated_at": "2025-09-02T15:27:37Z", "comments": 3, "user": "ChuanqiXu9" }, { "repo": "pytorch/pytorch", "number": 154009, "title": "SourcelessBuilder.create does not know how to wrap ", "body": "### \ud83d\udc1b Describe the bug\n\nI am trying to use torch compile on my functions and encounter this issue. I attached a minimum test program so anyone can reproduce the issue. \n\n```python\nfrom dataclasses import dataclass\n\nimport torch\n\n@dataclass(frozen=True)\nclass BaseFlexData:\n dtype: torch.dtype | None = None\n\n def view(self, x: torch.Tensor):\n if self.dtype is None:\n return x\n return x.view(self.dtype)\n\n def reinterpret(self, x):\n if self.dtype is None or x.dtype.itemsize > 1:\n return x\n return x.view(self.dtype)\n\n@dataclass(frozen=True)\nclass InFlexData(BaseFlexData):\n scale: torch.Tensor | None = None\n\n @property\n def is_per_batch(self):\n return False if self.scale is None else len(self.scale) > 1\n\n@dataclass(frozen=True)\nclass OutFlexData(BaseFlexData):\n expected_scale: torch.Tensor | None = None\n actual_scale: torch.Tensor | None = None\n checksum_scale: torch.Tensor | None = None\n\n def __iter__(self):\n yield self.expected_scale\n yield self.actual_scale\n yield self.checksum_scale\n\n@dataclass(frozen=True)\nclass FlexCtx:\n lhs_data: InFlexData = InFlexData()\n rhs_data: InFlexData = InFlexData()\n out_data: OutFlexData = OutFlexData()\n\n@dataclass\nclass DummyClass:\n flex_ctx: FlexCtx = FlexCtx()\n\n def __post_init__(self):\n assert self.flex_ctx.rhs_data.scale is None, \"flex and mx_ctx cannot be used together\"\n\n@torch.compile(fullgraph=True)\ndef dummy_method():\n var = DummyClass(flex_ctx=FlexCtx(rhs_data=InFlexData()))\n return var\n\ndummy_method()\n\n```\n\n\n\n### Error logs\n\n```\nTORCHDYNAMO_VERBOSE=1 python test_compile.py\nTraceback (most recent call last):\n File \"/home/eecs/yongye.zhu/vllm/tests/kernels/moe/test_compile.py\", line 56, in \n dummy_method()\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py\", line 685, in _fn\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 1463, in __call__\n return self._torchdynamo_orig_callable(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 624, in __call__\n return _compile(\n ^^^^^^^^^\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 1087, in _compile\n guarded_code = compile_inner(code, one_graph, hooks, transform)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_utils_internal.py\", line 97, in wrapper_function\n return function(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 778, in compile_inner\n return _compile_inner(code, one_graph, hooks, transform)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File 
\"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 817, in _compile_inner\n out_code = transform_code_object(code, transform)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py\", line 1423, in transform_code_object\n transformations(instructions, code_options)\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 264, in _fn\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py\", line 742, in transform\n tracer.run()\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py\", line 3508, in run\n super().run()\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py\", line 1345, in run\n while self.step():\n ^^^^^^^^^^^\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py\", line 1253, in step\n self.dispatch_table[inst.opcode](self, inst)\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py\", line 828, in wrapper\n return inner_fn(self, inst)\n ^^^^^^^^^^^^^^^^^^^^\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py\", line 2934, in CALL\n self._call(inst)\n File \"/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py\", line 2928, in _call\n self.call_function(fn, args, kwargs", "url": "https://github.com/pytorch/pytorch/issues/154009", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: dynamo", "dynamo-dataclasses", "vllm-compile", "module: vllm" ], "created_at": "2025-05-21T02:34:19Z", "updated_at": "2025-10-24T16:39:07Z", "user": "zyongye" }, { "repo": "pytorch/ao", "number": 2228, "title": "[Quant] Can quant not be decomposed on inductor?", "body": "torch.ops.torchao.dequantize_affine decomposed to convert_element_type and mul.\nInductor will do constant_fold before pattern matching\nOn constant_fold, inductor replace fp8 weight and some previous operations with fp32 weight\nIs this as expected?\n\nNow register_decomposition on [register_decomposition](https://github.com/pytorch/ao/blob/96aec6a3e713687c1728a20a08d5c54db0344377/torchao/utils.py#L226)\n\nThis sample test can reproduce the issue\n\n```python\n\nimport os\n\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\nos.environ[\"TORCHINDUCTOR_FREEZING\"] = \"1\"\nos.environ[\"TORCH_COMPILE_DEBUG\"] = \"0\"\nos.environ[\"TORCHDYNAMO_PRINT_GUARD_FAILS\"] = \"0\"\n\nfrom typing import Callable, List, Optional, Union\nimport torch\nfrom torch import nn\nimport torchao\n#import torchao.quantization.pt2e.quantizer.x86_inductor_quantizer as xiq\n\ndef dequantize_per_tensor(\n input: torch.Tensor,\n scale: torch.Tensor,\n output_dtype: torch.dtype\n) -> torch.Tensor:\n res = torch.ops.torchao.dequantize_affine(\n input=input,\n block_size=input.shape,\n scale=scale,\n zero_point=torch.tensor(0),\n input_dtype=torch.float8_e4m3fn,\n )\n if output_dtype != torch.float:\n res = res.to(output_dtype)\n return res\n\ndef quantize_per_tensor(\n input: torch.Tensor,\n scale: torch.Tensor,\n) -> torch.Tensor:\n return 
torch.ops.torchao.quantize_affine(\n input=input,\n block_size=input.shape,\n scale=scale,\n zero_point=torch.tensor(0),\n output_dtype=torch.float8_e4m3fn,\n )\n\nclass Perceptron(torch.nn.Module):\n def __init__(\n self,\n in_size: int,\n out_size: int,\n bias: bool = True,\n activation: Union[\n torch.nn.Module,\n Callable[[torch.Tensor], torch.Tensor],\n ] = torch.relu,\n device: Optional[torch.device] = None,\n dtype: torch.dtype = torch.float32,\n ) -> None:\n super().__init__()\n self._out_size = out_size\n self._in_size = in_size\n self._linear: nn.Linear = nn.Linear(\n self._in_size,\n self._out_size,\n bias=bias,\n device=device,\n dtype=dtype,\n )\n self._activation_fn: Callable[[torch.Tensor], torch.Tensor] = activation\n\n def forward(self, input: torch.Tensor) -> torch.Tensor:\n return self._activation_fn(self._linear(input))\n\nclass MLP(torch.nn.Module):\n def __init__(\n self,\n in_size: int,\n layer_sizes: List[int],\n bias: bool = True,\n activation: Union[\n str,\n Callable[[], torch.nn.Module],\n torch.nn.Module,\n Callable[[torch.Tensor], torch.Tensor],\n ] = torch.relu,\n device: Optional[torch.device] = None,\n dtype: torch.dtype = torch.float32,\n ) -> None:\n super().__init__()\n\n if activation == \"relu\":\n activation = torch.relu\n elif activation == \"sigmoid\":\n activation = torch.sigmoid\n\n if not isinstance(activation, str):\n self._mlp: torch.nn.Module = torch.nn.Sequential(\n *[\n Perceptron(\n layer_sizes[i - 1] if i > 0 else in_size,\n layer_sizes[i],\n bias=bias,\n activation=activation,\n device=device,\n dtype=dtype,\n )\n for i in range(len(layer_sizes))\n ]\n )\n else:\n assert (\n ValueError\n ), \"This MLP only support str version activation function of relu, sigmoid, and swish_layernorm\"\n\n def forward(self, input: torch.Tensor) -> torch.Tensor:\n return self._mlp(input)\n\nclass DenseArch(nn.Module):\n def __init__(\n self,\n in_features: int,\n layer_sizes: List[int],\n device: Optional[torch.device] = None,\n ) -> None:\n super().__init__()\n self.model: nn.Module = MLP(\n in_features, layer_sizes, bias=True, activation=\"relu\", device=device\n )\n\n def forward(self, features: torch.Tensor) -> torch.Tensor:\n return self.model(features)\n\n\ndef inc_convert(model, dtype):\n model.eval()\n qtype = torch.float8_e4m3fn\n\n #from torch.ao.quantization.fx._decomposed import quantize_per_tensor, dequantize_per_tensor\n from torch.nn import functional as F\n\n class FP8QDQLinear(torch.nn.Module):\n def __init__(self, in_features, out_features):\n super().__init__()\n self.weight = torch.empty((out_features, in_features),)\n self.weight_scale = None\n self.scale = None\n self.bias = None\n\n def forward(self, input):\n weight = dequantize_per_tensor(\n self.weight.data,\n self.weight_scale,\n dtype,\n )\n q_input = quantize_per_tensor(\n ", "url": "https://github.com/pytorch/ao/issues/2228", "state": "closed", "labels": [ "question", "triaged" ], "created_at": "2025-05-20T09:25:54Z", "updated_at": "2025-06-25T08:22:25Z", "user": "shiyang-weng" }, { "repo": "pytorch/TensorRT", "number": 3525, "title": "\u2753 [Question] How to save the compiled while using torch.compile", "body": "For the example below, how do I save the compiled model?\n\nbackend = \"torch_tensorrt\"\ntp_model = torch.compile(\n tp_model,\n backend=backend,\n options={\n \"truncate_long_and_double\": True,\n \"enabled_precisions\": {torch.float32, torch.float16},\n \"use_python_runtime\": True,\n \"min_block_size\": 1,\n },\n dynamic=False,\n)", "url": 
"https://github.com/pytorch/TensorRT/issues/3525", "state": "open", "labels": [ "question" ], "created_at": "2025-05-20T03:06:53Z", "updated_at": "2025-05-20T15:15:27Z", "user": "klin2024" }, { "repo": "pytorch/torchchat", "number": 1543, "title": "[IMPORTANT] torchchat sunset", "body": "**As of May 19th 2025, we are halting active development on torchchat.** \n\nThe original intent of torchchat was to both demonstrate how to run LLM inference using PyTorch and improve the performance and functionality of the entire PyTorch ecosystem.\n\nSince torchchat\u2019s launch, we\u2019ve seen vLLM become the dominant player for server-side LLM inference. We\u2019re ecstatic to have [vLLM join the PyTorch Ecosystem](https://pytorch.org/blog/vllm-joins-pytorch/) and recommend folks use them for hosting LLMs in server production environments. Given the growth of vLLM and others, we do not see the need to maintain an active demonstration of how to run LLM inference using PyTorch.\n\nWe are very proud of the performance and functionality improvements we saw in the PyTorch ecosystem over the last year, including:\n\n- The performance of LLM inference increase by multiples for every device we support (CUDA, CPU, MPS, ARM, etc) \n- Working code, demonstrating how to run LLM inference for all the major execution modes (Eager, Compile, AOTI and ET) giving users a starting point for using PyTorch for LLM inference from server to embedded devices and everything in between\n- Quantization expand to support the most popular schemes and bit sizes\n- torchchat become the testing grounds for new advancements ([experimental torchao kernels](https://github.com/pytorch/torchchat/blob/fd3059bf830494cf14dd474af348c7ebb3d6be76/docs/quantization.md#experimental-torchao-lowbit-kernels), [MPS compile](https://github.com/pytorch/pytorch/blob/31f175ea9a00b1ca392858cd0d160706201b12da/torch/_inductor/codegen/mps.py), [AOTI Packaging](https://github.com/pytorch/pytorch/blob/f2e8e41855caaae6ed7254f7abf4e31122363722/docs/source/torch.compiler_aot_inductor.rst#aotinductor-ahead-of-time-compilation-for-torchexport-ed-models))\n\nThere\u2019s still plenty of exciting work to do across the LLM Inference space and PyTorch will stay invested in improving things.\nWe appreciate and thank everyone in the community for all that you\u2019ve contributed. \n\nThanks to our contributors:\n@mikekgfb @Jack-Khuu @metascroy @malfet @larryliu0820 @kirklandsign @swolchok @vmpuri @kwen2501 @Gasoonjia @orionr @guangy10 @byjlw @lessw2020 @mergennachin @GregoryComer @shoumikhin @kimishpatel @manuelcandales @lucylq @desertfire @gabe-l-hart @seemethere @iseeyuan @jerryzh168 @leseb @yanbing-j @mreso @fduwjj @Olivia-liu @angelayi @JacobSzwejbka @ali-khosh @nlpfollower @songhappy @HDCharles @jenniew @silverguo @zhenyan-zhang-meta @ianbarber @dbort @kit1980 @mcr229 @georgehong @krammnic @xuedinge233 @anirudhs001 @shreyashah1903 @soumith @TheBetterSolution @codereba @jackzhxng @KPCOFGS @kuizhiqing @kartikayk @nobelchowdary @mike94043 @vladoovtcharov @prideout @sanchitintel @cbilgin @jeffdaily @infil00p @msaroufim @zhxchen17 @vmoens @wjunLu \n\n-**PyTorch Team**", "url": "https://github.com/pytorch/torchchat/issues/1543", "state": "open", "labels": [], "created_at": "2025-05-20T02:41:03Z", "updated_at": "2025-05-20T11:06:54Z", "comments": 3, "user": "Jack-Khuu" }, { "repo": "pytorch/xla", "number": 9201, "title": "Issue warning on set_mat_mul", "body": "On #9080 and #9103, there was a request to add a warning when user sets mat mul. 
I added it to the PR, but, the ci/ci now skips running documentation. \n\nThis issue and PR will cherry pick the code changes to isolate them from docs, allowing code cicd to run on this PR, and docs build cicd to run on 9082. ", "url": "https://github.com/pytorch/xla/issues/9201", "state": "closed", "labels": [ "documentation", "CI" ], "created_at": "2025-05-19T21:21:48Z", "updated_at": "2025-05-21T18:38:49Z", "comments": 0, "user": "yaoshiang" }, { "repo": "pytorch/xla", "number": 9199, "title": "Simplify device count external API calls", "body": "Currently there are many external APIs related getting the number of devices associate with PyTorch XLA. Those that I could find were:\n\n- \"global_runtime_device_count\": returns the total number of devices across all processes/hosts, but it has \"@functools.lru_cache()\"\n- \"global_device_count\": returns the total number of devices across all processes/hosts, but it has \"@functools.lru_cache()\"\n- \"addressable_runtime_device_count\": Access number of [addressable devices](https://github.com/pytorch/xla/blob/r2.7/torch_xla/csrc/init_python_bindings.cpp#L15026) visible to a process.\n- \"addressable_device_count\": Access number of [addressable devices](https://github.com/pytorch/xla/blob/r2.7/torch_xla/csrc/init_python_bindings.cpp#L1481) visible to a process. It specifically returns 1 in case of SPMD.\n- \"local_device_count\": takes the number of [addressable devices](https://github.com/pytorch/xla/blob/01b5408dded9bf5bdea3e59c387b3b201a2bdab9/torch_xla/csrc/init_python_bindings.cpp#L1486) and multiplies it by the number of local [process counts](https://github.com/pytorch/xla/blob/r2.7/torch_xla/runtime.py#L129). Equivalent of the answer of the number of devices running on a host.\n\nFrom these, some existing observations are:\n- `addressable_runtime_device_count` and `addressable_device_count` are extremely similar in implementation and name. Perhaps we should make the distinction more clear. Perhaps there is some context around `addressable_device_count` particular I don't fully grasp.\n- `local_device_count` terminology can be confusing when compared with JAX's concept for local devices for [jax.local_devices](https://docs.jax.dev/en/latest/_autosummary/jax.local_devices.html). `local_device_count` being the number of devices in the host, while JAX's definition is of devices in the process\n- We should deduplicate `global_runtime_device_count` and `global_device_count`, just have one reference the other to remove multiple calls", "url": "https://github.com/pytorch/xla/issues/9199", "state": "open", "labels": [ "usability", "documentation" ], "created_at": "2025-05-19T19:26:46Z", "updated_at": "2025-06-04T05:52:28Z", "comments": 4, "user": "pgmoka" }, { "repo": "pytorch/torchtitan", "number": 1202, "title": "How to run the tests in the tests directory", "body": "Looking for how to documentations to run the tests in the tests directory. ", "url": "https://github.com/pytorch/torchtitan/issues/1202", "state": "closed", "labels": [ "documentation", "good first issue" ], "created_at": "2025-05-16T17:33:46Z", "updated_at": "2025-05-20T04:02:02Z", "user": "githubsgi" }, { "repo": "pytorch/helion", "number": 46, "title": "[QST] Compiler Pipeline", "body": "@jansel @yf225 \n\nVery cool project. \n\nIs there any documentation on how helion leverages inductor to generate triton kernels?\n\nTrying to understand the overlap between dynamo and helion. 
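On the dynamo side, one quick way to see exactly what it hands off is a pass-through backend that prints the captured FX graph (standard `torch.compile` API, nothing helion-specific; just an illustration):

```python
import torch

def inspect_backend(gm: torch.fx.GraphModule, example_inputs):
    # At this point dynamo has already traced the Python into an FX graph;
    # inductor would normally take this same GraphModule and lower it.
    gm.graph.print_tabular()
    return gm.forward  # run the captured graph eagerly instead of lowering it

@torch.compile(backend=inspect_backend)
def f(x):
    return torch.relu(x) * 2

f(torch.randn(4))
```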
My naive take is that dynamo parses general python code to an fx graph that is then passed to inductor, whereas helion parses a subset of python defined by helion-specific operators to an fx graph and then onto inductor...\n\nIn either case, I'm hoping to use helion to better understand inductor, from IR to lowering, optimization, and codegen.", "url": "https://github.com/pytorch/helion/issues/46", "state": "closed", "labels": [ "question" ], "created_at": "2025-05-16T12:30:52Z", "updated_at": "2025-08-25T21:28:38Z", "user": "jeromeku" }, { "repo": "pytorch/TensorRT", "number": 3522, "title": "\u2753 [Question] Manually Annotate Quantization Parameters in FX Graph", "body": "## \u2753 Question\n\nIs there a way to manually annotate quantization parameters that will be respected throughout torch_tensorrt conversion (e.g. manually adding q/dq nodes, or specifying some tensor metadata) via dynamo? Thank you!", "url": "https://github.com/pytorch/TensorRT/issues/3522", "state": "open", "labels": [ "question" ], "created_at": "2025-05-16T07:38:33Z", "updated_at": "2025-06-02T15:35:40Z", "user": "patrick-botco" }, { "repo": "pytorch/xla", "number": 9178, "title": "Code sample for basic mark sharding doesn't work", "body": "## \ud83d\udcda Documentation\n\nThis document:\n\nhttps://docs.pytorch.org/xla/master/learn/api-guide.html#module-torch_xla.distributed.spmd\n\nhas an important code sample to demonstrate sharding tensors across devices. It doesn't work - there are imports and setup steps that are not included.\n\nMore broadly, all of these samples should go into a larger guide that gently walks a user through the process of understanding how PT/XLA handles multi-device and multi-host up through gSPMD. It's very elegant and powerful, but poorly documented. \n", "url": "https://github.com/pytorch/xla/issues/9178", "state": "open", "labels": [ "distributed", "documentation" ], "created_at": "2025-05-15T17:28:02Z", "updated_at": "2025-05-19T13:59:30Z", "comments": 0, "user": "yaoshiang" }, { "repo": "pytorch/xla", "number": 9177, "title": "make CI build fast", "body": "## \ud83d\udc1b Bug\n\nThe CI build takes ~2 hours, which significantly affects dev velocity.\n\nJudging from https://github.com/pytorch/xla/actions/runs/14986142268/job/42100348515, the `Build PyTorch/XLA` step seems to be the bottleneck (it takes 1h15m and blocks a whole bunch of downstream test jobs). If we can speed this up, we may shave a large chunk off the build time.\n\nPotential low-hanging fruit:\n\n- The log suggests that there are only 32 parallel bazel actions for this job, far below our recommended dev set-up (112 actions). I suspect the worker machines have only 32 vCPUs. Can we upgrade to 128+ vCPUs? Build machines are highly leveraged, so investment there will pay for itself quickly in terms of dev velocity.\n- Set up a bazel remote build farm so that the build is parallelized across machines.\n", "url": "https://github.com/pytorch/xla/issues/9177", "state": "open", "labels": [ "tech debt", "CI", "build" ], "created_at": "2025-05-15T16:48:36Z", "updated_at": "2025-05-15T16:48:36Z", "comments": 0, "user": "zhanyong-wan" }, { "repo": "pytorch/data", "number": 1489, "title": "Implement a Cache node", "body": "### \ud83d\ude80 The feature\n\nAt some point, there was an [`InMemoryCacheHolder`](https://docs.pytorch.org/data/0.9/generated/torchdata.datapipes.iter.InMemoryCacheHolder.html?highlight=cache#torchdata.datapipes.iter.InMemoryCacheHolder) datapipe. 
However, this has been removed from the new node design.\n\nThis would be very useful for some expensive parts of the DAG that would gain from being stored in memory rather than recomputed each time.\n\n### Motivation, pitch\n\nSome transforms are quite expensive, and I would like to avoid needing to repeat them at each epoch. Therefore, it would be handy to have some cache mechanism that would allow skipping expensive parts of the DAG if they have been computed before. The user could have a choice to cache on memory or on the disk.\n\nHowever, I'm not sure what the interface would look like. I feel like there would be 2 nodes needed, sharing the cache:\n - One at the start of the DAG branch to skip (that would check if passing through the branch is needed)\n - One at the end of the branch (that would store the result of the branch for it to be used later)\n\nI can't really think of another way to make this work, as you can't have just the first one (or else how do you store the result of the computation at the end of the branch?), and you can't have just the last one (bc how do you determine if the item have been cached or not?).\n\nAs far as I understand nodes, they are executed in a bottom-up manner, with the last node requiring the result of the previous node, itself requiring the result of the previous one, all the way up to the first node. However, this design makes it difficult to deal with a cache as you need to decide which branch to take from the bottom. This would be easier with a top-down design, with the data coming from the first node, up to the entrance of the cache, which would be able to make a decision on the branch to choose to continue.\n\nMaybe having a some sort of `CacheWrapper` that would wrap a single node would be the solution? But then it would be cumbersome to cache entire branches of the DAG.", "url": "https://github.com/meta-pytorch/data/issues/1489", "state": "open", "labels": [], "created_at": "2025-05-15T09:47:19Z", "updated_at": "2025-05-20T04:25:09Z", "comments": 1, "user": "leleogere" }, { "repo": "pytorch/xla", "number": 9175, "title": "Add documentation on multi-controller", "body": "## \ud83d\udcda Documentation\n\nAdd documentation demonstrating multi-node coordination. Start with 2 machines, each with [n] TPUs, and demonstrate ssh into each machine to run the same script with an all-reduce. Reference necessary information for network configuration to allow two hosts to communicate on GCP (optional: AWS and Azure). Cannot be just a toy example on the same machine using localhost as the coordination. Optional: demonstrate using slurm to further simplify coordination. 
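To make the scope concrete, this is the kind of per-host script such a guide would build up to, sketched under the assumption of current PJRT defaults. The identical file would be launched on every host (manually over ssh or via slurm), and the GCP/AWS/Azure network setup called out above is exactly the part the documentation still needs to spell out:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    device = xm.xla_device()
    # Each process contributes its global ordinal; after the all-reduce every
    # process on every host should print the same sum if the mesh formed correctly.
    t = torch.tensor([float(xm.get_ordinal())], device=device)
    reduced = xm.all_reduce(xm.REDUCE_SUM, t)
    xm.mark_step()
    print(f"ordinal {xm.get_ordinal()}: {reduced.cpu().item()}")

if __name__ == "__main__":
    xmp.spawn(_mp_fn)
```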
\n\nShould end up similar to:\n\nhttps://docs.jax.dev/en/latest/multi_process.html\n\nand\n\nhttps://docs.pytorch.org/tutorials/intermediate/ddp_series_multinode.html\n\n\n", "url": "https://github.com/pytorch/xla/issues/9175", "state": "open", "labels": [ "documentation" ], "created_at": "2025-05-15T02:50:00Z", "updated_at": "2025-05-19T13:58:20Z", "comments": 0, "user": "yaoshiang" }, { "repo": "pytorch/torchtitan", "number": 1192, "title": "document the usage of environment variables", "body": "This is one of the community requests.\n\nSimilarly, we should also document the inductor flag usages.\n\nFormat can be a dedicated `.md` under `docs/`.", "url": "https://github.com/pytorch/torchtitan/issues/1192", "state": "open", "labels": [ "documentation", "better engineering", "high priority", "triage review" ], "created_at": "2025-05-14T08:41:36Z", "updated_at": "2025-05-14T08:41:40Z", "comments": 0, "user": "tianyu-l" }, { "repo": "pytorch/torchtitan", "number": 1184, "title": "[Question] CP and DP", "body": "Hi, this is a really great repo! Thanks for open-sourcing it!\n\nI am reading the code of how torchtian handles the multi-dimensional parallelism. It seems the `cp` is a part of the mesh dimensions interacting with `dp_shard`, `dp_replicate` etc. My understanding of `cp` is that it is orthogonal to other parallelisms. For example, it is a validate configuration of `dp_shard=8`, `dp_replicate=1` and `cp=8` for a 8-GPU node. But according to the code, it will raise an error as `dp_shard * cp != world_size`. \n\n\nhttps://github.com/pytorch/torchtitan/blob/6df8c8925bb2ba9b4e6aa88cece0e3f0633ab6ce/torchtitan/distributed/parallel_dims.py#L48\n\n", "url": "https://github.com/pytorch/torchtitan/issues/1184", "state": "closed", "labels": [ "question", "module: context parallel" ], "created_at": "2025-05-13T03:30:10Z", "updated_at": "2025-05-13T17:19:22Z", "user": "galalalala" }, { "repo": "pytorch/torchtitan", "number": 1179, "title": "FSDP2+DPP vs 2D Device Mesh FSDP2", "body": "I have a question regarding FSDP2 + DDP, in torchtitan codebase it is used as FSDP2 -> DDP. In FSDP2 doc it is said that you can use 2d device mesh to apply MISC equivalent in deepspeed which IUC is FSDP wrapped in DDP.\n\nIs there any difference between those 2 methods that I should be aware of, or are they functionally equivalent and achieve the same speed/results.", "url": "https://github.com/pytorch/torchtitan/issues/1179", "state": "closed", "labels": [], "created_at": "2025-05-09T18:02:56Z", "updated_at": "2025-05-10T16:52:15Z", "comments": 2, "user": "S1ro1" }, { "repo": "pytorch/torchtitan", "number": 1177, "title": "Can we support outputting checkpoints directly in .pt format?", "body": "Today we need to do an extra conversion step according to this README: https://github.com/pytorch/torchtitan/blob/main/docs/checkpoint.md\n\n```\npython -m torch.distributed.checkpoint.format_utils dcp_to_torch outputs/checkpoint/step-100 /tmp/checkpoint.pt\n```\n\nI think we should **provide an option for users to specify which format to output their checkpoints** instead, and call this function in torchtitan for users as part of outputting the checkpoint.\n\n------------------------------------------------------------------------------------------\n\n**Bonus:** This conversion step actually fails today if we used FP8 training. 
I had to manually add the following line to the `dcp_to_torch` function as a hack to get it to work:\n```\ntorch.serialization.add_safe_globals([torchao.float8.fsdp_utils.WeightWithDynamicFloat8CastTensor])\n```\nIt would be great if we can just either implicitly add the safe globals when we output the checkpoint in torchtitan, or simply remove this `WeightWithDynamicFloat8CastTensor` from the BC surface.", "url": "https://github.com/pytorch/torchtitan/issues/1177", "state": "open", "labels": [ "enhancement", "module: checkpoint" ], "created_at": "2025-05-09T16:01:50Z", "updated_at": "2025-08-21T03:18:12Z", "comments": 8, "user": "andrewor14" }, { "repo": "pytorch/xla", "number": 9129, "title": "set_mat_mul_precision is flakey", "body": "## \ud83d\udc1b Bug\n\nset_mat_mul_precision seems to allow switching the precision within a single process... sometimes, like in the precision_tutorial.py/ipynb. But in the unit test test_mat_mul_precision, there's an example of a test that switches the precision unsuccessfully. \n\n## To Reproduce\n\nOne unit test in test_mat_mul_precision.py is decorated \"@expectedFailure\". Once this issue is resolved, we should be able to remove that decorator and see that these tests work as intended, within a loop. \n\nPYTHONPATH=\"$TEST_CDIR${PYTHONPATH:+:$PYTHONPATH}\" python3 -m unittest test_mat_mul_precision.TestMatMulPrecision.test_all\n\n\n## Expected behavior\n\nProgram can switch mat_mul_precision between default, high, and highest dynamically in a single program.\n\n", "url": "https://github.com/pytorch/xla/issues/9129", "state": "open", "labels": [ "bug", "runtime" ], "created_at": "2025-05-09T03:22:22Z", "updated_at": "2025-05-12T12:23:12Z", "comments": 1, "user": "yaoshiang" }, { "repo": "pytorch/xla", "number": 9118, "title": "Add installation instructions to `benchmarks/README.md`", "body": "## \ud83d\udcda Documentation\n\nThe [`benchmarks/README.md`](https://github.com/pytorch/xla/blob/master/benchmarks/README.md) does not contain the installation instructions, which is crucial for running the benchmarks.\n\nIt requires installing the [`pytorch/benchmark`](https://github.com/pytorch/benchmark) repo and other libraries like `libGL` (required by Llava).\n\n## Solution\n\nAdd the instructions to [`benchmarks/README.md`](https://github.com/pytorch/xla/blob/master/benchmarks/README.md).\nInstall [`pytorch/benchmark`](https://github.com/pytorch/benchmark) as a library.\nInstall `libGL`.\nInstall any other requirements.\n\nTo verify, make sure the instructions work with the devcontainer.", "url": "https://github.com/pytorch/xla/issues/9118", "state": "closed", "labels": [ "documentation", "benchmarking" ], "created_at": "2025-05-08T17:51:31Z", "updated_at": "2025-05-22T17:40:05Z", "comments": 1, "user": "haifeng-jin" }, { "repo": "pytorch/serve", "number": 3416, "title": "Adding vendor RBLN(Rebellions)", "body": "TorchServe has a varying structure for different accelerator types through recently added #3371.\n\nAlthough [Rebellions](https://rebellions.ai/) provides a guide on how to utilize `TorchServe with the RBLN(Rebellions) NPUs` through its official document page(https://docs.rbln.ai/software/model_serving/torchserve/torchserve.html), the current implementation of TorchServe does not recognize the RBLN NPU as a valid accelerator vendor. 
As a result, even when `gpu_id` is set in configuration using the `RBLN NPU`, the specified RBLN NPUs cannot be properly utilized.\n\nWe would like to propose adding RBLN NPU as a recognized accelerator vendor in TorchServe, along with an official user guide. This addition will enable seamless integration and usage of TorchServe in environments equipped with RBLN NPUs.", "url": "https://github.com/pytorch/serve/issues/3416", "state": "open", "labels": [], "created_at": "2025-05-08T00:49:45Z", "updated_at": "2025-05-08T00:49:45Z", "comments": 0, "user": "rebel-ysseo" }, { "repo": "pytorch/pytorch", "number": 153108, "title": "Introduce unbacked friendly is_known_contiguous and use it instead of is_contiguous in all locations where there is a general path for not know_contiguous", "body": "title. \n\ncc @chauhang @penguinwu @ezyang @bobrenjc93", "url": "https://github.com/pytorch/pytorch/issues/153108", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: dynamic shapes", "data dependent error" ], "created_at": "2025-05-07T23:10:19Z", "updated_at": "2025-09-27T01:23:17Z", "user": "laithsakka" }, { "repo": "pytorch/executorch", "number": 10745, "title": "How to use tokenizer.json in ExecuTorch Android demo (without tokenizer.model)?", "body": "### \ud83d\udcda The doc issue\n\nI'm trying to deploy a language or vision-language model on Android using the ExecuTorch Android demo app.\nThe model I'm working with only provides tokenizer.json, but the current Android implementation appears to expect a tokenizer.model file instead.\n\nIs tokenizer.model mandatory for the ExecuTorch demo app?\n\nIf I only have a tokenizer.json file (from HuggingFace), is there any recommended way to convert or load it in the app?\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @kirklandsign @cbilgin", "url": "https://github.com/pytorch/executorch/issues/10745", "state": "closed", "labels": [ "triaged", "module: android" ], "created_at": "2025-05-07T03:22:03Z", "updated_at": "2025-05-07T21:33:46Z", "user": "jordanqi" }, { "repo": "pytorch/torchtitan", "number": 1169, "title": "how to inference with pretrained model?", "body": "hi, after pretrain/sft with torchtitan, how to inference with the checkpoint? does the repo provide the inference code? 
thank you.", "url": "https://github.com/pytorch/torchtitan/issues/1169", "state": "closed", "labels": [], "created_at": "2025-05-06T10:28:50Z", "updated_at": "2025-08-21T03:18:05Z", "user": "dragen1860" }, { "repo": "pytorch/torchtitan", "number": 1168, "title": "How to use fsdp2 cpu_offload?", "body": "I am currently using `cpuOffloadPolicy` in the following way:\n```py\n transformer_cls_to_wrap = list()\n for layer_class in transformer_cls_names_to_wrap:\n transformer_cls = get_module_class_from_name(model_to_wrap, layer_class)\n if transformer_cls is not None:\n transformer_cls_to_wrap.append(transformer_cls)\n if len(transformer_cls_to_wrap) == 0:\n raise NotImplementedError(\"len(transformer_cls_to_wrap) == 0, please check the wrapping rules!\")\n mp_policy = MixedPrecisionPolicy(\n param_dtype=torch.bfloat16,\n reduce_dtype=torch.float32,\n )\n fsdp_kwargs = {\n \"reshard_after_forward\": True,\n \"mp_policy\": mp_policy,\n \"offload_policy\": CPUOffloadPolicy() if self.args.adam_offload else OffloadPolicy(),\n }\n\n for cls_to_wrap in transformer_cls_to_wrap:\n for module in model_to_wrap.modules():\n if isinstance(module, cls_to_wrap):\n fully_shard(module, **fsdp_kwargs)\n for name, module in model_to_wrap.named_modules():\n if 'lm_head' in name:\n fully_shard(module, **fsdp_kwargs)\n\n fully_shard(model_to_wrap, **fsdp_kwargs)\n\n # cast model into fp32 to create optimizer with fp32 states\n # https://github.com/pytorch/torchtitan/issues/1133#issuecomment-2824429682\n model_to_wrap = model_to_wrap.to(torch.float32)\n\n if is_meta_initialized(model_to_wrap):\n model.to_empty(device='cuda')\n\n return model\n```\nThe model is created from huggingface pretrained model, but I got the following error when doing `clip_grad_norm`:\n\n```\ngrad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=self.grad_clip)\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/nn/utils/clip_grad.py\", line 30, in _no_grad_wrapper\n[rank4]: return func(*args, **kwargs)\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/nn/utils/clip_grad.py\", line 105, in clip_grad_norm_\n[rank4]: clip_coef = max_norm / (total_norm + 1e-6)\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/_tensor.py\", line 39, in wrapped\n[rank4]: return f(*args, **kwargs)\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/_tensor.py\", line 1032, in __rdiv__\n[rank4]: return self.reciprocal() * other\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/_compile.py\", line 32, in inner\n[rank4]: return disable_fn(*args, **kwargs)\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 632, in _fn\n[rank4]: return fn(*args, **kwargs)\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/_api.py\", line 340, in __torch_dispatch__\n[rank4]: return DTensor._op_dispatcher.dispatch(\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py\", line 181, in dispatch\n[rank4]: self.redistribute_local_args(\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py\", line 317, in redistribute_local_args\n[rank4]: resharded_local_tensor = redistribute_local_tensor(\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/_redistribute.py\", line 195, in redistribute_local_tensor\n[rank4]: new_local_tensor = partial_spec._reduce_value(\n[rank4]: File 
\"/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/_ops/_math_ops.py\", line 126, in _reduce_value\n[rank4]: reduced_tensor = super()._reduce_value(tensor, mesh, mesh_dim)\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/placement_types.py\", line 599, in _reduce_value\n[rank4]: return funcol.all_reduce(\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/distributed/_functional_collectives.py\", line 175, in all_reduce\n[rank4]: tensor = torch.ops._c10d_functional.all_reduce(self, reduceOp.lower(), group_name)\n[rank4]: File \"/root/miniconda3/lib/python3.10/site-packages/torch/_ops.py\", line 1116, in __call__\n[rank4]: return self._op(*args, **(kwargs or {}))\n[rank4]: RuntimeError: No backend type associated with device type cpu\n```\n\nIs there anything wrong in my model init device?", "url": "https://github.com/pytorch/torchtitan/issues/1168", "state": "closed", "labels": [ "module: fsdp" ], "created_at": "2025-05-06T07:44:48Z", "updated_at": "2025-05-12T03:29:30Z", "user": "KimmiShi" }, { "repo": "pytorch/xla", "number": 9095, "title": "Support Dynamic Grid in Pallas Kernel", "body": "## \ud83d\ude80 Feature\n\n\nSupport dynamic grid feature of pallas kernel through PyTorch/XLA wrapper. Below is an example of dynamic grid in jax.\n\n```\nimport functools\nimport time\n\nimport jax\nfrom jax._src.pallas.pallas_call import _trace_kernel_to_jaxpr\nimport jax.numpy as jnp\nfrom jax.experimental import pallas as pl\nfrom jax import export\nimport numpy as np\n\n\ndef matmul_kernel(x_ref, y_ref, o_ref):\n block_m, block_l = x_ref.shape\n block_l2, block_n = y_ref.shape\n assert block_l2 == block_l\n assert o_ref.shape == (block_m, block_n)\n @pl.when(pl.program_id(axis=2) == 0)\n def _():\n o_ref[...] = jnp.zeros_like(o_ref)\n\n o_ref[...] 
+= jnp.dot(x_ref[...], y_ref[...])\n\n\n@functools.partial(jax.jit, static_argnames=['block_shape'])\ndef matmul(\n x: jax.Array,\n y: jax.Array,\n m: int,\n n: int,\n l: int,\n *,\n block_shape=(128, 128, 128)\n):\n block_m, block_n, block_l = block_shape\n grid = (m, n, l)\n fused_matmul = pl.pallas_call(\n functools.partial(matmul_kernel),\n out_shape=jax.ShapeDtypeStruct((x.shape[0], y.shape[1]), jnp.float32),\n in_specs=[\n pl.BlockSpec((block_m, block_l), lambda i, j, k: (i, k)),\n pl.BlockSpec((block_l, block_n), lambda i, j, k: (k, j)),\n ],\n out_specs=pl.BlockSpec((block_m, block_n), lambda i, j, k: (i, j)),\n grid=grid,\n debug=False,\n # interpret=jtu.test_device_matches([\"cpu\"]),\n )\n return fused_matmul(x, y)\n\n\nx_shape = (8192, 8192)\ny_shape = (8192, 8192)\n\nn = l = 64\nfor m in range(4, 65, 4):\n key = jax.random.key(m)\n key1, key2 = jax.random.split(key, 2)\n x = jax.random.normal(key1, x_shape, dtype=np.float32).block_until_ready()\n y = jax.random.normal(key2, y_shape, dtype=np.float32).block_until_ready()\n start_time = time.time()\n res = matmul(x, y, m, n, l).block_until_ready()\n end_time = time.time()\n print(\"[1st run] m: \", m, \" time: \", f\"{(end_time - start_time) * 1000:.3f}ms\", flush=True)\n native = (x @ y)[:m * 128]\n assert jax.numpy.allclose(native, res[:m * 128]) \n key = jax.random.key(m + 1000)\n key1, key2 = jax.random.split(key, 2)\n x = jax.random.normal(key1, x_shape, dtype=np.float32).block_until_ready()\n y = jax.random.normal(key2, y_shape, dtype=np.float32).block_until_ready()\n start_time = time.time()\n res = matmul(x, y, m, n, l).block_until_ready()\n end_time = time.time()\n print(\"[2nd run] m: \", m, \" time: \", f\"{(end_time - start_time) * 1000:.3f}ms\", flush=True)\n```\n\n\n\n\n", "url": "https://github.com/pytorch/xla/issues/9095", "state": "open", "labels": [ "enhancement", "pallas" ], "created_at": "2025-05-05T22:28:33Z", "updated_at": "2025-05-06T12:24:45Z", "comments": 0, "user": "yaochengji" }, { "repo": "pytorch/rl", "number": 2939, "title": "PPO with composite distribution crash before giving the warning on how to fix it.", "body": "This block\nhttps://github.com/pytorch/rl/blob/795e362cb82b3539faa30db771e5b2f1d50f8c8a/torchrl/objectives/ppo.py#L601-L602\ncauses \n```AttributeError: 'Tensor' object has no attribute 'batch_size'```\n\nBefore, the warning on how to fix it is shown.\n\nhttps://github.com/pytorch/rl/blob/795e362cb82b3539faa30db771e5b2f1d50f8c8a/torchrl/objectives/ppo.py#L603-L614\n\nThe order of the 2 blocks needs to be swapped", "url": "https://github.com/pytorch/rl/issues/2939", "state": "closed", "labels": [], "created_at": "2025-05-04T23:31:53Z", "updated_at": "2025-05-20T10:09:02Z", "user": "siegelaaron94" }, { "repo": "pytorch/xla", "number": 9082, "title": "Educate users on mat mul precision", "body": "mat mul precision will be exposed idiomatically to Pytorch in #9081. 
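For readers who land here first: the PyTorch-idiomatic knob presumably being referred to is the float32 matmul precision API shown below; how PyTorch/XLA maps it onto TPU behavior is exactly what #9081 introduces, so treat the TPU interpretation as an assumption until that lands.

```python
import torch

# Idiomatic PyTorch control over float32 matmul precision; on TPU backends the
# expectation (per #9081) is that "highest"/"high"/"medium" select how fp32
# matmuls are emulated, analogous to TF32 selection on CUDA.
torch.set_float32_matmul_precision("high")
print(torch.get_float32_matmul_precision())   # -> "high"
```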
", "url": "https://github.com/pytorch/xla/issues/9082", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-05-02T20:03:54Z", "updated_at": "2025-05-21T20:34:32Z", "comments": 0, "user": "yaoshiang" }, { "repo": "pytorch/executorch", "number": 10593, "title": "Advice on how to run the training example in Android", "body": "Hello Team,\n\nWe have followed https://pytorch.org/executorch/main/using-executorch-android.html#building-from-source to build the \"aar\" file.\n\nWe can run the inference example on Android.\n\nWe are wondering how to run the training example on Android.\n\nAre there some flags / some config we need to add to the building procedure (https://github.com/pytorch/executorch/blob/main/scripts/build_android_library.sh)?\n\nThanks!\n\n\ncc @kirklandsign @cbilgin @JacobSzwejbka", "url": "https://github.com/pytorch/executorch/issues/10593", "state": "open", "labels": [ "module: android", "module: training" ], "created_at": "2025-04-30T19:51:03Z", "updated_at": "2025-07-15T22:59:28Z", "user": "YuanTingHsieh" }, { "repo": "pytorch/xla", "number": 9063, "title": "Add explanation of Clang usage after Hermetic CUDA.", "body": "## \ud83d\udcda Documentation\n\nFollow up from: #8665 and #9053 \n\nAfter #8665 is merged, we should add an explanation on the default usage of Clang due to the adoption of Hermetic CUDA. This is somewhat related to #9061.", "url": "https://github.com/pytorch/xla/issues/9063", "state": "open", "labels": [ "documentation" ], "created_at": "2025-04-30T12:17:26Z", "updated_at": "2025-04-30T12:18:12Z", "comments": 0, "user": "ysiraichi" }, { "repo": "pytorch/executorch", "number": 10571, "title": "where is pytorch_tokenizers.tools.llama2c.convert?", "body": "### \ud83d\udc1b Describe the bug\n\nI can not find pytorch_tokenizers.tools.llama2c.convert with command \"python -m pytorch_tokenizers.tools.llama2c.convert -t ../tokenizer.model -o ../tokenizer.bin\" according to docs. 
the env\n I use is built by \"pip install executorch\"\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.7.0+cu126\nIs debug build: False\nCUDA used to build PyTorch: 12.6\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.5 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: Could not collect\nCMake version: Could not collect\nLibc version: glibc-2.35\n\nPython version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-6.8.0-58-generic-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 11.5.119\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090\nNvidia driver version: 565.57.01\ncuDNN version: Could not collect\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 39 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: GenuineIntel\nModel name: 13th Gen Intel(R) Core(TM) i9-13900KF\nCPU family: 6\nModel: 183\nThread(s) per core: 2\nCore(s) per socket: 24\nSocket(s): 1\nStepping: 1\nCPU max MHz: 5800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 5990.40\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 896 KiB (24 instances)\nL1i cache: 1.3 MiB (24 instances)\nL2 cache: 32 MiB (12 instances)\nL3 cache: 36 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-31\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Mitigation; Clear Register File\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\nVersions of relevant libraries:\n[pip3] executorch==0.6.0\n[pip3] numpy==2.2.5\n[pip3] nvidia-cublas-cu12==12.6.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.6.80\n[pip3] nvidia-cuda-nvrtc-cu12==12.6.77\n[pip3] nvidia-cuda-runtime-cu12==12.6.77\n[pip3] nvidia-cudnn-cu12==9.5.1.17\n[pip3] nvidia-cufft-cu12==11.3.0.4\n[pip3] 
nvidia-curand-cu12==10.3.7.77\n[pip3] nvidia-cusolver-cu12==11.7.1.2\n[pip3] nvidia-cusparse-cu12==12.5.4.2\n[pip3] nvidia-cusparselt-cu12==0.6.3\n[pip3] nvidia-nccl-cu12==2.26.2\n[pip3] nvidia-nvjitlink-cu12==12.6.85\n[pip3] nvidia-nvtx-cu12==12.6.77\n[pip3] onnxruntime==1.21.0\n[pip3] optree==0.15.0\n[pip3] torch==2.7.0\n[pip3] torchao==0.10.0\n[pip3] torchaudio==2.7.0\n[pip3] torchvision==0.22.0\n[pip3] triton==3.3.0\n[conda] executorch 0.6.0 pypi_0 pypi\n[conda] numpy 2.2.5 pypi_0 pypi\n[conda] nvidi", "url": "https://github.com/pytorch/executorch/issues/10571", "state": "closed", "labels": [ "module: llm" ], "created_at": "2025-04-30T03:15:59Z", "updated_at": "2025-05-08T06:20:26Z", "user": "hayyaw" }, { "repo": "pytorch/xla", "number": 9056, "title": "Fix the contribution instructions for creating PRs", "body": "## \ud83d\udcda Documentation\n\nhttps://github.com/pytorch/xla/blob/master/CONTRIBUTING.md suggests to clone the original PyTorch/XLA repo directly. However, doing so makes it impossible to create PRs later unless the user has write permission to the repo. Instead, it should ask the users to fork the repo first, and then work against their fork. This allows creating PRs without having write access to the original repo.\n\nWhile at this, we can also clarify the steps for creating PRs.", "url": "https://github.com/pytorch/xla/issues/9056", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-04-29T18:23:43Z", "updated_at": "2025-05-07T13:37:33Z", "comments": 0, "user": "zhanyong-wan" }, { "repo": "pytorch/vision", "number": 9042, "title": "Make the C++ backend of the torchvision wheel usable for C++ development", "body": "### \ud83d\ude80 The feature\n\nCurrently, the torchvision wheel packages the C++ DSO as `_C.so` for python bindings.\n\nWe'd like the python wheel to have the C++ backend be standalone, so it can be extracted/used by C++ applications, like is done today for the PyTorch wheels.\n\nThis means:\n\n- export DSO as `libtorchvision.so` instead of `_C.so`\n- do not hardlink `libtorchvision.so` against `libtorch_python.so`.\n - _maybe `_C.so` is kept for symbols that require `libtorch_python.so` ?_\n- export cpp headers\n- export CMake configs\n\n\n### Motivation, pitch\n\nC++ developers can currently use the distributed PyTorch wheels to develop C++ native applications against libtorch, as libraries, headers, and cmake configs are available in the wheels.\n\nC++ developers who also need to use torchvision cannot leverage the standard `vision` wheel the same way even though all C++ symbols are available in `_C.so`. Instead, they must build libtorchvision C++ from source which is more cumbersome, requires extra dev packages to be installed, especially for cuda support.\n\n\n### Additional context\n\n
\n see ld links for torchvision 0.22.0+cu128 (wheel) \n\n```sh\nlibc.so.6\nlibc10.so\nlibc10_cuda.so\nlibcudart.so.12\nlibdl.so.2\nlibgcc_s.so.1\nlibm.so.6\nlibpthread.so.0\nlibrt.so.1\nlibstdc++.so.6\nlibtorch.so\nlibtorch_cpu.so\nlibtorch_cuda.so\nlibtorch_python.so # requires python\nlinux-vdso.so.1\n```\n\n
\n\n
\n see ld links for c++ source build of torchvision \n\n> no link against `libtorch_python.so`\n\n```sh\nlibc.so.6\nlibc10.so\nlibc10_cuda.so\nlibcudart.so.12\nlibdl.so.2\nlibgcc_s.so.1\nlibm.so.6\nlibpthread.so.0\nlibrt.so.1\nlibstdc++.so.6\nlibtorch.so\nlibtorch_cpu.so\nlibtorch_cuda.so\nlinux-vdso.so.1\n```\n\n
\n\n
\n example of a cpp torchvision installation with files needed for C++ development \n\n> The install tree below can be imported for building with CMake with:\n\n```\ncmake ... -D TorchVision_ROOT=\"$torch_vision_install_dir\" # Or add to CMAKE_PREFIX_PATH\n```\n\n```cmake\nfind_package(TorchVision)\n```\n\n```tree\n\u251c\u2500\u2500 include\n\u2502 \u2514\u2500\u2500 torchvision\n\u2502 \u251c\u2500\u2500 io\n\u2502 \u2502 \u2514\u2500\u2500 image\n\u2502 \u2502 \u251c\u2500\u2500 cpu\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 common_jpeg.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 common_jpeg.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 common_png.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 decode_gif.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 decode_gif.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 decode_image.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 decode_image.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 decode_jpeg.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 decode_jpeg.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 decode_png.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 decode_png.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 encode_jpeg.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 encode_jpeg.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 encode_png.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 encode_png.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 exif.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 giflib\n\u2502 \u2502 \u2502 \u2502 \u251c\u2500\u2500 dgif_lib.c\n\u2502 \u2502 \u2502 \u2502 \u251c\u2500\u2500 gif_hash.c\n\u2502 \u2502 \u2502 \u2502 \u251c\u2500\u2500 gif_hash.h\n\u2502 \u2502 \u2502 \u2502 \u251c\u2500\u2500 gif_lib.h\n\u2502 \u2502 \u2502 \u2502 \u251c\u2500\u2500 gif_lib_private.h\n\u2502 \u2502 \u2502 \u2502 \u251c\u2500\u2500 gifalloc.c\n\u2502 \u2502 \u2502 \u2502 \u2514\u2500\u2500 openbsd-reallocarray.c\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 read_write_file.cpp\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 read_write_file.h\n\u2502 \u2502 \u251c\u2500\u2500 cuda\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 decode_jpeg_cuda.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 encode_decode_jpegs_cuda.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 encode_jpegs_cuda.cpp\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 encode_jpegs_cuda.h\n\u2502 \u2502 \u251c\u2500\u2500 image.cpp\n\u2502 \u2502 \u251c\u2500\u2500 image.h\n\u2502 \u2502 \u2514\u2500\u2500 image_read_mode.h\n\u2502 \u251c\u2500\u2500 macros.h\n\u2502 \u251c\u2500\u2500 ops\n\u2502 \u2502 \u251c\u2500\u2500 autocast\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 deform_conv2d_kernel.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 nms_kernel.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 ps_roi_align_kernel.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 ps_roi_pool_kernel.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 roi_align_kernel.cpp\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 roi_pool_kernel.cpp\n\u2502 \u2502 \u251c\u2500\u2500 autograd\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 deform_conv2d_kernel.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 ps_roi_align_kernel.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 ps_roi_pool_kernel.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 roi_align_kernel.cpp\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 roi_pool_kernel.cpp\n\u2502 \u2502 \u251c\u2500\u2500 cpu\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 deform_conv2d_kernel.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 nms_kernel.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 ps_roi_align_kernel.cpp\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 ps_roi_pool_kernel.cpp\n\u2502 \u2502 
\u2502 \u251c\u2500\u2500 roi_align_common.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 roi_align_kernel.cpp\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 roi_pool_kernel.cpp\n\u2502 \u2502 \u251c\u2500\u2500 cuda\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 cuda_helpers.h\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 deform_conv2d_kernel.cu\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 nms_kernel.cu\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 ps_roi_align_kernel.cu\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 ps_roi_pool_kernel.cu\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 roi_align_kernel.cu\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 roi_pool_kernel.cu\n\u2502 \u2502 \u251c\u2500\u2500 deform_conv2d.cpp\n\u2502 \u2502 \u251c\u2500\u2500 deform_conv2d.h\n\u2502 \u2502 \u251c\u2500\u2500 nms.cpp\n\u2502 \u2502 \u251c\u2500\u2500 nms.h\n\u2502 \u2502 \u251c\u2500\u2500 ops.h\n\u2502 \u2502 \u251c\u2500\u2500 ps_roi_align.cpp\n\u2502 \u2502 \u251c\u2500\u2500 ps_roi_align.h\n\u2502 \u2502 \u251c\u2500\u2500 ps_roi_pool.cpp\n\u2502 \u2502 \u251c\u2500\u2500 ps_roi_pool.h\n\u2502 \u2502 \u251c\u2500\u2500 roi_align.cpp\n\u2502 \u2502 \u251c\u2500\u2500 roi_align.h\n\u2502 \u2502 \u251c\u2500\u2500 roi_pool.cpp\n\u2502 \u2502 \u2514", "url": "https://github.com/pytorch/vision/issues/9042", "state": "open", "labels": [], "created_at": "2025-04-29T15:04:25Z", "updated_at": "2025-05-19T23:58:53Z", "comments": 5, "user": "agirault" }, { "repo": "pytorch/torchchat", "number": 1536, "title": "Improve Tokenizer New Type Onboarding", "body": "### \ud83d\ude80 The feature, motivation and pitch\n---\nAs a sequel to https://github.com/pytorch/torchchat/issues/1518 where we added an enum for tokenizer types to simplify `TokenizerArgs __post_init__`, we need to further improve it to simplify new tokenizer type onboarding:\n\n### Tasks\n---\n- Move TokenizerType to a centralized place\n - We now have two of them: https://github.com/pytorch/torchchat/blob/0299a37a342348803763e37e9f4823c5bcb12c92/dist_run.py#L67-L69 https://github.com/pytorch/torchchat/blob/0299a37a342348803763e37e9f4823c5bcb12c92/torchchat/cli/builder.py#L241-L245\n- Check all getters of tokenizer types\n - It may be able to be simplified as inline https://github.com/pytorch/torchchat/blob/0299a37a342348803763e37e9f4823c5bcb12c92/torchchat/generate.py#L368\n- Add documentation for future tokenizer onboard.\n - We may need to point people to update the model validation logic: https://github.com/pytorch/torchchat/blob/0299a37a342348803763e37e9f4823c5bcb12c92/torchchat/cli/builder.py#L290-L322\n---\nTo test, run a model with each tokenizer type:\n- python torchchat.py generate llama2\n- python torchchat.py generate llama3\n- python torchchat.py generate granite-code\n\ncc @Jack-Khuu @byjlw ", "url": "https://github.com/pytorch/torchchat/issues/1536", "state": "open", "labels": [ "good first issue", "actionable", "triaged" ], "created_at": "2025-04-28T18:31:33Z", "updated_at": "2025-05-13T17:54:18Z", "comments": 3, "user": "zhenyan-zhang-meta" }, { "repo": "pytorch/torchtitan", "number": 1150, "title": "[Feature] Support validation", "body": "For some workloads, it is really important to perform validation on a different dataset every n iterations. 
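For concreteness, this is the shape of the loop hook being asked for, written in plain single-device PyTorch rather than against torchtitan's trainer/TrainSpec; the toy model, random data, and the `validation_freq` knob are all illustrative:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(16, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

train_loader = DataLoader(TensorDataset(torch.randn(256, 16), torch.randn(256, 1)), batch_size=32)
val_loader = DataLoader(TensorDataset(torch.randn(64, 16), torch.randn(64, 1)), batch_size=32)
validation_freq = 5                      # "every n iterations"

step = 0
for epoch in range(2):
    for xb, yb in train_loader:
        model.train()
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
        step += 1
        if step % validation_freq == 0:  # periodic pass over the held-out set
            model.eval()
            with torch.no_grad():
                val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader) / len(val_loader)
            print(f"step {step}: val_loss={val_loss:.4f}")
```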
\n\nThis seems reasonably straightforward to add to the training loop and training specs, while being kept as optional.\n\nIs there any plan to support this functionality in the near future?", "url": "https://github.com/pytorch/torchtitan/issues/1150", "state": "closed", "labels": [], "created_at": "2025-04-28T11:01:47Z", "updated_at": "2025-08-21T03:17:19Z", "comments": 4, "user": "CarlosGomes98" }, { "repo": "pytorch/torchtitan", "number": 1147, "title": "[Question] FSDP+TP CUDA_DEVICE_MAX_CONNECTIONS", "body": "In the Megatron repo https://github.com/NVIDIA/Megatron-LM/blob/4429e8ebe21fb011529d7401c370841ce530785a/megatron/training/arguments.py#L779\n\nIt\u2019s recommended that FSDP should use larger values of `CUDA_DEVICE_MAX_CONNECTIONS` but Megatron TP requires it to be 1. Is it also the case for the torch implementation of TP using DTensor? \n\nHow should I configure the environment variable when using the torch implementation of FSDP(2) and/or TP/CP/SP?", "url": "https://github.com/pytorch/torchtitan/issues/1147", "state": "open", "labels": [ "documentation", "question", "module: fsdp" ], "created_at": "2025-04-27T20:48:50Z", "updated_at": "2025-04-29T21:54:07Z", "user": "ChenchaoZhao" }, { "repo": "pytorch/pytorch", "number": 152100, "title": "What is the difference between normal_tensor.storage().use_count() and viewed_tensor's?", "body": "In the test2() below, why is b.storage().use_count() still 2 even when I deleted the source tensor a?\n```\nimport torch\n\ndef test1():\n    print(\"=============== test 1 ===============\")\n    a = torch.empty(size=(17, 32, 128, 16), dtype=torch.float16)\n    b = a.view(-1)\n\n    # b.storage().use_count() is 2\n\ndef test2():\n    print(\"=============== test 2 ===============\")\n    a = torch.empty(size=(17, 32, 128, 16), dtype=torch.float16)\n    b = a.view(-1)\n\n    del a\n    # b.storage().use_count() is 2\n\ndef test3():\n    print(\"=============== test 3 ===============\")\n    a = torch.empty(size=(17, 32, 128, 16), dtype=torch.float16)\n    b = a.view(-1)\n\n    del b\n    # a.storage().use_count() is 1\n\ntest1()\ntest2()\ntest3()\n```\nI thought use_count=2 was because a and b each referenced the storage once, and deleting either tensor would make the use_count be 1, but that's not the case.", "url": "https://github.com/pytorch/pytorch/issues/152100", "state": "closed", "labels": [], "created_at": "2025-04-24T12:54:21Z", "updated_at": "2025-04-25T07:39:39Z", "user": "CLiqing" }, { "repo": "pytorch/audio", "number": 3901, "title": "2.7.0 release tag", "body": "### \ud83d\ude80 The feature\n\nAlthough there is a 2.7.0 release on PyPI, there is no release of the source code on GitHub. Can we get a 2.7.0 release tagged?\n\n### Motivation, pitch\n\nPackage managers like Spack build from source code, not from pre-compiled wheels. 
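Circling back to the `use_count()` question above (pytorch/pytorch#152100), one plausible explanation rather than an authoritative answer: a Python-level view keeps its base tensor reachable through `Tensor._base`, so `del a` only unbinds the name while the base tensor (and its reference to the storage) stays alive, whereas deleting the view really does drop one of the two tensor references.

```python
import torch

a = torch.empty(17, 32, 128, 16, dtype=torch.float16)
b = a.view(-1)
print(b._base.shape, b._base.data_ptr() == b.data_ptr())  # the view tracks its base

del a                         # unbinds the name 'a'; the base tensor itself is
print(b._base.shape)          # still alive here, reachable via b._base, so the
                              # storage keeps two tensor references (test2's 2).
del b                         # dropping the view instead is test3's case: only the
                              # original tensor still references the storage (1).
```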
This is especially important for libraries like torchaudio which get frequent bug fixes as PRs but don't always get those PRs merged due to lack of maintenance.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/audio/issues/3901", "state": "closed", "labels": [], "created_at": "2025-04-24T09:54:48Z", "updated_at": "2025-04-24T15:25:16Z", "comments": 2, "user": "adamjstewart" }, { "repo": "pytorch/torchtitan", "number": 1141, "title": "Meet Error when using AMD server (MI250)", "body": "Hi, when I using torchtitan on AMD server (Mi250), it reports the following errors:\n\n![Image](https://github.com/user-attachments/assets/54046f8f-f183-4006-99b5-1730cae0bf1b).\n\nDoes torchtitan support AMD server like MI250?\n\nThanks.", "url": "https://github.com/pytorch/torchtitan/issues/1141", "state": "closed", "labels": [], "created_at": "2025-04-24T07:48:10Z", "updated_at": "2025-04-25T08:46:06Z", "comments": 5, "user": "StillKeepTry" }, { "repo": "pytorch/torchtitan", "number": 1133, "title": "How to correctly use FSDP2 do mixed precision training?", "body": "Hi, I am currently doing this way:\n```py\nmodel = AutoModel.from_pretrained(...)\n\n# make sure model is in fp32, so we have a fp32 mater weight in optimizer\nmodel.to(torch.float32)\n\nmp_policy = MixedPrecisionPolicy(\n param_dtype=torch.bfloat16,\n reduce_dtype=torch.float32,\n)\nfsdp_kwargs = {\n \"reshard_after_forward\": True,\n \"mp_policy\": mp_policy,\n }\n\nfor cls_to_wrap in transformer_cls_to_wrap:\n for module in model.modules():\n if isinstance(module, cls_to_wrap):\n fully_shard(module, **fsdp_kwargs)\n\nfully_shard(model, **fsdp_kwargs)\n\n```\n\nThe first question is: is this correct? As far as I understand, the model param is in fp32, and optimizer states will also be fp32, the fwd and bwd pass will use bf16.\n\nI am wondering if I can init fsdp with a bf16 model and then transfer this FSDP module into fp32? As In this way, it will take less CPU memory when loading Large LLMs. like the following demo:\n\n```py\nmodel = AutoModel.from_pretrained(...)\n\n# just for demo, make sure model is in bf16\nmodel.to(torch.bfloat16)\n\nmp_policy = MixedPrecisionPolicy(\n param_dtype=torch.bfloat16,\n reduce_dtype=torch.float32,\n)\nfsdp_kwargs = {\n \"reshard_after_forward\": True,\n \"mp_policy\": mp_policy,\n }\n\nfor cls_to_wrap in transformer_cls_to_wrap:\n for module in model.modules():\n if isinstance(module, cls_to_wrap):\n fully_shard(module, **fsdp_kwargs)\n\nfully_shard(model, **fsdp_kwargs)\n\nmodel.to(torch.float32)\n```", "url": "https://github.com/pytorch/torchtitan/issues/1133", "state": "closed", "labels": [], "created_at": "2025-04-23T06:55:40Z", "updated_at": "2025-04-27T10:03:20Z", "user": "KimmiShi" }, { "repo": "pytorch/torchtitan", "number": 1132, "title": "FSDP2 reduce_scatter_reduce_op for context parallelism", "body": "Hi,\n\nFSDP2 reduce_scatter by default seems to take the average over the entire shard world, which consists of dp_shard and cp. 
Averaging gradients over dp_shard makes sense, but I wonder if sum is the better reduce op for CP?\n\nLogically, it seems to me gradient should be agnostic to the choice of CP.\n\nThanks!", "url": "https://github.com/pytorch/torchtitan/issues/1132", "state": "closed", "labels": [ "question" ], "created_at": "2025-04-23T01:44:19Z", "updated_at": "2025-04-24T16:39:05Z", "user": "dingqingy" }, { "repo": "pytorch/xla", "number": 9026, "title": "Where to find TPU-dependent compile-pipeline/optimizations in XLA?", "body": "## \u2753 Questions and Help\n\nI'm diving into the XLA source code to understand the compilation pipeline for the TPU backend and any TPU-dependent optimizations. However, I couldn't find details about the TPU compilation pipeline in xla/service dir, while CPU and GPU pipelines seem more visible. I see some cost-model-based fusion in GPU backend, so I wonder where are the equivalent optimizations done for TPU backend?", "url": "https://github.com/pytorch/xla/issues/9026", "state": "closed", "labels": [], "created_at": "2025-04-23T01:42:17Z", "updated_at": "2025-04-23T12:07:58Z", "comments": 0, "user": "Bolzano983" }, { "repo": "pytorch/torchtitan", "number": 1126, "title": "fully_shard() for huggingface model: pytorch caches too much GPU memory ", "body": "Dear Community,\n\nI'm working on fine-tuning the Qwen2-VL model using `fully_shard()` and wrote a script for it. However, I noticed that GPU memory usage stays high (around 50GB to 60GB) even as I scale up the number of GPUs. Besides, it will run into OOM when I try to fine tune 72B model with 128 GPUs.\n\nI'm wondering if there might be any issues with my code or configuration. I'd really appreciate any insights or suggestions you might have. Thanks in advance!\n\nMy code:\n\n```\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import Dataset, DataLoader\nfrom transformers import Qwen2VLForConditionalGeneration, Qwen2VLProcessor, AutoModelForVision2Seq, AutoConfig\nfrom qwen_vl_utils import process_vision_info\nfrom peft import LoraConfig, get_peft_model\nfrom datasets import load_dataset\nimport numpy as np\nfrom PIL import Image\nimport io\nimport logging\nimport os\n\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nimport torch.distributed as dist\nimport torch.distributed.checkpoint as dcp\nfrom torch.distributed.device_mesh import init_device_mesh\nfrom transformers.models.qwen2_vl.modeling_qwen2_vl import Qwen2VLDecoderLayer, Qwen2VLVisionBlock\nfrom torch.distributed._composable.fsdp import fully_shard\nfrom torch.distributed import init_process_group, destroy_process_group\nfrom torch.distributed.checkpoint import DefaultLoadPlanner, DefaultSavePlanner\nfrom torch.distributed._composable.fsdp import (\n CPUOffloadPolicy,\n fully_shard,\n MixedPrecisionPolicy,\n)\n\n\n# Set up logging\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')\nlogger = logging.getLogger(__name__)\n\n# init dist\ndistributed_backend = \"nccl\" # gloo for cpu\ndist.init_process_group(distributed_backend)\n\nlocal_rank = int(os.environ[\"LOCAL_RANK\"])\nworld_size = int(os.environ[\"WORLD_SIZE\"])\ndevice = torch.device(f\"cuda:{local_rank}\")\ntorch.cuda.set_device(device)\n\n\n# model_name = \"Qwen/Qwen2-VL-2B-Instruct\"\n# revision = \"895c3a49bc3fa70a340399125c650a463535e71c\"\nmodel_name = \"Qwen/Qwen2-VL-7B-Instruct\"\nrevision = \"a28a094eb66a9f2ac70eef346f040d8a79977472\"\n# model_name = \"Qwen/Qwen2-VL-72B-Instruct\"\n# revision = 
\"f9b556a74d58e6d9915f73227c21045c87342b42\"\n\ndataset_id = \"HuggingFaceM4/ChartQA\"\nprocessor = Qwen2VLProcessor.from_pretrained(model_name, \n revision=revision,\n )\n\n\n# Configuration\nclass Config:\n dataset_id = \"HuggingFaceM4/ChartQA\"\n output_dir = \"/tmp_ckpt\"\n batch_size = 2\n num_epochs = 3\n learning_rate = 5e-5\n max_seq_length = 512\n lora_rank = 32\n lora_alpha = 64\n lora_dropout = 0.1\n device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n\n\n\n\nsystem_message = \"\"\"You are a Vision Language Model specialized in interpreting visual data from chart images.\nYour task is to analyze the provided chart image and respond to queries with concise answers, usually a single word, number, or short phrase.\nThe charts include a variety of types (e.g., line charts, bar charts) and contain colors, labels, and text.\nFocus on delivering accurate, succinct answers based on the visual information. Avoid additional explanation unless absolutely necessary.\"\"\"\n\ndef format_data(sample):\n return [\n {\n \"role\": \"system\",\n \"content\": [{\"type\": \"text\", \"text\": system_message}],\n },\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": sample[\"image\"],\n },\n {\n \"type\": \"text\",\n \"text\": sample[\"query\"],\n },\n ],\n },\n {\n \"role\": \"assistant\",\n \"content\": [{\"type\": \"text\", \"text\": sample[\"label\"][0]}],\n },\n ]\n\n# Training function\ndef train_model(model, train_loader, optimizer, config):\n model.train()\n total_steps = len(train_loader) * config.num_epochs\n step = 0\n\n scaler = torch.amp.GradScaler(\"cuda\", enabled=True)\n\n for epoch in range(config.num_epochs):\n total_loss = 0\n for batch_idx, batch in enumerate(train_loader):\n\n inputs, labels = batch\n inputs = inputs.to(config.device)\n labels = labels.to(config.device)\n\n # Mixed precision training\n loss = model(**inputs, labels=labels).loss\n loss.backward() # no scaler\n optimizer.step()\n optimizer.zero_grad()\n \n step += 1\n logger.info(f\"Epoch {epoch+1}/{config.num_epochs}, Step {step}/{total_steps}, Loss: {loss.item():.4f}\")\n\n del loss\n\n\n\n# Create a data collator to encode text and image pairs\ndef collate_fn(examples):\n # Get the texts and images, and apply the chat template\n texts = [\n processor.apply_chat_template(example, tokenize=False) for example in examples\n ] # Prepare texts for processing\n image_inputs = [process", "url": "https://github.com/pytorch/torchtitan/issues/1126", "state": "open", "labels": [ "question", "module: fsdp" ], "created_at": "2025-04-21T21:37:43Z", "updated_at": "2025-05-13T05:09:52Z", "user": "mingdianliu" }, { "repo": "pytorch/pytorch", "number": 151829, "title": "profile for torch.add(x, x) where x is a zero-sized tensor looks bogus", "body": "```py\nfrom torch.profiler import profile, record_function, ProfilerActivity\n\nimport torch\n\nx = torch.randn(0)\n\nwith profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:\n with record_function(\"model_inference\"):\n x + x\n\n\nprint(prof.key_averages().table(sort_by=\"cpu_time_total\", row_limit=10))\n```\n\nGives:\n```\nIn [7]: print(prof.key_averages().table(sort_by=\"cpu_time_total\", row_limit=10))\n----------------------------- ------------ ------------ ------------ ------------ ------------ ------------\n Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls\n----------------------------- ------------ ------------ ------------ ------------ ------------ ------------\n aten::matmul 0.46% 8.994us 
62.32% 1.213ms 606.382us 2\n aten::dot 61.72% 1.201ms 61.86% 1.204ms 601.884us 2\n model_inference 6.61% 128.555us 8.13% 158.251us 158.251us 1\n aten::to 1.04% 20.242us 5.30% 103.077us 3.221us 32\n aten::_to_copy 2.19% 42.586us 4.26% 82.835us 2.589us 32\n aten::ones 2.08% 40.453us 2.87% 55.895us 13.974us 4\n aten::add 2.32% 45.200us 2.59% 50.328us 12.582us 4\n aten::abs 1.27% 24.757us 2.20% 42.744us 21.372us 2\n aten::__lshift__ 0.67% 12.990us 1.76% 34.283us 34.283us 1\n aten::pow 1.40% 27.282us 1.58% 30.817us 10.272us 3\n----------------------------- ------------ ------------ ------------ ------------ ------------ ------------\n```\nwhich seems really bizarre\n\ncc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise", "url": "https://github.com/pytorch/pytorch/issues/151829", "state": "closed", "labels": [ "oncall: profiler" ], "created_at": "2025-04-21T20:53:57Z", "updated_at": "2025-06-07T23:58:54Z", "user": "zou3519" }, { "repo": "pytorch/ao", "number": 2086, "title": "How to automatically install the latest TorchAO nightly wheel", "body": "When I try to install TorchAO the same way I install the nightly torch wheel (pip3 install torchao --index-url https://download.pytorch.org/whl/nightly/cpu), I end up getting version 0.10.0 of TorchAO, instead of the expected https://download.pytorch.org/whl/nightly/cpu/torchao-0.11.0.dev20250418+cpu-py3-none-any.whl for example.\n\nI'd like to know how to automatically install the latest TorchAO nightly wheel, and why the latest TorchAO nightly build is only available for Python3.9?\n\nlog:\n(torch27) [xxx@xxx localdisk]$ pip3 install torchao --index-url https://download.pytorch.org/whl/nightly/cpu\nLooking in indexes: https://download.pytorch.org/whl/nightly/cpu\nCollecting torchao\n Using cached https://download.pytorch.org/whl/nightly/cpu/torchao-0.10.0%2Bcpu-py3-none-any.whl.metadata (14 kB)\nUsing cached https://download.pytorch.org/whl/nightly/cpu/torchao-0.10.0%2Bcpu-py3-none-any.whl (710 kB)\nInstalling collected packages: torchao\nSuccessfully installed torchao-0.10.0+cpu\n\n\n", "url": "https://github.com/pytorch/ao/issues/2086", "state": "open", "labels": [ "triaged", "distribution" ], "created_at": "2025-04-21T06:48:43Z", "updated_at": "2025-04-29T22:28:47Z", "user": "MingxuZh" }, { "repo": "pytorch/pytorch", "number": 151746, "title": "[AotInductor][Export][Triton] how to export custom triton kernels when use torch.export.export", "body": "### \ud83d\udc1b Describe the bug\n\nour framework is based on torch, and includes some custom triton kernels. \nin inference phase, we try use different gpu type(such as training on H100, inference on L40). 
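On the torchao nightly question above (pytorch/ao#2086): this looks like standard pip pre-release behavior rather than anything torchao-specific, so the note below is an educated guess, not a confirmed answer from the packaging side, and it does not address why the newest nightly only shows a Python 3.9 wheel.

```python
# pip skips pre-release (.dev) wheels unless asked, so the nightly index resolves
# to the latest stable 0.10.0 by default. Either of these should select a nightly:
#   pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cpu
#   pip install "torchao==0.11.0.dev20250418+cpu" --index-url https://download.pytorch.org/whl/nightly/cpu
from importlib.metadata import version

print(version("torchao"))   # verify which build actually got installed
```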
so we should load exported model and call aoti_compile_and_package to generate aot model based on inference gpu, but error with below msg when call torch.load:\n```\ntorch._export.serde.serialize.SerializeError: Unsupported target type for node Node(target='torch.ops.triton_kernel.add.default', inputs=[NamedArgument(name='x', arg=Argument(as_tensor=TensorArgument(name='linear')), kind=1), NamedArgument(name='y', arg=Argument(as_tensor=TensorArgument(name='mul')), kind=1)], outputs=[Argument(as_tensor=TensorArgument(name='add'))], metadata={'stack_trace': ' File \"/usr/local/app/torch_learn/export/model_export.py\", line 72, in forward\\n output = triton_add(dense_output, bias)\\n File \"/usr/bin/python3.9/lib/python3.9/site-packages/torch/_library/custom_ops.py\", line 671, in __call__\\n return self._opoverload(*args, **kwargs)\\n', 'nn_module_stack': 'L__self__,,__main__.SimpleModel', 'source_fn_stack': 'add_default,torch.ops.triton_kernel.add.default',\n'torch_fn': 'add.default_1;OpOverload.add.default'}, is_hop_single_tensor_return=None): \n```\nIn my understanding, torch need source code of triton kernels when load exported_model. \nbut our framwork is big, and in some cases, user may define their custom triton kernels. \nit's diffcult for us to obtain user source code and download this big framework in inference gpu machine. \n\nany suggestions?\n\n\n\nthe simple model code is:\n```python\nimport torch\nimport torch.nn as nn\nimport torch\nimport triton\nimport triton.language as tl\n\n\n@triton.jit\ndef add_kernel(\n x_ptr, y_ptr, output_ptr,\n n_elements,\n BLOCK_SIZE: tl.constexpr,\n):\n pid = tl.program_id(axis=0)\n block_start = pid * BLOCK_SIZE\n offsets = block_start + tl.arange(0, BLOCK_SIZE)\n mask = offsets < n_elements\n x = tl.load(x_ptr + offsets, mask=mask)\n y = tl.load(y_ptr + offsets, mask=mask)\n output = x + y\n tl.store(output_ptr + offsets, output, mask=mask)\n\n\n@torch.library.triton_op(\"triton_kernel::add\", mutates_args={})\ndef triton_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:\n\tn_elements = x.numel()\n\toutput = torch.empty_like(x)\n\n\tBLOCK_SIZE = 1024\n\tgrid = (triton.cdiv(n_elements, BLOCK_SIZE),)\n\n\ttorch.library.wrap_triton(add_kernel)[grid](\n\t\tx, y, output,\n\t\tn_elements,\n\t\tBLOCK_SIZE,\n\t)\n\n\treturn output\n\n\nclass SimpleModel(nn.Module):\n\n def __init__(self, input_dim, hidden_dim):\n super(SimpleModel, self).__init__()\n self.dense = nn.Linear(input_dim, hidden_dim)\n\n def forward(self, x):\n dense_output = self.dense(x)\n bias = torch.ones_like(dense_output) * 0.5\n output = triton_add(dense_output, bias)\n return output\n\n\ndef main():\n device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\n input_dim = 10\n hidden_dim = 20\n batch_size = 16\n\n model = SimpleModel(input_dim, hidden_dim).to(device)\n\n x = torch.randn(batch_size, input_dim, device=device)\n\n with torch.no_grad():\n output = model(x)\n\n exported_model = torch.export.export(\n model,\n (x,),\n )\n\n torch.export.save(exported_model, \"exported_model.pt\")\n\n\nif __name__ == \"__main__\":\n main()\n\n```\nrun this code, a exported_model is in `./exported_model.pt`\n\nthen run aot export code:\n```python\nimport torch\n\ntorch.set_default_device(\"cuda\")\n\n\nsaved_exported_program = torch.export.load(f\"exported_model.pt\")\ntorch._inductor.aoti_compile_and_package(\n saved_exported_program,\n package_path=f\"aot_model.pt2\",\n)\n```\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.7.0+cu118\nIs 
debug build: False\nCUDA used to build PyTorch: 11.8\nROCM used to build PyTorch: N/A\n\nGCC version: (GCC) 10.3.1 20210422 (Red Hat 10.3.1-1)\nClang version: 9.0.1 (Red Hat 9.0.1-2.module_el8.2.0+309+0c7b6b03)\nCMake version: version 3.19.0\nLibc version: glibc-2.28\n\nPython version: 3.9.16 (main, Dec 11 2024, 20:47:20) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] (64-bit runtime)\nPython platform: Linux-5.4.119-1-tlinux4-0010.3-x86_64-with-glibc2.28\nIs CUDA available: True\nCUDA runtime version: 11.8.89\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA A10\nGPU 1: NVIDIA A10\nGPU 2: NVIDIA A10\nGPU 3: NVIDIA A10\n\nNvidia driver version: 470.141.03\ncuDNN version: Probably one of the following:\n/usr/lib/libcudnn.so.8.9.7\n/usr/lib/libcudnn_adv_infer.so.8.9.7\n/usr/lib/libcudnn_adv_train.so.8.9.7\n/usr/lib/libcudnn_cnn_infer.so.8.9.7\n/usr/lib/libcudnn_cnn_train.so.8.9.7\n/usr/lib/libcudnn_ops_infer.so.8.9.7\n/usr/lib/libcudnn_ops_train.so.8.9.7\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nCPU(s): 224\nOn-line CPU(s) list: 0-223\nThread(s) per core: 2\nCore(s) per socke", "url": "https://github.com/pytorch/pytorch/issues/151746", "state": "open", "labels": [ "oncall: pt2", "export-triaged", "oncall: export", "module: aotinductor", "module: user triton" ], "created_at": "2025-04-19T13:26:03Z", "updated_at": "2025-04-25T23:11:04Z", "user": "zzq96" }, { "repo": "pytorch/executorch", "number": 10314, "title": "This document\uff08https://pytorch.org/executorch/stable/demo-apps-android.html#running-the-app\uff09 is out of date. Where is examples/demo-apps/android/ExecuTorchDemo?", "body": "https://pytorch.org/executorch/stable/demo-apps-android.html#running-the-app\n\n![Image](https://github.com/user-attachments/assets/73116d1b-fb01-4263-9adc-ae1aeb8e7a06)\n\n![Image](https://github.com/user-attachments/assets/7076302d-364d-4b71-b990-6cec92fe52a0)\n\ncc @mergennachin @iseeyuan @lucylq @helunwencser @tarun292 @kimishpatel @jackzhxng", "url": "https://github.com/pytorch/executorch/issues/10314", "state": "closed", "labels": [ "module: examples" ], "created_at": "2025-04-19T09:36:52Z", "updated_at": "2025-12-23T20:39:22Z", "user": "Kennems" }, { "repo": "pytorch/xla", "number": 9002, "title": "Update debugger documentation to demonstrate lldb", "body": "It's possible lldb is faster than gdb. 
Feature request is to explore if that is true, and if so, write docs on how to use lldb command line and lldb in VSCode.\n\nThis is an enhancement of #8997 ", "url": "https://github.com/pytorch/xla/issues/9002", "state": "open", "labels": [ "documentation" ], "created_at": "2025-04-18T16:28:50Z", "updated_at": "2025-04-21T12:33:58Z", "comments": 0, "user": "yaoshiang" }, { "repo": "pytorch/xla", "number": 8997, "title": "Add guide to debugging", "body": "For now, it can cover just PyTorch pending #8996 ", "url": "https://github.com/pytorch/xla/issues/8997", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-04-17T18:30:31Z", "updated_at": "2025-04-20T08:01:29Z", "comments": 0, "user": "yaoshiang" }, { "repo": "pytorch/TensorRT", "number": 3478, "title": "\u2753 [Question] Is SAM2 supported when compiling with the Dynamo backend on JetPack 6.1 or 6.2?", "body": "## \u2753 Question\nWill SAM2 be compatible with the Dynamo backend on JetPack 6.1/6.2?\n\nAre there any workarounds for the TensorRT version mismatch?\n\n## What you have already tried\n\nHere are my attempts and issues encountered, my device is jetson AGX Orin, I only compile the ImageEncoder (Hiera & FPN which remove position_encoding) of SAM2, the SAM2 code is from https://github.com/chohk88/sam2/tree/torch-trt:\n\n\n**_JetPack 6.1 + PyTorch 2.5 (from https://developer.download.nvidia.cn) + Torch-TensorRT 2.5_**\n\nTried compiling SAM2 but encountered errors.\n\nObserved that the PyTorch 2.5 documentation does not mention SAM2 support, likely indicating SAM2 is not yet adapted for this version.\n\n**_JetPack 6.1 + PyTorch 2.6 (from https://pypi.jetson-ai-lab.dev/jp6/cu126) + Torch-TensorRT 2.6_**\n\nInstalled PyTorch 2.6 from [jp6/cu126](https://pypi.jetson-ai-lab.dev/jp6/cu126) and Torch-TensorRT 2.6.\n\nImporting torch_tensorrt failed with ModuleNotFoundError: No module named 'tensorrt.plugin'.\n\nRoot cause: Torch-TensorRT 2.6 requires TensorRT 10.7, but JetPack 6.1 provides only TensorRT 10.3.\n\nFound no straightforward way to upgrade TensorRT within JetPack 6.1 due to dependency conflicts.\n\n_**Cross-Platform Attempt: Compile on x86 + Run on JetPack 6.1**_\n\nCompiled SAM2 on x86 with Torch-TensorRT 2.6 and exported the model.\n\nTried running it on JetPack 6.1 with Torch-TensorRT 2.5.\n\nFailed unsurprisingly due to serialization version incompatibility between 2.6 and 2.5.\n\n", "url": "https://github.com/pytorch/TensorRT/issues/3478", "state": "open", "labels": [ "question" ], "created_at": "2025-04-17T08:32:07Z", "updated_at": "2025-06-28T07:09:31Z", "user": "AyanamiReiFan" }, { "repo": "pytorch/xla", "number": 8993, "title": "Is there a way to attach metadata to a layer in a way that is included in the StableHLO export?", "body": "## \u2753 Questions and Help\n\nI am looking at a use case where metadata about a trained model's layers needs to be attached to the StableHLO export. I am using `exported_program_to_stablehlo`\n\nOne option I had considered is exporting the data completely separately from `exported_program_to_stablehlo` (say, by writing some random json to disk), but then I don't know how to connect the written metadata back to the stableHLO export, because the layer names do not appear to be attached to the generated StableHLO ops such.\n\nAnother option I tried was to attach the metadata directly to the torch nodes before calling `exported_program_to_stablehlo`, but I can't figure out how to do so in a way that results in the metadata being exported as MLIR attributes. 
It would suffice to export the attributes as, e.g., an op attribute with a given string name and value.\n\nCould someone advise on whether this is possible, or suggest an alternative? (Or add a feature that would support this?)", "url": "https://github.com/pytorch/xla/issues/8993", "state": "open", "labels": [ "question", "stablehlo" ], "created_at": "2025-04-17T06:04:47Z", "updated_at": "2025-04-25T00:44:25Z", "user": "j2kun" }, { "repo": "pytorch/tutorials", "number": 3332, "title": "Tutorial mention of batch samples as features?", "body": "Hello kindly confirm if it is correct to say that the batch_size =64 will give 64 features and 64 labels. Arent there 28 by 28 features and 64 samples ? \n\n\n\"Image\"\n", "url": "https://github.com/pytorch/tutorials/issues/3332", "state": "open", "labels": [], "created_at": "2025-04-17T02:35:14Z", "updated_at": "2025-04-17T02:35:58Z", "comments": 0, "user": "monaja" }, { "repo": "pytorch/xla", "number": 8986, "title": "When trying to run this code with connection to tpu in google colab i had this error: AssertionError: 4 results for replica 0", "body": "## \u2753 Questions and Help\n\nWhen trying to run this code in google colab:\n\n```import os\nimport torch_xla\nimport torch_xla.core.xla_model as xm\nimport torch_xla.distributed.xla_multiprocessing as xmp\nimport torch_xla.runtime as xr\nimport torchvision\nimport multiprocessing as mp\n\nos.environ['TPU_NUM_DEVICES'] = '8'\nos.environ ['XLA_USE_SPMD'] = '1'\nos.environ ['XLA_TENSOR_ALLOCATOR_MAXSIZE'] = '8G'\n\nlock = mp.Manager().Lock()\n\ndef _mp_fn(i, lock, device):\n with lock:\n pass\n\n print(f\"Process {i}: device = {device} (BEFORE RETURN)\")\n return i, device\n\nif __name__ == '__main__':\n nprocs = None \n device = xm.xla_device()\n print(f\"Main process device: {device}\") \n\n results = xmp.spawn(_mp_fn, args=(lock, device), start_method='fork', nprocs=nprocs) \n print(\"Results:\")\n for key, value in results.items():\n print(f\" Key: {key}, Value: {value}\")\n\n for i, device in results.items():\n print('process', i, device)```\n\nI get this error:\n\n```Main process device: xla:0\nProcess 0: device = xla:0 (BEFORE RETURN)\nProcess 0: device = xla:0 (BEFORE RETURN)Process 0: device = xla:0 (BEFORE RETURN)\n\nProcess 0: device = xla:0 (BEFORE RETURN)\n---------------------------------------------------------------------------\nAssertionError Traceback (most recent call last)\n in ()\n 27 print(f\"Main process device: {device}\")\n 28 \n---> 29 results = xmp.spawn(_mp_fn, args=(lock, device), start_method='fork', nprocs=nprocs) #\u043f\u0435\u0440\u0435\u0434\u0430\u0435\u043c device\n 30 print(\"Results:\")\n\n3 frames\n/usr/local/lib/python3.11/dist-packages/torch_xla/distributed/xla_multiprocessing.py in spawn(fn, args, nprocs, join, daemon, start_method)\n 37 return None.\n 38 \"\"\"\n---> 39 return pjrt.spawn(fn, nprocs, start_method, args)\n 40 \n 41 \n\n/usr/local/lib/python3.11/dist-packages/torch_xla/_internal/pjrt.py in spawn(fn, nprocs, start_method, args)\n 211 % nprocs)\n 212 \n--> 213 run_multiprocess(spawn_fn, start_method=start_method)\n 214 \n 215 \n\n/usr/local/lib/python3.11/dist-packages/torch_xla/_internal/pjrt.py in run_multiprocess(fn, start_method, *args, **kwargs)\n 171 result.items() for result in process_results))\n 172 \n--> 173 return _merge_replica_results(replica_results)\n 174 \n 175 \n\n/usr/local/lib/python3.11/dist-packages/torch_xla/_internal/pjrt.py in _merge_replica_results(replica_results)\n 37 ordinal for ordinal, _ in replica_results)\n 38 
replica, num_results = replica_counts.most_common(1)[0]\n---> 39 assert num_results == 1, f'{num_results} results for replica {replica}'\n 40 \n 41 return dict(replica_results)\n\nAssertionError: 4 results for replica 0```\n\nAt first I tried many different versions, but it didn't help:\n\n```!pip install -U pip\n!pip install cloud-tpu-client==0.10\n\n!pip install torch~=2.1.0 'torch_xla[tpu]~=2.1.0' \\\n -f https://storage.googleapis.com/libtpu-releases/inde6x.html \\\n -f https://storage.googleapis.com/libtpu-wheels/index.html```\n\nWhat should I do, and how can I fix it? I tried many different ways to connect to the TPU, but I still couldn't connect normally and start training", "url": "https://github.com/pytorch/xla/issues/8986", "state": "closed", "labels": [ "question", "xla:tpu" ], "created_at": "2025-04-16T11:56:22Z", "updated_at": "2025-04-18T12:11:07Z", "user": "Neckto0" }, { "repo": "pytorch/audio", "number": 3899, "title": "Segmentation fault (core dumped) in torchaudio.io.AudioEffector", "body": "### \ud83d\udc1b Describe the bug\n\nOccasionally, a core dump error may occur with a specific audio file as input, which a Python exception cannot capture.\n\nThis error is rare, but when it does occur, the entire Python process will be killed. It only happens with some \u201dspecial audio\u201d. Unfortunately, I did not find out what was special about it.\n\nHow to reproduce:\n1. Download the numpy array that causes the core dump in my environment.\n\n[a.npy.zip](https://github.com/user-attachments/files/19736212/a.npy.zip)\n\n2. Run the following code:\n\n```python\n#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nimport numpy\nfrom torchaudio.io import AudioEffector, CodecConfig\nimport torch\n\nmodule = AudioEffector(\nformat='ogg',\nencoder='opus',\ncodec_config=CodecConfig(qscale=1),\npad_end=True,)\n\n\naudio = numpy.load('./a.npy')\n\n\noutput = module.apply(torch.from_numpy(audio), 44100).numpy()\n```\n\n\n```\n[W414 21:10:43.989426875 encode_process.cpp:179] Warning: \"opus\" encoder is selected. Enabling '-strict experimental'. If this is not desired, please provide \"strict\" encoder option with desired value. 
(function operator())\n[1] 2613659 segmentation fault (core dumped) python debug.py\n```\n\nMy python and package versions:\n```\nnumpy 2.0.2\ntorch 2.6.0\ntorch-complex 0.4.4\ntorchaudio 2.6.0\n```\n\n\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.6.0+cu124\nIs debug build: False\nCUDA used to build PyTorch: 12.4\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.2 LTS (x86_64)\nGCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version: Could not collect\nCMake version: version 3.28.3\nLibc version: glibc-2.39\n\nPython version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: Could not collect\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090\nNvidia driver version: 550.120\ncuDNN version: Could not collect\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: AuthenticAMD\nModel name: AMD Ryzen 9 9950X 16-Core Processor\nCPU family: 26\nModel: 68\nThread(s) per core: 2\nCore(s) per socket: 16\nSocket(s): 1\nStepping: 0\nFrequency boost: enabled\nCPU(s) scaling MHz: 67%\nCPU max MHz: 5752.0000\nCPU min MHz: 600.0000\nBogoMIPS: 8599.98\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d amd_lbr_pmc_freeze\nVirtualization: AMD-V\nL1d cache: 768 KiB (16 instances)\nL1i cache: 512 KiB (16 instances)\nL2 cache: 16 MiB (16 instances)\nL3 cache: 64 MiB (2 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-31\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected", "url": "https://github.com/pytorch/audio/issues/3899", "state": "open", "labels": [], "created_at": "2025-04-14T13:20:04Z", "updated_at": "2025-04-14T13:20:56Z", "comments": 0, "user": "LiChenda" }, { "repo": "pytorch/audio", "number": 3898, "title": "forcing other not allowed frequencies to be accepted", "body": "I'm trying to work with frequencies 
below 20 Hz, preferably at 18.98 Hz, but as the documentation says it only supports above 4000, 8000, and 9000. Even so, is there a way to force torch to work with my desired frequency? Please", "url": "https://github.com/pytorch/audio/issues/3898", "state": "open", "labels": [], "created_at": "2025-04-13T15:31:41Z", "updated_at": "2025-04-13T15:31:41Z", "comments": 0, "user": "andrewessel" }, { "repo": "pytorch/xla", "number": 8968, "title": "Alternative to torch.select_mask", "body": "## \u2753 Questions and Help\n\nMost of the time we can adapt routines to avoid graph recompilations; however, there are instances where this is a bit tricky. \n\nWhen computing a masked mean, we are currently using sum and valids as follows:\n\n```\nreplaced = input_tensor*is_valid\nsum_valid = replaced.sum()\nn_valid = is_valid.sum(dtype=input_tensor.dtype)\nreturn torch.nan_to_num(sum_valid / n_valid)\n```\n\nHere is_valid is 0 or 1 depending on whether the entry is in the dataset. \n\nThis effectively calculates the mean while ignoring zeros. This works well when the data is close to full in most examples. However, in some instances we insert sparse data that has a reduced is_valid that is consistent across the dataset. The result is that the 0 entries in the is_valid are reinforced, and when testing against the test set of sparse data, it won't predict the other entries. \n\nTo avoid this, we have historically used torch.select_mask, which only selects the non-zero entries so the gradients only backpropagate through those - basically we don't get the reinforced 0-0. \n\nI'm wondering if there is an alternative or workaround for torch.select_mask, as this increases computation time by ~6X because of the frequent recompilations.\n\nThank you again for this awesome tool and let me know if you have any questions. \n", "url": "https://github.com/pytorch/xla/issues/8968", "state": "closed", "labels": [ "question" ], "created_at": "2025-04-13T14:38:55Z", "updated_at": "2025-05-01T20:31:05Z", "user": "ttdd11" }, { "repo": "pytorch/TensorRT", "number": 3469, "title": "\u2753 [Question] How do you export a triton kernel with a model to a serialized engine that can be run in C++?", "body": "## \u2753 Question\n\n\nHow do you export a triton kernel with a model to a serialized engine that can be run in C++?\n\n## What you have already tried\nRead through the Python examples.\n\n\n\n## Environment\n\n> Build information about Torch-TensorRT can be found by turning on debug messages\n\n - PyTorch Version (e.g., 1.0):\n - CPU Architecture:\n - OS (e.g., Linux):\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\n - Build command you used (if compiling from source):\n - Are you using local sources or building from archives:\n - Python version:\n - CUDA version:\n - GPU models and configuration:\n - Any other relevant information:\n\n## Additional context\n\n\n", "url": "https://github.com/pytorch/TensorRT/issues/3469", "state": "open", "labels": [ "question" ], "created_at": "2025-04-11T16:53:33Z", "updated_at": "2025-12-12T01:58:55Z", "user": "cmgreen210" }, { "repo": "pytorch/torchtitan", "number": 1093, "title": "why is shard(1) in the colwiseparallel for lm head?", "body": "I found that ColwiseParallel here for the output linear layer has input_layout Shard(1). In that way, the input will be sharded across different devices in the sequence dimension, and the linear layer's output dimension (e.g., the vocab dimension) is also distributed? Is that something desired? 
Because on my understanding, it should be ColwiseParallel(input_layouts=Replicate(), output_layouts=Shard(-1) if loss_parallel else Replicate(), use_local_output=not loss_parallel)\n\n\n`\nparallelize_module(\n\n model,\n tp_mesh,\n {\n \"tok_embeddings\": RowwiseParallel(\n input_layouts=Replicate(),\n output_layouts=Shard(1),\n ),\n \"norm\": SequenceParallel(),\n \"output\": ColwiseParallel(\n input_layouts=Shard(1),\n output_layouts=Shard(-1) if loss_parallel else Replicate(),\n use_local_output=not loss_parallel,\n ),\n },\n\n", "url": "https://github.com/pytorch/torchtitan/issues/1093", "state": "closed", "labels": [], "created_at": "2025-04-11T11:20:02Z", "updated_at": "2025-04-11T11:46:04Z", "comments": 0, "user": "wimh966" }, { "repo": "pytorch/torchtitan", "number": 1092, "title": "Step Time Increase Leading to NCCL Timeout with FSDP2", "body": "**Description**\nI am encountering an issue when using fsdp2 where step time significantly increases after a certain number of steps, leading to NCCL timeouts. Initially, each step takes around 2 seconds, as shown in the earlier logs. However, after reaching step 1800, most processes experience a noticeable increase in step time except for one. This behavior causes errors such as:\n```\n[rank3]:[E410 14:46:34.629385703 ProcessGroupNCCL.cpp:684] [Rank 3] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\n[rank3]:[E410 14:46:34.629438241 ProcessGroupNCCL.cpp:698] [Rank 3] To avoid data inconsistency, we are taking the entire process down.\n[rank3]:[E410 14:46:34.630696293 ProcessGroupNCCL.cpp:1896] [PG ID 0 PG GUID 0(default_pg) Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=319682, OpType=_ALLGATHER_BASE, NumelIn=65667328, NumelOut=525338624, Timeout(ms)=100000) ran for 138169 milliseconds before timing out.\n```\n\nThe discrepancy in step time across processes seems to result in the NCCL operations timing out.\n\n**Observations**\nAt earlier steps (e.g., step 10), the step time is approximately 2.5 seconds across all processes.\n\nBy later steps (e.g., step 1800), most processes experience longer step times except for one process, leading to the timeout error.\n\nMy training configuration (in TOML) is as follows:\n\n```\n[metrics]\nlog_freq = 1\nenable_tensorboard = true\nsave_tb_folder = \"tb\"\n\n[optimizer]\nname = \"AdamW\"\nlr = 1.5e-4\n\n[training]\nbatch_size = 1\nseq_len = 4096\nwarmup_steps = 2000\nmax_norm = 1.0\nsteps = 15000\ndata_parallel_replicate_degree = 1\ndata_parallel_shard_degree = -1\ntensor_parallel_degree = 1\ncompile = false\n\n[experimental]\ncontext_parallel_degree = 1\npipeline_parallel_degree = 1\n\n[checkpoint]\nenable_checkpoint = true\nfolder = \"checkpoint\"\ninterval_type = \"steps\"\ninterval = 15000\nmodel_weights_only = false\nexport_dtype = \"float32\"\nasync_mode = \"disabled\"\n```\n\nAre there any recommended solutions to solve this?\n\n![Image](https://github.com/user-attachments/assets/7b22f8ef-d16b-497d-9689-ccbc0b15bf24)\n\n![Image](https://github.com/user-attachments/assets/eabbb96e-5920-4410-b0a8-242fee7347f3)", "url": "https://github.com/pytorch/torchtitan/issues/1092", "state": "closed", "labels": [ "question" ], "created_at": "2025-04-11T10:50:55Z", "updated_at": "2025-04-14T05:24:10Z", "user": "xhwang22" }, { "repo": "pytorch/torchtitan", "number": 1091, "title": "FSDP2 root level parameter 
management", "body": "Hi,\n\nI am curious about the design decision of managing both token embeddings and the final output layer at the root fsdp level instead of treating them as different layers like other transformer blocks?\n\nThis coupled management seems to unshard the final output layer too early and reshard the token embedding too late in forward for example.\n\nAlso for the optimization (see [here](https://github.com/pytorch/torchtitan/blob/main/torchtitan/models/llama3/parallelize_llama.py#L369)) that disables `reshard_after_forward` for the last transformer block layer, would it be more appropriate to perform this optimization on the final linear layer instead of the last transformer block?\n\nThanks!", "url": "https://github.com/pytorch/torchtitan/issues/1091", "state": "closed", "labels": [ "question", "module: fsdp" ], "created_at": "2025-04-11T01:54:57Z", "updated_at": "2025-07-29T02:40:22Z", "user": "dingqingy" }, { "repo": "pytorch/pytorch", "number": 150967, "title": "[MPS] `where`: silent incorrectness when cond is not contiguous", "body": "### \ud83d\udc1b Describe the bug\n\n\n```python\n\ndevice = \"mps\"\ndiff = torch.tensor([[True, True], [True, True]], dtype=torch.bool)\ndiff = diff.T\ntarget = torch.tensor([[0, 0], [0, 1]])\n\nrcpu = torch.where(diff, target, 0)\n\ndiffmps = diff.to(device)\ntargetmps = target.to(device)\n\nrmps = torch.where(diffmps, targetmps, 0)\n\nprint(rcpu)\nprint(rmps)\n```\n\n```\ntensor([[0, 0],\n [0, 1]])\ntensor([[0, 0],\n [0, 0]], device='mps:0')\n```\n\n\n\n### Versions\n\nNightly\n\n```\nPyTorch version: 2.8.0a0+git00c921c\nIs debug build: True\nCUDA used to build PyTorch: None\nROCM used to build PyTorch: N/A\n\nOS: macOS 13.7.1 (arm64)\nGCC version: Could not collect\nClang version: 18.1.5\nCMake version: version 4.0.0\nLibc version: N/A\n\nPython version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:00:33) [Clang 13.0.1 ] (64-bit runtime)\nPython platform: macOS-13.7.1-arm64-arm-64bit\nIs CUDA available: False\nCUDA runtime version: No CUDA\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: No CUDA\nNvidia driver version: No CUDA\ncuDNN version: No CUDA\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: False\n\nCPU:\nApple M1 Max\n```\n\ncc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen", "url": "https://github.com/pytorch/pytorch/issues/150967", "state": "closed", "labels": [ "triaged", "module: correctness (silent)", "module: mps" ], "created_at": "2025-04-09T23:13:38Z", "updated_at": "2025-04-13T20:44:52Z", "user": "qqaatw" }, { "repo": "pytorch/torchtitan", "number": 1081, "title": "Torch.compile and TP during multiresolution Training", "body": "is it correct to assume that we should only enable torch.compile in single resolution training or when we have the same sequence lengths to avoid recompiles and slow down?", "url": "https://github.com/pytorch/torchtitan/issues/1081", "state": "open", "labels": [ "question", "module: torch.compile" ], "created_at": "2025-04-09T18:08:41Z", "updated_at": "2025-04-10T15:05:57Z", "user": "nighting0le01" }, { "repo": "pytorch/pytorch", "number": 150891, "title": "[ONNX] How to export Llama4", "body": "### \ud83d\udc1b Describe the bug\n\nI am trying to do an onnx export for the Llama 4 Scout model but it fails saying:\n`RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. 
Here, received an input of unsupported type: DynamicCache`\n\nThe error traceback:\n```\nTraceback (most recent call last):\n File \"/proj/work/sdey/examples/llama4/llama4_scout.py\", line 80, in \n torch.onnx.export(\n File \"/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/__init__.py\", line 375, in export\n export(\n File \"/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py\", line 502, in export\n _export(\n File \"/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py\", line 1564, in _export\n graph, params_dict, torch_out = _model_to_graph(\n File \"/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py\", line 1113, in _model_to_graph\n graph, params, torch_out, module = _create_jit_graph(model, args)\n File \"/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py\", line 997, in _create_jit_graph\n graph, torch_out = _trace_and_get_graph_from_model(model, args)\n File \"/proj/work/sdey//venv/lib/python3.10/site-packages/torch/onnx/utils.py\", line 904, in _trace_and_get_graph_from_model\n trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(\n File \"/proj/work/sdey/venv/lib/python3.10/site-packages/torch/jit/_trace.py\", line 1500, in _get_trace_graph\n outs = ONNXTracedModule(\n File \"/proj/work/sdey/venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1736, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File \"/proj/work/sdey/venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/proj/work/sdey/venv/lib/python3.10/site-packages/torch/jit/_trace.py\", line 139, in forward\n graph, out = torch._C._create_graph_by_tracing(\n File \"/proj/work/sdey/venv/lib/python3.10/site-packages/torch/jit/_trace.py\", line 133, in wrapper\n out_vars, _ = _flatten(outs)\nRuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. 
Here, received an input of unsupported type: DynamicCache\n```\n\nThis occurs for higher version of `transformers > 4.44.2`\n\nCode to reproduce:\n```\nimport torch\nfrom transformers import AutoProcessor,AutoModelForImageTextToText, pipeline\n\nprocessor = AutoProcessor.from_pretrained(\"meta-llama/Llama-4-Scout-17B-16E\")\nmodel = AutoModelForImageTextToText.from_pretrained(\"meta-llama/Llama-4-Scout-17B-16E\",torch_dtype=torch.bfloat16)\n\n\nurl1 = \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg\"\nurl2 = \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png\"\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"url\": url1},\n {\"type\": \"image\", \"url\": url2},\n {\"type\": \"text\", \"text\": \"Can you describe how these two images are similar, and how they differ?\"},\n ]\n },\n]\n\ninputs = processor.apply_chat_template(\n messages,\n add_generation_prompt=True,\n tokenize=True,\n return_dict=True,\n return_tensors=\"pt\",\n)\n\ntorch.onnx.export(\n model,\n (inputs[\"input_ids\"], inputs[\"pixel_values\"], inputs[\"attention_mask\"]),\n \"llama4_scout.onnx\",\n do_constant_folding=False,\n training= torch.onnx.TrainingMode.EVAL,\n export_params=False)\n```\n\n### Versions\n\nPyTorch version: 2.5.1\nIs debug build: False\nCUDA used to build PyTorch: Could not collect\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.4 LTS (aarch64)\nGCC version: (GCC) 13.3.0\nClang version: 14.0.0-1ubuntu1.1\nCMake version: version 3.29.6\nLibc version: glibc-2.35\n\nPython version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)\nPython platform: Linux-6.5.0-1019-nvidia-64k-aarch64-with-glibc2.35\nIs CUDA available: False\nCUDA runtime version: 11.5.119\nCUDA_MODULE_LOADING set to: N/A\nGPU models and configuration: GPU 0: NVIDIA GH200 480GB\nNvidia driver version: 550.90.07\ncuDNN version: Probably one of the following:\n/usr/lib/aarch64-linux-gnu/libcudnn.so.9.3.0\n/usr/lib/aarch64-linux-gnu/libcudnn_adv.so.9.3.0\n/usr/lib/aarch64-linux-gnu/libcudnn_cnn.so.9.3.0\n/usr/lib/aarch64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0\n/usr/lib/aarch64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0\n/usr/lib/aarch64-linux-gnu/libcudnn_graph.so.9.3.0\n/usr/lib/aarch64-linux-gnu/libcudnn_heuristic.so.9.3.0\n/usr/lib/aarch64-linux-gnu/libcudnn_ops.so.9.3.0\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: aarch64\nCPU op-mode(s): 64-bit", "url": "https://github.com/pytorch/pytorch/issues/150891", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2025-04-09T00:11:49Z", "updated_at": "2025-11-13T10:13:08Z", "user": "srijanie03" }, { "repo": "pytorch/xla", "number": 8948, "title": "Torch-XLA not compatible with static python", "body": "## \u2753 Questions and Help\nI am trying to use Torch-XLA v2.3.0 but it fails with:\n```\nline 7, in \n import _XLAC\nImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n```\n\nI noticed this message [here](https://github.com/pytorch/xla/blob/9e23ca853331aa229dcdba2473d20ca5af2d620d/docs/source/contribute/bazel.md?plain=1#L71):\n```\nBazel brings in [pybind11](https://github.com/pybind/pybind11) embeded\npython and links against it to provide `libpython` to the plugin using\nthis mechanism. 
Python headers are also sourced from there instead of\ndepending on the system version. These are satisfied from the\n`\"@pybind11//:pybind11_embed\"`, which sets up compiler options for\nlinking with `libpython` transitively.\n```\nwhich suggests XLA is pulling in a two year old version of pybind11_bazel, which gets its python binary/library/headers/paths by inspecting the copy of the interpreter installed on the operating system. During this probing pybind11_bazel explicitly asks the python interpreter to give it the linker flags it would need to embed the interpreter in its code, leading to that dependency. This renders it unusable with static python.\n\nIs there a way to make this work/could you provide a different build of Torch-XLA which is compatible with static python?", "url": "https://github.com/pytorch/xla/issues/8948", "state": "open", "labels": [ "question" ], "created_at": "2025-04-07T18:25:43Z", "updated_at": "2025-04-23T14:32:47Z", "user": "drewjenks01" }, { "repo": "pytorch/pytorch", "number": 150741, "title": "how to install pytorch with cuda 12.2 and py3.12", "body": "### \ud83d\udc1b Describe the bug\n\nI wanna know how to install pytorch with CUDA12.2\n\n### Versions\n\nI used the following command , and many issue occured\n\nconda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia", "url": "https://github.com/pytorch/pytorch/issues/150741", "state": "closed", "labels": [], "created_at": "2025-04-06T14:33:59Z", "updated_at": "2025-04-07T14:34:48Z", "user": "goactiongo" }, { "repo": "pytorch/data", "number": 1471, "title": "torchdata or torchdata-contrib?", "body": "my team has been implementing quite several utilities. some are close to core features, some other are more advanced and utilities. for example, their class names and features are like:\n\n\n```python\nclass RoundRobinNode(BaseNode[T]):\n \"\"\"A node that cycles through multiple datasets in a round-robin way.\n```\n\n```python\nclass FileListNode(BaseNode[Dict]):\n \"\"\"Node that lists files from any supported filesystem (local, S3) matching specified patterns.\n\n Uses fsspec to provide universal file access capabilities for both local and remote files.\n\n Features:\n - Lists files from supported filesystems (local, S3)\n - Supports glob patterns for file matching\n - Maintains state for checkpointing and resumption\n```\n\n```python\nclass FileReaderNode(BaseNode[Dict]):\n \"\"\"Universal node that reads file contents from any supported filesystem.\n\n Uses smart_open to support local files, S3, HTTP, and more file systems.\n\n```\n\n ```python\nclass TextStreamDecodeNode(BaseNode[Dict]):\n \"\"\"Node that streams text files line by line from any source.\n\n This node combines functionality of file reading and line-by-line processing,\n supporting both local and remote (S3, HTTP, etc.) 
files via smart_open.\n\n Features:\n - Streams files line-by-line (memory efficient)\n - Supports local files, S3, HTTP, and more\n - Handles compressed files (.gz, .bz2) transparently\n - Maintains state for checkpointing and resumption\n - Preserves metadata from source nodes\n```\n\n```python\nclass HuggingFaceDatasetStreamNode(BaseNode[dict]):\n \"\"\"\n Node that streams examples from a HuggingFace dataset.\n\n Output format:\n {\n \"data\": {...}, # Original dataset item\n \"metadata\": {\n \"dataset_name\": \"squad\",\n \"split\": \"train\",\n \"index\": 42\n }\n }\n\n Input: None (configured with dataset name and split at initialization)\n Output: Dict containing example data and metadata\n```\n\n```python\nclass JsonlStreamNode(TextStreamDecodeNode):\n \"\"\"Node that streams JSONL files and parses each line as JSON.\n\n This node extends TextStreamDecodeNode to add JSON parsing for each line.\n It maintains the same state management and streaming capabilities while adding\n JSONL-specific processing.\n```\n\nand some more.\n\nconservatively, i'd say these can be part of, say, `torchdata-contrib`. but i'd like to hear from the maintainers. where would you suggest drawing the line? any other suggestions would be great, too. ", "url": "https://github.com/meta-pytorch/data/issues/1471", "state": "open", "labels": [], "created_at": "2025-04-06T01:11:33Z", "updated_at": "2025-05-12T21:38:37Z", "comments": 2, "user": "keunwoochoi" }, { "repo": "pytorch/torchtitan", "number": 1058, "title": "Issue of using fully_shard (FSDP2) for Huggingface model: Cannot copy out of meta tensor; no data!", "body": "Dear community,\n\nThanks for introducing FSDP2 to Pytorch. I am meeting with an issue using fully_shard for Huggingface model. Just want to know if you have any insights into this issue.\n\nThe code is inherited from [#743 ](https://github.com/pytorch/torchtitan/issues/743)\n\n```\nimport os\n\nimport torch\nfrom torch.distributed import init_process_group, destroy_process_group\nfrom torch.distributed._composable.fsdp import fully_shard\nfrom transformers import AutoConfig, AutoModelForCausalLM\nfrom transformers.models.gpt_neox.modeling_gpt_neox import GPTNeoXLayer\nfrom accelerate import init_empty_weights, load_checkpoint_and_dispatch\n\ndef get_num_params(model: torch.nn.Module, exclude_embedding: bool = False) -> int:\n num_params = sum(p.numel() for p in model.parameters())\n if exclude_embedding:\n num_params -= model.tok_embeddings.weight.numel()\n return num_params\n\ndef setup(local_rank, world_size):\n device = torch.device(f\"cuda:{local_rank}\")\n torch.cuda.set_device(device)\n init_process_group(\"nccl\", rank=local_rank, world_size=world_size)\n\ndef load():\n local_rank = int(os.environ[\"LOCAL_RANK\"])\n world_size = int(os.environ[\"WORLD_SIZE\"])\n setup(local_rank, world_size)\n\n model_name = \"EleutherAI/pythia-2.8b\"\n config = AutoConfig.from_pretrained(model_name)\n \n with init_empty_weights():\n model = AutoModelForCausalLM.from_config(config)\n \n for module in model.modules():\n if isinstance(module, GPTNeoXLayer):\n fully_shard(module)\n \n model = fully_shard(model, reshard_after_forward=True)\n model.to_empty(device='cuda')\n\n\nif __name__ == \"__main__\":\n load()\n```\n\n\nThe error is below:\n\n```\n[rank0]: Traceback (most recent call last):\n[rank0]: File \"/workspace/NCCL/report_issue.py](/NCCL/report_issue.py)\", line 41, in \n[rank0]: load()\n[rank0]: File \"/workspace/NCCL/report_issue.py](/NCCL/report_issue.py)\", line 34, in load\n[rank0]: 
fully_shard(module)\n[rank0]: File \"/usr/local/lib/python3.10/dist-packages/torch/distributed/_composable/contract.py\", line 107, in wrapper\n[rank0]: updated = func(module, *args, **kwargs)\n[rank0]: File \"/usr/local/lib/python3.10/dist-packages/torch/distributed/_composable/fsdp/fully_shard.py\", line 114, in fully_shard\n[rank0]: _move_states_to_device(params, buffers, device, mesh_info)\n[rank0]: File \"/usr/local/lib/python3.10/dist-packages/torch/distributed/_composable/fsdp/_fsdp_init.py\", line 143, in _move_states_to_device\n[rank0]: tensor.data = [tensor.to](http://tensor.to/)(device)\n[rank0]: NotImplementedError: Cannot copy out of meta tensor; no data!\n```\n\n\nPython command:\n`torchrun --nnodes=1 --nproc_per_node=8 reproduce.py`", "url": "https://github.com/pytorch/torchtitan/issues/1058", "state": "closed", "labels": [ "question", "module: checkpoint", "module: distributed_state_dict" ], "created_at": "2025-04-05T01:48:49Z", "updated_at": "2025-04-15T23:08:01Z", "user": "mingdianliu" }, { "repo": "pytorch/xla", "number": 8940, "title": "User built torch-xla wheel fails on import", "body": "## \u2753 Questions and Help\nAfter following the build instructions in CONTRIBUTING.md, and then running `python setup.py bdist_wheel` inside of `pytorch/xla`, a wheel is generated for `torch-xla`\n\nAfter installing that wheel in the environment of a different project this error appears upon import:\n```\nTraceback (most recent call last):\n ...\n File \"my_project/venv/lib/python3.10/site-packages/torch_xla/__init__.py\", line 20, in \n import _XLAC\nImportError: my_project/venv/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN5torch4lazy13MetricFnValueB5cxx11Ed\n```\n\nAll help is greatly appreciated.", "url": "https://github.com/pytorch/xla/issues/8940", "state": "closed", "labels": [ "question", "build" ], "created_at": "2025-04-04T20:14:21Z", "updated_at": "2025-04-09T14:38:59Z", "user": "LPanosTT" }, { "repo": "pytorch/torchtitan", "number": 1055, "title": "Is the currnet configuration system over-engineered?", "body": "It seems that a training job in TorchTitan is currently defined using a combination of TOML and Python.\n\nWhen users launch a training job, they are expected to provide a TOML file that specifies the model.name:\n\nhttps://github.com/pytorch/torchtitan/blob/351e9fb40fe345dd8a7fb3403881328b7cc0b21b/torchtitan/models/llama/train_configs/debug_model.toml#L23-L24\n\nAt the same time, the referenced model.name must already be registered via a TrainSpec object:\n\nhttps://github.com/pytorch/torchtitan/blob/351e9fb40fe345dd8a7fb3403881328b7cc0b21b/torchtitan/models/llama/__init__.py#L63-L65\n\nMy first question is: Why not move fields of `JobConfig` (serialized to the TOML file) into `TrainSpec`? That would eliminate the need for a separate `JobConfig` class and simplify the interface.\n\nMoreover, the registration mechanism itself may not be necessary. 
In AXLearn, another LLM training framework, users can launch a training job like this (simplified for conceptual clarity):\n\n```shell\naxlearn.train --experiment-config-model=text.gpt --experiment-config-name=llama3b\n```\n\nThen the trainer simply loads the config dynamically:\n\n```python\nem = importlib.import_module(\"experiments.\" + \"text.gpt\")\ntrainer_config: Trainer.Config = em.named_trainer_config(\"llama3b\")\n```\n\nPlease be aware that all configuration information (corresponding to JobConfig and TrainSpec) is returned by the invocation of the function `experiments.text.gpt.named_trainer_config(\"llama3b\")`.\n\nThis approach eliminates the need for explicit registration logic such as:\n\nhttps://github.com/pytorch/torchtitan/blob/351e9fb40fe345dd8a7fb3403881328b7cc0b21b/torchtitan/config_manager.py#L770-L771\n\nand\n\nhttps://github.com/pytorch/torchtitan/blob/351e9fb40fe345dd8a7fb3403881328b7cc0b21b/torchtitan/config_manager.py#L784-L785\n\nbecause the configuration modules can be imported dynamically from `experiments/text/gpt/*.py`.", "url": "https://github.com/pytorch/torchtitan/issues/1055", "state": "open", "labels": [ "question" ], "created_at": "2025-04-03T22:46:39Z", "updated_at": "2025-04-04T01:21:27Z", "user": "wangkuiyi" }, { "repo": "pytorch/torchtitan", "number": 1054, "title": "Clarify PP split point documentation.", "body": "### Bug description\n\nThe current documentation is as follows. \n\n```\n self.parser.add_argument(\n \"--parallelism.pipeline_parallel_split_points\",\n type=string_list,\n nargs=\"+\",\n default=[],\n help=\"\"\"\n Specify comma-separated names of modules to use as the beginning of a split point.\n\n e.g. \"layers.0,layers.2\" will cause the model to be split into 3 stages,\n the first containing all the layers up to layers.0,\n the second containing layers.0 and up to layers.2,\n the third containing layers.2 and all the remaining layers.\n\n Note: fully-automated splitting may be enabled in the future,\n but currently the split points must be specified manually.\"\"\",\n )\n```\n\nThe above description seems to indicate that layers.0 is present in both the first and second stages, and layers.2 is present in both the second and third stages. Can someone please clarify the inclusivity? \n\n\n### Versions\n\nhead of master", "url": "https://github.com/pytorch/torchtitan/issues/1054", "state": "closed", "labels": [ "question" ], "created_at": "2025-04-03T22:36:08Z", "updated_at": "2025-08-21T03:09:16Z", "user": "githubsgi" }, { "repo": "pytorch/serve", "number": 3409, "title": "Why Use TorchScript Format Models?", "body": "When customizing handler.py, we can load any format of model in the initialize function without needing to package the model into a .mar file. Why do the tutorials recommend converting the model to TorchScript format and packaging it together with handler.py into a .mar file?", "url": "https://github.com/pytorch/serve/issues/3409", "state": "open", "labels": [], "created_at": "2025-04-03T09:00:11Z", "updated_at": "2025-04-03T09:00:11Z", "comments": 0, "user": "CongSuxu" }, { "repo": "pytorch/torchtitan", "number": 1044, "title": "How are the TP, CP, and PP marked in PyTorch profiler traces?", "body": "How are TP, CP, and PP labelled in PyTorch profiler traces? FSDP appears to be clearly marked. 
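\n\nA minimal sketch, assuming one labels such regions manually (the region name \"tp_region\" below is hypothetical, not something the trainer emits on its own): `torch.profiler.record_function` makes a wrapped section show up under its own name in the exported trace and in the key_averages table, which is one way to make TP/CP/PP code paths identifiable.\n\n```python\n# Hypothetical illustration: give a region an explicit name so it is searchable in the trace.\nimport torch\nfrom torch.profiler import ProfilerActivity, profile, record_function\n\nmodel = torch.nn.Linear(16, 16)\nx = torch.randn(4, 16)\n\nwith profile(activities=[ProfilerActivity.CPU]) as prof:\n    with record_function(\"tp_region\"):  # e.g. wrap a tensor-parallel block here\n        y = model(x)\n\nprint(prof.key_averages().table(sort_by=\"cpu_time_total\", row_limit=5))\n```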
", "url": "https://github.com/pytorch/torchtitan/issues/1044", "state": "open", "labels": [], "created_at": "2025-04-02T22:27:30Z", "updated_at": "2025-04-03T18:04:41Z", "comments": 1, "user": "githubsgi" }, { "repo": "pytorch/pytorch", "number": 150523, "title": "[Question] How to load extremely large model checkpoint for FSDP wrapped model?", "body": "Hello,\n\nWe tried to train DeepSeek v3 model with the parallelism of `FSDP+Expert Parallel`. It works well with random initialized weights. But if we want do SFT or RLHF, we need to load the 670B model weights from https://huggingface.co/deepseek-ai/DeepSeek-V3-0324/tree/main\n\nSo, does PyTorch has ways to load extremely large model weight checkpoint for FSDP wrapped model?\n\ncc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o", "url": "https://github.com/pytorch/pytorch/issues/150523", "state": "closed", "labels": [ "oncall: distributed", "triaged", "module: fsdp" ], "created_at": "2025-04-02T08:05:12Z", "updated_at": "2025-05-08T16:28:40Z", "user": "zigzagcai" }, { "repo": "pytorch/torchtitan", "number": 1035, "title": "Profiling only a select group of ranks", "body": "Is it possible to profile only a select group of ranks. Becomes hard to handle the large number of files when there are many ranks. I understand that there could be imbalances when only a few ranks are profiled. Do not know if there are ways to profile , but not dump the profile output file. ", "url": "https://github.com/pytorch/torchtitan/issues/1035", "state": "open", "labels": [], "created_at": "2025-03-31T20:02:30Z", "updated_at": "2025-08-21T03:10:16Z", "comments": 3, "user": "githubsgi" }, { "repo": "pytorch/xla", "number": 8906, "title": "Profiler and `use_spmd()` order.", "body": "## \ud83d\udcda Documentation\n\nIn #8057, [@zjjott mentioned](https://github.com/pytorch/xla/issues/8057#issuecomment-2408428441) that `xp.start_server(...)` should be used after `use_spmd()`. I didn't find it written anywhere in the documentation. So, is this actually true? If so, we should write this down somewhere.\n\ncc @miladm @tengyifei @bhavya01 \n\n", "url": "https://github.com/pytorch/xla/issues/8906", "state": "open", "labels": [ "distributed", "documentation" ], "created_at": "2025-03-31T15:34:03Z", "updated_at": "2025-03-31T21:29:39Z", "comments": 6, "user": "ysiraichi" }, { "repo": "pytorch/torchtitan", "number": 1034, "title": "Context parallel on Turing GPUs?", "body": "As the title suggests, is torchtitan CP supported on Turing GPU?\n\nI got the error `RuntimeError: No available kernel. Aborting execution.` using the default `run_train.sh` script with CP changed to 2.\n\nI know Turing GPUs don't have flash attention support yet, but I read the torchtitan CP blog post [here](https://discuss.pytorch.org/t/distributed-w-torchtitan-breaking-barriers-training-long-context-llms-with-1m-sequence-length-in-pytorch-using-context-parallel/215082), and it seems like the memory-efficient attention backend would work with CP? \n\nIf this is the case, could you share how to enable this backend in torchtitan? 
I tried to wrap this [line](https://github.com/pytorch/torchtitan/blob/main/torchtitan/models/llama/model.py#L258) with `with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):`, but the error persists.\n\nThanks", "url": "https://github.com/pytorch/torchtitan/issues/1034", "state": "open", "labels": [ "question", "module: context parallel" ], "created_at": "2025-03-31T09:36:47Z", "updated_at": "2025-08-21T03:11:02Z", "user": "dingqingy" }, { "repo": "pytorch/tutorials", "number": 3308, "title": "\ud83d\udca1 [REQUEST] - Pruning tutorial: clarify how to achieve comparable performance to non-pruned?", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\n\nIn the pruning tutorial https://pytorch.org/tutorials/intermediate/pruning_tutorial.html,\nthe method of pruning that is implemented appears to be completely random. \"In this example, we will prune at random 30% of the connections...\"\n\nBut isn't the goal of pruning produce a smaller network with nearly the same capabilities as the original? \nI don't see anything in the tutorial about checking the performance of the new network, or how to intelligently prune the network in order to achieve the goal of pruning. The tutorial takes a randomly-initialized network, randomly prunes it, and then... \n\n...it just suddenly ends...?\n\nIs the idea that we're supposed to just keep iteratively trying random pruning until something finally works ok? That sounds unbearably undirected and inefficient. Did I miss something crucial while reading the tutorial?\n\n**Requesting:** Clarification on how to achieve the \"goal\" of pruning: intelligently pruning the network to achieve comparable capabilities.\nJust telling me I can define my own pruning function isn't enough, because...it's a tutorial, I don't know what such a function should entail. \n\n\n\n### Existing tutorials on this topic\n\nhttps://pytorch.org/tutorials/intermediate/pruning_tutorial.html\n\n### Additional context\n\n\"In this example, we will prune at random 30% of the connections \"\n\nWhy/how will that help achieve the goal of pruning? Won't it just randomly turn off parts of the network with no regard to its effect on performance? (This application seems more like Dropout than actual pruning.)", "url": "https://github.com/pytorch/tutorials/issues/3308", "state": "open", "labels": [], "created_at": "2025-03-30T15:49:41Z", "updated_at": "2025-03-30T15:49:41Z", "user": "drscotthawley" }, { "repo": "pytorch/xla", "number": 8900, "title": "Reset Peak Memory Usage", "body": "## \ud83d\ude80 Feature\n<!-- A clear and concise description of the feature proposal -->\nProvides a method to reset peak used memory size to current memory being used or 0.\n\n## Motivation\n\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\n\nPyTorch/XLA offers the function xm.get_memory_info which gives you details about memory usage, including bytes_used and peak_bytes_used.\n\nWhen you run several computational graphs one after another, like A, B, and C, and if graph A uses more memory than graph B, it becomes tricky to accurately determine the memory footprint of B. It would be really useful to have a way to reset the peak memory usage after A has finished running.\n\nA practical example of this is in vLLM. 
The process of loading a model (let's call this step A) often consumes more memory than the actual size of the model's weights due to how the XLA compiler works. Then, when you want to profile the memory usage during the model's execution (step B), the peak_bytes_used will reflect the higher memory usage from the loading phase. This makes the memory profiling for the execution phase less meaningful if you can't reset the peak memory measurement after the model has been loaded.\n\n", "url": "https://github.com/pytorch/xla/issues/8900", "state": "open", "labels": [ "enhancement" ], "created_at": "2025-03-28T05:29:44Z", "updated_at": "2025-03-28T05:29:44Z", "comments": 0, "user": "yaochengji" }, { "repo": "pytorch/torchtitan", "number": 1027, "title": "Linear layer weights are in float32 ?", "body": "### Bug description\n\nI am seeing Linear layer weights in float32 ( wq.weight.dtype torch.float32 ) even after setting the following. \n\nmixed_precision_param = \"bfloat16\"\nmixed_precision_reduce = \"bfloat16\"\n\nIs that expected or I hit upon a bug ? \n\n### Versions\n\n1. Yes. \n\n2. See the description section . \n3. It is easy to check that by adding logger lines . See below. \n```\n # Non-PP forward / backward\n with self.train_context(optional_context_parallel_ctx):\n assert len(model_parts) == 1\n logger.info(f\"Linear wq.weight.dtype {model_parts[0].layers['0'].attention.wq.weight.dtype}\")\n last_layer = str(len(model_parts[0].layers) -1 )\n logger.info(f\"Linear wq.weight.dtype {model_parts[0].layers[last_layer].attention.wq.weight.dtype}\")\n pred = model_parts[0](inputs)\n loss = self.train_spec.loss_fn(pred, labels)\n # pred.shape=(bs, seq_len, vocab_size)\n # need to free to before bwd to avoid peaking memory\n del pred\n loss.backward()\n```", "url": "https://github.com/pytorch/torchtitan/issues/1027", "state": "closed", "labels": [ "question" ], "created_at": "2025-03-28T01:35:22Z", "updated_at": "2025-05-08T21:15:49Z", "user": "githubsgi" }, { "repo": "pytorch/torchtitan", "number": 1026, "title": "Any plan to add Llama 1B and/or 3B models ?", "body": "Wondering if there is any plan to add the 1B and/or 3B models to the TorchTitan set of example models ? It is probably fairly straight forward to do that , if I am not missing anything, Another toml file and adds at a few places. The optimizer and lr_scheduler section may requires some trial and error. ", "url": "https://github.com/pytorch/torchtitan/issues/1026", "state": "open", "labels": [], "created_at": "2025-03-28T01:21:59Z", "updated_at": "2025-04-01T18:29:00Z", "comments": 4, "user": "githubsgi" }, { "repo": "pytorch/xla", "number": 8899, "title": "The Stable Diffusion notebook is broken.", "body": "## \ud83d\udcda Documentation\n\nThe README points to a [Stable Diffusion notebook](https://github.com/pytorch/xla/blob/master/contrib/kaggle/pytorch-xla-2-0-on-kaggle.ipynb) to help a user get started. However, this notebook cannot be run successfully:\n\n1. The `import torch_xla` step results in an error:\n```\noduleNotFoundError Traceback (most recent call last)\n/tmp/ipykernel_27/3499457412.py in <module>\n----> 1 import torch_xla\n 2 torch_xla.__version__\n\nModuleNotFoundError: No module named 'torch_xla'\n```\n\nThis can be fixed by\n```\n!pip install torch~=2.6.0 'torch_xla[tpu]~=2.6.0' \\\n -f https://storage.googleapis.com/libtpu-releases/index.html \\\n -f [https://storage.googleapis.com/libtpu-wheels/index.html](https://storage.googleapis.com/libtpu-wheels/index.html%60)\n```\n\n2. 
Later, the `image = pipeline(prompt, callback=lambda *args: xm.mark_step(), generator=generator).images[0]` step failed with\n```\n/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:894: FutureWarning: `callback` is deprecated and will be removed in version 1.0.0. Passing `callback` as an input argument to `__call__` is deprecated, consider using `callback_on_step_end`\n deprecate(\n\u2007\u20072%\n\u20071/50\u2007[01:16<1:02:27,\u200776.48s/it]\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\n[<ipython-input-10-049c86b52afd>](https://localhost:8080/#) in <cell line: 0>()\n 2 # xm.mark_step compiles and executes the graph after each iteration.\n 3 # The first few steps will be much slower than the rest.\n----> 4 image = pipeline(prompt, callback=lambda *args: xm.mark_step(), generator=generator).images[0]\n 5 image\n\n1 frames\n[/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py](https://localhost:8080/#) in __call__(self, prompt, height, width, num_inference_steps, timesteps, sigmas, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, **kwargs)\n 1068 if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):\n 1069 progress_bar.update()\n-> 1070 if callback is not None and i % callback_steps == 0:\n 1071 step_idx = i // getattr(self.scheduler, \"order\", 1)\n 1072 callback(step_idx, t, latents)\n\nTypeError: unsupported operand type(s) for %: 'int' and 'NoneType'\n```", "url": "https://github.com/pytorch/xla/issues/8899", "state": "open", "labels": [ "bug", "documentation" ], "created_at": "2025-03-27T22:38:23Z", "updated_at": "2025-11-13T00:44:20Z", "comments": 0, "user": "zhanyong-wan" }, { "repo": "pytorch/vision", "number": 9008, "title": "Torchvision bounding boxes do not match the images, becuase the bboxes are from the pre-cropped, pre-resized version.", "body": "### \ud83d\udc1b Describe the bug\n\nCelebA bounding boxes were calculated on the so called \"in-the-wild\" images, prior to cropping and resizing. But torchvision.datasets returns the version that is cropped to 178x218. So for example, on the ninth image, the bbox is outside the image size. \n\nCODE TO REPRO\n\n```\nfrom torchvision import datasets\n\nceleba = datasets.CelebA(root=\"./celeba\", target_type=\"bbox\", download=True, split=\"train\")\n\nprint(celeba[8])\n\n```\n\n(<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=178x218>,\n tensor([600, 274, 343, 475]))\n\n\n\n### Versions\n\ncollect_env.py crashed on me but here's the version:\n\n```$ uv pip show torchvision\nUsing Python 3.12.8 environment at: XXX\nName: torchvision\nVersion: 0.21.0\nLocation: XXX\nRequires: numpy, pillow, torch\nRequired-by:\n```", "url": "https://github.com/pytorch/vision/issues/9008", "state": "open", "labels": [], "created_at": "2025-03-27T17:34:15Z", "updated_at": "2026-01-05T16:15:54Z", "comments": 6, "user": "yaoshiang" }, { "repo": "pytorch/xla", "number": 8884, "title": "BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. 
Error", "body": "Hello,\nI'm trying to train my Transformer Encoder-Decoder model on google Colab `v2-8` TPUs. My code like this:\n\n```python\nimport torch.distributed as dist\nimport torch_xla.core.xla_model as xm\nimport torch_xla.distributed.parallel_loader as pl\nimport torch_xla.distributed.xla_multiprocessing as xmp\nimport torch_xla.distributed.xla_backend\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nimport torch_xla as xla\n\ndef _mp_fn(rank,world_size):\n dist.init_process_group(\"xla\",init_method=\"xla://\")\n model = Transformer(TransformerEncoderConfig(),TransformerDecoderConfig())\n model.to(xm.xla_device())\n\n ddp_model = DDP(model,gradient_as_bucket_view=True)\n optimizer = torch.optim.AdamW(ddp_model.parameters(),lr=0.00001)\n criterion = torch.nn.CrossEntropyLoss()\n xla_train_loader = pl.MpDeviceLoader(dataloader,xm.xla_device())\n for fens,moves in xla_train_loader:\n with xla.step():\n fens,moves = fens.to(xla.device()),moves.to(xla.device())\n inputs = moves[:,:-1]\n labels = moves[:,1:]\n optimizer.zero_grad()\n outputs = ddp_model(fens,moves)\n loss = criterion(outputs.permute(0,2,1),labels)\n loss.backward()\n xm.optimizer_step(optimizer)\n xm.mark_step()\n\nif __name__ == \"__main__\":\n xla.launch(_mp_fn)\n```\n\nand this code raises the following error: `BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.`\n\nWhat is the problem, is it due to dataloader behaviour or wrong model assignment to device? Could you help me with this if i achieve this i want to train the model with whole dataset in google TPU Research Program. So i'm open to any suggestion with working with TPUs.", "url": "https://github.com/pytorch/xla/issues/8884", "state": "open", "labels": [ "bug", "needs reproduction" ], "created_at": "2025-03-25T23:52:17Z", "updated_at": "2025-03-26T21:38:26Z", "comments": 2, "user": "oayk23" }, { "repo": "pytorch/xla", "number": 8883, "title": "[RFC] Use shard_as to improve sharding and avoid OOM", "body": "# \ud83d\ude80 Use shard_as to improve sharding and avoid OOM\n\n## Summary\n\n2D sharding propagation is harder than 1D sharding propagation due to\nincompatible sharding. This problem is worse in a `scan` / XLA `While` op, and\nthe <code>[shard_as][shard_as]</code> GSPMD feature seems to help.\n\n\n## Motivation\n\nThis proposal is primarily to improve the sharding propgation of\n`torch_xla.experimental.scan`.\n\nWhen the decoder layer is wrapped in an XLA `While` op through\n`torch_xla.experimental.scan`, Llama 3 8B trains a-okay with gbs 16 on a v6e-8\nTPU, but we still get a OOM when scaling to Llama 3.1 405B on v6e-256 with 2D\n(FSDP + TP) sharding.\n\nBy inspecting the memory profiles, we can infer the following:\n\n* The OOM occurs during the `scan` in the backward pass (judging from the\n referenced body computation)\n* The OOM occurs because the compiler emits a convolution (convolution.171)\n whose output shape is [1, 4K, 16K].\n* That output tensor is then all-reduced over the FSDP axis (judging from the\n replica groups), keeping the shape unchanged.\n* The all-reduced tensor gets written to a `[126, 4K, 16K]` stacked output\n tensor. This tensor is too large to materialize in a single chip so\n compilation fails. 
Note that 126 is the number of layers in Llama 3.1 405B.\n\nWe deduced that the convolution before the all-reduce is computing the gradient\nfor the weight tensor of the\n<code>[o_proj operation in self attention][o_proj]</code>:\n\n```python\n # Code snippet of Llama self attention\n attn_output = attn_output.transpose(1, 2).contiguous()\n attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)\n attn_output = self.o_proj(attn_output) # <--- here\n return attn_output\n```\n\nDuring the backward pass, we will compute `grad_o_proj` which is a matmul of a\n2D sharded input with a 2D sharded `attn_output`. Based on the profile, this\ngradient tensor is only 1D sharded: its shape is `[1, 4K, 16K]`, where `16K` is\nthe size of the embedding dim. We expect it to have the shape of `[1, 4K, 4K]`.\n\n\n## Breakdown of the problem\n\nWhen GSPMD propagates 2D sharding annotations over a matmul, and the contraction\ndim has matching sharding annotations:\n\n```math\nA[M_X, N_Y] \\cdot B[N_Y , M_X] = C[M_?, M_?]\n```\n\n(using [scaling book sharding notation][scaling-book])\n\nDimension $N$ is contracted away. The mesh axis $X$ also disappears. Based on my\nunderstanding of the GSPMD paper, the result will only be 1D sharded, barring\ninfluence from any other operations. Therefore $C$ is only 1D sharded. Since $C$\nis a gradient tensor and `scan` outputs a stacked array of all gradients for all\n126 Llama 3.1 405B layers during the backward pass, this 1D sharding goes on to\n\"infect\" the stacked array with a leading dim size of 126, resulting in an array\nof shape `[126, 4K, 16K]`, which is too large to fit in HBM.\n\n\n## Pitch\n\nI followed the [HLO spec][shard_as] the [JAX implementation][shard_alike] to add\na `shard_as` function to PyTorch/XLA and use it in `scan` during the backward pass.\n[PR here](https://github.com/pytorch/xla/pull/8879). `shard_as` will ensure that the inputs have the same sharding after GSPMD sharding propagation. Specifically, instead of scanning over the decoder layer's backward pass during the backward of scan,\nwe'll scan over a wrapper that adds additional sharding constraints to shard\nthe gradients the same way as their corresponding inputs:\n\n```python\n# This backward pass wrapper calls the original backward pass of a layer, and then use `shard_as` to ensure that\n# the carry is sharded the same as grad_carry, and the grad_x (gradient for input) is sharded the same as the\n# first element of the stacked input array.\ndef _backward_shard_alike(carry, x, backward, init, xs):\n grad_carry, grad_x = backward(carry, x)\n # Propagate sharding between forward inputs and backward outputs.\n _, grad_carry = shard_as(init, grad_carry)\n _, grad_x = shard_as(tree_map(lambda v: v[0], xs), grad_x)\n return grad_carry, grad_x\n```\n\nThe PR also has a unit test that checks the result of sharding propagation and\nfails if we remove the `shard_as` usage from `scan`.\n\n\n## Alternatives\n\nRather than using `shard_as`, we could expose a keyword argument on `scan` that\ntakes in the intended sharding annotation of all the weights during the backward\npass of a layer. Potentially, the user may specify that the gradient for the\n`o_proj` weight should be sharded a certain way. There are some drawbacks:\n\n- Since `scan` lowers the combine function using AOTAutograd into a functional\n graph, we can't tell the tensors from each other. 
We don't even know what is\n the variable name that corresponds to some specific output of an FX graph\n extracted by AOTAutograd.\n- SPMD and `scan` are orthogonal concerns and it's a code smell to expose both\n APIs in one function.\n\nIn contrast, `shard_as` doesn't require telling tensors apart. It just says to\nconstrain the sharding of the N gradient tensors to be the same as the N input\ntensors.\n\n\n## Additi", "url": "https://github.com/pytorch/xla/issues/8883", "state": "closed", "labels": [ "enhancement", "distributed" ], "created_at": "2025-03-25T22:14:04Z", "updated_at": "2025-03-29T03:26:36Z", "comments": 0, "user": "tengyifei" }, { "repo": "pytorch/xla", "number": 8876, "title": "Missing torch-xla-gpu-plugin", "body": "A user reported the following issue:\n\nwe have been trying to use `torch-xla` nightly builds to get around some of the slowness issues seen in torch-xla 2.5. We found `torch-xla` nightly builds for GPU under `gs://pytorch-xla-releases/wheels/cuda/12.6`, however these don\u2019t contain `torch-xla-gpu-plugin` (this was present for older `torch-xla` versions e.g. `gs://pytorch-xla-releases/wheels/cuda/12.1/torch_xla_cuda_plugin-2.6.0-py3-none-any.whl`). Is there any location that contains the cuda plugin nightly builds for torch-xla 2.8.0?\n", "url": "https://github.com/pytorch/xla/issues/8876", "state": "open", "labels": [ "xla:gpu" ], "created_at": "2025-03-24T18:03:36Z", "updated_at": "2025-04-02T14:25:22Z", "comments": 11, "user": "tengyifei" }, { "repo": "pytorch/xla", "number": 8874, "title": "Contribution suggestion?", "body": "## \u2753 Questions and Help\nI want to have a deeper understanding of pytorch/xla by contributing to it. I notice that the majority of the [issues with \"good first issue\" tag](https://github.com/pytorch/xla/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22) are of one kind (i.e. Op info test) and are created long time ago. I am not sure if they are still relevant. Other than that, i don't know how to find good issues to work with. Can i assume that any issue from the issue list without assignee are open for a all contributors? Or, should there be a tag for all the issues that are available for public contribution? \n\nThanks!", "url": "https://github.com/pytorch/xla/issues/8874", "state": "closed", "labels": [ "question" ], "created_at": "2025-03-24T17:03:26Z", "updated_at": "2025-03-24T18:24:49Z", "user": "iwknow" }, { "repo": "pytorch/pytorch", "number": 149826, "title": "How to handle dynamic output size with torch.onnx.export (through dynamo) for Resize", "body": "### \ud83d\udc1b Describe the bug\n\nI would like to export with torch.onnx.export (through dynamo) some code that contains a resize operation. The output width and height is dynamic. 
An example model is as follows:\n```\nimport torch\n\n\nclass Model(torch.nn.Module):\n\n def __init__(self):\n super().__init__()\n\n def forward(self, x, size):\n y = torch.nn.functional.interpolate(x, size=size.tolist())\n return y\n\n\nmodel = Model()\nx = torch.rand(1, 3, 400, 500)\nsize = torch.tensor([1024, 1024]).to(torch.int32)\ny = model(x, size)\n\nonnx_model = torch.onnx.export(model, (x, size), dynamo=True)\n```\nThe code throws the following error:\n```\n<class 'RuntimeError'>: /pytorch/build/aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:5615: SymIntArrayRef expected to contain only concrete integers\n\nWhile executing %upsample_nearest2d : [num_users=1] = call_function[target=torch.ops.aten.upsample_nearest2d.vec](args = (%x, [%_local_scalar_dense, %_local_scalar_dense_1], None), kwargs = {})\nOriginal traceback:\nFile \"/tmp/test.py\", line 11, in forward\n y = torch.nn.functional.interpolate(x, size=size.tolist())\n```\nThe interpolate function doesn't accept a tensor as argument, so I somehow has to convert it to a List. That fails with the error as shown. I can hardcode the list to a fixed sizes, but then I cannot accept images with different size at inference time. \n\nHow can I address this issue? \n\n### Error logs\n\n_No response_\n\n### Versions\n\nCollecting environment information...\nPyTorch version: 2.6.0+cu124\nIs debug build: False\nCUDA used to build PyTorch: 12.4\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 24.04.2 LTS (x86_64)\nGCC version: (Ubuntu 14.2.0-4ubuntu2~24.04) 14.2.0\nClang version: 19.1.1 (1ubuntu1~24.04.2)\nCMake version: version 3.28.3\nLibc version: glibc-2.39\n\nPython version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)\nPython platform: Linux-6.11.0-19-generic-x86_64-with-glibc2.39\nIs CUDA available: True\nCUDA runtime version: 12.0.140\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA RTX 500 Ada Generation Laptop GPU\nNvidia driver version: 550.120\ncuDNN version: Could not collect\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 22\nOn-line CPU(s) list: 0-21\nVendor ID: GenuineIntel\nModel name: Intel(R) Core(TM) Ultra 7 155H\nCPU family: 6\nModel: 170\nThread(s) per core: 2\nCore(s) per socket: 16\nSocket(s): 1\nStepping: 4\nCPU(s) scaling MHz: 22%\nCPU max MHz: 4800.0000\nCPU min MHz: 400.0000\nBogoMIPS: 5990.40\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 544 KiB 
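For the data-dependent `interpolate` size in pytorch#149826, one possible workaround (a sketch only, not verified on the exact versions in the report) is to derive the target size from the shape of a reference tensor rather than from tensor values, since symbolic input shapes are something `torch.export` handles, unlike the `_local_scalar_dense` items produced by `size.tolist()`:

```python
import torch

class Model(torch.nn.Module):
    def forward(self, x, ref):
        # size comes from an input *shape* (SymInts), not from tensor values
        return torch.nn.functional.interpolate(x, size=ref.shape[-2:])

model = Model()
x = torch.rand(1, 3, 400, 500)
ref = torch.zeros(1, 1, 1024, 1024)  # dummy tensor carrying the desired output H/W

H, W = torch.export.Dim("H"), torch.export.Dim("W")
ep = torch.export.export(model, (x, ref), dynamic_shapes=(None, {2: H, 3: W}), strict=False)
```

At inference time the caller allocates (or views) a cheap tensor with the desired spatial shape instead of passing an int32 size tensor.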
(14 instances)\nL1i cache: 896 KiB (14 instances)\nL2 cache: 18 MiB (9 instances)\nL3 cache: 24 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-21\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disab", "url": "https://github.com/pytorch/pytorch/issues/149826", "state": "closed", "labels": [ "module: onnx", "triaged", "oncall: pt2" ], "created_at": "2025-03-23T09:27:37Z", "updated_at": "2025-04-24T15:17:47Z", "user": "FabianSchuetze" }, { "repo": "pytorch/pytorch", "number": 149771, "title": "How to remove the \u201cinternal api\u201d notice?", "body": "### \ud83d\udcda The doc issue\n\nWhat is the option that will remove this notice?\n > This page describes an internal API which is not intended to be used outside of the PyTorch codebase and can be modified or removed without notice. \n\nWe would like to remove it for https://pytorch.org/docs/stable/onnx_dynamo.html and a few onnx pages. \n\n@svekars \n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @svekars @sekyondaMeta @AlannaBurke", "url": "https://github.com/pytorch/pytorch/issues/149771", "state": "closed", "labels": [ "module: docs", "triaged" ], "created_at": "2025-03-21T22:46:30Z", "updated_at": "2025-03-27T22:02:25Z", "user": "justinchuby" }, { "repo": "pytorch/torchx", "number": 1021, "title": "Suggested way to get timestamp of the job submission?", "body": "## Description\nHi team, I am looking for a way to get the exact timestamp when the command `torchx run` is being run. Is there a formal way that is scheduler / component agnostic? The timestamp should be accessible from the training app. \n\n\n## Motivation/Background\nThe actual use case is to calculate the overhead between job launch to the actual time when training container spin up and finishes the first batch. \n\n\n## Detailed Proposal\n<!-- provide a detailed proposal -->\n\n\n## Alternatives\n<!-- discuss the alternatives considered and their pros/cons -->\n\n\n## Additional context/links\n<!-- link to code, documentation, etc. 
-->\n", "url": "https://github.com/meta-pytorch/torchx/issues/1021", "state": "open", "labels": [], "created_at": "2025-03-20T21:58:47Z", "updated_at": "2025-03-20T21:59:41Z", "comments": 0, "user": "HanFa" }, { "repo": "pytorch/pytorch", "number": 149586, "title": "UserWarning: Dynamo does not know how to trace the builtin `None.pybind11_object.__new__.`", "body": "### \ud83d\udc1b Describe the bug\n\nI'm filing an issue since this is a Python built-in (granted the error message implies that it is not since it references PyBind11, but I'm opening an issue anyway since it is caused by using returning/using `None` in a compiled function).\n\n### Versions\n\n2.7.0a0+gitebd087e\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @ydwu4 @xmfan @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng", "url": "https://github.com/pytorch/pytorch/issues/149586", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: dynamo", "module: higher order operators", "module: compiled autograd", "module: pt2-dispatcher", "module: flex attention" ], "created_at": "2025-03-20T00:32:49Z", "updated_at": "2025-03-21T19:28:30Z", "user": "cora-codes" }, { "repo": "pytorch/xla", "number": 8862, "title": "Replace `xm.mark_step` with `torch_xla.sync()` in examples and tests", "body": "`torch_xla.sync()` is easier to spell than `xm.mark_step()`. We should at least replace `mark_step` in all public examples.", "url": "https://github.com/pytorch/xla/issues/8862", "state": "closed", "labels": [ "enhancement", "usability", "documentation" ], "created_at": "2025-03-19T22:25:09Z", "updated_at": "2025-05-16T17:56:25Z", "comments": 1, "user": "tengyifei" }, { "repo": "pytorch/xla", "number": 8861, "title": "Document the difference between `device=` vs `.to(device)`", "body": "## \ud83d\udcda Documentation\n\nThere's a subtle difference between `torch.foo(device=xla)` vs `torch.foo().to(xla)` and we should document this in a FAQ section or similar. The first one runs the `foo` on the TPU. The second one runs the `foo` on the CPU and then moves the buffer to the TPU.", "url": "https://github.com/pytorch/xla/issues/8861", "state": "closed", "labels": [ "enhancement", "good first issue", "documentation" ], "created_at": "2025-03-19T22:23:19Z", "updated_at": "2025-06-12T06:07:46Z", "comments": 2, "user": "tengyifei" }, { "repo": "pytorch/xla", "number": 8859, "title": "Improve `torch_xla.compile` documentation", "body": "## \ud83d\udcda Documentation\n\nThe best doc I could find that mentions this is https://pytorch.org/xla/release/r2.5/eager_mode.html. However, `torch_xla.compile` is usable separate from PyTorch/XLA eager mode and we should make this more front-and-center compared to mark_step.", "url": "https://github.com/pytorch/xla/issues/8859", "state": "closed", "labels": [ "enhancement", "good first issue", "documentation" ], "created_at": "2025-03-19T22:15:04Z", "updated_at": "2025-05-30T04:11:41Z", "comments": 0, "user": "tengyifei" }, { "repo": "pytorch/xla", "number": 8858, "title": "Document the difference between tracing time and execution time", "body": "## \ud83d\udcda Documentation\n\nIf we write a loop like\n\n```\nstart = time.time()\nfor step in range(num_steps):\n run_model()\n xm.mark_step()\nend = time.time()\n```\n\nThen `end - start` will only measure the tracing time. 
We'll need to do `torch_xla.sync(wait=True)` to block on device execution to measure the execution time.\n\nWe should document this in some \"common FAQs/sharp edges\" maybe", "url": "https://github.com/pytorch/xla/issues/8858", "state": "closed", "labels": [ "enhancement", "good first issue", "documentation" ], "created_at": "2025-03-19T22:13:49Z", "updated_at": "2025-05-30T04:10:37Z", "comments": 4, "user": "tengyifei" }, { "repo": "pytorch/torchtitan", "number": 987, "title": "Is EP (Expert Parallelism) coming ?", "body": "Currently TorchTitan supports PP, CP, FSDP, PP parallelisms. Is there a plan to support Expert Parallelism (EP) ? Along the same line, see some DeepSeek files in the repo. Is there a plan to support DeepSeek training on TorchTitan ?\n", "url": "https://github.com/pytorch/torchtitan/issues/987", "state": "closed", "labels": [ "question" ], "created_at": "2025-03-19T21:41:14Z", "updated_at": "2025-03-24T17:21:20Z", "user": "githubsgi" }, { "repo": "pytorch/torchtitan", "number": 986, "title": "Is a PP+FSDP+TP config + toml available for pre-training 405B model ?", "body": "Would appreciate if someone can share a toml file to do PP+FSDP+TP for 405B model. ", "url": "https://github.com/pytorch/torchtitan/issues/986", "state": "closed", "labels": [], "created_at": "2025-03-19T21:35:43Z", "updated_at": "2025-08-21T03:11:32Z", "comments": 3, "user": "githubsgi" }, { "repo": "pytorch/vision", "number": 8986, "title": "Speed up JPEG decoding by allowing resize during decode", "body": "### \ud83d\ude80 The feature\n\nTorchvision's `read_image` currently decodes JPEG images at full resolution. However, both `libjpeg` and `libjpeg-turbo` support decoding at lower resolutions (1/2, 1/4, 1/8 of the original size).\n\nIntroducing a `size_hint` parameter would allow users to specify an approximate target size, with `torchvision` selecting the closest larger available scale factor and downscale the JPEG image during decoding.\n\nExample Usage:\n```python\nfrom torchvision.io.image import decode_image\ntensor = decode_image(\"image.jpeg\", size_hint=(224, 224))\n```\n\n\n### Motivation, pitch\n\n- Many ML pipelines process images at fixed sizes (e.g., 224x224 for ImageNet models). 
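For the tracing-versus-execution pitfall in pytorch/xla#8858 above, a sketch of the corrected measurement, reusing `run_model` and `num_steps` from the snippet in that issue; `xm.wait_device_ops()` serves as the explicit barrier here:

```python
import time
import torch_xla
import torch_xla.core.xla_model as xm

start = time.time()
for step in range(num_steps):
    run_model()
    torch_xla.sync()      # dispatch the traced graph (asynchronous)
xm.wait_device_ops()      # block until the device has finished all dispatched work
end = time.time()         # end - start now includes execution, not just tracing
```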
Decoding large images only to downscale them later is inefficient.\n- This can improve memory usage as we do not need to hold the full-sized image in the memory.\n- Pillow provides a similar feature via [`Image.draft`](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.draft), allowing for approximate size-based decoding.\n\n### Alternatives\n\n- Using Pillow for decoding with downscaling, but torchvision\u2019s native decoder is typically faster than decoding using Pillow and then converting to tensor.\n- Decode and then resize, but this is inefficient, see benchmark below.\n\n\n### Additional context\n\n## Benchmark\n\nWe implemented a proof-of-concept and ran performance tests on decoding a 1920x1080 image into 960x540.\nWe compared the following:\n\n- Use existing `decode_jpeg` and resize after.\n- Patch `decode_jpeg` to allow `libjpeg` / `libjpeg-turbo` downscaling via the `size_hint` parameters.\n\nBenchmark results (1000 iters):\n```\n9.91s call .../test_jpeg.py::test_torchvision_image_load_with_resize_960_540\n4.00s call .../test_jpeg.py::test_fastjpeg_image_load_with_size_hint_960_540\n```\n~2.5X speed up.\n\nI'm happy to contribute a patch if people consider this useful.\n", "url": "https://github.com/pytorch/vision/issues/8986", "state": "open", "labels": [], "created_at": "2025-03-19T19:08:46Z", "updated_at": "2025-04-29T07:32:47Z", "comments": 3, "user": "gyf304" }, { "repo": "pytorch/xla", "number": 8853, "title": "Have documentation to point to all our environment variables and their meaning", "body": "## \ud83d\udcda Documentation\n\nPrepare a documentation to point to all our environment variables and their meaning. This world should be a forcing function to (1) make the yaml file up to date (2) rename it to something like `env_vraiable_definitions.yaml`, (3) start a workstream to trim down on these env variables to avoid usability pain.\n\nhttps://github.com/pytorch/xla/blob/master/configuration.yaml\n\n\n@tengyifei @yaoshiang for viz and support", "url": "https://github.com/pytorch/xla/issues/8853", "state": "open", "labels": [ "usability", "documentation" ], "created_at": "2025-03-19T00:23:51Z", "updated_at": "2025-03-19T00:26:22Z", "comments": 1, "user": "miladm" }, { "repo": "pytorch/TensorRT", "number": 3446, "title": "ValueError: Invalid input type <class 'bool'> encountered when compiling FLUX.1-dev model with Torch-TensorRT", "body": "## \u2753 Question\n\nWhen trying to compile the FLUX.1-dev model using Torch-TensorRT following the official example/blog post, I'm encountering a `ValueError` during the `torch_tensorrt.dynamo.compile()` step. The error suggests there's an issue with input parsing where it's encountering a boolean value that it doesn't know how to handle.\n\n## What you have already tried\n\nI'm following the exact steps from the example provided in the documentation (https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/torch_export_flux_dev.html). I've:\n1. Successfully loaded the FLUX.1-dev model\n2. Defined the dynamic shapes properly\n3. Created dummy inputs with the recommended dimensions\n4. Successfully exported the model using `_export`\n5. 
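As a point of comparison for the `size_hint` proposal in torchvision#8986 above, the Pillow path it cites looks roughly like this (a sketch; `draft` only picks one of libjpeg's 1/1, 1/2, 1/4, 1/8 scales, so an exact resize may still be needed afterwards):

```python
from PIL import Image
from torchvision.transforms.functional import pil_to_tensor

img = Image.open("image.jpeg")
img.draft("RGB", (224, 224))            # ask libjpeg to decode at a reduced scale
tensor = pil_to_tensor(img.convert("RGB"))
```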
Attempted to compile with Torch-TensorRT using the same parameters shown in the example\n\nThe error occurs specifically at the compilation step:\n\n```python\ntrt_gm = torch_tensorrt.dynamo.compile(\n ep,\n inputs=dummy_inputs,\n enabled_precisions={torch.float32},\n truncate_double=True,\n min_block_size=1,\n use_fp32_acc=True,\n use_explicit_typing=True,\n)\n```\n\n## Environment\n\n> Build information about Torch-TensorRT can be found by turning on debug messages\n\n - PyTorch Version (e.g., 1.0): 2.6.0\n - CPU Architecture: \n - OS (e.g., Linux): Linux\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): \n - Build command you used (if compiling from source):\n - Are you using local sources or building from archives:\n - Python version: 3.11.10\n - CUDA version: cuda_12.4.r12.4/compiler.34097967_0\n - GPU models and configuration: A100\n - Any other relevant information:\n\n## Additional context\n\nThe error message specifically points to an issue with boolean input types:\n\n```\nValueError: Invalid input type <class 'bool'> encountered in the dynamo_compile input parsing. Allowed input types: {torch_tensorrt.Input, torch.Tensor, list, tuple, dict}\n```\n\nIt looks like the `return_dict=False` parameter in my dummy inputs is causing the issue since it's a boolean value. The example shows that this should be supported, but the error suggests that booleans aren't handled correctly in the input parsing logic.\n\nFull traceback:\n```\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\n/workspace/flux-dev-tensorrt.ipynb Cell 4 line 1\n----> <a href='vscode-notebook-cell://ssh-remote%2B216.81.245.143/workspace/flux-dev-tensorrt.ipynb#W5sdnNjb2RlLXJlbW90ZQ%3D%3D?line=0'>1</a> trt_gm = torch_tensorrt.dynamo.compile(\n <a href='vscode-notebook-cell://ssh-remote%2B216.81.245.143/workspace/flux-dev-tensorrt.ipynb#W5sdnNjb2RlLXJlbW90ZQ%3D%3D?line=1'>2</a> ep,\n <a href='vscode-notebook-cell://ssh-remote%2B216.81.245.143/workspace/flux-dev-tensorrt.ipynb#W5sdnNjb2RlLXJlbW90ZQ%3D%3D?line=2'>3</a> inputs=dummy_inputs,\n <a href='vscode-notebook-cell://ssh-remote%2B216.81.245.143/workspace/flux-dev-tensorrt.ipynb#W5sdnNjb2RlLXJlbW90ZQ%3D%3D?line=3'>4</a> enabled_precisions={torch.float32},\n <a href='vscode-notebook-cell://ssh-remote%2B216.81.245.143/workspace/flux-dev-tensorrt.ipynb#W5sdnNjb2RlLXJlbW90ZQ%3D%3D?line=4'>5</a> truncate_double=True,\n <a href='vscode-notebook-cell://ssh-remote%2B216.81.245.143/workspace/flux-dev-tensorrt.ipynb#W5sdnNjb2RlLXJlbW90ZQ%3D%3D?line=5'>6</a> min_block_size=1,\n <a href='vscode-notebook-cell://ssh-remote%2B216.81.245.143/workspace/flux-dev-tensorrt.ipynb#W5sdnNjb2RlLXJlbW90ZQ%3D%3D?line=6'>7</a> use_fp32_acc=True,\n <a href='vscode-notebook-cell://ssh-remote%2B216.81.245.143/workspace/flux-dev-tensorrt.ipynb#W5sdnNjb2RlLXJlbW90ZQ%3D%3D?line=7'>8</a> use_explicit_typing=True,\n <a href='vscode-notebook-cell://ssh-remote%2B216.81.245.143/workspace/flux-dev-tensorrt.ipynb#W5sdnNjb2RlLXJlbW90ZQ%3D%3D?line=8'>9</a> )\n\nFile /usr/local/lib/python3.11/dist-packages/torch_tensorrt/dynamo/_compiler.py:606, in compile(exported_program, inputs, arg_inputs, kwarg_inputs, device, disable_tf32, assume_dynamic_shape_support, sparse_weights, enabled_precisions, engine_capability, debug, num_avg_timing_iters, workspace_size, dla_sram_size, dla_local_dram_size, dla_global_dram_size, truncate_double, require_full_compilation, min_block_size, torch_executed_ops, torch_executed_modules, 
pass_through_build_failures, max_aux_streams, version_compatible, optimization_level, use_python_runtime, use_fast_partitioner, enable_experimental_decompositions, dryrun, hardware_compatible, timing_cache_path, lazy_engine_init, cache_built_engines, reuse_cached_engines, engine_cache_dir, engine_cache_size, custom_engine_cache, use_explicit_typing, use_fp32_acc, refit_identical_engine_weights, strip_engine_weights, immutable_weights, enable_weight_streaming, **kwargs)\n 603 arg_inputs = [arg_inputs] # type: ignore\n 605 # Prepare torch_trt inputs\n--> 606 trt_arg_inputs: Sequence[Input] = prepare_inputs(arg_inputs)\n 607 trt_kwarg", "url": "https://github.com/pytorch/TensorRT/issues/3446", "state": "open", "labels": [ "question" ], "created_at": "2025-03-18T21:55:16Z", "updated_at": "2025-03-21T23:57:54Z", "user": "yachty66" }, { "repo": "pytorch/xla", "number": 8847, "title": "How to compile torch-xla form source?", "body": "## \u2753 Questions and Help\nI have reviewed the relevant materials on torch-xla but have not found a clear guide on how to compile torch-xla from source. The instructions mentioned on [this page](https://pytorch.org/xla/master/contribute/bazel.html) are somewhat disorganized. Could you provide a detailed compilation process? I need to build it from source to verify my modifications. Thanks\nNow I am use python setup.py develop to build from source code , but encounter ERROR as follows: \nthe command is \nXLA_CUDA=1 python setup.py install , and i am use the torch-xla v2.5.1\n\n![Image](https://github.com/user-attachments/assets/ac8787a1-c50c-4480-96b1-76f325876af6)\n", "url": "https://github.com/pytorch/xla/issues/8847", "state": "open", "labels": [ "question", "build" ], "created_at": "2025-03-18T02:31:05Z", "updated_at": "2025-03-24T17:40:13Z", "user": "south-ocean" }, { "repo": "pytorch/xla", "number": 8846, "title": "Need a documentation page that always hosts the latest stable documentation", "body": "## \ud83d\udcda Documentation\n\nPyTorch has https://pytorch.org/docs/stable/index.html that always contains the documentation for the latest stable branch.\n\nThe same URL variant doesn't work for PyTorch/XLA https://pytorch.org/xla/release/stable/index.html\n\n", "url": "https://github.com/pytorch/xla/issues/8846", "state": "open", "labels": [ "enhancement", "documentation" ], "created_at": "2025-03-18T00:19:41Z", "updated_at": "2025-05-01T07:46:15Z", "comments": 3, "user": "tengyifei" }, { "repo": "pytorch/vision", "number": 8980, "title": "nvjpeg missing from all linux GPU wheel build jobs", "body": "Linux CUDA: https://github.com/pytorch/vision/actions/runs/13901104094/job/38892841516?pr=8601\nLinux aarch64 CUDA: https://github.com/pytorch/vision/actions/runs/13901104115/job/38892844332?pr=8601\n\nFailing the smoke test part with:\n\n```\n+ echo 'pytorch/vision/test/smoke_test.py found'\n+ conda run -p /__w/_temp/conda_environment_13901104115 python pytorch/vision/test/smoke_test.py\n/__w/_temp/conda_environment_13901104115/lib/python3.9/site-packages/torchvision/io/image.py:14: UserWarning: Failed to load image Python extension: 'libnvjpeg.so.12: cannot open shared object file: No such file or directory'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. 
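Returning to the Torch-TensorRT FLUX report above (TensorRT#3446), one hedged workaround, building on the variables defined in that issue's snippet, is to hand `dynamo.compile` only the tensor entries, on the assumption that the `return_dict=False` flag is already baked into the exported program; whether this matches the tutorial's intent is not confirmed:

```python
import torch
import torch_tensorrt

# dummy_inputs and ep come from the issue's own reproduction steps.
tensor_only_inputs = {k: v for k, v in dummy_inputs.items() if isinstance(v, torch.Tensor)}

trt_gm = torch_tensorrt.dynamo.compile(
    ep,
    arg_inputs=(),
    kwarg_inputs=tensor_only_inputs,
    enabled_precisions={torch.float32},
    truncate_double=True,
    min_block_size=1,
    use_fp32_acc=True,
    use_explicit_typing=True,
)
```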
Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?\n\n```", "url": "https://github.com/pytorch/vision/issues/8980", "state": "closed", "labels": [], "created_at": "2025-03-17T15:05:04Z", "updated_at": "2025-03-18T11:28:18Z", "comments": 1, "user": "NicolasHug" }, { "repo": "pytorch/pytorch", "number": 149315, "title": "How to Retain Computational Graph in torch.func.jvp() for Parameter Gradients?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n## Help Needed: Making `torch.func.jvp` Work with `torch.autograd.grad`\n\nHi all,\n\nThanks so much for all the functionalities of pytorch! I'm trying to make the following code valid (and efficient):\n\n```python\noutput_values, output_grads = torch.func.jvp(model, input_value, input_grads)\ntorch.autograd.grad(output_values, tuple(model.parameters()), grad_outputs=output_grads)\n```\n\nOne way to phrase it is that we have a function $f: \\mathbb{R}^d \\times \\mathbb{R}^m \\to \\mathbb{R}^p$. Then, given $(x, t_x) \\in \\mathbb{R}^{d}\\times \\mathbb{R}^{d}$, the goal is to compute: $y = f(x,w)$, the tangent vector $t_y = D_1 f(x, w).t_x$ and the gradient $t_w = D_2 f(x, w)^T.t_y$, in order to materialize the mapping: $((x, t_x), w) \\to ((y, t_y), t_w)$.\n\nCurrently, the code fails because `torch.func.jvp()` does not retain the computational graph of the forward pass, which makes sense for the dual vectors associated with the input. However, for example, I know it's possible to efficiently decouple the computation of input gradients and weight gradients by selectively extracting parts of the computational graph. \n\nI'd like to do something similar here. My goal is to develop a procedure that achieves this while requiring only a single forward pass (and freeing unnecessary memory).\n\nWould you have any insights on how to implement this efficiently? I believe it's related to [this paper](https://arxiv.org/pdf/2402.14212), which provides a solution in JAX, but I think it should also be possible in PyTorch.\n\nAny guidance or suggestions would be greatly appreciated\u2014thanks in advance for your help!\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @zou3519 @Chillee @samdow @kshitij12345", "url": "https://github.com/pytorch/pytorch/issues/149315", "state": "open", "labels": [ "module: autograd", "triaged", "module: functorch" ], "created_at": "2025-03-17T12:10:21Z", "updated_at": "2025-06-24T14:30:39Z", "user": "edouardoyallon" }, { "repo": "pytorch/pytorch", "number": 149096, "title": "How to determine which part of torch.compile undergoes recompiling after caching", "body": "### \ud83d\udc1b Describe the bug\n\nThanks for the helpful blog: https://dev-discuss.pytorch.org/t/how-to-bring-compile-time-down-to-zero-our-plans-and-direction-may-14th-edition/2089\n\nI am currently caching all 3 stages of the compiler but only seeing ~50% reduction in compile time. \nHow do I determine which part of the compilation is not being properly cached or recompiled every time?\n\n\nP.S. 
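For the forward-mode question in pytorch#149315 above, a baseline sketch that produces $y$, $t_y$, and $t_w$ with `torch.func` primitives; note that it still performs two forward passes, which is exactly the cost the question hopes to avoid, so it only serves as a correctness reference:

```python
import torch
from torch.func import functional_call, jvp, vjp

model = torch.nn.Linear(4, 3)
params = dict(model.named_parameters())
x = torch.randn(2, 4)
t_x = torch.randn_like(x)

# Forward mode: y and t_y = D_x f(x, w) . t_x
y, t_y = jvp(lambda inp: functional_call(model, params, (inp,)), (x,), (t_x,))

# Reverse mode: t_w = D_w f(x, w)^T . t_y (a second forward pass under the hood)
_, pullback = vjp(lambda p: functional_call(model, p, (x,)), params)
(t_w,) = pullback(t_y)
```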
I am interested in finding which part of the process recompiles and any techniques to avoid recompilation not mentioned here: https://pytorch.org/docs/stable/torch.compiler_troubleshooting.html#dealing-with-recompilations\n\n### Error logs\n\n_No response_\n\n### Versions\n\ntorch 2.5\nCUDA 12.4\nGPU = A10G\n\ncc @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/149096", "state": "open", "labels": [ "triaged", "oncall: pt2" ], "created_at": "2025-03-13T02:33:58Z", "updated_at": "2025-03-13T06:40:24Z", "user": "janak2" }, { "repo": "pytorch/pytorch", "number": 149094, "title": "How to skip backward specific steps in torch.compile", "body": "### \ud83d\udc1b Describe the bug\n\nI couldn't find much documentation around how we can skip backward specific-steps in torch.compile/AOT autograd.\nSome info would be helpful.\n\n### Error logs\n\n_No response_\n\n### Versions\n\nNA\n\ncc @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/149094", "state": "open", "labels": [ "triaged", "oncall: pt2" ], "created_at": "2025-03-13T02:12:44Z", "updated_at": "2025-03-17T23:55:31Z", "user": "janak2" }, { "repo": "pytorch/executorch", "number": 9180, "title": "Convert model.safetensors in order to be able to execute it with ExecuteTorch: how to prepare the example input and dynamic shape information?", "body": "Hi! \n\nI've trained for fine-tuning the Bert model to use it for Named Entity Recognition.\n\nNow I want to convert the resulting model.safetensors in order to be able to execute it with ExecuteTorch. Thanks to the explanation of a kind guy : https://dev-discuss.pytorch.org/t/what-is-the-correct-future-proof-way-of-deploying-a-pytorch-python-model-in-c-for-inference/2775/11?u=raphael10-collab ,\nI've learned that, in order to export the torch.nn.Module into aExportedProgram, I need first to prepare the example input and dynamic shape information. \n\nSo.... my question is: which dynamic shape information should I use, since the model.safetensors I produced is just a fine-tuning of the Bert Model? \n Should I use the shapes from here: https://github.com/google-research/bert/blob/master/modeling.py#L389 : input_ids: int32 Tensor of shape [batch_size, seq_length] containing word ids ? \n\nThis the code I used to fine-tune Bert model for NER task:\n\n`BERT-NER.py` : \n\n # https://github.com/tozameerkhan/Fine-Tuning-BERT-for-Named-Entity-Recognition/blob/main/BERTfineTunningFinal.ipynb\n \n # 1. Setup and Installation\n \n import datasets\n import numpy as np\n import pandas as pd\n import matplotlib.pyplot as plt\n import seaborn as sns\n from transformers import BertTokenizerFast\n from transformers import DataCollatorForTokenClassification\n from transformers import TrainingArguments, Trainer, EarlyStoppingCallback\n from transformers import logging as hf_logging\n from transformers import pipeline\n import json\n from pprint import pprint\n from torchmetrics.text.bert import BERTScore\n \n \n bertscore = BERTScore()\n \n hf_logging.set_verbosity_info() #to display informational messages.\n \n from transformers import AutoModelForTokenClassification\n \n import warnings\n warnings.filterwarnings('ignore')\n \n import matplotlib.pyplot as plt\n plt.style.use(\"fivethirtyeight\")\n \n \n # 2. 
Data Exploration (EDA)\n \n # Load Dataset\n \n conll2003 = datasets.load_dataset(\"conll2003\", trust_remote_code=True)\n conll2003\n \n # Convert to DataFrame\n train_df = pd.DataFrame(conll2003['train'])\n validation_df = pd.DataFrame(conll2003['validation'])\n test_df = pd.DataFrame(conll2003['test'])\n \n # Data Overview\n \n print(train_df.head())\n print(f\"Number of sentences in the training set: {len(train_df)}\")\n print(f\"Number of sentences in the validation set: {len(validation_df)}\")\n print(f\"Number of sentences in the test set: {len(test_df)}\")\n \n label_list = conll2003[\"train\"].features[\"ner_tags\"].feature.names\n print(label_list)\n \n # Distribution of Sentence Lengths\n train_df['sentence_length'] = train_df['tokens'].apply(len)\n plt.figure(figsize=(10, 6))\n sns.histplot(train_df['sentence_length'], bins=30, kde=True)\n plt.title('Distribution of Sentence Lengths in Training Set')\n plt.xlabel('Sentence Length')\n plt.ylabel('Frequency')\n plt.show()\n \n # Distribution of Named Entity Tags\n ner_tags = conll2003['train'].features['ner_tags'].feature.names\n tag_counts = [0] * len(ner_tags)\n for tags in train_df['ner_tags']:\n for tag in tags:\n tag_counts[tag] += 1\n \n plt.figure(figsize=(12, 6))\n sns.barplot(x=ner_tags, y=tag_counts)\n plt.title('Distribution of Named Entity Tags in Training Set')\n plt.xlabel('Named Entity Tag')\n plt.ylabel('Count')\n plt.xticks(rotation=45)\n plt.show()\n \n # 3. Data Preparation\n \n # Tokenization and Label Alignment\n \n #load a pre-trained tokenizer.\n tokenizer = BertTokenizerFast.from_pretrained(\"bert-base-uncased\")\n \n example_1 = conll2003['train'][0]\n tokenized_input = tokenizer(example_1[\"tokens\"], is_split_into_words=True)\n tokens = tokenizer.convert_ids_to_tokens(tokenized_input[\"input_ids\"])\n word_ids = tokenized_input.word_ids()\n print(\"word_ids :: \",word_ids)\n ''' As we can see, it returns a list with the same number of elements as our processed input ids, \n mapping special tokens to None and all other tokens to their respective word.'''\n print()#Function to tokenize and align labels with respect to the tokens.\n def tokenize_and_align_labels(examples, label_all_tokens=True):\n tokenized_inputs = tokenizer(examples['tokens'], truncation=True, is_split_into_words=True)\n labels = []\n for i, label in enumerate(examples['ner_tags']):\n word_ids = tokenized_inputs.word_ids(batch_index=i)\n previous_word_idx = None\n label_ids = []\n for word_idx in word_ids:\n if word_idx is None:\n label_ids.append(-100)\n elif word_idx != previous_word_idx:\n label_ids.append(label[word_idx])\n else:\n label_ids.append(label[word_idx] if label_all_tokens else -100)\n previous_word_idx = word_idx\n ", "url": "https://github.com/pytorch/executorch/issues/9180", "state": "open", "labels": [ "module: user experience" ], "created_at": "2025-03-12T09:17:50Z", "updated_at": "2025-12-18T21:55:01Z", "user": "raphael10-collab" }, { "repo": "pytorch/vision", "number": 8962, "title": "Missing Windows Wheel for torchvision==0.11.2+cu111", "body": "Hello Torchvision team,\n\nWe are attempting to install specific versions with CUDA 11.1 using .whl files from [torch_stable.html](https://download.pytorch.org/whl/cu111/torch_stable.html). 
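For the ExecuTorch question in executorch#9180 above (how to pick example inputs and dynamic shapes for the fine-tuned BERT), a hedged sketch: the NER model consumes `input_ids` / `attention_mask` of shape `[batch_size, seq_length]`, so typically only the sequence dimension needs to be dynamic. The wrapper and the `model` variable are assumptions about how the fine-tuned `AutoModelForTokenClassification` is loaded:

```python
import torch
from torch.export import export, Dim

class NerWrapper(torch.nn.Module):
    def __init__(self, hf_model):
        super().__init__()
        self.hf_model = hf_model

    def forward(self, input_ids, attention_mask):
        # Return plain tensors (logits) rather than a HF ModelOutput object.
        return self.hf_model(input_ids=input_ids, attention_mask=attention_mask).logits

wrapper = NerWrapper(model).eval()
example = (torch.ones(1, 32, dtype=torch.long), torch.ones(1, 32, dtype=torch.long))
seq = Dim("seq_length", min=2, max=512)
ep = export(wrapper, example, dynamic_shapes=({1: seq}, {1: seq}))
# ep can then go through ExecuTorch's to_edge / to_executorch lowering.
```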
\n\nHowever, we can't find the required wheel for torchvision==0.11.2+cu111 for Windows (win_amd64.whl).\n\nCould you provide guidance on how to obtain this package or upload the Windows wheel for torchvision 0.11.2 with CUDA 11.1 support?\n\nThank you for your assistance.", "url": "https://github.com/pytorch/vision/issues/8962", "state": "closed", "labels": [], "created_at": "2025-03-11T22:04:31Z", "updated_at": "2025-03-28T13:10:02Z", "comments": 2, "user": "huang3527" }, { "repo": "pytorch/torchtitan", "number": 951, "title": "Nan's on step 1 of 405B model training", "body": "Anyone has any tip of how to debug/prevent nan's on step 1 during FSDP+TP training of the 405B model on 256 GPU's on the C4 dataset ? ", "url": "https://github.com/pytorch/torchtitan/issues/951", "state": "closed", "labels": [], "created_at": "2025-03-11T07:00:12Z", "updated_at": "2025-03-28T01:47:28Z", "comments": 12, "user": "githubsgi" }, { "repo": "pytorch/xla", "number": 8809, "title": "MarkShardingFunction causes OOM when applied to model parameters", "body": "When tested in https://github.com/AI-Hypercomputer/torchprime/pull/144/files, if we shard parameters with `MarkShardingFunction.apply`, that causes Mixtral to OOM. Gradient HLO arrays end up living much longer than needed.\n\nShard both activations and model parameters with `MarkShardingFunction`: http://shortn/_vvNPYfxSe3\nShard activation with `MarkShardingFunction` and shard model parameters with `xs.mark_sharding`: http://shortn/_6OxaSdjJzQ\n\nAnother clue is that if I change `MarkShardingFunction` to be not in-place, then the OOM goes away:\n\n```\nclass MarkShardingFunction(torch.autograd.Function):\n \"\"\"\n Autograd function to mark_sharding on intermediate tensors and the gradient\n of the intermediate tensors during backward pass.\n\n Usage:\n new_tensor = MarkShardingFunction.apply(tensor, mesh, ('axis_1', 'axis_2'))\n\n This is required to guide GSPMD sharding propagation better during the\n backward pass as during complicated workloads the compiler can introduce extra\n collectives that can hurt performance.\n \"\"\"\n\n @staticmethod\n def forward(\n ctx, torch_tensor: torch.Tensor, mesh: Mesh, partition_spec: tuple\n ) -> torch.Tensor:\n o = mark_sharding(torch_tensor.clone(), mesh, partition_spec)\n ctx.partition_spec = partition_spec\n ctx.mesh = mesh\n return o.global_tensor\n\n @staticmethod\n def backward(ctx, grad_output: torch.Tensor) -> torch.Tensor:\n partition_spec = ctx.partition_spec\n mesh = ctx.mesh\n o = mark_sharding(grad_output.clone(), mesh, partition_spec)\n return o.global_tensor, None, None\n```", "url": "https://github.com/pytorch/xla/issues/8809", "state": "closed", "labels": [ "performance" ], "created_at": "2025-03-08T06:14:48Z", "updated_at": "2025-03-17T04:03:08Z", "comments": 3, "user": "tengyifei" }, { "repo": "pytorch/ao", "number": 1850, "title": "What the dtype of input in Float8Linear backward?", "body": "In Float8Linear forward input is saved in high precision, \n\n<img width=\"605\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/b2f4fdff-79e6-4274-8e68-9bf7947f5003\" />\nWhy not save input in float8? 
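For the NaN-at-step-1 question in torchtitan#951 above, a generic debugging sketch (not specific to torchtitan's internals) that narrows down whether the first non-finite value shows up in the loss, the activations, or the gradients:

```python
import torch

torch.autograd.set_detect_anomaly(True)  # slow, but points at the op that produced the NaN

def report_nonfinite_grads(model):
    for name, param in model.named_parameters():
        if param.grad is not None and not torch.isfinite(param.grad).all():
            print(f"non-finite gradient in {name}")

# Call report_nonfinite_grads(model) right after the first loss.backward() on step 1.
```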
I don't know if I understand this correctly.", "url": "https://github.com/pytorch/ao/issues/1850", "state": "closed", "labels": [ "question" ], "created_at": "2025-03-07T07:33:01Z", "updated_at": "2025-03-10T16:28:11Z", "user": "yh8899" }, { "repo": "pytorch/pytorch", "number": 148747, "title": "How can I use inductor aot_compile to support a MoE network?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nDeepseek has sparked a wave of enthusiasm for the design of Moe (Mixture of Experts) network architectures. I am often asked how to accelerate the inference of an Moe network. Undoubtedly, I thought of using Inductor's aot_compile to compile it into a dynamic library and then calling it in C++ for acceleration.\n\nUnfortunately, the process of selecting experts in Moe is different from that of a typical dense network. This part of the syntax is more like an extension of PyTorch, closer to Python's syntax, and cannot be traced. Below is a simple demo I wrote. I would like to know if the developers of Inductor have any plans to support Moe networks?\n\n\n```Python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass Expert(nn.Module):\n def __init__(self, input_dim, output_dim):\n super(Expert, self).__init__()\n self.linear = nn.Linear(input_dim, output_dim)\n\n def forward(self, x):\n return self.linear(x)\n\nclass MoE(nn.Module):\n def __init__(self, input_dim, output_dim, num_experts=10, top_k=2):\n super(MoE, self).__init__()\n # Eight experts for gating\n self.other_experts = nn.ModuleList([Expert(input_dim, output_dim) for _ in range(num_experts - 2)])\n # Gate network to choose top_k experts\n self.gate = nn.Linear(input_dim, num_experts - 2)\n # Final output layer\n self.final_linear = nn.Linear((top_k) * output_dim, output_dim)\n\n def forward(self, x):\n # Compute gating scores\n gate_scores = self.gate(x)\n topk_scores, topk_indices = torch.topk(gate_scores, 2, dim=-1)\n \n # Collect outputs from selected experts based on gating\n selected_expert_outputs = torch.stack(\n [torch.stack([self.other_experts[i](x[idx]) for i in topk_indice], dim = 0) for idx, topk_indice in enumerate(topk_indices)], dim=0\n )\n\n # Flatten and pass through final linear layer\n all_expert_outputs = selected_expert_outputs.view(x.size(0), -1)\n output = self.final_linear(all_expert_outputs) \n return output\n\n\nif __name__ == \"__main__\":\n # Example usage\n input_dim = 128\n output_dim = 64\n moe = MoE(input_dim, output_dim)\n\n x = torch.randn(32, input_dim) # Batch size of 32\n output = moe(x)\n print(output.shape) # Expected output shape: [32, 64]\n\n\n export_model = torch.export.export(\n mod=moe,\n args=tuple([torch.randn(32, input_dim)]),\n dynamic_shapes={\"x\": {0: torch.export.Dim(\"batch\", min=1, max=1024)}},\n )\n```\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi", "url": "https://github.com/pytorch/pytorch/issues/148747", "state": "closed", "labels": [ "oncall: pt2", "export-triage-review", "oncall: export", "module: aotinductor" ], "created_at": "2025-03-07T07:04:07Z", "updated_at": "2025-05-24T02:21:21Z", "user": "sujuyu" }, { "repo": "pytorch/pytorch", "number": 148713, "title": "[torch.export] How to export with the model having *args and **kwargs as forward signature?", "body": "This is the original model code:\n\n```python\nfrom diffusers.models import AutoencoderKL\nimport 
torch\n\nmodel_name = \"black-forest-labs/FLUX.1-dev\"\nhf_safetensor = True\nmodel_opts = {'torch_dtype': torch.float16}\nmodel = AutoencoderKL.from_pretrained(model_name, subfolder=\"vae\", use_safetensors=hf_safetensor, force_download=True, **model_opts).to(\"cpu\")\nmodel.forward = model.decode # This turns model forward signature to *args and **kwargs\ninputs = torch.randn(1, 16, 128, 128, dtype=torch.float32, device=\"cpu\")\n\nB, H, W = torch.export.Dim(\"B\"), torch.export.Dim(\"H\"), torch.export.Dim(\"W\")\ndynamic_shapes = ({0:B, 2:H, 3:W},)\ntorch.export.export(\n model,\n (inputs,),\n dynamic_shapes=dynamic_shapes,\n strict=False\n)\n```\n\nNo matter what data structure I turn inputs or dynamic_shapes to, it mismatches.\n\nA simple and not so much making sense example could be like this:\n```python\nimport torch\nimport torch.nn as nn\nimport torch.onnx\n\nclass AddModel(nn.Module):\n def __init__(self):\n super(AddModel, self).__init__()\n\n def forward(self, x):\n return torch.sigmoid(x)\n\nclass WrappedModel(nn.Module):\n def __init__(self, model):\n super(WrappedModel, self).__init__()\n self.model = model\n\n def forward(self, *arga, **kwargs):\n return self.model(*arga, **kwargs)\n\n# Instantiate the model\nmodel = WrappedModel(AddModel())\n\n# Set the model to evaluation mode\nmodel.eval()\n\n# Create dynamic input tensors\nx = torch.randn(2, 3)\n\n# Define dynamic axes for ONNX export\ndynamic_shapes = ({0: torch.export.Dim.AUTO, 1: torch.export.Dim.AUTO},)\n\ntorch.export.export(\n model,\n (x,),\n dynamic_shapes=dynamic_shapes,\n strict=False\n)\n```\n\n\n\ncc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4", "url": "https://github.com/pytorch/pytorch/issues/148713", "state": "closed", "labels": [ "oncall: pt2", "oncall: export" ], "created_at": "2025-03-06T23:01:17Z", "updated_at": "2025-03-07T01:47:05Z", "user": "titaiwangms" }, { "repo": "pytorch/pytorch", "number": 148634, "title": "README doesn't explain how to run tests in the \"Test PyTorch\" section", "body": "### \ud83d\udcda The doc issue\n\nREADME needs to have the \"Test PyTorch\" section after the [Install PyTorch](https://github.com/pytorch/pytorch#install-pytorch) section in the README.\n\nTesting is the next step after building PyTorch.\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/148634", "state": "closed", "labels": [], "created_at": "2025-03-06T04:32:44Z", "updated_at": "2025-03-06T17:58:19Z", "user": "yurivict" }, { "repo": "pytorch/xla", "number": 8799, "title": "Re-enable CPU test `test/test_python_ops.py -k TestPythonOps` for `uint8` dtype", "body": "To unblock bumping libtpu pin, we have to disable this test: https://github.com/pytorch/xla/pull/8788/files\n\nThis test fails with a LLVM memory allocation error on the CPU.\n\nWe should report this bug upstream and re-enable it after a fix is there.\n\nFailed run: https://github.com/pytorch/xla/actions/runs/13668949609/job/38217578967?pr=8788\n\nError:\n\n```\nE0000 00:00:1741156332.106836 21429 execution_engine.cc:53] LLVM compilation error: Cannot allocate memory\n ./test/run_tests.sh: line 51: 21120 Segmentation fault (core dumped) python3 \"$@\"\n```\n", "url": "https://github.com/pytorch/xla/issues/8799", "state": "closed", "labels": [ "bug", "libtpu" ], "created_at": "2025-03-05T19:40:50Z", "updated_at": "2025-05-05T00:25:18Z", "comments": 0, "user": "tengyifei" }, { "repo": "pytorch/xla", "number": 8792, "title": "Generating 
stablehlo.composite and running it through PJRT", "body": "## \u2753 Questions and Help\n\nFollowing the example from the [docs](https://pytorch.org/xla/release/r2.6/features/stablehlo.html#preserving-high-level-pytorch-operations-in-stablehlo-by-generating-stablehlo-composite), I tried to use `StableHLOCompositeBuilder` to generate a `stablehlo.composite` op with the difference that I want to actually run it through PJRT instead of exporting it.\nIs there a way of doing this currently or are there any future plans regarding it?\n\nThis is my example code: \n\n```python\n\nimport os\nos.environ['XLA_STABLEHLO_COMPILE'] = '1'\n\nimport torch\nimport torch.nn.functional as F\nfrom torch_xla import stablehlo\nfrom torch_xla.experimental.mark_pattern_utils import StableHLOCompositeBuilder\n\nclass M(torch.nn.Module):\n\n def __init__(self):\n super().__init__()\n self.q_proj = torch.nn.Linear(128, 128, bias=False)\n self.k_proj = torch.nn.Linear(128, 128, bias=False)\n self.v_proj = torch.nn.Linear(128, 128, bias=False)\n self.b = StableHLOCompositeBuilder(\"test.sdpa\", {\"scale\": 0.25, \"other_attr\": \"val\"})\n\n def forward(self, x):\n q = self.q_proj(x)\n k = self.k_proj(x)\n v = self.v_proj(x)\n q, k, v = self.b.mark_inputs(q, k, v)\n attn_out = F.scaled_dot_product_attention(q, k, v, scale=0.25)\n attn_out = self.b.mark_outputs(attn_out)\n attn_out = attn_out + x\n return attn_out\n\ndevice = \"xla\"\n\ninput_args = torch.randn((10, 8, 128)).to(device)\nmodel = M().to(device)\nout = model(input_args)\nprint(out)\n\n```\n\n```\nWARNING:root:Found CUDA without GPU_NUM_DEVICES. Defaulting to PJRT_DEVICE=CUDA with GPU_NUM_DEVICES=1\n\nloc(\"select.69\"): error: 'stablehlo.select' op using value defined outside the region\n\n...\n\nRuntimeError: torch_xla/csrc/runtime/stablehlo_helper.cc:109 : Check failed: status.ok()\n*** Begin stack trace ***\n\ttsl::CurrentStackTrace()\n\ttorch_xla::ConvertHloToStableHlo(xla::HloModuleProto const*, mlir::ModuleOp*)\n\ttorch_xla::runtime::PjRtComputationClient::Compile(std::vector<torch_xla::runtime::ComputationClient::CompileInstance, std::allocator<torch_xla::runtime::ComputationClient::CompileInstance> >)\n\ttorch_xla::XLAGraphExecutor::Compile(std::vector<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> >, std::allocator<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> > > >&, absl::lts_20230802::Span<std::string const>, torch::lazy::LazyGraphExecutor::SyncTensorCollection const&, torch::lazy::LazyGraphExecutor::PostOrderData*, std::vector<torch::lazy::Value, std::allocator<torch::lazy::Value> > const&)\n\ttorch_xla::XLAGraphExecutor::SyncTensorsGraphInternal(std::vector<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> >, std::allocator<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> > > >*, absl::lts_20230802::Span<std::string const>, torch::lazy::LazyGraphExecutor::SyncTensorsConfig const&, bool)\n\ttorch_xla::XLAGraphExecutor::SyncTensorsGraph(std::vector<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> >, std::allocator<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> > > >*, absl::lts_20230802::Span<std::string const>, bool, bool, bool)\n\n...\n\n*** End stack trace ***\nMHLO -> 
StableHLO conversion failed.\nStableHLO Module from MHLO -> StableHLO conversion is not leagal.Please open a github issue to PyTorch/XLA.\n\n```\n\nI used torch-xla 2.5.1 for the example above but I get similar error with 2.6\n```\ntorch 2.5.1\ntorch-xla 2.5.1\n```\n", "url": "https://github.com/pytorch/xla/issues/8792", "state": "open", "labels": [ "bug", "stablehlo" ], "created_at": "2025-03-05T10:45:12Z", "updated_at": "2025-03-06T12:49:08Z", "comments": 1, "user": "sechkova" }, { "repo": "pytorch/torchtitan", "number": 930, "title": "`CheckpointManager.save` with async mode is vulnerable to race conditions", "body": "### Bug description\n\nBased on [[Distributed w/ TorchTitan] Optimizing Checkpointing Efficiency with PyTorch DCP](https://discuss.pytorch.org/t/distributed-w-torchtitan-optimizing-checkpointing-efficiency-with-pytorch-dcp/211250)'s Figure 3, when using async checkpointing via `CheckpointManager` with `AsyncMode.ASYNC`, I would think `CheckpointManager.save` blocks until the model is at least in \"staging\":\n\n![Figure 3 from linked article](https://github.com/user-attachments/assets/789d48f2-1804-435f-85e5-5cd08a17137d)\n\nHowever, running the below reproducer, we see that is not actually happening with `save`, the model is actually not ready and `load` fails.\n\nIs this the expected behavior?\n\nIt seems suboptimal to me, I would think the predictable behavior (given Figure 3) is:\n\n1. `save` with async mode: (1) blocks until the model is in \"staging\", then (2) \"persistence\" takes place asynchronously\n2. Since the model is in \"staging\" after `save`, we can immediately mutate the model\n3. Then if you call `load` before the \"persistence\" is finished, `load` will just have to wait (blocking) a bit longer\n\nDoes this make sense?\n\n<details><summary>Reproducer</summary>\n\nPlease forgive the `TrainState` being overly verbose, I just needed it for this reproducer\n\n```python\nimport tempfile\nfrom collections.abc import Iterator\nfrom dataclasses import dataclass, field\nfrom io import BytesIO\nfrom pathlib import Path\nfrom typing import Any\n\nimport pytest\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader, Dataset\nfrom torchtitan.components.checkpoint import AsyncMode, CheckpointManager\nfrom torchtitan.components.ft import FTManager\nfrom transformers import AutoModelForCausalLM\n\n\n@dataclass\nclass TrainState:\n\n step: int = 0\n global_avg_losses: list[float] = field(default_factory=list)\n global_max_losses: list[float] = field(default_factory=list)\n log_steps: list[int] = field(default_factory=list)\n\n def state_dict(self) -> dict[str, Any]:\n global_avg_losses_bytes = BytesIO()\n torch.save(self.global_avg_losses, global_avg_losses_bytes)\n global_max_losses_bytes = BytesIO()\n torch.save(self.global_max_losses, global_max_losses_bytes)\n log_steps_bytes = BytesIO()\n torch.save(self.log_steps, log_steps_bytes)\n return {\n \"step\": torch.tensor(self.step, dtype=torch.int32),\n \"global_avg_losses\": global_avg_losses_bytes,\n \"global_max_losses\": global_max_losses_bytes,\n \"log_steps\": log_steps_bytes,\n }\n\n def load_state_dict(self, state_dict) -> None:\n self.step = state_dict[\"step\"].item()\n state_dict[\"global_avg_losses\"].seek(0)\n self.global_avg_losses = torch.load(\n state_dict[\"global_avg_losses\"], weights_only=False\n )\n state_dict[\"global_max_losses\"].seek(0)\n self.global_max_losses = torch.load(\n state_dict[\"global_max_losses\"], weights_only=False\n )\n state_dict[\"log_steps\"].seek(0)\n 
self.log_steps = torch.load(state_dict[\"log_steps\"], weights_only=False)\n\n\nclass MockDataset(Dataset):\n def __len__(self):\n return 10\n\n def __getitem__(self, idx):\n return torch.randn(128)\n\n\n@dataclass\nclass MockCheckpointConfig:\n enable_checkpoint: bool = True\n folder: str = \"checkpoint\"\n interval: int = 1\n async_mode: str = AsyncMode.DISABLED\n keep_latest_k: int = 0\n model_weights_only: bool = False\n export_dtype: str = \"float32\"\n exclude_from_loading: list[str] = field(default_factory=list)\n load_step: int = -1\n\n\n@dataclass\nclass MockFTConfig:\n replica_id: int = 0\n enabled: bool = False\n\n\n@dataclass\nclass MockJobSubConfig:\n dump_folder: str = tempfile.gettempdir()\n\n\n@dataclass\nclass MockJobConfig:\n checkpoint: MockCheckpointConfig = field(default_factory=MockCheckpointConfig)\n fault_tolerance: MockFTConfig = field(default_factory=MockFTConfig)\n job: MockJobSubConfig = field(default_factory=MockJobSubConfig)\n\n\n@pytest.fixture(scope=\"session\", name=\"distributed_setup\")\ndef fixture_distributed_setup() -> Iterator[None]:\n if not torch.distributed.is_initialized():\n torch.distributed.init_process_group(\n backend=\"gloo\",\n # Use a different port as previous runs might have left it in TIME_WAIT state\n init_method=\"tcp://localhost:10998\",\n world_size=1,\n rank=0,\n )\n\n yield\n\n if torch.distributed.is_initialized():\n torch.distributed.destroy_process_group()\n\n\n@pytest.fixture(scope=\"session\", name=\"model\")\ndef fixture_model(\n distributed_setup, # noqa: ARG001\n) -> Iterator[tuple[nn.Module, float]]:\n model = AutoModelForCausalLM.from_pretrained(\n \"Qwen/Qwen2.5-1.5B-Instruct\",\n torch_dtype=torch.bfloat16,\n device_map=\"cpu\", # Use CPU for testing\n )\n\n # Return the original parameter value for verification\n yield model, model.get_input_embeddings().weight[0, 0].item()\n", "url": "https://github.com/pytorch/torchtitan/issues/930", "state": "closed", "labels": [ "question", "module: checkpoint" ], "created_at": "2025-03-05T02:06:09Z", "updated_at": "2025-03-20T18:30:28Z", "user": "jamesbraza" }, { "repo": "pytorch/torchx", "number": 1012, "title": "possible Improvement: Using shutdown() Before close() in `server.py`", "body": "### Description:\n\nWhile reviewing the get_routable_ip_to function in [torchx/apps/serve/serve.py](https://github.com/pytorch/torchx/blob/main/torchx/apps/serve/serve.py#L96), I noticed that the socket is directly closed using s.close(), without calling shutdown() beforehand.\n\n```python3\ndef get_routable_ip_to(addr: str) -> str:\n \"\"\"\n get_routable_ip_to opens a dummy connection to the target HTTP URL and\n returns the IP address used to connect to it.\n \"\"\"\n parsed = urlparse(addr)\n try:\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n s.connect((parsed.hostname, parsed.port or 80))\n return s.getsockname()[0]\n finally:\n s.close()\n\n```\n\n### Question\n\nWould there be any potential downsides or benefits to adding a shutdown(socket.SHUT_RDWR) call before closing the socket in the get_routable_ip_to function?\n\nPossible Benefits\n- Ensures that all pending data is properly discarded before closing, particularly if the socket is still in a half-open state.\n- Prevents potential issues with lingering resources and improves resource management.\n- Aligns with best practices for socket cleanup.\n\n### Reference\nThe Python socket documentation states:\n\n\"close() releases the resource associated with a connection but does not necessarily close the connection 
immediately. If you want to close the connection in a timely fashion, call shutdown() before close().\" [link](https://docs.python.org/3/library/socket.html#socket.socket.close)\n\nLooking forward to your thoughts!\n\nThanks!\n", "url": "https://github.com/meta-pytorch/torchx/issues/1012", "state": "open", "labels": [], "created_at": "2025-03-04T23:59:09Z", "updated_at": "2025-03-04T23:59:09Z", "comments": 0, "user": "allrob23" }, { "repo": "pytorch/xla", "number": 8786, "title": "How to show PJRT Call Stack", "body": "## \u2753 Questions and Help\nI wounder how to print PJRT Call Stack. Thanks", "url": "https://github.com/pytorch/xla/issues/8786", "state": "open", "labels": [ "question", "openxla" ], "created_at": "2025-03-04T09:32:43Z", "updated_at": "2025-03-07T20:23:32Z", "user": "yuanfz98" }, { "repo": "pytorch/xla", "number": 8784, "title": "how to save weights", "body": "## \u2753 Questions and Help\nHello, I using torchxa to convert model to stablehlo.\nhttps://pytorch.org/xla/master/features/stablehlo.html#torch-export-to-stablehlo\nFollow this page, \nweights, stablehlo = tx.export.exported_program_to_stablehlo(exported)\nprint(stablehlo.mlir_module())\nCan store weights and/or stablehlo object however you like\nBut how to store weights, I don't know. I found weights is a list.\nCould you help me? Thank you!\n\nAnother question, we can save data and functions directory by using torch_xla\uff0c how can I save functions by using torchax?", "url": "https://github.com/pytorch/xla/issues/8784", "state": "closed", "labels": [ "question" ], "created_at": "2025-03-04T06:05:29Z", "updated_at": "2025-03-29T08:35:03Z", "user": "raninbowlalala" }, { "repo": "pytorch/examples", "number": 1319, "title": "Cuda memory usage does not decrease when increasing the number of cuda cards (fsdp_tp_example.py).", "body": "According to the implementation of the source code, I did several experiments to study the script running time and cuda memory occupancy.\n\n- exp1: nproc_per_node=4, nnodes=1 => cuda=2161~2411MB, runtime=63.04s\n- exp2: nproc_per_node=8, nnodes=1 => cuda=2141~2395MB, runtime=70.52s\n- exp3: nproc_per_node=4, nnodes=2 => cuda=2141~2145MB, runtime=233.03s\n\nAccording to the results of the above three experiments, we find that with the increase of the number of graphics cards, the cuda memory usage did not decrease significantly, but the script running time increased.\n\nWhy?\n\nI am looking for the reasons, according to the algorithm principle (FSDP and TP), as the number of video cards increases, the cuda memory and running time should become smaller.\n\n# My Environment\n* Pytorch version: 3.11.7\n* Operating System and version: Linux version 3.10.0-1160.114.2.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) )\n* Installed using source? [yes/no]: yes\n* Are you planning to deploy it using docker container? 
[yes/no]: no\n* Is it a CPU or GPU environment?: GPU\n* Which example are you using: fsdp_tp_example.py\n* Link to code or data to repro [if any]: https://github.com/pytorch/examples/tree/main/distributed/tensor_parallelism", "url": "https://github.com/pytorch/examples/issues/1319", "state": "open", "labels": [], "created_at": "2025-03-04T04:04:35Z", "updated_at": "2025-03-04T04:59:47Z", "comments": 0, "user": "YangHui90" }, { "repo": "pytorch/serve", "number": 3396, "title": "Why is TorchServe No Longer Actively Maintained?", "body": " Hello, I noticed that the TorchServe GitHub page has been marked as 'Limited Maintenance,' indicating that the project is no longer actively maintained. Could you share the reasons behind this decision? Is it related to the development direction of the PyTorch ecosystem? Additionally, are there any recommended alternative tools or solutions for deploying PyTorch models? \n Thank you for your response!", "url": "https://github.com/pytorch/serve/issues/3396", "state": "open", "labels": [], "created_at": "2025-03-03T02:16:01Z", "updated_at": "2025-04-09T09:29:25Z", "comments": 11, "user": "ily666666" }, { "repo": "pytorch/ao", "number": 1805, "title": "What kind of layers are optimized by torchao on a RTX 4090?", "body": "I am trying to quantize a model and I am running this on a 4090. Since many of the available quantization benchmarks are done on higher gpus, I am trying to establish a baseline perfromance gain I can expect from quantization. \n\nI tried the tutorial at [torchao_demo](https://github.com/ethanshenley/PyTorch-Conference-Recipes/blob/main/torchao_demo.ipynb) on a gpu and it worked great. My model has similar kind of transformer layers with q, k, v projections but I am not able to see the same kind of performance with a large chunk of `aten::_copy()` operations in profile log. \n\nTo debug, I wanted to benchmark on a single linear layer as the majority of modified layers seem to be of this type. But I am not able to see any performance gain in this experiment of mine. I would appreciate if I can get more context into the specific layers that gets optimized by `torchao`. 
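A minimal sanity-check sketch, separate from the benchmark script below and assuming the same torchao `quantize_`/`int8_weight_only` API it uses: by default `quantize_` only rewrites the weights of `nn.Linear` modules, and as far as I understand the published weight-only speedups are measured with `torch.compile` on top, so it is worth confirming both that the weight was actually swapped and that the compiled path is being benchmarked.

```python
# Sanity-check sketch (assumes torchao >= 0.8 with quantize_/int8_weight_only):
# 1) confirm the Linear weight was actually swapped,
# 2) benchmark the compiled module, since weight-only speedups are typically
#    reported with torch.compile rather than eager execution.
import torch
import torch.nn as nn
from torchao.quantization import quantize_, int8_weight_only

layer = nn.Linear(4096, 4096, bias=False).half().cuda()
quantize_(layer, int8_weight_only())   # by default only nn.Linear weights are rewritten
print(layer.weight)                    # repr should show a torchao quantized tensor, not plain fp16

compiled = torch.compile(layer)
x = torch.randn(16, 4096, dtype=torch.float16, device="cuda")
with torch.no_grad():
    for _ in range(3):                 # warm-up / compilation before timing
        compiled(x)
    torch.cuda.synchronize()
```

If the weight repr is still a plain fp16 tensor, the module filter did not match; if it is quantized but only run in eager mode, the missing `torch.compile` step would plausibly explain the flat numbers in the script below.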
\n\n\n```\n'''\n https://github.com/ethanshenley/PyTorch-Conference-Recipes/blob/main/torchao_demo.ipynb\n'''\nimport gc\nimport psutil\nimport torch\nimport torch.nn as nn\nimport time\n\nfrom torchao.quantization import quantize_, int8_weight_only,float8_weight_only\n\n\ndevice = \"cuda:0\"\ndef get_memory_usage():\n return psutil.Process().memory_info().rss / 1024 / 1024 # in MB\n\ndef run_inference(model, inputs, num_runs=10):\n start_time = time.time()\n for i in range(num_runs):\n with torch.no_grad():\n outputs = model(inputs[i].squeeze())\n torch.cuda.synchronize(device)\n end_time = time.time()\n return (end_time - start_time) / num_runs\n\n# Load model and tokenizer\nbsz = 16\nn_runs = 100\nfor sz in range(1024, 20480, 1024):\n print('====================================================')\n print(f\"Running with linear layer of size {sz}...\")\n model = nn.Linear(sz, sz).to(device)\n inputs = torch.randn(n_runs, bsz, sz).to(device)\n\n print(\"\\nRunning baseline model...\")\n baseline_memory = get_memory_usage()\n baseline_time = run_inference(model, inputs, n_runs)\n print(f\"Baseline - Time: {baseline_time:.4f}s, Memory: {baseline_memory:.2f}MB\")\n\n\n print(\"\\nRunning int8 weight-only quantized model...\")\n model_int8 = nn.Linear(sz, sz).to(device)\n quantize_(model_int8, int8_weight_only())\n int8_memory = get_memory_usage()\n int8_time = run_inference(model_int8, inputs, n_runs)\n print(f\"Int8 Weight-Only - Time: {int8_time:.4f}s, Memory: {int8_memory:.2f}MB\")\n\n print(\"\\nRunning fp8 weight-only quantized model...\")\n model_fp8 = nn.Linear(sz, sz).to(device)\n quantize_(model_fp8, float8_weight_only()) \n fp8_memory = get_memory_usage()\n fp8_time = run_inference(model, inputs, n_runs)\n print(f\"fp8 Weight-Only - Time: {fp8_time:.4f}s, Memory: {fp8_memory:.2f}MB\")\n\n\n print(\"\\nPerformance Improvements:\")\n print(f\"Int8 weight-only speedup: {baseline_time / int8_time:.2f}x\")\n print(f\"Int8 weight-only memory reduction: {baseline_memory / int8_memory:.2f}x\")\n print(f\"fp8 weight-only speedup: {baseline_time / fp8_time:.2f}x\")\n print(f\"fp8 weight-only memory reduction: {baseline_memory / fp8_memory:.2f}x\")\n\n del model, model_int8, model_fp8, inputs\n gc.collect()\n torch.cuda.empty_cache()\n torch.cuda.synchronize(device)\n```\n", "url": "https://github.com/pytorch/ao/issues/1805", "state": "open", "labels": [ "question", "performance", "triaged" ], "created_at": "2025-03-01T00:36:14Z", "updated_at": "2025-05-01T18:36:43Z", "user": "naiveen" }, { "repo": "pytorch/xla", "number": 8776, "title": "Standardize `AllClose` calls from test_aten_xla_tensor tests", "body": "Standardize `AllClose` calls from `test/cpp/test_aten_xla_tensor_*.cpp` tests to be with the same standards.", "url": "https://github.com/pytorch/xla/issues/8776", "state": "open", "labels": [ "enhancement", "documentation" ], "created_at": "2025-03-01T00:14:19Z", "updated_at": "2025-03-05T20:24:41Z", "comments": 0, "user": "pgmoka" }, { "repo": "pytorch/xla", "number": 8774, "title": "The \"Pytorch/XLA overview\" is very long, goes into advanced topics, and is overall intimidating for new users.", "body": "## \ud83d\udcda Documentation\n\nThe \"Pytorch/XLA overview\" includes many advanced topics that go beyond an \"overview\", including how to specifically convert Stable Diffusion to run on TPUs (which is more of a Guide) and how to profile (which is more of a Tutorial). 
The result is an intimidating introduction for potential users of PyTorch/XLA.\n\nI'd suggest we break the SD section into a stand-alone guide. And the profiling section into a standalone tutorial, one with a simplified example that has a successful outcome (the current section ends with a \"we found the problems but there's nothing we can do\"). \n\nThe remaining copy can be redrafted into an \"intro\", so that users can hear about some of the benefits of PyTorch/XLA and as a result get encouraged to continue reading and even trying out the platform.\n\n\n", "url": "https://github.com/pytorch/xla/issues/8774", "state": "open", "labels": [ "enhancement", "documentation" ], "created_at": "2025-02-28T20:47:25Z", "updated_at": "2025-06-03T17:34:09Z", "comments": 2, "user": "yaoshiang" }, { "repo": "pytorch/xla", "number": 8773, "title": "Document the virtual device mesh", "body": "## \ud83d\udcda Documentation\n\nWe need to explain what is a \"mesh\". The current documentation in https://pytorch.org/xla/master/perf/spmd_basic.html#mesh doesn't explain it very well. For example, it doesn't say what does specifying `device_ids is almost always np.array(range(num_devices)).` do.", "url": "https://github.com/pytorch/xla/issues/8773", "state": "closed", "labels": [ "enhancement", "documentation" ], "created_at": "2025-02-28T19:16:32Z", "updated_at": "2025-03-16T23:33:32Z", "comments": 1, "user": "tengyifei" }, { "repo": "pytorch/xla", "number": 8772, "title": "Paramatize test_aten_xla_tensor tests", "body": "## \ud83d\ude80 Feature\nParamatize test_aten_xla_tensor tests. Inspired by https://github.com/pytorch/xla/pull/8734#discussion_r1968768218.\n\nExample of a test_aten_xla_tensor tests: [test_aten_xla_tensor_1](https://github.com/pytorch/xla/blob/2675e6892c6f955fc2baf88d85dfdfa72062273c/test/cpp/test_aten_xla_tensor_1.cpp)\n\n## Motivation\n\nDecrease and simplify the amount of code we have for testing while increasing readability. Right now test_aten_xla_tensor tests are split into 6 distinct files. Each with over 1000 lines each. 2 with over 5000 lines. This makes tests hard to read, and implementing new tests.\n\nParamatization will hopefully:\n1) Significantly decrease the number of lines on the test\n2) Significantly increase readability\n3) Increase speed for developing tests\n\n## Pitch\n\nCollapse tests that are simililar into the same Parameterized test.\n\n## Alternatives\n\nThere are other paramitization methods for C++ that are less clean than INSTANTIATE_TEST_SUITE_P. We could seek these if they are blockers\n\n## Additional context\n\nWe should utilize [INSTANTIATE_TEST_SUITE_P](https://github.com/google/googletest/blob/main/docs/advanced.md#how-to-write-value-parameterized-tests)\n", "url": "https://github.com/pytorch/xla/issues/8772", "state": "open", "labels": [ "enhancement", "usability", "testing" ], "created_at": "2025-02-28T18:19:20Z", "updated_at": "2025-03-06T03:06:31Z", "comments": 2, "user": "pgmoka" }, { "repo": "pytorch/pytorch", "number": 148196, "title": "[inductor][triton] Decide how to deprecate \"old triton versions\"", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nRight now we have a mess of at least 3 \"versions\" of Triton - i.e. commit ranges that we are compatible with.\n\nThis is beneficial for a few reasons:\n* Ability to bisect old versions of Triton\n* Compatibility with users who have different (i.e. 
old) versions of Triton installed - also fbcode/oss mismatches, \n* Possibly other Triton forks for different hardware, which may be based off of old versions of Triton\n\nBut it has some downsides - mainly messy code trying to handle the various versions of Triton. Also, we don't test the old versions, so there's nothing ensuring that these old code paths are actually still correct. We should probably decide on a policy or a way to determine when we can clean up handling for an old version of Triton.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov", "url": "https://github.com/pytorch/pytorch/issues/148196", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: inductor" ], "created_at": "2025-02-28T17:18:12Z", "updated_at": "2025-03-04T15:39:46Z", "user": "davidberard98" }, { "repo": "pytorch/torchtitan", "number": 903, "title": "[Possible PR discuss] Will a PR of training HF model be welcomed?", "body": "Hi! We are in the process of developing a novel training framework for Reinforcement Learning (RL) following TorchTitan. Recently, we've developed a feature to support direct training from Hugging Face (HF) models and the loading safetensors in online sharded fashion. This may substantially cuts down the cost of adapting a new model. All you have to do is implement the parallelism applying function.\nGiven this, I wonder whether a PR with the relevant code and a training example for training Hugging Face's Llama model is welcomed. I think this addition will be of great benefit to many in the community.\nBy the way, during my testing, I found that the HF Llama model demonstrates competitive TPS when compared to the model implemented in TorchTitan.", "url": "https://github.com/pytorch/torchtitan/issues/903", "state": "open", "labels": [ "huggingface integration", "community help wanted" ], "created_at": "2025-02-28T03:13:40Z", "updated_at": "2025-03-04T08:09:14Z", "comments": 7, "user": "junjzhang" }, { "repo": "pytorch/torchtitan", "number": 902, "title": "Question about triton in deepseek implementtion", "body": "I noticed that some adaptations related to DeepSeek have already been merged. I would like to understand why Triton is being used for implementation. In certain scenarios, such as on ARM architecture or other privateuse1 backends, Triton is not yet fully supported. Have you considered making the use of Triton an optional configuration? @kwen2501 ", "url": "https://github.com/pytorch/torchtitan/issues/902", "state": "closed", "labels": [ "question" ], "created_at": "2025-02-28T02:55:48Z", "updated_at": "2025-08-21T03:13:51Z", "user": "zqwenn" }, { "repo": "pytorch/xla", "number": 8765, "title": "Settle on a consistent logging methodology and document it", "body": "It would be useful for PyTorchXLA to provide easy to use debugging logs. 
To do so, we need to:\n1) Settle on specific logging methodology\n2) Document it for further use\n3) Document how to activate these logs", "url": "https://github.com/pytorch/xla/issues/8765", "state": "open", "labels": [ "enhancement", "usability", "documentation" ], "created_at": "2025-02-27T19:28:20Z", "updated_at": "2025-03-05T20:19:25Z", "comments": 0, "user": "pgmoka" }, { "repo": "pytorch/xla", "number": 8764, "title": "\"Too many open files\" error documenting for multi-processing", "body": "In multiprocessing cases, we can get a \"Too many open files\" error from too many processes opening at the same time. This can be confusing as this is a common error for file opening. We should add more information to the error to make this issue easier to track.", "url": "https://github.com/pytorch/xla/issues/8764", "state": "open", "labels": [ "enhancement", "usability", "documentation" ], "created_at": "2025-02-27T19:08:27Z", "updated_at": "2025-03-05T20:19:12Z", "comments": 0, "user": "pgmoka" }, { "repo": "pytorch/xla", "number": 8763, "title": "Improve Logging methodology and documentation", "body": "Standardized logging method which can be leverage with debugging flags.\n\nAfterwards, document how to get these logs in our documentation.", "url": "https://github.com/pytorch/xla/issues/8763", "state": "open", "labels": [ "enhancement", "usability", "documentation" ], "created_at": "2025-02-27T18:57:29Z", "updated_at": "2025-03-11T16:48:58Z", "comments": 0, "user": "pgmoka" }, { "repo": "pytorch/xla", "number": 8762, "title": "Centralize API guide docs", "body": "Centralize API guide docs. Right now for users interested in our APIs, there are a couple places they might go to:\n- https://github.com/pytorch/xla/blob/6f423d0bb284190cf1b12d8a943a334e57b4df28/docs/source/learn/api-guide.rst\n- https://pytorch.org/xla/release/r2.6/learn/api-guide.html\n- https://github.com/pytorch/xla/blob/6f423d0bb284190cf1b12d8a943a334e57b4df28/API_GUIDE.md?plain=1#L166", "url": "https://github.com/pytorch/xla/issues/8762", "state": "open", "labels": [ "enhancement", "documentation" ], "created_at": "2025-02-27T18:54:49Z", "updated_at": "2025-03-05T20:18:36Z", "comments": 0, "user": "pgmoka" }, { "repo": "pytorch/xla", "number": 8761, "title": "Create full tutorial example for transitioning Pytorch to Pytorch XLA", "body": "It would be useful for new users to have a basic example showing the differences between the two.", "url": "https://github.com/pytorch/xla/issues/8761", "state": "open", "labels": [ "enhancement", "documentation" ], "created_at": "2025-02-27T18:53:29Z", "updated_at": "2025-03-28T17:54:03Z", "comments": 3, "user": "pgmoka" }, { "repo": "pytorch/xla", "number": 8760, "title": "Add profiling documentation", "body": "[re: issues/8743](]https://github.com/pytorch/xla/issues/8743#issuecomment-2686428336)\r\n\r\nThis issue has a request for adding documentation on the `start_trace` and `stop_trace` API, but we currently don't have any documentation around profiling. Who can I work with to get some profiling documentation written? Thanks!\r\n ", "url": "https://github.com/pytorch/xla/issues/8760", "state": "open", "labels": [ "enhancement", "documentation" ], "created_at": "2025-02-27T17:48:34Z", "updated_at": "2025-03-12T00:08:59Z", "comments": 3, "user": "mikegre-google" }, { "repo": "pytorch/ao", "number": 1790, "title": "An error was encountered setting torch._dynamo.decorators.mark_unbacked", "body": "\nHello, I want batch set up to be dynamic and I use torch._dynamo.mark_dynamic to set it. 
But I found that recompile is triggered when batch is 1 and 2. Then I used torch._dynamo.decorators.mark_unbacked but it quantizes incorrectly. Can you look at this problem?\n\nMy environment:\ntorch: 2.5.0\ntorchao: 0.8.0\n\nThis is the minimum repetition code\n```python\nimport torch\n\n\nfrom torchao.quantization.quant_api import (\n quantize_,\n int8_dynamic_activation_int8_weight\n)\ntorch._logging.set_logs(recompiles=True, recompiles_verbose = True)\n\nclass MyModel(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.linear = torch.nn.Linear(128, 256)\n\n def forward(self, x):\n return self.linear(x)\n\nmodel = MyModel().cuda().eval()\nmodel = torch.compile(model, fullgraph=True)\n\n# quant\nquantize_(model, int8_dynamic_activation_int8_weight())\n\nexample_input = torch.randn(2, 64, 128).cuda()\ntorch._dynamo.decorators.mark_unbacked(example_input, 0)\ntorch._dynamo.mark_dynamic(example_input, 0)\nmodel(example_input)\n\nx1 = torch.randn(1, 64, 128).cuda()\nx2 = torch.randn(2, 64, 128).cuda()\n\nprint(\"input shape: \", x1.shape)\nmodel(x1)\nprint(\"input shape: \", x2.shape)\nmodel(x2)\n``` \n\nThis is the error log\n<details>\nW0227 10:58:38.277000 1279033 torch/fx/experimental/symbolic_shapes.py:5124] [0/0] failed during evaluate_expr(Ne(u0, 1), hint=None, size_oblivious=False, forcing_spec=False\nE0227 10:58:38.277000 1279033 torch/fx/experimental/recording.py:298] [0/0] failed while running evaluate_expr(*(Ne(u0, 1), None), **{'fx_node': False})\nTraceback (most recent call last):\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/_dynamo/utils.py\", line 2132, in run_node\n return node.target(*args, **kwargs)\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/utils.py\", line 433, in _dispatch__torch_function__\n return cls._ATEN_OP_OR_TORCH_FN_TABLE[func](func, types, args, kwargs)\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/utils.py\", line 412, in wrapper\n return func(f, types, args, kwargs)\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/linear_activation_quantized_tensor.py\", line 126, in _\n return weight_tensor._quantized_linear_op(input_tensor, weight_tensor, bias)\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/linear_activation_quantized_tensor.py\", line 83, in _quantized_linear_op\n quantized_tensor = input_quant_func(input_tensor, **quant_kwargs)\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/quant_api.py\", line 800, in _int8_symm_per_token_reduced_range_quant\n return to_affine_quantized_intx(\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/dtypes/affine_quantized_tensor.py\", line 250, in from_hp_to_intx\n scale, zero_point = choose_qparams_affine(\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\n return func(*args, **kwargs)\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/quant_primitives.py\", line 738, in choose_qparams_affine\n return _choose_qparams_affine(\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/_ops.py\", line 1116, in __call__\n return self._op(*args, **(kwargs or {}))\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/quant_primitives.py\", line 840, in 
_choose_qparams_affine\n shape_for_reduction, reduction_dims = _get_reduction_params(\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/quant_primitives.py\", line 229, in _get_reduction_params\n if block_size[i] != input_size[i] and block_size[i] > 1:\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/__init__.py\", line 680, in __bool__\n return self.node.bool_()\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py\", line 511, in bool_\n return self.guard_bool(\"\", 0)\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py\", line 449, in guard_bool\n r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/fx/experimental/recording.py\", line 262, in wrapper\n return retlog(fn(*args, **kwargs))\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py\", line 5122, in evaluate_expr\n return self._evaluate_expr(orig_expr, hint, fx_node, size_oblivious, forcing_spec=forcing_spec)\n File \"/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py\", line 5238, in _evaluate_expr\n raise self._make_data_dependen", "url": "https://github.com/pytorch/ao/issues/1790", "state": "open", "labels": [ "question", "quantize_", "triaged" ], "created_at": "2025-02-27T03:10:43Z", "updated_at": "2025-03-06T19:07:34Z", "user": "songh11" }, { "repo": "pytorch/torchtitan", "number": 897, "title": "Moving train.py to torchtitan submodule makes run_train.sh failed with \"Can not find module\"", "body": "### Bug description\n\nHi team, \n\nI noticed a recent change which moved train.py from the top level fold in the project to torchtitan sub folder. This caused the failure of run_train.sh with following error msg.\n\nIt cased the following error with import message \"from torchtitan.components.checkpoint import CheckpointManager, TrainState\" at the beginning of train.py. This is because the train.py can not find a submodule named \"torchtitan\" cause train.py is already part of torchtitan.\n\nI fixed by some hacky way but looking forward to more suggestions on this\n\n<img width=\"1208\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/3a4358ad-e5a0-4fae-8ebe-1dfb3589da44\" />\n\nThank you!\n\n```\n(/home/jianiw/local/jiani/pytorch-env) [jianiw@devvm7508]~/local/jiani/torchtitan% LOG_RANK=0,1 NGPU=4 ./run_train.sh\n+ NGPU=4\n+ LOG_RANK=0,1\n+ CONFIG_FILE=./torchtitan/models/llama/train_configs/debug_model.toml\n+ overrides=\n+ '[' 0 -ne 0 ']'\n+ PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True\n+ torchrun --nproc_per_node=4 --rdzv_backend c10d --rdzv_endpoint=localhost:0 --local-ranks-filter 0,1 --role rank --tee 3 torchtitan/train.py --job.config_file ./torchtitan/models/llama/train_configs/debug_model.toml\nW0226 15:57:42.491000 2461839 torch/distributed/run.py:763] \nW0226 15:57:42.491000 2461839 torch/distributed/run.py:763] *****************************************\nW0226 15:57:42.491000 2461839 torch/distributed/run.py:763] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
\nW0226 15:57:42.491000 2461839 torch/distributed/run.py:763] *****************************************\n[rank0]:Traceback (most recent call last):\n[rank0]: File \"/data/users/jianiw/jiani/torchtitan/torchtitan/train.py\", line 14, in <module>\n[rank0]: from torchtitan.components.checkpoint import CheckpointManager, TrainState\n[rank0]:ModuleNotFoundError: No module named 'torchtitan'\n[rank1]:Traceback (most recent call last):\n[rank1]: File \"/data/users/jianiw/jiani/torchtitan/torchtitan/train.py\", line 14, in <module>\n[rank1]: from torchtitan.components.checkpoint import CheckpointManager, TrainState\n[rank1]:ModuleNotFoundError: No module named 'torchtitan'\nE0226 15:57:44.126000 2461839 torch/distributed/elastic/multiprocessing/api.py:870] failed (exitcode: 1) local_rank: 0 (pid: 2462029) of binary: /home/jianiw/local/jiani/pytorch-env/bin/python\nTraceback (most recent call last):\n File \"/home/jianiw/local/jiani/pytorch-env/bin/torchrun\", line 33, in <module>\n sys.exit(load_entry_point('torch', 'console_scripts', 'torchrun')())\n File \"/data/users/jianiw/jiani/pytorch/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 354, in wrapper\n return f(*args, **kwargs)\n File \"/data/users/jianiw/jiani/pytorch/torch/distributed/run.py\", line 889, in main\n run(args)\n File \"/data/users/jianiw/jiani/pytorch/torch/distributed/run.py\", line 880, in run\n elastic_launch(\n File \"/data/users/jianiw/jiani/pytorch/torch/distributed/launcher/api.py\", line 139, in __call__\n return launch_agent(self._config, self._entrypoint, list(args))\n File \"/data/users/jianiw/jiani/pytorch/torch/distributed/launcher/api.py\", line 270, in launch_agent\n raise ChildFailedError(\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError: \n============================================================\ntorchtitan/train.py FAILED\n------------------------------------------------------------\nFailures:\n[1]:\n time : 2025-02-26_15:57:43\n host : devvm7508.cco0.facebook.com\n rank : 1 (local_rank: 1)\n exitcode : 1 (pid: 2462030)\n error_file: <N/A>\n traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\n[2]:\n time : 2025-02-26_15:57:43\n host : devvm7508.cco0.facebook.com\n rank : 2 (local_rank: 2)\n exitcode : 1 (pid: 2462032)\n error_file: <N/A>\n traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\n[3]:\n time : 2025-02-26_15:57:43\n host : devvm7508.cco0.facebook.com\n rank : 3 (local_rank: 3)\n exitcode : 1 (pid: 2462033)\n error_file: <N/A>\n traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\n------------------------------------------------------------\nRoot Cause (first observed failure):\n[0]:\n time : 2025-02-26_15:57:43\n host : devvm7508.cco0.facebook.com\n rank : 0 (local_rank: 0)\n exitcode : 1 (pid: 2462029)\n error_file: <N/A>\n traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\n```\n\n### Versions\n\nCurrent main branch after #894 merged (I don't t", "url": "https://github.com/pytorch/torchtitan/issues/897", "state": "closed", "labels": [], "created_at": "2025-02-27T00:11:02Z", "updated_at": "2025-03-23T01:42:01Z", "comments": 3, "user": "jianiw25" }, { "repo": "pytorch/xla", "number": 8757, "title": "Document on how to profile with torch_xla", "body": "## \ud83d\udcda Documentation\n\nI found we don't have a doc/guide on how to profile with torch_xla. 
We should add this because getting profile is essential for performance analysis.\n", "url": "https://github.com/pytorch/xla/issues/8757", "state": "closed", "labels": [ "enhancement", "documentation" ], "created_at": "2025-02-26T23:27:20Z", "updated_at": "2025-12-02T00:18:03Z", "user": "lsy323" }, { "repo": "pytorch/serve", "number": 3394, "title": "Rename open_inference_grpc.proto package name", "body": "Hi Team,\nStarting from 0.10.0, torchServe introduced [open_inference_grpc.proto](https://github.com/pytorch/serve/blob/v0.10.0/frontend/server/src/main/resources/proto/open_inference_grpc.proto) to allow Pytorch GRPC APIs to follow Kserve open inference V2 protocol. However, I am wondering why the [package name](https://github.com/pytorch/serve/blob/v0.10.0/frontend/server/src/main/resources/proto/open_inference_grpc.proto#L18) used for the proto is different from what's used in [Kserve](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/grpc_predict_v2.proto#L16). Having a different package name would require Pytorch model and non-Pytorch model to use different proto definitions even though they both follow the open inference protocol. I am wondering if it is possible to make the open_inference_grpc.proto within the same package as what is defined in Kserve grpc_predict_v2.proto?\nThank you.", "url": "https://github.com/pytorch/serve/issues/3394", "state": "open", "labels": [], "created_at": "2025-02-26T21:49:57Z", "updated_at": "2025-02-26T21:50:25Z", "comments": 0, "user": "jwang20250226" }, { "repo": "pytorch/FBGEMM", "number": 3737, "title": "How to install this on Windows x64", "body": "I can't pip install FBGEMM, and I've looked through [https://download.pytorch.org/whl/fbgemm-gpu/](https://download.pytorch.org/whl/fbgemm-gpu/), seems like all whl are support for linux (with 'manylinux' in its name) \n\nI just want to use torchrec on Windows, I wonder How to download FBGEMM. \n\nthank you", "url": "https://github.com/pytorch/FBGEMM/issues/3737", "state": "open", "labels": [], "created_at": "2025-02-26T05:58:05Z", "updated_at": "2025-05-09T00:50:07Z", "user": "Elllllllvin" }, { "repo": "pytorch/data", "number": 1456, "title": "Discussion: DCP APIs and broader contracts for rescalability", "body": "After much discussion, it was decided that the best approach to implementing rescalability would be to implement rescaling in the base file reader, in order to maintain low overhead and avoid proliferation of logical shard objects (see #1372 , #1455, [torchtitan PR](https://github.com/pytorch/torchtitan/pull/376)). However, this approach necessitates that all nodes above the base node become rescaling-aware: we must decide what behaviors to support and how to make specifying these behaviors friendly to the user. \n\nI have identified four behaviors that I believe a fully capable rescalable pipeline should support, with some correspondence to the existing placement behaviors of DTensors:\n\n1. Drop on rescale. Certain values, such as scalars and RNG states, cannot be repartitioned and it makes no sense to try. These values should be dropped when rescaling but kept otherwise.\n2. Sharded save, sharded load. Large buffers (for example, a local shuffling buffer) can be pooled into a single DTensor, which is then resharded over a new number of workers when rescaling. 
DCP is largely built around supporting this particular behavior, but note that we must now handle cases where the number of workers may not divide the length of the buffer evenly, and we also may not know the length of the buffer in advance.\n3. Replicated values. This encompasses any expensive metadata objects that we may want to construct (slowly) once, but load from checkpoint afterwards. These values would ideally be saved from rank 0 only, but loaded back to all workers. DCP supports this behavior for non-DTensor objects.\n4. Sharded save, global load. Any state that cannot be resharded simply via (2), such as logical shard state, which must first be accumulated/divided into global pools of visited vs unvisited shards. Local values are saved from each rank, but accumulated globally on load. DCP supports this behavior for non-DTensor objects, by assigning a unique rank-based key for all such objects and recompiling them manually on load. \n\nNote that while the above 4 behaviors raise some questions on DCP support, the larger question revolves around how we want to expose these options to users and/or incorporate them into existing Datasets or Nodes.", "url": "https://github.com/meta-pytorch/data/issues/1456", "state": "open", "labels": [], "created_at": "2025-02-25T23:14:45Z", "updated_at": "2025-04-21T13:03:30Z", "comments": 2, "user": "daviswer" }, { "repo": "pytorch/pytorch", "number": 147850, "title": "The issue where opt_output in fx_graph_runnable.py is inconsistent with the actual output when testing run_repro(acc=True)", "body": "### \ud83d\udc1b Describe the bug\n\nConclusion\n\u2714 Use .clone() before modifying tensors from expand(), view(), or as_strided().\n\u2714 Ensure tensors are .contiguous() before operations.\n\u2714 Debug with x.is_contiguous() to check memory layout.\n\nIf the issue persists, share a code snippet for further debugging! \ud83d\ude80\n\n### Versions\n\nConclusion\n\u2714 Use .clone() before modifying tensors from expand(), view(), or as_strided().\n\u2714 Ensure tensors are .contiguous() before operations.\n\u2714 Debug with x.is_contiguous() to check memory layout.\n\nIf the issue persists, share a code snippet for further debugging! \ud83d\ude80", "url": "https://github.com/pytorch/pytorch/issues/147850", "state": "closed", "labels": [], "created_at": "2025-02-25T12:23:49Z", "updated_at": "2025-03-03T16:56:35Z", "user": "MovieTrack" }, { "repo": "pytorch/serve", "number": 3393, "title": "map workers and GPUs, deviceIds not considered in ts_config", "body": "lt;dr: using my existing configuration shows no effect when using the \"deviceIds\" property.\n\nI am successfully hosting three diffeerent models on a server with two gpus.\nEach model can be run on a single gpu, but one is more demanding - so I'd like to control the distribution of workers per gpu.\n\nThe deviceIds property seems to be exactly what I'd need for that.\nIt is described [here](https://github.com/pytorch/serve/tree/master/model-archiver#config-file) for the archiver and [here](https://pytorch.org/serve/configuration.html) for either/and the archivers yaml or the model configuration. \nAnd seems to be implemented [here](https://github.com/pytorch/serve/blob/a9e218ae95fe7690c84b555d0fb9021322c9b049/frontend/archive/src/main/java/org/pytorch/serve/archive/model/ModelConfig.java#L81).\n\nHowever, using my existing configuration - which succsessfully controls the worker numbers and timeouts - shows no effect whatsoever when using the deviceIds or deviceType properties. 
Is this only implemented for the YAML file uppon archiving?\n\nIs there a way to set the deviceIds via the API?\n\nConfiguration excerpt:\n...\n \"defaultVersion\": true,\\\n \"marName\": \"model.mar\",\\\n \"deviceIds\": [1,],\\\n \"minWorkers\": 4,\\\n \"maxWorkers\": 4,\\\n \"batchSize\": 1,\\\n \"maxBatchDelay\": 50,\\\n \"responseTimeout\": 120\\\n...\n\n------------------------------------------------------------------------------------------\nEnvironment headers\n------------------------------------------------------------------------------------------\nTorchserve branch: \n\ntorchserve==0.12.0\ntorch-model-archiver==0.12.0\n\nPython version: 3.10 (64-bit runtime)\nPython executable: /opt/conda/bin/python\n\nVersions of relevant python libraries:\ncaptum==0.6.0\nnumpy==2.2.3\npillow==10.3.0\npsutil==5.9.8\nrequests==2.32.0\ntorch==2.4.0+cu121\ntorch-model-archiver==0.12.0\ntorch-workflow-archiver==0.2.15\ntorchaudio==2.4.0+cu121\ntorchelastic==0.2.2\ntorchserve==0.12.0\ntorchvision==0.19.0+cu121\nwheel==0.42.0\ntorch==2.4.0+cu121\n**Warning: torchtext not present ..\ntorchvision==0.19.0+cu121\ntorchaudio==2.4.0+cu121\n\nJava Version:\n\n\nOS: N/A\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: N/A\nCMake version: version 3.26.4\n\nIs CUDA available: Yes\nCUDA runtime version: 12.1\nNVIDIA GPU models and configuration: \nNVIDIA RTX 4000 Ada Generation\nNVIDIA RTX 4000 Ada Generation\nNvidia driver version: 565.77\nNvidia driver cuda version: 12.7\ncuDNN version: 9.1.0\n\n\nEnvironment:\nlibrary_path (LD_/DYLD_): /usr/local/nvidia/lib:/usr/local/nvidia/lib64", "url": "https://github.com/pytorch/serve/issues/3393", "state": "open", "labels": [], "created_at": "2025-02-25T12:23:11Z", "updated_at": "2025-02-26T14:37:27Z", "comments": 0, "user": "RuDevKu" }, { "repo": "pytorch/data", "number": 1452, "title": "Open for contribution on utility nodes like `Filter`, `Shuffler`, `Header`, `Cycler`?", "body": "Hi, do you think this kind of nodes would be in the scope of Torchdata? Then I'm down to open a PR to add them. with remaining and testing, for sure. 
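For context on how these would be used, a hedged usage sketch that composes the proposed nodes (defined in the snippet just below) on top of the existing `torchdata.nodes` pieces; `IterableWrapper` and `Loader` are assumed to be the current source adapter and driver names, and the exact exports may differ by release.

```python
# Usage sketch for the proposed Filter / Shuffler / Header / Cycler nodes.
# Assumes the class definitions below, plus IterableWrapper/Loader from torchdata.nodes.
from torchdata.nodes import IterableWrapper, Loader

source = IterableWrapper(range(100))            # any BaseNode-compatible source
node = Filter(source, lambda x: x % 2 == 0)     # keep even numbers only
node = Shuffler(node, buffer_size=16, seed=0)   # buffered shuffle, seeded for reproducibility
node = Header(node, n=10)                       # truncate to the first 10 shuffled items
node = Cycler(node)                             # then repeat the 10-item stream indefinitely

loader = Loader(node)
it = iter(loader)
print([next(it) for _ in range(25)])            # wraps around after every 10 items

state = loader.state_dict()                     # checkpoint / resume as with other nodes
loader.load_state_dict(state)
```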
\n\n```python\nimport logging\nimport random\nfrom collections import deque\nfrom typing import Any, Callable, Deque, Dict, Optional, TypeVar, Optional\nfrom torchdata.nodes import BaseNode\n\nlogger = logging.getLogger(__name__)\n\nX = TypeVar(\"X\")\nT = TypeVar(\"T\")\nU = TypeVar(\"U\")\n\n\nclass Filter(BaseNode[T]):\n \"\"\"Node that filters items from source node based on predicate function.\"\"\"\n\n SOURCE_KEY = \"source\"\n\n def __init__(self, source_node: BaseNode[T], filter_fn: Callable[[T], bool]):\n super().__init__()\n self.source = source_node\n self.filter_fn = filter_fn\n\n def reset(self, initial_state: Optional[Dict[str, Any]] = None):\n super().reset(initial_state)\n self.source.reset(initial_state.get(self.SOURCE_KEY) if initial_state else None)\n\n def next(self) -> T:\n while True:\n item = next(self.source)\n if self.filter_fn(item):\n return item\n\n def get_state(self) -> Dict[str, Any]:\n return {self.SOURCE_KEY: self.source.state_dict()}\n\n\nclass Shuffler(BaseNode[T]):\n \"\"\"Node that shuffles items from source node using a buffer.\"\"\"\n\n SOURCE_KEY = \"source\"\n\n def __init__(self, source_node: BaseNode[T], buffer_size: int, seed: Optional[int] = None):\n super().__init__()\n if buffer_size < 1:\n raise ValueError(\"Buffer size must be at least 1\")\n self.source = source_node\n self.buffer_size = buffer_size\n self.buffer: Deque[T] = deque()\n self.rng = random.Random(seed)\n self._initial_seed = seed\n\n def reset(self, initial_state: Optional[Dict[str, Any]] = None):\n super().reset(initial_state)\n self.buffer.clear()\n\n if initial_state is not None:\n self.source.reset(initial_state.get(self.SOURCE_KEY))\n self.rng.setstate(initial_state[\"rng_state\"])\n else:\n self.source.reset()\n if self._initial_seed is not None:\n self.rng = random.Random(self._initial_seed)\n\n def _fill_buffer(self) -> bool:\n \"\"\"Fill buffer with items from source. 
Returns True if any items were added.\"\"\"\n try:\n while len(self.buffer) < self.buffer_size:\n self.buffer.append(next(self.source))\n return True\n except StopIteration:\n return len(self.buffer) > 0\n\n def next(self) -> T:\n if not self.buffer and not self._fill_buffer():\n raise StopIteration\n\n # Randomly select and remove an item from the buffer\n idx = self.rng.randrange(len(self.buffer))\n item = self.buffer[idx]\n self.buffer[idx] = self.buffer[-1]\n self.buffer.pop()\n\n # Try to refill buffer\n self._fill_buffer()\n return item\n\n def get_state(self) -> Dict[str, Any]:\n return {self.SOURCE_KEY: self.source.state_dict(), \"rng_state\": self.rng.getstate()}\n\n\nclass Header(BaseNode[T]):\n \"\"\"Node that yields only the first N items from source node.\"\"\"\n\n SOURCE_KEY = \"source\"\n\n def __init__(self, source_node: BaseNode[T], n: int):\n super().__init__()\n if n < 0:\n raise ValueError(\"n must be non-negative\")\n self.source = source_node\n self.n = n\n self._count = 0\n\n def reset(self, initial_state: Optional[Dict[str, Any]] = None):\n super().reset(initial_state)\n self.source.reset(initial_state.get(self.SOURCE_KEY) if initial_state else None)\n if initial_state is not None:\n self._count = initial_state[\"count\"]\n else:\n self._count = 0\n\n def next(self) -> T:\n if self._count >= self.n:\n raise StopIteration\n\n item = next(self.source)\n self._count += 1\n return item\n\n def get_state(self) -> Dict[str, Any]:\n return {self.SOURCE_KEY: self.source.state_dict(), \"count\": self._count}\n\n\nclass Cycler(BaseNode[T]):\n \"\"\"Node that cycles through source node indefinitely.\"\"\"\n\n SOURCE_KEY = \"source\"\n\n def __init__(self, source_node: BaseNode[T]):\n super().__init__()\n self.source = source_node\n self._cycle_count: int = 0\n\n def reset(self, initial_state: Optional[Dict[str, Any]] = None):\n super().reset(initial_state)\n if initial_state is not None:\n self._cycle_count = initial_state[\"cycle_count\"]\n self.source.reset(initial_state.get(self.SOURCE_KEY))\n else:\n self._cycle_count = 0\n self.source.reset(None)\n\n def next(self) -> T:\n try:\n return next(self.source)\n except StopIteration:\n self._cycle_count += 1\n self.source.reset(None)\n return next(self.source)\n\n def get_state(self) -> Dict[str, Any]:\n return {self.SOURCE_KEY: self.source.state_dict(), \"cycle_count\": self._cycle_count", "url": "https://github.com/meta-pytorch/data/issues/1452", "state": "open", "labels": [], "created_at": "2025-02-25T03:36:59Z", "updated_at": "2025-02-25T05:08:09Z", "comments": 1, "user": "keunwoochoi" }, { "repo": "pytorch/torchtitan", "number": 885, "title": "Possible to integrate DeepEP?", "body": "ref: https://github.com/deepseek-ai/DeepEP", "url": "https://github.com/pytorch/torchtitan/issues/885", "state": "open", "labels": [], "created_at": "2025-02-25T03:24:56Z", "updated_at": "2026-01-05T17:13:54Z", "comments": 5, "user": "airlsyn" }, { "repo": "pytorch/xla", "number": 8740, "title": "Add single processing to Getting Started Instructions", "body": "In our initial README document, we currently only have instructions on multi-processing steps for getting started. 
We should add information to single processing.", "url": "https://github.com/pytorch/xla/issues/8740", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-02-25T01:15:38Z", "updated_at": "2025-03-27T17:30:35Z", "comments": 0, "user": "pgmoka" }, { "repo": "pytorch/torchtitan", "number": 883, "title": "[Evaluation] Minimal support for downstream tasks", "body": "Hello and thanks for the great work,\nFor now torchtitan only has an evaluation on train loss. Do you have in mind to provide a minimal support for a downstream task like for example a general knowledge score on MMLU?\nThe aim would be to provide the minimum necessary to accomplish a downstream task, a bit like the minimal example with a HuggingFace dataset (c4 in this case) while trying to keep the native pytorch spirit as much as possible. \nIf so, can I participate by initiating a PR?\n", "url": "https://github.com/pytorch/torchtitan/issues/883", "state": "closed", "labels": [ "enhancement", "high priority", "triage review" ], "created_at": "2025-02-24T16:07:57Z", "updated_at": "2025-07-10T12:30:00Z", "comments": 14, "user": "K-H-Ismail" }, { "repo": "pytorch/xla", "number": 8738, "title": "support more op in jaten.py", "body": "## \u2753 Questions and Help\nHi, I want to convert llama2-7b model, and I want to use jlibrary.register_jax_composite to composite some op.\nNow I need to composite below 2 ops: torch.nn.RMSNorm and transformers.models.llama.modeling_llama.LlamaRotaryEmbedding.\nDo you have plan to add above 2 ops in jaten.py?\n\n[xla](https://github.com/pytorch/xla/tree/master)/[torchax](https://github.com/pytorch/xla/tree/master/torchax)/[torchax](https://github.com/pytorch/xla/tree/master/torchax/torchax)/[ops](https://github.com/pytorch/xla/tree/master/torchax/torchax/ops)/jaten.py\n\nAnother question, after using jlibrary.register_jax_composite, I got call op in IR, do you have plan to use composite op replace call op? 
If have, is there an approximate time for completion?", "url": "https://github.com/pytorch/xla/issues/8738", "state": "closed", "labels": [ "question", "torchxla2" ], "created_at": "2025-02-24T06:10:50Z", "updated_at": "2025-03-04T06:06:09Z", "user": "raninbowlalala" }, { "repo": "pytorch/ao", "number": 1764, "title": "[QST] Tensor subclass serialization", "body": "Pardon the naive question, trying to understand how to implement a basic tensor subclass.\n\nThe problem I'm encountering is that the tensor subclass loses its attributes after calling torch.save on a state dict containing the subclass likely due to the use of `swap_tensors`.\n\nMinimal repro:\n```python\nfrom io import BytesIO\n\nimport torch\nfrom torch._ops import OpOverload\nfrom torchao.dtypes.nf4tensor import _INNER_TENSOR_NAMES_FOR_SHARDING, NF4Tensor, to_nf4\n\naten = torch.ops.aten\n\nclass SimpleTensor(torch.Tensor):\n @staticmethod\n def __new__(cls, inner_tensor, *args, **kwargs):\n \n kwargs[\"device\"] = inner_tensor.device\n kwargs[\"layout\"] = inner_tensor.layout\n kwargs[\"dtype\"] = inner_tensor.dtype\n kwargs[\"requires_grad\"] = inner_tensor.requires_grad\n print(f\"New SimpleTensor: {kwargs}\")\n return torch.Tensor._make_wrapper_subclass(cls, inner_tensor.shape, **kwargs) # type: ignore[attr-defined]\n\n def __init__(self, inner_tensor, *args, **kwargs):\n self.inner_tensor = inner_tensor\n\n def __repr__(self):\n return f\"SimpleTensor({self.inner_tensor.shape})\"\n\n def __tensor_flatten__(self):\n return [\"inner_tensor\"], None\n \n def __tensor_unflatten__(inner_tensors, metadata, outer_size, outer_stride):\n return SimpleTensor(inner_tensors[\"inner_tensor\"])\n \n @classmethod\n def __torch_function__(cls, func, types, args=(), kwargs=None):\n kwargs = {} if kwargs is None else kwargs\n try:\n print(f\"calling {func.__name__} with args: {[type(arg) for arg in args]} and kwargs: {kwargs}\")\n with torch._C.DisableTorchFunctionSubclass():\n return func(*args, **kwargs)\n except Exception as e:\n print(f\"ERR: subclass doesn't implement {func}\")\n raise e\n\n def __torch_dispatch__(self, func: OpOverload, types, args=(), kwargs=None):\n \n FUNCS = [aten.detach.default, aten.copy_.default]\n print(f\"dispatching {func._schema.name} {func._opname} {func._overloadname} with {len(args)} args: {[type(arg) for arg in args]} and kwargs: {kwargs}\")\n print(f\"Func in impelmented funcs: {func in FUNCS}\")\n if func is torch.ops.aten.detach.default:\n print(f\"returning {args[0]}\")\n return args[0]\n if func is aten.copy_.default:\n print(f\"copying {args[0]} to {args[1]}\")\n original = args[0]\n copy_in = args[1]\n original.inner_tensor.copy_(copy_in.inner_tensor)\n return\n\n return func(*args, **kwargs)\n\ntorch.serialization.add_safe_globals([SimpleTensor])\n\n###\n\ndtype = torch.bfloat16\ndevice = \"cuda\"\nbatch_size = 2\nin_features = 256\nout_features = 128\noriginal_tensor = torch.randn(out_features, in_features, dtype=dtype, device=device)\n\nprint(\"\\n=================== SimpleTensor =================================\\n\")\nsimple_tensor = SimpleTensor(original_tensor)\n\ntry:\n print(f\"Simple tensor: {simple_tensor.inner_tensor.shape}\")\nexcept Exception as e:\n print(f\"Simple tensor error: {e}\")\n \ntorch.utils.swap_tensors(original_tensor, simple_tensor)\n\ntry:\n print(f\"Swapped tensor: {original_tensor.inner_tensor.shape}\")\nexcept Exception as e:\n print(f\"Swapped tensor error: {e}\")\n\nbuffer = BytesIO()\ntorch.save({\"weight\": original_tensor}, buffer)\nbuffer.seek(0)\ntry:\n 
state_dict = torch.load(buffer)\nexcept Exception as e:\n print(f\"State load error: {e}\")\n\ntry:\n restored_tensor = state_dict['weight']\n print(f\"Restored tensor: {restored_tensor.inner_tensor.shape}\")\nexcept Exception as e:\n print(f\"Restored tensor error: {e}\")\n\nprint(\"\\n=================== NF4Tensor =================================\\n\")\noriginal_tensor = torch.randn(out_features, in_features, dtype=dtype, device=device)\nnf4_tensor = to_nf4(original_tensor)\n\ntry:\n for name in _INNER_TENSOR_NAMES_FOR_SHARDING:\n print(f\"NF4 tensor {name}: {getattr(nf4_tensor, name).shape}\")\nexcept Exception as e:\n print(f\"NF4 tensor error: {e}\")\n\ntorch.utils.swap_tensors(original_tensor, nf4_tensor)\ntry:\n for name in _INNER_TENSOR_NAMES_FOR_SHARDING:\n print(f\"Swapped tensor {name}: {getattr(original_tensor, name).shape}\")\nexcept Exception as e:\n print(f\"Swapped tensor Error: {e}\")\n\nbuffer = BytesIO()\ntorch.save({\"weight\": original_tensor}, buffer)\nbuffer.seek(0)\nstate_dict = torch.load(buffer)\ntry:\n restored_tensor = state_dict['weight']\n for name in _INNER_TENSOR_NAMES_FOR_SHARDING:\n print(f\"State dict {name}: {getattr(restored_tensor, name).shape}\")\nexcept Exception as e:\n print(f\"State dict error: {e}\")\n```\n\nRunning the above gives the following prints an error while loading the state dict for `SimpleTensor` with `weights_only=True` even after registering `SimpleTensor` as safe (`torch.serialization.add_safe_globals([SimpleTensor])`):\n```\nState load error: Weights only load failed. In PyTorch 2.6, we changed the default value of the `weights_only` argument i", "url": "https://github.com/pytorch/ao/issues/1764", "state": "open", "labels": [ "question" ], "created_at": "2025-02-23T03:25:05Z", "updated_at": "2025-03-01T19:32:57Z", "user": "jeromeku" }, { "repo": "pytorch/torchtitan", "number": 875, "title": "RuntimeError: Got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators", "body": "When I ran the llama3-8b model with cp on a third party device, I ran into a problem with the error message: \n`RuntimeError: npu.npu_fusion_attention.default: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators.`\nnpu_fusion_attention is called in the torch.nn.functional.scaled_dot_product_attention function, which is a custom operator . How can I solve this problem? Do I need to register a custom operator somewhere?\n\n", "url": "https://github.com/pytorch/torchtitan/issues/875", "state": "closed", "labels": [ "question", "module: context parallel", "module: dtensor" ], "created_at": "2025-02-21T03:23:27Z", "updated_at": "2025-02-28T08:30:44Z", "user": "aahehehe" }, { "repo": "pytorch/xla", "number": 8728, "title": "Debug XLA using GDB", "body": "## \u2753 Questions and Help\nI would like to debug XLA code using gdb via C++/Python Debugger, which means that I need a _XLAC.cpython-310-x86_64-linux-gnu.so built in debug mode to have debug symbols, just like DCMAKE_BUILD_TYPE=Debug. 
I don't know how to get this artifact.\n\nThanks for your help.", "url": "https://github.com/pytorch/xla/issues/8728", "state": "closed", "labels": [], "created_at": "2025-02-20T03:22:41Z", "updated_at": "2025-02-20T08:00:00Z", "comments": 2, "user": "yuanfz98" }, { "repo": "pytorch/xla", "number": 8727, "title": "Create a site map or centralize links in README", "body": "## \ud83d\udcda Documentation\n\nAdd repo map to https://github.com/pytorch/xla/blob/master/README.md. Currently we have many helpful links, but they are spread around the repo. We should have a location with these centralized to help people find useful documentation easily.", "url": "https://github.com/pytorch/xla/issues/8727", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-02-20T00:04:49Z", "updated_at": "2025-03-24T18:58:57Z", "comments": 1, "user": "pgmoka" }, { "repo": "pytorch/xla", "number": 8726, "title": "Add documentation on xla_native_functions.yaml categories", "body": "## \ud83d\udcda Documentation\n\nAdd more information to https://github.com/pytorch/xla/blob/60160233ad413f030da1e7e383cc85950bcf347c/codegen/xla_native_functions.yaml#L3 on what the different categories mean in terms of lowering operations", "url": "https://github.com/pytorch/xla/issues/8726", "state": "open", "labels": [ "documentation" ], "created_at": "2025-02-19T21:57:04Z", "updated_at": "2025-02-20T12:54:47Z", "comments": 2, "user": "pgmoka" }, { "repo": "pytorch/xla", "number": 8725, "title": "Add operation lowering unit tests to test_operations.py", "body": "## \ud83d\ude80 Feature\nWe should expand test/test_operations to check if operations are being lowered. We have previously seen issues being cause due to this issue (see https://github.com/pytorch/xla/issues/4032 and https://github.com/pytorch/xla/issues/8713). An example of this test can be seen in https://github.com/pytorch/xla/pull/8686.\n\nWe should study to see if it is possible to generalize this test, and expand to test our other lowered operations\n\n## Motivation\n\nImprove our unit tests to expand coverage while continuing to be readable\n\n", "url": "https://github.com/pytorch/xla/issues/8725", "state": "open", "labels": [ "testing" ], "created_at": "2025-02-19T20:18:28Z", "updated_at": "2025-03-04T22:56:09Z", "comments": 1, "user": "pgmoka" }, { "repo": "pytorch/torchtitan", "number": 862, "title": "SimpleFSDP vs. FSDP2", "body": "Hi @tianyu-l , just came across [SimpleFSDP](https://arxiv.org/pdf/2411.00284) and its [implementation](https://github.com/facebookresearch/capi/blob/main/fsdp.py) (nice project!).\n\nIn the paper, SimpleFSDP is extensively compared with FSDP2. May I know if torchtitan is going to support it or there is a way to somehow combine SimpleFSDP and FSDP2? 
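For reference, a hedged sketch of what the FSDP2 side of that comparison looks like, roughly the per-block `fully_shard` wrapping torchtitan applies today; the import path and the `.layers` container are assumptions, not torchtitan's exact code. My understanding is that SimpleFSDP instead expresses the same sharding through parametrization-style wrappers that `torch.compile` can trace end to end, so "combining" the two would largely mean swapping this wrapping step.

```python
# Hedged sketch of FSDP2-style wrapping (not torchtitan's exact code).
# Assumes torch >= 2.4, a model exposing a .layers container, and an initialized process group.
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed._composable.fsdp import fully_shard, MixedPrecisionPolicy

def apply_fsdp2(model: torch.nn.Module, dp_mesh) -> torch.nn.Module:
    mp = MixedPrecisionPolicy(param_dtype=torch.bfloat16, reduce_dtype=torch.float32)
    for block in model.layers:                       # shard each transformer block separately
        fully_shard(block, mesh=dp_mesh, mp_policy=mp)
    fully_shard(model, mesh=dp_mesh, mp_policy=mp)   # root call picks up embeddings / norms / head
    return model

# dp_mesh = init_device_mesh("cuda", (world_size,), mesh_dim_names=("dp",))
# model = apply_fsdp2(model, dp_mesh)
```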
", "url": "https://github.com/pytorch/torchtitan/issues/862", "state": "closed", "labels": [ "question" ], "created_at": "2025-02-19T20:16:58Z", "updated_at": "2025-02-20T08:18:36Z", "user": "yenchenlin" }, { "repo": "pytorch/xla", "number": 8722, "title": "Add args documentation to xla.launch", "body": "## \ud83d\udcda Documentation\n\nIn https://github.com/pytorch/xla/blob/60160233ad413f030da1e7e383cc85950bcf347c/torch_xla/torch_xla.py#L212, we should have arguments be documented to note that:\n1) The callable function's firts argument is the process id;\n2) The args tuple is passed to the callable function afterwards.\n\nThe pattern being called by Callable is something like:\n\nCallable(process_id, args...).\n\nWe should make this clear from the method call.", "url": "https://github.com/pytorch/xla/issues/8722", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-02-19T17:44:44Z", "updated_at": "2025-02-20T18:21:22Z", "comments": 1, "user": "pgmoka" }, { "repo": "pytorch/tutorials", "number": 3272, "title": "Introduction to Libuv TCPStore Backend", "body": "Thanks for the [article](https://github.com/pytorch/tutorials/blob/main/intermediate_source/TCPStore_libuv_backend.rst). Wondering if you can provide some details about the content of the TCPStore and what is its role in c10d . ", "url": "https://github.com/pytorch/tutorials/issues/3272", "state": "closed", "labels": [ "question" ], "created_at": "2025-02-18T20:56:09Z", "updated_at": "2025-04-16T17:57:44Z", "user": "githubsgi" }, { "repo": "pytorch/pytorch", "number": 147374, "title": "[ONNX] How to export triton custom kernels as custom ops?", "body": "### \ud83d\udc1b Describe the bug\n\ncan't export triton cumstom op kernel when use torch.onnx.export(dynamo=True)\ni have use triton_op and wrap_triton to wrap this triton kernel\n\n```python\nimport torch\nfrom torch.library import triton_op, wrap_triton\nimport triton\nfrom triton import language as tl\n@triton.jit\ndef add_kernel(\n in_ptr0,\n in_ptr1,\n out_ptr,\n n_elements,\n BLOCK_SIZE: \"tl.constexpr\",\n):\n pid = tl.program_id(axis=0)\n block_start = pid * BLOCK_SIZE\n offsets = block_start + tl.arange(0, BLOCK_SIZE)\n mask = offsets < n_elements\n x = tl.load(in_ptr0 + offsets, mask=mask)\n y = tl.load(in_ptr1 + offsets, mask=mask)\n output = x + y\n tl.store(out_ptr + offsets, output, mask=mask)\n\n@triton_op(\"mylib::add\", mutates_args={})\ndef add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:\n output = torch.empty_like(x)\n n_elements = output.numel()\n def grid(meta):\n return (triton.cdiv(n_elements, meta[\"BLOCK_SIZE\"]),)\n # NB: we need to wrap the triton kernel in a call to wrap_triton\n wrap_triton(add_kernel)[grid](x, y, output, n_elements, 16)\n return output\n@torch.compile\ndef f(x, y):\n return add(x, y)\nx = torch.randn(3, device=\"cuda\")\ny = torch.randn(3, device=\"cuda\")\nz = f(x, y)\nassert torch.allclose(z, x + y)\nwith torch.no_grad():\n torch.onnx.export(f,\n (x,y,),\n \"triton_export.onnx\", \n export_params=True, \n dynamo=True,\n opset_version=18, \n do_constant_folding=False, \n optimize=False,\n #custom_translation_table=custom_translation_table,\n input_names=[\"zzq_a\",\"zzq_b\"],\n output_names=[\"zzq_out\"],\n verbose=True)\n```\n\nerror msg:\n```\ntorch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export(..., strict=False)`...\n[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export(..., strict=False)`... 
\u274c\n[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export`...\n[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export`... \u274c\n[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with Torch Script...\n[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with Torch Script... \u274c\n[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with internal Dynamo apis...\n[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with internal Dynamo apis... \u2705\n[torch.onnx] Run decomposition...\n[torch.onnx] Run decomposition... \u2705\n[torch.onnx] Translate the graph into ONNX...\n[torch.onnx] Translate the graph into ONNX... \u274c\nTraceback (most recent call last):\n File \"/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py\", line 708, in _translate_fx_graph\n _handle_call_function_node_with_lowering(\n File \"/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py\", line 490, in _handle_call_function_node_with_lowering\n raise _errors.DispatchError(\ntorch.onnx._internal.exporter._errors.DispatchError: No ONNX function found for <torch._higher_order_ops.triton_kernel_wrap.TritonKernelWrapperFunctional object at 0x7f63c5fa01c0>. Failure message: No decompositions registered for the real-valued input\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py\", line 1372, in export\n onnx_program = _exported_program_to_onnx_program(\n File \"/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py\", line 1008, in _exported_program_to_onnx_program\n values = _translate_fx_graph(\n File \"/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py\", line 734, in _translate_fx_graph\n raise _errors.ConversionError(\ntorch.onnx._internal.exporter._errors.ConversionError: Error when translating node %triton_kernel_wrapper_functional_proxy : [num_users=1] = call_function[target=torch.ops.higher_order.triton_kernel_wrapper_functional](args = (), kwargs = {kernel_idx: 0, constant_args_idx: 10, grid: [(1, 1, 1)], tma_descriptor_metadata: {}, kwargs: {in_ptr0: %arg0, in_ptr1: %arg1, out_ptr: %empty_like, n_elements: 3, BLOCK_SIZE: 16}, tensors_to_clone: [out_ptr]}). See the stack trace for more information.\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/usr/local/app/torch_ddp/triton_export.py\", line 38, in <module>\n torch.onnx.export(f,\n File \"/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/__init__.py\", line 351, in export\n return _compat.export_compat(\n File \"/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/e", "url": "https://github.com/pytorch/pytorch/issues/147374", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2025-02-18T12:11:20Z", "updated_at": "2025-02-19T22:57:49Z", "user": "zzq96" }, { "repo": "pytorch/xla", "number": 8715, "title": "Pytorch XLA XMP Spawn Error", "body": "## \ud83d\udc1b Bug\n\n<!-- A clear and concise description of what the bug is. -->\n\nI'm currently trying to run a very simple example of just calling \"Hello World\" from each TPU. 
I'm currently running based on the torch xla versions on the vllm-tpu docker\n\n## To Reproduce\n\n<!--\nIt is really important for the team to have a quick repro, which requires no setup work.\n\nThe quicker is the repro to be run, the higher the chances the bug will be addressed sooner.\n\nThe best way to create quick repros is to create a Colab based on the following template:\n\nhttps://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#using-debug_runpy-to-collect-debug-information\n\nThings to avoid in repros is the need to download datasets which require setting up keys or other login information, like Kaggle downloads for example.\n\nAnother example are Colab which mount user's Google Drive storages.\n\nUsing a fake data generator could be a solution, in case the dataset cannot be easily downloaded without setting up credentials:\n\nhttps://github.com/pytorch/xla/blob/784b4d4f21751a54be0029a95f47d3896561c2a9/test/test_train_mp_mnist.py#L65\n\n-->\n\nSteps to reproduce the behavior:\n\n1. Run the docker image for vllm-tpu: https://hub.docker.com/r/vllm/vllm-tpu/tags\n\nRun code:\n```\nimport ray\nimport torch_xla.core.xla_model as xm\nimport torch_xla.distributed.xla_multiprocessing as xmp\n\ndef train_mp(rank):\n # Get XLA device\n device = xm.xla_device()\n print(f\"Hello from rank {rank} on device {device}\")\n\n@ray.remote(num_cpus=10, resources={\"TPU\": 8, \"TPU-v6e-8-head\": 1})\ndef run_on_tpu():\n # Spawn 8 processes, one for each TPU core\n xmp.spawn(train_mp, nprocs=8)\n \nif __name__ == \"__main__\":\n future = run_on_tpu.remote()\n ray.get(future)\n```\n\nError:\n```\n(pid=3030, ip=10.202.15.237) WARNING:root:libtpu.so and TPU device found. Setting PJRT_DEVICE=TPU.\nTraceback (most recent call last):\n File \"/tmp/ray/session_2025-02-17_21-04-51_192242_540/runtime_resources/working_dir_files/_ray_pkg_b1a1e85c76a92463/experiments/test_xla_infer.py\", line 17, in <module>\n ray.get(future)\n File \"/home/ray/anaconda3/lib/python3.10/site-packages/ray/_private/auto_init_hook.py\", line 21, in auto_init_wrapper\n return fn(*args, **kwargs)\n File \"/home/ray/anaconda3/lib/python3.10/site-packages/ray/_private/client_mode_hook.py\", line 103, in wrapper\n return func(*args, **kwargs)\n File \"/home/ray/anaconda3/lib/python3.10/site-packages/ray/_private/worker.py\", line 2691, in get\n values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)\n File \"/home/ray/anaconda3/lib/python3.10/site-packages/ray/_private/worker.py\", line 871, in get_objects\n raise value.as_instanceof_cause()\nray.exceptions.RayTaskError(ValueError): ray::run_on_tpu() (pid=3030, ip=10.202.15.237)\n File \"/tmp/ray/session_2025-02-17_21-04-51_192242_540/runtime_resources/working_dir_files/_ray_pkg_b1a1e85c76a92463/experiments/test_xla_infer.py\", line 13, in run_on_tpu\n xmp.spawn(train_mp, nprocs=8)\n File \"/home/ray/anaconda3/lib/python3.10/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 39, in spawn\n return pjrt.spawn(fn, nprocs, start_method, args)\n File \"/home/ray/anaconda3/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py\", line 209, in spawn\n raise ValueError(\nValueError: Unsupported nprocs (8). Please use the environment variable for the hardware you are using (X_NUM_DEVICES where X is CPU, GPU, TPU, NEURONCORE, etc).\n```\n\nI've tried some things such as setting `TPU_NUM_DEVICES` in the environment variables to 8 but that didn't help.\n\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well. 
Or better use the Colab template: https://github.com/pytorch/xla/blob/master/contrib/colab/issue-report.ipynb -->\n\n## Expected behavior\n\n<!-- A clear and concise description of what you expected to happen. -->\n\nI would expect a Hello world from each of the devices\n\n## Environment\n\n - Reproducible on XLA backend [CPU/TPU/CUDA]: TPU\n - torch_xla version: \n```\nUsing torch_xla version: 2.6.0+git39e67b5\n```\n\n## Additional context\n\n<!-- Add any other context about the problem here. -->\n", "url": "https://github.com/pytorch/xla/issues/8715", "state": "closed", "labels": [ "distributed" ], "created_at": "2025-02-18T06:29:06Z", "updated_at": "2025-02-20T18:59:56Z", "comments": 3, "user": "BabyChouSr" }, { "repo": "pytorch/ao", "number": 1724, "title": "[Question] Static Quantization for Open-Source LLMs", "body": "## Description\nHi, I am a beginner in quantization and would like to experiment with INT8 dynamic and static quantization on open-source LLMs.\n\n* For dynamic quantization, I found that `int8_dynamic_activation_int8_weight` is available in `torchao/quantization/quant_api.py`.\n* For static quantization, I did not find an INT8 version. Instead, I only found `float8_static_activation_float8_weight`.\n\n## Questions\n* Why is only INT8 dynamic quantization provided? Is there a specific concern that prevents static INT8 quantization?\n* If I want to implement INT8 static quantization, can I follow `tutorials/calibration_flow/static_quant.py` as a reference?\n* For `float8_static_activation_float8_weight`, it requires a scalar parameter. What would be a recommended way to determine this parameter?\n\nAny insights or guidance would be greatly appreciated. Thanks in advance! \ud83d\ude0a", "url": "https://github.com/pytorch/ao/issues/1724", "state": "open", "labels": [ "question", "quantize_" ], "created_at": "2025-02-18T02:32:20Z", "updated_at": "2025-02-19T13:13:44Z", "user": "yang-ahuan" }, { "repo": "pytorch/torchtitan", "number": 852, "title": "How to define Custom Communication Operations for Custom Operators in Distributed Settings", "body": "Thank you for your awesome project. I would like to ask how to solve the following issue: \n\nI have implemented the logcumsumexp operator, where the input placement is Shard(-1) and the output placement is Replicate(). To obtain the final result, I need to create a custom all-reduce operator (instead of using the conventional sum). How should I go about implementing this? \n\nMore generally, for an operator function `f`, given an input placement1 and an output placement2, where should I implement various custom communication operations? 
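To make the question concrete, below is the kind of manual fallback I can write today for the logcumsumexp case: compute the local scan, then fix it up with plain collectives (an all_gather plus a logaddexp combine rather than a true custom all-reduce). This is only a sketch and assumes equal shard sizes along the scanned dimension; what I'd really like is the proper place to register such a combine so DTensor handles it natively.

```python
# Manual fallback sketch (not DTensor-native): input sharded on the last dim,
# output replicated on every rank. Assumes equal shard sizes across ranks.
import torch
import torch.distributed as dist

def sharded_logcumsumexp(x_local: torch.Tensor, group=None) -> torch.Tensor:
    world = dist.get_world_size(group)
    rank = dist.get_rank(group)

    local = torch.logcumsumexp(x_local, dim=-1)

    # Each rank's total is its last cumulative entry; gather them all.
    totals = [torch.empty_like(local[..., -1:]) for _ in range(world)]
    dist.all_gather(totals, local[..., -1:].contiguous(), group=group)

    # Offset this rank's scan by the log-sum over all preceding shards.
    if rank > 0:
        prefix = torch.logsumexp(torch.cat(totals[:rank], dim=-1), dim=-1, keepdim=True)
        local = torch.logaddexp(local, prefix)

    # Replicate the corrected shards on every rank.
    pieces = [torch.empty_like(local) for _ in range(world)]
    dist.all_gather(pieces, local.contiguous(), group=group)
    return torch.cat(pieces, dim=-1)
```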
I would greatly appreciate it if you could provide some examples for this.", "url": "https://github.com/pytorch/torchtitan/issues/852", "state": "closed", "labels": [ "question", "module: dtensor" ], "created_at": "2025-02-17T16:49:25Z", "updated_at": "2025-08-21T03:07:29Z", "user": "Doraemonzzz" }, { "repo": "pytorch/serve", "number": 3392, "title": "How to run the benchmark scripts on the local model ?", "body": "How to run the benchmark scripts on the local model ?\n\nI tried following but it fails with `ModelNotFoundException`\npython benchmark_ab.py --config benchmark_config.json\n```\n{\n \"url\": \"./model_store/custom_model.mar\",\n \"requests\": 100,\n \"concurrency\": 10,\n \"input\": \"kitten_small.jpg\",\n \"exec_env\": \"local\",\n \"device\": \"cpu\"\n }\n```\n ", "url": "https://github.com/pytorch/serve/issues/3392", "state": "closed", "labels": [], "created_at": "2025-02-17T14:16:47Z", "updated_at": "2025-02-17T14:53:01Z", "user": "ranipakeyur" }, { "repo": "pytorch/torchtitan", "number": 850, "title": "\"Universal\" Checkpointing", "body": "Is there an equivalent of Deepspeed [Universal Checkpointing](https://github.com/deepspeedai/DeepSpeed/blob/master/blogs/deepspeed-ucp/README.md) currently for distributed checkpointing, DTensor and FSDP2? That is, how to use torch-native tooling to convert from a checkpoint with a given sharded / parallelism config to a new config such that the sharded state dicts can be directly loaded with a new world size.\n\nFor example, train a model on 128 GPUs with `FSDP` (`DP128`) and save a checkpoint with 128 sharded state dicts. Resume training on 64 GPUs with `TP2` / `FSDP` (`DP32`). \n\nManually, one could merge the original checkpoint from 128 shards -> single merged state dict, then reshard to `TP2` followed by partitioning `TP` shards to 32 `DP` partitions for a total of 64 sharded state dicts, then directly load these state dicts on each rank (without having to first materialize the full state dict on any rank).\n\n@awgu ", "url": "https://github.com/pytorch/torchtitan/issues/850", "state": "closed", "labels": [ "question", "module: checkpoint" ], "created_at": "2025-02-17T12:32:39Z", "updated_at": "2025-06-05T06:28:04Z", "user": "jeromeku" }, { "repo": "pytorch/pytorch", "number": 147263, "title": "How to trigger several independent communications simultaneously?", "body": "For example, in training with 4 GPUs, I divide the GPUs into pairs and create two communication groups: group1 = dist.new_group([0, 1]) and group2 = dist.new_group([2, 3]). If I want to run independent dist.all_gather operations within both communication groups simultaneously, it results in an error. I'd like to ask how to implement this correctly.\n\n```\nFile \"/home/yeleyi/anaconda3/envs/torch/lib/python3.10/site-packages/deepspeed/comm/torch.py\", line 209, in all_gather\n return torch.distributed.all_gather(tensor_list=tensor_list, tensor=tensor, group=group, async_op=async_op)\n File \"/home/yeleyi/anaconda3/envs/torch/lib/python3.10/site-packages/torch/distributed/c10d_logger.py\", line 72, in wrapper\n return func(*args, **kwargs)\n File \"/home/yeleyi/anaconda3/envs/torch/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py\", line 2617, in all_gather\n work = group.allgather([tensor_list], [tensor])\ntorch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3\nncclSystemError: System call (e.g. 
socket, malloc) or external library call failed or device error. \nLast error:\nsocketStartConnect: Connect to 192.168.1.91<48217> failed : Software caused connection abort\nnode06:1913795:1914481 [2] NCCL INFO Setting affinity for GPU 2 to 0fffff,ff000000,0fffffff\nnode06:1913796:1914482 [3] NCCL INFO Setting affinity for GPU 3 to 0fffff,ff000000,0fffffff\nnode06:1913795:1914481 [2] NCCL INFO Channel 00/04 : 0 1\nnode06:1913795:1914481 [2] NCCL INFO Channel 01/04 : 0 1\nnode06:1913795:1914481 [2] NCCL INFO Channel 02/04 : 0 1\nnode06:1913795:1914481 [2] NCCL INFO Channel 03/04 : 0 1\nnode06:1913795:1914481 [2] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [2] 1/-1/-1->0->-1 [3] -1/-1/-1->0->1\nnode06:1913795:1914481 [2] NCCL INFO P2P Chunksize set to 131072\nnode06:1913796:1914482 [3] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [2] -1/-1/-1->1->0 [3] 0/-1/-1->1->-1\nnode06:1913796:1914482 [3] NCCL INFO P2P Chunksize set to 131072\nnode06:1913795:1914481 [2] NCCL INFO Channel 00/0 : 0[2] -> 1[3] via P2P/CUMEM\nnode06:1913796:1914482 [3] NCCL INFO Channel 00/0 : 1[3] -> 0[2] via P2P/CUMEM\nnode06:1913795:1914481 [2] NCCL INFO Channel 01/0 : 0[2] -> 1[3] via P2P/CUMEM\nnode06:1913796:1914482 [3] NCCL INFO Channel 01/0 : 1[3] -> 0[2] via P2P/CUMEM\nnode06:1913795:1914481 [2] NCCL INFO Channel 02/0 : 0[2] -> 1[3] via P2P/CUMEM\nnode06:1913796:1914482 [3] NCCL INFO Channel 02/0 : 1[3] -> 0[2] via P2P/CUMEM\nnode06:1913795:1914481 [2] NCCL INFO Channel 03/0 : 0[2] -> 1[3] via P2P/CUMEM\nnode06:1913796:1914482 [3] NCCL INFO Channel 03/0 : 1[3] -> 0[2] via P2P/CUMEM\nnode06:1913796:1914482 [3] NCCL INFO Connected all rings\nnode06:1913796:1914482 [3] NCCL INFO Connected all trees\nnode06:1913796:1914482 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512\nnode06:1913795:1914481 [2] NCCL INFO Connected all rings\nnode06:1913796:1914482 [3] NCCL INFO 4 coll channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer\nnode06:1913795:1914481 [2] NCCL INFO Connected all trees\nnode06:1913795:1914481 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512\nnode06:1913795:1914481 [2] NCCL INFO 4 coll channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer\nnode06:1913795:1914481 [2] NCCL INFO comm 0x1a9590b0 rank 0 nranks 2 cudaDev 2 nvmlDev 2 busId 6c000 commId 0xdd736563a6f28c07 - Init COMPLETE\nnode06:1913796:1914482 [3] NCCL INFO comm 0x1931a220 rank 1 nranks 2 cudaDev 3 nvmlDev 3 busId 6d000 commId 0xdd736563a6f28c07 - Init COMPLETE\n```\n\ncc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o", "url": "https://github.com/pytorch/pytorch/issues/147263", "state": "open", "labels": [ "oncall: distributed", "triaged" ], "created_at": "2025-02-15T11:47:10Z", "updated_at": "2025-04-23T20:54:39Z", "user": "Ind1x1" }, { "repo": "pytorch/xla", "number": 8710, "title": "Expand troubleshoot instructions", "body": "## \ud83d\udcda Documentation\n\nExpand troubleshoot instructions in https://github.com/pytorch/xla/blob/6f423d0bb284190cf1b12d8a943a334e57b4df28/docs/source/learn/troubleshoot.md to include common errors, and new debugging strategies.", "url": "https://github.com/pytorch/xla/issues/8710", "state": "open", "labels": [ "documentation" ], "created_at": "2025-02-14T18:54:01Z", "updated_at": "2025-02-14T18:54:22Z", "comments": 0, "user": "pgmoka" }, { "repo": "pytorch/xla", "number": 8709, "title": "Add more info to TPU_TOPOLOGY errors", "body": "## \ud83d\udcda Documentation\n\nCurrently if a VM is created utilizing an OS 
that does not support training on the TPU we get a TPU_TOPOLOGY OS error. We should add to our documentation to make these errors, and their solutions clearer.", "url": "https://github.com/pytorch/xla/issues/8709", "state": "open", "labels": [ "documentation" ], "created_at": "2025-02-14T18:49:40Z", "updated_at": "2025-02-14T18:49:40Z", "comments": 0, "user": "pgmoka" }, { "repo": "pytorch/vision", "number": 8905, "title": "Can the `_make_divisible_function` be explained better?", "body": "### \ud83d\udcda The doc issue\n\nI'm referring to the following function: https://github.com/pytorch/vision/blob/main/torchvision/models/_utils.py#L76 I've no doubt that it is correct, but why does it sometimes round down the input and why is the threshold set to 90%? Is the formula from a well-known paper?\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/8905", "state": "closed", "labels": [], "created_at": "2025-02-14T17:16:42Z", "updated_at": "2025-02-14T17:37:01Z", "comments": 1, "user": "bjourne" }, { "repo": "pytorch/serve", "number": 3391, "title": "How can a user specify an envelope?", "body": "### \ud83d\udcda The doc issue\n\nThe `service_envelope` parameter has disappeared from the documentation:\nhttps://pytorch.org/serve/configuration.html#other-properties\n\nThe KServe documentation states that this parameter is depricated:\nhttps://kserve.github.io/website/0.11/modelserving/v1beta1/torchserve/#create-model-storage-with-a-model-archive-file-and-config\nand that `enable_envvars_config=true` should now be used instead.\n\nThe question arises how can the user now set the envelope type (`json/kserve/kservev2`) and where is the place in the code where it is defined?\nWhere is this shown in the documentation?\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/3391", "state": "open", "labels": [], "created_at": "2025-02-14T07:24:11Z", "updated_at": "2025-02-14T07:24:11Z", "comments": 0, "user": "yurkoff-mv" }, { "repo": "pytorch/pytorch", "number": 147187, "title": "[torch.export] How to export a model with kv cache", "body": "### \ud83d\udc1b Describe the bug\n\nIn an attention layer, kv cache needs a variable number \"start_pos\" from outside.\n\n(may related to https://github.com/pytorch/pytorch/issues/146990)\n\nHere is a simplified model for reproducing the issue:\n\n```python\nimport torch\nfrom torch import nn\n\nclass Cache(nn.Module):\n def __init__(self, head_dim):\n super().__init__()\n max_token = 128\n self.register_buffer(\"cache_k\", torch.zeros(\n (1, max_token, head_dim,)), persistent=False)\n\n def forward(\n self,\n x: torch.Tensor,\n start_pos: torch.Tensor\n ):\n _, seqlen, _ = x.size()\n end_pos = start_pos+seqlen\n self.cache_k[:, start_pos:end_pos, :] = x\n return self.cache_k[:, :end_pos, :]\n\nif __name__ == \"__main__\":\n from torch.export import Dim\n with torch.no_grad():\n # Prepare for input\n start_pos = torch.scalar_tensor(8, dtype=torch.int32)\n seqlen = 8\n hidden_size = 32\n h = torch.randn(1, seqlen, hidden_size)\n # Prepare for mdoel\n model = Cache(hidden_size)\n dynamic_shapes = {\"x\": {1: Dim.DYNAMIC},\"start_pos\": None}\n torch.export.export(model, args=(h, start_pos), dynamic_shapes=dynamic_shapes)\n```\n\n\n```Error message\nException has occurred: Unsupported (note: full exception trace is shown but execution is paused at: _run_module_as_main)\nDynamic slicing on data-dependent value is not supported\n\nfrom user code:\n File 
\"/home/tim/nvpu_uno/nnc/tests/test_cache.py\", line 18, in forward\n self.cache_k[:, start_pos:end_pos, :] = x\n\nSet TORCH_LOGS=\"+dynamo\" and TORCHDYNAMO_VERBOSE=1 for more information\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/exc.py\", line 317, in unimplemented\n raise Unsupported(msg, case_name=case_name)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/variables/lists.py\", line 923, in __init__\n unimplemented(\"Dynamic slicing on data-dependent value is not supported\")\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 1873, in BUILD_SLICE\n self.push(SliceVariable(items))\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 962, in step\n self.dispatch_table[inst.opcode](self, inst)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 1052, in run\n while self.step():\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 2868, in run\n super().run()\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 662, in transform\n tracer.run()\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 231, in _fn\n return fn(*args, **kwargs)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py\", line 1361, in transform_code_object\n transformations(instructions, code_options)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 750, in _compile_inner\n out_code = transform_code_object(code, transform)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_utils_internal.py\", line 95, in wrapper_function\n return function(*args, **kwargs)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 715, in compile_inner\n return _compile_inner(code, one_graph, hooks, transform)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 986, in _compile\n guarded_code = compile_inner(code, one_graph, hooks, transform)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 547, in __call__\n return _compile(\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 1380, in __call__\n return self._torchdynamo_orig_callable(\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1750, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1739, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 574, in _fn\n return fn(*args, **kwargs)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1750, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1739", "url": 
"https://github.com/pytorch/pytorch/issues/147187", "state": "open", "labels": [ "oncall: pt2", "oncall: export" ], "created_at": "2025-02-14T06:15:41Z", "updated_at": "2025-02-18T19:20:39Z", "user": "exeex" }, { "repo": "pytorch/data", "number": 1442, "title": "what dataloader to use for torchdata.nodes nodes?", "body": "hi, thanks for reviving torchdata. i was able to move on to `0.10.1` for lots of my existing datapipes. it seems to work pretty nicely.\n\nquestion - am i supposed to use `torchdata.nodes.Loader` or `torchdata.stateful_dataloader.StatefulDataLoader` for my data nodes? or just `torch.utils.data.DataLoader`? i'm getting confused a bit after reading the docs and code. currently `Loader` works for my iterable data nodes, but with some caveats (no multi processing).\n", "url": "https://github.com/meta-pytorch/data/issues/1442", "state": "closed", "labels": [], "created_at": "2025-02-13T17:32:53Z", "updated_at": "2025-10-24T04:07:52Z", "comments": 16, "user": "keunwoochoi" }, { "repo": "pytorch/pytorch", "number": 147076, "title": "How to check grads in each step of model?", "body": "Hi there:\n I've implement a Pytorch version of [Retrieval-based-Voice-Conversion(RVC for short)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI) at [here](https://github.com/ElinLiu0/RVCTorch/blob/master/POC_Torch.ipynb).\n The question is,when i wanna export my implementation pipeline into ONNX using below code:\n ```python\n with torch.inference_mode(), torch.cuda.amp.autocast(enabled=False):\n torch.onnx.export(\n pipeline, \n (audio.cuda(),),\n \"pipeline.onnx\",\n input_names=[\"input\"],\n output_names=[\"output\"],\n opset_version=14\n )\n ```\nIt rasing below error:\n```python\nRuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient\nTensor:\n 0.6670\n[ torch.cuda.HalfTensor{1} ]\n```\n\nTypically rasing with an `nn.BatchNorm2d` cell called at [rmvpe.py](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer/lib/rmvpe.py) at line 244.\n\nSo how could i fix this error,since this implementation finally will deploy on C# or model serving platform like NVIDIA Triton.", "url": "https://github.com/pytorch/pytorch/issues/147076", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2025-02-13T09:01:49Z", "updated_at": "2025-02-20T07:56:31Z", "user": "ElinLiu0" }, { "repo": "pytorch/torchtitan", "number": 840, "title": "profiling", "body": "A few questions . \n\n1. Is it based on kineto or something else ?\n2. Only seeing CPU activities ( e.g. python) - do I have to do anything special to see GPU activities ? \n", "url": "https://github.com/pytorch/torchtitan/issues/840", "state": "closed", "labels": [ "question" ], "created_at": "2025-02-13T01:27:00Z", "updated_at": "2025-02-20T19:55:53Z", "user": "githubsgi" }, { "repo": "pytorch/pytorch", "number": 146990, "title": "How to export a model using topk with a variable number of neighbour?", "body": "### \ud83d\udc1b Describe the bug\n\nThe export is the following but that may not be the only one. That's the first raised one.\n\n``torch._dynamo.exc.UserError: Could not guard on data-dependent expression u7 >= 0 (unhinted: u7 >= 0). 
(Size-like symbols: none)``\n\n```python\nimport contextlib\nimport io\nimport logging\nimport warnings\nfrom typing import Any, Dict, List, Optional\nimport numpy as np\nimport sklearn\nimport torch\n\n\ndef flatnonzero(x):\n \"Similar to :func:`numpy.flatnonzero`\"\n return torch.nonzero(torch.reshape(x, (-1,)), as_tuple=True)[0]\n\n\ndef _get_weights(dist, weights):\n \"\"\"Get the weights from an array of distances and a parameter ``weights``.\n\n Assume weights have already been validated.\n\n Parameters\n ----------\n dist : ndarray\n The input distances.\n\n weights : {'uniform', 'distance'}, callable or None\n The kind of weighting used.\n\n Returns\n -------\n weights_arr : array of the same shape as ``dist``\n If ``weights == 'uniform'``, then returns None.\n \"\"\"\n if weights in (None, \"uniform\"):\n return None\n\n if weights == \"distance\":\n # if user attempts to classify a point that was zero distance from one\n # or more training points, those training points are weighted as 1.0\n # and the other points as 0.0\n dist = 1.0 / dist\n inf_mask = torch.isinf(dist)\n inf_row = torch.any(inf_mask, axis=1)\n dist[inf_row] = inf_mask[inf_row]\n return dist\n\n if callable(weights):\n return weights(dist)\n\n\nclass NanEuclidean(torch.nn.Module):\n \"\"\"Implements :func:`sklearn.metrics.nan_euclidean`.\"\"\"\n\n def __init__(self, squared=False, copy=True):\n super().__init__()\n self.squared = squared\n self.copy = copy\n\n def forward(self, X, Y):\n X = X.clone()\n Y = Y.to(X.dtype).clone()\n\n missing_X = torch.isnan(X)\n missing_Y = torch.isnan(Y)\n\n # set missing values to zero\n X[missing_X] = 0\n Y[missing_Y] = 0\n\n # Adjust distances for missing values\n XX = X * X\n YY = Y * Y\n\n distances = -2 * X @ Y.T + XX.sum(1, keepdim=True) + YY.sum(1, keepdim=True).T\n\n distances -= XX @ missing_Y.to(X.dtype).T\n distances -= missing_X.to(X.dtype) @ YY.T\n\n distances = torch.clip(distances, 0, None)\n\n present_X = 1 - missing_X.to(X.dtype)\n present_Y = ~missing_Y\n present_count = present_X @ present_Y.to(X.dtype).T\n distances[present_count == 0] = torch.nan\n # avoid divide by zero\n present_count = torch.maximum(\n torch.tensor([1], dtype=present_count.dtype), present_count\n )\n distances /= present_count\n distances *= X.shape[1]\n\n if not self.squared:\n distances = distances.sqrt()\n\n return distances\n\n\n# %%\n# Validation\n# ++++++++++\n\nmodel = NanEuclidean()\nX = torch.randn((5, 2))\nY = torch.randn((5, 2))\nfor i in range(5):\n X[i, i % 2] = torch.nan\nfor i in range(4):\n Y[i + 1, i % 2] = torch.nan\n\nd1 = sklearn.metrics.nan_euclidean_distances(X.numpy(), Y.numpy())\nd2 = model(X, Y)\n# print(f\"discrepancies: {max_diff(d1, d2)}\")\n\n\n# %%\n# torch implementation of KNNImputer\n# ==================================\n#\n# See :class:`sklearn.impute.KNNImputer`.\n# The code is split into several :class:`torch.nn.Module`\n# and refactored to avoid control flow.\n\n\ndef _get_mask(X, value_to_mask):\n return torch.isnan(X)\n\n\nclass SubTopKIndices(torch.nn.Module):\n def forward(self, x, k):\n # torch does not like nans\n xn = torch.nan_to_num(x, nan=1.0e10)\n return torch.topk(xn, k, dim=1, largest=False, sorted=True).indices\n\n\nclass SubWeightMatrix(torch.nn.Module):\n def __init__(self, weights):\n super().__init__()\n self.weights = weights\n\n def forward(self, donors_dist):\n weight_matrix = _get_weights(donors_dist, self.weights)\n if weight_matrix is not None:\n weight_matrix = weight_matrix.clone()\n weight_matrix[torch.isnan(weight_matrix)] = 
0.0\n else:\n weight_matrix = torch.ones_like(donors_dist)\n weight_matrix[torch.isnan(donors_dist)] = 0.0\n return weight_matrix\n\n\nclass SubDonorsIdx(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self._topk = SubTopKIndices()\n\n def forward(self, dist_pot_donors, n_neighbors):\n donors_idx = self._topk(dist_pot_donors, n_neighbors)\n donors_dist = dist_pot_donors[torch.arange(donors_idx.shape[0])[:, None], donors_idx]\n return donors_idx, donors_dist\n\n\nclass MakeNewWeights(torch.nn.Module):\n def forward(self, donors_mask, donors, weight_matrix):\n return donors_mask.to(donors.dtype) * weight_matrix.to(donors.dtype)\n\n\nclass CalcImpute(torch.nn.Module):\n \"\"\"Implements :meth:`sklearn.impute.KNNImputer._calc_impute`.\"\"\"\n\n def __init__(self, weights):\n super().__init__()\n self._weights = SubWeightMatrix(weights)\n ", "url": "https://github.com/pytorch/pytorch/issues/146990", "state": "closed", "labels": [ "triaged", "oncall: pt2", "oncall: export" ], "created_at": "2025-02-12T16:02:20Z", "updated_at": "2025-02-26T17:45:40Z", "user": "xadupre" }, { "repo": "pytorch/pytorch", "number": 146977, "title": "How to install Torch version that supports RTX 5090 on Windows? - CUDA kernel errors might be asynchronously reported at some other API call", "body": "I have purchased RTX 5090 just to test AI apps\n\nCurrently getting this error on any app\n\nI need torch for Python 3.10 venv on Windows\n\nI am ok with installing nightly version etc just install command please\n\n```\nTraceback (most recent call last):\n File \"E:\\trellis_v5\\TRELLIS\\app.py\", line 401, in <module>\n pipeline = TrellisImageTo3DPipeline.from_pretrained(\"JeffreyXiang/TRELLIS-image-large\")\n File \"E:\\trellis_v5\\TRELLIS\\trellis\\pipelines\\trellis_image_to_3d.py\", line 56, in from_pretrained\n pipeline = super(TrellisImageTo3DPipeline, TrellisImageTo3DPipeline).from_pretrained(path)\n File \"E:\\trellis_v5\\TRELLIS\\trellis\\pipelines\\base.py\", line 39, in from_pretrained\n _models = {\n File \"E:\\trellis_v5\\TRELLIS\\trellis\\pipelines\\base.py\", line 40, in <dictcomp>\n k: models.from_pretrained(f\"{path}/{v}\")\n File \"E:\\trellis_v5\\TRELLIS\\trellis\\models\\__init__.py\", line 59, in from_pretrained\n model = __getattr__(config['name'])(**config['args'], **kwargs)\n File \"E:\\trellis_v5\\TRELLIS\\trellis\\models\\structured_latent_vae\\decoder_mesh.py\", line 105, in __init__\n self.mesh_extractor = SparseFeatures2Mesh(res=self.resolution*4, use_color=self.rep_config.get('use_color', False))\n File \"E:\\trellis_v5\\TRELLIS\\trellis\\representations\\mesh\\cube2mesh.py\", line 68, in __init__\n verts, cube = construct_dense_grid(self.res, self.device)\n File \"E:\\trellis_v5\\TRELLIS\\trellis\\representations\\mesh\\utils_cube.py\", line 11, in construct_dense_grid\n vertsid = torch.arange(res_v ** 3, device=device)\nRuntimeError: CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n```\n\n\n\ncc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @ptrblck @eqy", "url": "https://github.com/pytorch/pytorch/issues/146977", "state": "closed", "labels": [ "high priority", "needs reproduction", "module: build", "module: windows", "module: cuda", 
"triaged" ], "created_at": "2025-02-12T12:43:57Z", "updated_at": "2025-03-01T09:47:47Z", "user": "FurkanGozukara" }, { "repo": "pytorch/xla", "number": 8702, "title": "Links misssing in CONTRIBUTING.md for Additional steps for GPU.", "body": "## \ud83d\udcda Documentation\n\n<!-- A clear and concise description of what content is an issue. -->\nI was visity CONTRIBUTING.md doc and try to build a GPU version, but in the part \"Additional steps for GPU\", the refer to guide link is missing.\n\n![Image](https://github.com/user-attachments/assets/2e43682c-96ff-4072-bffa-7283f23b80ad)", "url": "https://github.com/pytorch/xla/issues/8702", "state": "open", "labels": [ "bug", "documentation" ], "created_at": "2025-02-12T07:50:41Z", "updated_at": "2025-02-17T13:40:39Z", "comments": 3, "user": "yinrun" }, { "repo": "pytorch/ao", "number": 1701, "title": "Model size after quantization", "body": "Why is the size relationship of the model unreasonable after I use these three quantization methods on the same model?\n\n```Python\nfrom torchao.quantization import quantize_, int8_weight_only\nquantize_(new_model, int8_weight_only())\n\n\n# from torchao.quantization import quantize_, int8_dynamic_activation_int8_weight\n# quantize_(new_model, int8_dynamic_activation_int8_weight())\n\n\n# from torchao.quantization import int8_dynamic_activation_int4_weight\n# quantize_(new_model, int8_dynamic_activation_int4_weight())\n```\n\nthe result:\n```Shell\n20786584 Feb 5 13:46 a8w4SWaT.pte\n20373272 Feb 5 13:45 a8w8SWaT.pte\n29685120 Oct 5 13:12 pytorch_checkpoint.pth\n20262664 Feb 5 13:44 w8onlySWaT.pte\n```\n\nBecause theoretically, the model after using the A8W4 quantization method should be the smallest, but the actual results are different", "url": "https://github.com/pytorch/ao/issues/1701", "state": "open", "labels": [ "question", "quantize_" ], "created_at": "2025-02-11T19:32:29Z", "updated_at": "2025-02-12T08:54:01Z", "user": "TaylorYangX" }, { "repo": "pytorch/ao", "number": 1699, "title": "[DOC] Questions on Integrating a New CPU Operator into TorchAO\uff1f", "body": "I'm working on integrating a **CPU operator** into TorchAO and have a few questions regarding the process:\n\n### How can I add a New **_CPU Operator_** in 'torchao/csrc':\n\n* What is the recommended approach for adding a new CPU operator in the 'csrc' directory?\n\n* Are there any specific guidelines or templates I should follow to ensure compatibility with the existing codebase?\n\n### How can I Remove or Disable current CUDA Operators:\n\n* How can I remove or disable all existing CUDA operators in the codebase?\n\n* Are there any configuration flags or build options that can be used to exclude CUDA-related code during compilation?\n\n### How can I Move Experimental MPS and CPU Code to TorchAO:\n\n* I noticed that there is experimental code for MPS and CPU in the repository (torchao/experimental/kernels'). What is the process for moving this code into the main TorchAO module?\n\n* Are there any specific considerations or steps I should follow to ensure a smooth transition?\n\nThank you for your help!\n\n", "url": "https://github.com/pytorch/ao/issues/1699", "state": "open", "labels": [ "question", "cpu" ], "created_at": "2025-02-11T12:03:02Z", "updated_at": "2025-02-13T01:53:33Z", "user": "Zijie-Tian" }, { "repo": "pytorch/pytorch", "number": 146889, "title": "How to customize a torch.Tensor() method to access the underlying data structure of a PyTorch tensor.", "body": "### \ud83d\udc1b Describe the bug\n\n1. 
How to customize a torch.Tensor() method and call PyTorch's THPVariable_pynew function to obtain the underlying data structure of the original Tensor.\n\n![Image](https://github.com/user-attachments/assets/8228c07f-306b-4d7e-b162-f06e6ce7c7dc)\n\ntensor = torch.Tensor(3,4).to(\"new_one\") -> initModule()->Module.cpp->and run in \nhttps://github.com/pytorch/pytorch/blob/32f585d9346e316e554c8d9bf7548af9f62141fc/torch/csrc/autograd/python_variable.cpp#L1891\n\n2.This is my project: https://github.com/xiangxinhello/torch_new_tensor. My project is based on modifications of https://github.com/pytorch/pytorch/tree/v2.5.0/test/cpp_extensions/open_registration_extension, but I was unable to modify it successfully.\n\n3.I want to obtain the underlying data structure information of a PyTorch tensor through a custom torch.Tensor method.\n\n### Versions\n\nPyTorch version: 2.5.0a0+gita8d6afb\nIs debug build: True\nCUDA used to build PyTorch: 12.4\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.4 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: 14.0.0-1ubuntu1.1\nCMake version: version 3.31.2\nLibc version: glibc-2.35\n\nPython version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)\nPython platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 12.4.131\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090\nNvidia driver version: 550.120\ncuDNN version: Could not collect\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 96\nOn-line CPU(s) list: 0-95\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz\nCPU family: 6\nModel: 85\nThread(s) per core: 2\nCore(s) per socket: 24\nSocket(s): 2\nStepping: 7\nCPU max MHz: 4000.0000\nCPU min MHz: 1200.0000\nBogoMIPS: 6000.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 1.5 MiB (48 instances)\nL1i cache: 1.5 MiB (48 instances)\nL2 cache: 48 MiB (48 instances)\nL3 cache: 71.5 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-23,48-71\nNUMA node1 CPU(s): 24-47,72-95\nVulnerability Gather data sampling: Mitigation; Microcode\nVulnerability Itlb multihit: KVM: Mitigation: VMX disabled\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: 
Mitigation; Enhanced IBRS\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Mitigation; TSX disabled\n\nVersions of relevant libraries:\n[pip3] numpy==2.2.1\n[pip3] optree==0.13.1\n[pip3] torch==2.5.0a0+gita8d6afb\n[conda] magma-cuda121 2.6.1 1 pytorch\n[conda] mkl-include ", "url": "https://github.com/pytorch/pytorch/issues/146889", "state": "open", "labels": [ "triaged", "tensor subclass" ], "created_at": "2025-02-11T07:18:54Z", "updated_at": "2025-04-14T17:40:25Z", "user": "xiangxinhello" }, { "repo": "pytorch/torchtitan", "number": 831, "title": "converging.md", "body": "In the [page](https://github.com/pytorch/torchtitan/blob/main/docs/converging.md) . Can someone please clarify the the following.\n\n1. How many (dp) and what type of GPU was used for the [chart](https://github.com/pytorch/torchtitan/blob/main/docs/converging.md#test-results). \n2. What is FSDP 8 , 8 GPU's or FP 8 ?\n3. ", "url": "https://github.com/pytorch/torchtitan/issues/831", "state": "closed", "labels": [ "question" ], "created_at": "2025-02-11T04:15:19Z", "updated_at": "2025-03-17T19:13:39Z", "user": "githubsgi" }, { "repo": "pytorch/torchtitan", "number": 828, "title": "Any optimized suggestions for fast save ema/model/optim and resume training from them all.", "body": "By using dcp.async_save, we can save the model and optimizer asynchronously, preventing them from blocking the training process. However, if I also want to save the EMA (Exponential Moving Average) model, the typical approach would be to create another async_save call for the EMA. According to the documentation, it's \"recommended to limit checkpoints to one asynchronous request at a time to avoid additional memory pressure per request\". Therefore, either the EMA or the model/optimizer must be saved synchronously, which can potentially block the main training process. 
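Concretely, the pattern I have today looks roughly like the sketch below (simplified; `ema_model` and the state-dict handling are placeholders from my own code, and I wait on the previous async request before issuing a new one, as the docs recommend):

```python
# Simplified sketch of my current save path: EMA synchronously, model/optim async.
import torch.distributed.checkpoint as dcp

def save_checkpoint(step, model, optimizer, ema_model, prev_future=None):
    if prev_future is not None:
        prev_future.result()  # keep at most one async request in flight

    # EMA goes through the blocking path first.
    dcp.save({"ema": ema_model.state_dict()}, checkpoint_id=f"ckpt/step-{step}/ema")

    # Model + optimizer go through the non-blocking path.
    state = {"model": model.state_dict(), "optim": optimizer.state_dict()}
    return dcp.async_save(state, checkpoint_id=f"ckpt/step-{step}")
```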
If the model is large, saving the EMA first can incur significant overhead.\n\nCould you share any best practices for optimizing this save function to facilitate resuming training smoothly?", "url": "https://github.com/pytorch/torchtitan/issues/828", "state": "closed", "labels": [ "question", "module: distributed_state_dict" ], "created_at": "2025-02-10T10:39:16Z", "updated_at": "2025-02-13T07:39:35Z", "user": "tangjiasheng" }, { "repo": "pytorch/audio", "number": 3879, "title": "How to use filtfilt() function?", "body": "I'm trying to move from scipy to torchaudio.\nHere is my code below:\n```python\nfrom torchaudio.functional.filtering import filtfilt\nfrom scipy import signal\n\nbh, ah = signal.butter(N=5, Wn=48, btype=\"high\", fs=16000)\n\naudio = sample_input\n\nprint(f\"Audio contains nan: {torch.isnan(torch.from_numpy(audio).float().to(torch.float64)).any()}\")\nprint(f\"Audio contains inf: {torch.isinf(torch.from_numpy(audio).float().to(torch.float64)).any()}\")\nprint(f\"Audio min: {torch.from_numpy(audio).float().to(torch.float64).min()}\")\nprint(f\"Audio max: {torch.from_numpy(audio).float().to(torch.float64).max()}\")\nprint(f\"Audio mean: {torch.from_numpy(audio).float().to(torch.float64).mean()}\")\nprint(f\"Audio shape: {torch.from_numpy(audio).float().to(torch.float64).shape}\")\n\nprint(f\"bh contains nan: {torch.isnan(torch.from_numpy(bh).float().to(torch.float64)).any()}\")\nprint(f\"bh contains inf: {torch.isinf(torch.from_numpy(bh).float().to(torch.float64)).any()}\")\nprint(f\"bh min: {torch.from_numpy(bh).float().to(torch.float64).min()}\")\nprint(f\"bh max: {torch.from_numpy(bh).float().to(torch.float64).max()}\")\nprint(f\"bh mean: {torch.from_numpy(bh).float().to(torch.float64).mean()}\")\nprint(f\"bh shape: {torch.from_numpy(bh).float().to(torch.float64).shape}\")\n\nprint(f\"ah contains nan: {torch.isnan(torch.from_numpy(ah).float().to(torch.float64)).any()}\")\nprint(f\"ah contains inf: {torch.isinf(torch.from_numpy(ah).float().to(torch.float64)).any()}\")\nprint(f\"ah min: {torch.from_numpy(ah).float().to(torch.float64).min()}\")\nprint(f\"ah max: {torch.from_numpy(ah).float().to(torch.float64).max()}\")\nprint(f\"ah mean: {torch.from_numpy(ah).float().to(torch.float64).mean()}\")\nprint(f\"ah shape: {torch.from_numpy(ah).float().to(torch.float64).shape}\")\n\n\naudio = filtfilt(\n waveform=torch.from_numpy(audio).float().to(torch.float64),\n a_coeffs=torch.from_numpy(ah).float().to(torch.float64),\n b_coeffs=torch.from_numpy(bh).float().to(torch.float64)\n)\n\nprint(f\"Audio after filtfilt : {audio}\")\n```\n\nBut actual output is that:\n```python\nAudio contains nan: False\nAudio contains inf: False\nAudio min: -0.858154296875\nAudio max: 0.8670654296875\nAudio mean: 0.00011500650977929034\nAudio shape: torch.Size([1149120])\nbh contains nan: False\nbh contains inf: False\nbh min: -9.699606895446777\nbh max: 9.699606895446777\nbh mean: 0.0\nbh shape: torch.Size([6])\nah contains nan: False\nah contains inf: False\nah min: -9.639544486999512\nah max: 9.757863998413086\nah mean: 1.3907750447591147e-07\nah shape: torch.Size([6])\nAudio after filtfilt : tensor([nan, nan, nan, ..., nan, nan, nan], dtype=torch.float64)\n```\n\nAm i using this function in a wrong way?lol\ud83d\ude02", "url": "https://github.com/pytorch/audio/issues/3879", "state": "closed", "labels": [], "created_at": "2025-02-10T02:56:31Z", "updated_at": "2025-02-10T08:55:03Z", "user": "ElinLiu0" }, { "repo": "pytorch/torchtitan", "number": 827, "title": "How to design TP plan for `nn.GLU`", 
"body": "Hi guys, I'm encountering a challenge in designing TP plans for gated MLP, i.e., [nn.GLU](https://pytorch.org/docs/stable/generated/torch.nn.GLU.html#torch.nn.GLU) with packed weights `w12 = [w1, w2]`, followed by a down proj `w3`\n\nThe plan for separated `w1` and `w2` is quite straightforward\n```\nlayer_tp_plan = {\n # by default ColwiseParallel input layouts is replicated\n # and RowwiseParallel output layouts is replicated\n \"feed_foward.w1\": ColwiseParallel(),\n \"feed_forward.w2\": RowwiseParallel(),\n \"feed_forward.w3\": ColwiseParallel(),\n}\n```\nHowever, I'm unsure how to approach this when using packed weights (`w12 = [w1, w2]`) to leverage the fused GLU for better performance.\n\nCould anyone provide some guidance on how to design an effective TP plan for this scenario?\nThank you \n\n@tianyu-l \n\n\n", "url": "https://github.com/pytorch/torchtitan/issues/827", "state": "closed", "labels": [ "question" ], "created_at": "2025-02-08T23:24:47Z", "updated_at": "2025-02-12T19:43:22Z", "user": "yzhangcs" }, { "repo": "pytorch/pytorch", "number": 146682, "title": "How to get last layer hidden state of transformer model while convert model to onnx format?", "body": "\n\nI am currently working with a model that has been exported to the ONNX format. For my project, I need to extract the last layer hidden states during inference. However, I couldn\u2019t find any documentation or example that explains how to achieve this using an ONNX-exported model.\n\nWhether the ONNX format retains the capability to extract the last layer hidden states?\n\nThanks!\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/146682", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2025-02-07T08:35:07Z", "updated_at": "2025-03-03T20:42:20Z", "user": "Jianshu-She" }, { "repo": "pytorch/executorch", "number": 8282, "title": "Advise on how to run the training example on iOS", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHello team,\n\nI was wondering if it is possible to run the `train_xor` or a similar training example on an iOS device.\nSo be able to do \n`#import <executorch/extension/training/training_module.h>`\n\nI have followed this guide: https://pytorch.org/executorch/main/apple-runtime and was able to build the xcframework using a local copy of executorch, add it to the Xcode project, and run it on an iOS device.\n\nI guess I need to compile and package the libraries in https://github.com/pytorch/executorch/tree/main/extension/training to the App, but I don't know how to do that, could you give some pointers?\n\nThanks!\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_\n\ncc @shoumikhin @JacobSzwejbka", "url": "https://github.com/pytorch/executorch/issues/8282", "state": "closed", "labels": [ "triaged", "module: ios", "module: training" ], "created_at": "2025-02-06T18:57:43Z", "updated_at": "2025-09-02T16:46:06Z", "user": "YuanTingHsieh" }, { "repo": "pytorch/pytorch", "number": 146575, "title": "How to pip3 torch==2.1.0.dev20230822+cu118", "body": "\n\n> I\u2019ve tried installing this specific version multiple times, but the issue keeps occurring.\n\npip3 install torch==2.1.0.dev20230822+cu118\n```\nERROR: Could not find a version that satisfies the requirement torch==2.1.0.dev20230822+cu118 (from versions: 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 
2.4.1, 2.5.0, 2.5.1, 2.6.0)\nERROR: No matching distribution found for torch==2.1.0.dev20230822+cu118\n```\n\n> PLEASE HELP ME A GUILD TO SOVLE THIS ISSUE <3\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @seemethere @malfet @osalpekar @atalman", "url": "https://github.com/pytorch/pytorch/issues/146575", "state": "closed", "labels": [ "module: binaries", "triaged" ], "created_at": "2025-02-06T06:07:34Z", "updated_at": "2025-02-06T15:14:25Z", "user": "minhphi1712" }, { "repo": "pytorch/ao", "number": 1665, "title": "NF4Tensor and DDP", "body": "I am trying to use `NF4Tensor` weights in my model and wrap it with `DistributedDataParallel`, but get the following error:\n\n```\n[rank0]: model = DistributedDataParallel(\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/path/to/venv/lib/python3.12/site-packages/torch/nn/parallel/distributed.py\", line 837, in __init__\n[rank0]: _sync_module_states(\n[rank0]: File \"/path/to/venv/lib/python3.12/site-packages/torch/distributed/utils.py\", line 313, in _sync_module_states\n[rank0]: _sync_params_and_buffers(process_group, module_states, broadcast_bucket_size, src)\n[rank0]: File \"/path/to/venv/lib/python3.12/site-packages/torch/distributed/utils.py\", line 324, in _sync_params_and_buffers\n[rank0]: dist._broadcast_coalesced(\n[rank0]: File \"/path/to/venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py\", line 745, in _fn\n[rank0]: return fn(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/path/to/venv/lib/python3.12/site-packages/torchao/dtypes/nf4tensor.py\", line 834, in __torch_dispatch__\n[rank0]: raise NotImplementedError(\n[rank0]: NotImplementedError: NF4Tensor dispatch: attempting to run aten.cat.default, this is not supported\n```\n\nTo replicate:\n\n```\nfrom torchao.dtypes.nf4tensor import linear_nf4, to_nf4\nfrom torch.nn.parallel import DistributedDataParallel\nfrom torch import nn\nimport os\nimport torch\n\nclass NF4(nn.Module):\n \n def __init__(\n self,\n device = None,\n ):\n super().__init__()\n\n self.linear = nn.Linear(512, 512, bias=False, device=device)\n self.linear.weight = nn.Parameter(to_nf4(self.linear.weight))\n\n\nif __name__ == \"__main__\":\n \n _local_rank = int(os.getenv(\"LOCAL_RANK\", \"0\"))\n _device = f\"cuda:{_local_rank}\"\n\n torch.distributed.init_process_group(\n backend=\"nccl\",\n init_method=\"env://\",\n device_id=torch.device(_local_rank),\n )\n\n model = NF4(_device)\n\n model = DistributedDataParallel(model)\n```\n\n`torchrun --nproc_per_node=2 script.py`\n\n`NotImplementedError: NF4Tensor dispatch: attempting to run c10d.broadcast_.default, this is not supported`\n\nIs there some way around this issue?", "url": "https://github.com/pytorch/ao/issues/1665", "state": "closed", "labels": [ "question" ], "created_at": "2025-02-05T12:12:27Z", "updated_at": "2025-02-18T02:35:05Z", "user": "psinger" }, { "repo": "pytorch/torchtitan", "number": 821, "title": "WARNING - When using FSDP, it's recommended to enable config.force_recompute_fp8_weight_in_bwd.", "body": "Not necessarily an issue, but I see this log quite a lot when I enable Float8. I can open a PR to turn it on, but was wondering if it was intentional. 
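For reference, the change I had in mind is just setting the flag in the float8 config when FSDP is used, along the lines of the sketch below (my own snippet against torchao's config, not torchtitan's actual config plumbing):

```python
# Sketch: enable the recompute flag together with the FSDP float8 all-gather.
import torch
from torchao.float8 import Float8LinearConfig, convert_to_float8_training

model = torch.nn.Sequential(torch.nn.Linear(4096, 4096, bias=False)).cuda()

config = Float8LinearConfig(
    enable_fsdp_float8_all_gather=True,
    force_recompute_fp8_weight_in_bwd=True,  # what the warning suggests
)
convert_to_float8_training(model, config=config)
```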
Thanks for the great library!", "url": "https://github.com/pytorch/torchtitan/issues/821", "state": "closed", "labels": [ "question", "module: fsdp" ], "created_at": "2025-02-05T05:04:38Z", "updated_at": "2025-02-18T18:32:34Z", "user": "c0g" }, { "repo": "pytorch/ao", "number": 1664, "title": "Tensor subclass methods for `DTensor` and `FSDP2`", "body": "Is there a protocol / interface that a tensor subclass must implement in order to be used with `DTensor` primitives and for training with `FSDP2`?\n\nI've been walking through `NF4` as an example as it [covers both](https://github.com/search?q=repo%3Apytorch%2Fao+FSDP2+and+NF4&type=pullrequests). However, the methods are scattered across `__torch_function__` and `__torch_dispatch__` (though the unittests make it clear which ops are tested for `FSDP`). \n\nIs there a cleaner / expected format for subclassing a tensor such that\n- it can be used with `DTensor` collectives and `FSDP2`, and \n- composed with subclass-specific overrides for streamlined use with `torch.compile`?\n\n@msaroufim @awgu @weifengpy @jerryzh168 \n\n---\n\np.s. Fwiw, also looked at the developer-guide tensor subclass example but found the abstractions a bit hard to follow; would personally prefer using torch-native functionalities.\n", "url": "https://github.com/pytorch/ao/issues/1664", "state": "open", "labels": [ "question" ], "created_at": "2025-02-05T00:40:54Z", "updated_at": "2025-02-05T23:33:35Z", "user": "jeromeku" }, { "repo": "pytorch/torchtitan", "number": 818, "title": "Is user-defined initializers a must-have for FSDP2?", "body": "```\nwith torch.device(\"meta\"):\n model = Transformer()\nfor module in model.modules():\n if isinstance(module, TransformerBlock):\n fully_shard(module)\nfully_shard(model)\nfor tensor in itertools.chain(model.parameters(), model.buffers()):\n assert tensor.device == torch.device(\"meta\")\n# Allocate buffers and sharded parameters on GPU\nmodel.to_empty(device=\"cuda\")\n# Run user-defined initializers\nmodel.init_weights() # or `model.apply(init_weights)`\n```\n\nCould I skip model.init_weights() # or `model.apply(init_weights)`\nif I want to just use the already initialized weights before sharding? ", "url": "https://github.com/pytorch/torchtitan/issues/818", "state": "closed", "labels": [ "question" ], "created_at": "2025-02-04T22:00:45Z", "updated_at": "2025-02-05T18:03:29Z", "user": "goldhuang" }, { "repo": "pytorch/ao", "number": 1653, "title": "[Doc] gemlite version", "body": "What gemlite version is required/supported? Can we specify this in the readme?", "url": "https://github.com/pytorch/ao/issues/1653", "state": "closed", "labels": [ "topic: documentation", "question" ], "created_at": "2025-02-03T14:26:29Z", "updated_at": "2025-05-02T18:00:20Z", "user": "bhack" }, { "repo": "pytorch/text", "number": 2283, "title": "import torchtext fails", "body": "## \ud83d\udc1b Bug\n\nToday I installed torchtext in my Linux Ubuntu. When I tried to import torchtext into python, torchtext failed.\n\nDetails\n\n1. Ubuntu 24.04.1 LTS\n\n2. Python 3.12.3\n\n3. PyTorch Version 2.5.1+cu124 (running fine)\n\n4. During the torchtext install I saw messages suggesting that the version is 0.18, which according to what I read, is the last one to be supported.\n\n5. The error messages I get when I issue the command \"import torchtex\" are below.\n\n6. 
QUESTION: Given that torchtext will not be supported any more, is there an alternative API for text processing in PyTorch that will take the role of torchtext?\n \n\n```\nimport torchtext\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/drv3/hm3/code/python/torch/lib/python3.12/site-packages/torchtext/__init__.py\", line 18, in <module>\n from torchtext import _extension # noqa: F401\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/drv3/hm3/code/python/torch/lib/python3.12/site-packages/torchtext/_extension.py\", line 64, in <module>\n _init_extension()\n File \"/drv3/hm3/code/python/torch/lib/python3.12/site-packages/torchtext/_extension.py\", line 58, in _init_extension\n _load_lib(\"libtorchtext\")\n File \"/drv3/hm3/code/python/torch/lib/python3.12/site-packages/torchtext/_extension.py\", line 50, in _load_lib\n torch.ops.load_library(path)\n File \"/drv3/hm3/code/python/torch/lib/python3.12/site-packages/torch/_ops.py\", line 1350, in load_library\n ctypes.CDLL(path)\n File \"/usr/lib/python3.12/ctypes/__init__.py\", line 379, in __init__\n self._handle = _dlopen(self._name, mode)\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n```\n", "url": "https://github.com/pytorch/text/issues/2283", "state": "open", "labels": [], "created_at": "2025-02-03T01:20:48Z", "updated_at": "2025-02-03T01:20:48Z", "comments": 0, "user": "JuanVargas" }, { "repo": "pytorch/pytorch", "number": 146241, "title": "How to perform BF16 matrix multiplication so that multiplication is done in BF16 and summation is done in FP32 efficiently using pytorch API?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nNVIDIA's cutlass library can perform BF16 matrix multiplication so that multiplication is done in BF16 and summation is done in FP32 for improved numerical stability. For example, consider the following snippet from [this code example from flash-attention](https://github.com/Dao-AILab/flash-attention/blob/02541ac9e8382f4d8e17f1f2ba0d7de2c792390c/csrc/flash_attn/src/flash_fwd_kernel.h#L319) calling it: \n```\n FLASH_NAMESPACE::gemm</*A_in_regs=*/Kernel_traits::Is_Q_in_regs>(\n acc_s, tSrQ, tSrK, tSsQ, tSsK, tiled_mma, smem_tiled_copy_Q, smem_tiled_copy_K,\n smem_thr_copy_Q, smem_thr_copy_K\n );\n```\nwhere `tSrQ`, `tSrK`, `tSsQ`, `tSsK` is BF16/FP16, while final result `acc_s` is FP32.\n\nI notice [pytorch's BF16 matrix mulitiplication](https://pytorch.org/docs/stable/notes/numerical_accuracy.html#reduced-precision-reduction-for-fp16-and-bf16-gemms) will use FP32 as intermediate accumulations, but final result is downcast to BF16 anyway. I experimented with the `out` parameter and `autocast`, but neither provided a complete solution.\n\nSurely, below code can implement BF16 matrix multiplication so that multiplication is done in BF16 and summation is done in FP32\n```\nA = torch.randn((12, 3, 4, 5), dtype=torch.bfloat16)\nB = torch.randn((12, 3, 5, 6), dtype=torch.bfloat16)\nC = torch.einsum(\"...ij,...jk->...ijk\", A, B).sum(dtype=torch.float32, dim=-2)\n```\nHowever, I have serious reservations about the speed and memory efficiency of this approach. 
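To make the concern concrete, a quick back-of-the-envelope check (illustrative numbers only): the einsum materializes the full `(..., i, j, k)` product before the reduction, so the intermediate is K times larger than the matmul output.

```python
# Shapes from the snippet above: A is (12, 3, 4, 5), B is (12, 3, 5, 6)
B_, M, K, N = 12 * 3, 4, 5, 6          # batch dims folded together

out_elems = B_ * M * N                 # matmul result: 864 elements
intermediate_elems = B_ * M * K * N    # einsum intermediate: 4320 elements
print(intermediate_elems / out_elems)  # 5.0 == K

# For realistic transformer shapes (K in the thousands), the intermediate is K times
# the output, which is why this workaround does not scale in memory or speed.
```
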
I wonder if There is a more pytorch way to call the corresponding CUTLASS API.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @ptrblck @msaroufim @eqy @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @albanD", "url": "https://github.com/pytorch/pytorch/issues/146241", "state": "closed", "labels": [ "module: cuda", "triaged", "module: linear algebra", "module: python frontend", "matrix multiplication" ], "created_at": "2025-02-01T13:13:18Z", "updated_at": "2025-04-18T05:02:40Z", "user": "Wongboo" }, { "repo": "pytorch/xla", "number": 8660, "title": "Torch XLA Model all_gather does not work with tensors of different sizes along dimension 0", "body": "## \ud83d\udc1b Bug\nTorch XLA Model all_gather works with tensors of same size along `dim=0`, but if tensor sizes are different along `dim=0`, it hangs.\n\n## To Reproduce\n\nSave this code in `test_all_gather.py` \n\n```\nimport torch\nimport torch_xla.core.xla_model as xm\nimport torch_xla.runtime as xr\nimport torch_xla.distributed.xla_backend as xb\nimport torch.distributed\n\n\ndef test_all_gather():\n\n same = [512, 512, 512, 512, 512, 512, 512, 512]\n\n different = [416, 536, 560, 544, 576, 512, 592, 360]\n torch.distributed.init_process_group(backend=\"xla\", init_method=\"xla://\") \n\n rank = torch.distributed.get_rank()\n device = xm.xla_device()\n input = torch.randn((same[rank], 16), dtype=torch.float32, device=device)\n \n all_inputs = xm.all_gather(input, dim=0, groups=[[0,1,2,3,4,5,6,7]], pin_layout=False)\n print(f\"!!!!!! rank: {rank}, all_inputs: {all_inputs}\")\n \n input = torch.randn((different[rank], 16), dtype=torch.float32, device=device)\n \n all_inputs = xm.all_gather(input, dim=0, groups=[[0,1,2,3,4,5,6,7]], pin_layout=False)\n \n print(f\"!!!!!! rank: {rank}, all_inputs: {all_inputs}\")\n torch.distributed.destroy_process_group()\n \nif __name__ == \"__main__\":\n test_all_gather()\n```\n\n```\ntorchrun --nproc_per_node=8 test_all_gather.py\n```\n\n## Expected behavior\n\nIt should gather all the tensors from all the devices along `dim=0`\n\n## Environment\n\nDocker image\n`us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.5.0_3.10_cuda_12.4`\n\n\n## Additional context\n\nAccording to this documentation for `all_gather` https://pytorch.org/docs/stable/distributed.html uneven tensor sizes are supported.\n", "url": "https://github.com/pytorch/xla/issues/8660", "state": "open", "labels": [ "enhancement", "distributed", "usability" ], "created_at": "2025-01-31T22:02:27Z", "updated_at": "2025-03-04T22:52:46Z", "comments": 6, "user": "ajayvohra2005" }, { "repo": "pytorch/torchtitan", "number": 813, "title": "HSDP causes loss instability", "body": "I have a codebase forked from torchtitan with minor changes. FSDP trains very well with minimal instability, but HSDP on the same codebase exhibits loss spikes.\n\nIs there some reason for this you folks can think of? 
Note that I have implemented gradient accumulation in my fork, though without changing any sharding behavior (just to accumulate the gradients on a larger batchsize)", "url": "https://github.com/pytorch/torchtitan/issues/813", "state": "closed", "labels": [ "question", "module: fsdp" ], "created_at": "2025-01-31T03:27:09Z", "updated_at": "2025-08-21T03:06:46Z", "user": "apkumar" }, { "repo": "pytorch/vision", "number": 8889, "title": "Torchvision 0.20.1 looks for jpeg9 on MacOS, while depending on libjpeg-turbo which only provides jpeg8", "body": "### \ud83d\udc1b Describe the bug\n\nHi, I tried to create a new conda environment torch + torchvision + torchaudio + blas accelerate on a MacOS 14. \n\nPost installation, when I try to import the torchvision library, I get a warning about missing libjpeg9.\n\nI have added more details below. Just wanted to bring this to your attention for triage and if there is an issue to be fixed. Cheers!\n\n\n(Replaced full path with CONDA_PREFIX and added newlines to make it clearer)\n\n```bash\nmamba create -n env -c pytorch 'pytorch=2.5.1' torchvision torchaudio 'libblas=*=*accelerate'\nmamba run -n env python\nPython 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:19:53) [Clang 18.1.8 ] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import torchvision\n{CONDA_PREFIX}/lib/python3.12/site-packages/torchvision/io/image.py:14: UserWarning: Failed to load image Python extension: 'dlopen({CONDA_PREFIX}/lib/python3.12/site-packages/torchvision/image.so, 0x0006): Library not loaded: @rpath/libjpeg.9.dylib\n Referenced from: <367D4265-B20F-34BD-94EB-4F3EE47C385B>{CONDA_PREFIX}/lib/python3.12/site-packages/torchvision/image.so\n Reason: tried: \n'{CONDA_PREFIX}/lib/python3.12/site-packages/torchvision/../../../libjpeg.9.dylib' (no such file),\n'{CONDA_PREFIX}/lib/python3.12/site-packages/torchvision/../../../libjpeg.9.dylib' (no such file), \n'{CONDA_PREFIX}/lib/python3.12/lib-dynload/../../libjpeg.9.dylib' (no such file), \n'{CONDA_PREFIX}/bin/../lib/libjpeg.9.dylib' (no such file)'\nIf you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. 
Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?\n warn(\n>>>\n```\n\nI tried to find the jpeg libraries in the conda environment with find command\n\n```bash\nfind ${CONDA_PREFIX} -name 'libjpeg*.dylib' \n{CONDA_PREFIX}/lib/libjpeg.8.3.2.dylib\n{CONDA_PREFIX}/lib/libjpeg.8.dylib\n{CONDA_PREFIX}/lib/libjpeg.dylib\n```\n\nWhen I run `otool`, I see that it is linked against jpeg9, while installing libjpeg-turbo as a dependency, which only provides jpeg8.\n\n```\n$ otool -L $CONDA_PREFIX/lib/python3.1/site-packages/torchvision/image.so\n{CONDA_PREFIX}/lib/python3.1/site-packages/torchvision/image.so:\n @rpath/libpng16.16.dylib (compatibility version 56.0.0, current version 56.0.0)\n @rpath/libjpeg.9.dylib (compatibility version 15.0.0, current version 15.0.0)\n @rpath/libwebp.7.dylib (compatibility version 9.0.0, current version 9.8.0)\n @rpath/libc10.dylib (compatibility version 0.0.0, current version 0.0.0)\n @rpath/libtorch.dylib (compatibility version 0.0.0, current version 0.0.0)\n @rpath/libtorch_cpu.dylib (compatibility version 0.0.0, current version 0.0.0)\n @rpath/libtorch_python.dylib (compatibility version 0.0.0, current version 0.0.0)\n @rpath/libc++.1.dylib (compatibility version 1.0.0, current version 1.0.0)\n /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1345.100.2)\n```\n\nconda packages\n```\n blas 2.128 accelerate conda-forge\n blas-devel 3.9.0 28_h55bc449_accelerate conda-forge\n brotli-python 1.1.0 py312hde4cb15_2 conda-forge\n bzip2 1.0.8 h99b78c6_7 conda-forge\n ca-certificates 2024.12.14 hf0a4a13_0 conda-forge\n certifi 2024.12.14 pyhd8ed1ab_0 conda-forge\n cffi 1.17.1 py312h0fad829_0 conda-forge\n charset-normalizer 3.4.1 pyhd8ed1ab_0 conda-forge\n cpython 3.12.8 py312hd8ed1ab_1 conda-forge\n filelock 3.17.0 pyhd8ed1ab_0 conda-forge\n freetype 2.12.1 hadb7bae_2 conda-forge\n giflib 5.2.2 h93a5062_0 conda-forge\n gmp 6.3.0 h7bae524_2 conda-forge\n gmpy2 2.1.5 py312h524cf62_3 conda-forge\n h2 4.1.0 pyhd8ed1ab_1 conda-forge\n hpack 4.1.0 pyhd8ed1ab_0 conda-forge\n hyperframe 6.1.0 pyhd8ed1ab_0 conda-forge\n idna 3.10 pyhd8ed1ab_1 conda-forge\n jinja2 3.1.5 pyhd8ed1ab_0 conda-forge\n lcms2 2.16 ha0e7c42_0 conda-forge\n lerc 4.0.0 h9a09cb3_0 conda-forge\n libblas 3.9.0 28_h504e6c8_accelerate conda-forge\n libcblas 3.9.0 28_h8d39bcd_accelerate conda-forge\n libcxx 19.1.7 ha82da77_0 conda-forge\n libdeflate ", "url": "https://github.com/pytorch/vision/issues/8889", "state": "open", "labels": [], "created_at": "2025-01-30T16:57:13Z", "updated_at": "2025-09-22T13:02:58Z", "comments": 4, "user": "IMG-PRCSNG" }, { "repo": "pytorch/pytorch", "number": 145978, "title": "What is the recommended way to use Distributed Checkpointing Save/Load with HSDP?", "body": "### \ud83d\udc1b Describe the bug\n\nThere are torch distributed checkpointing examples in [torch/distributed/checkpoint/examples](https://github.com/pytorch/pytorch/tree/main/torch/distributed/checkpoint/examples). All of these examples use FSDP. Running these examples out of the box has no issues, the loaded checkpoint state matches the saved checkpoint state. However, when I convert these examples to run HSDP instead of FSDP, I notice that loaded state no longer matches the saved state. 
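For reference, the save/load pattern I understand to be recommended for sharded models is roughly the sketch below (hedged: the `get_model_state_dict` / `set_model_state_dict` helpers are from recent PyTorch releases, `model` stands for the HSDP-wrapped module, and signatures may differ):

```python
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.state_dict import (
    get_model_state_dict,
    set_model_state_dict,
)

# Save: pull a DCP-friendly (sharded) state dict out of the wrapped model.
dcp.save({"model": get_model_state_dict(model)}, checkpoint_id="ckpt_dir")

# Load: build a template state dict from the current (HSDP) wrapping, fill it from
# disk, then push it back into the model.
state = {"model": get_model_state_dict(model)}
dcp.load(state, checkpoint_id="ckpt_dir")
set_model_state_dict(model, state["model"])
```
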
\n\nHow I am converting from FSDP to HSDP:\n```\nmodel = FSDP(\n torch.nn.Linear(4, 4).cuda(dist.get_rank()),\n device_mesh=mesh,\n sharding_strategy=ShardingStrategy.HYBRID_SHARD\n )\n```\n\n[Link](https://gist.github.com/gkroiz/fcf5ed19665bc09475057f8bf626e853) to gist of updated [torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py](https://github.com/pytorch/pytorch/blob/main/torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py) with HSDP modifications and printed output.\n\nI also made similar changes to [torch/distributed/checkpoint/examples/stateful_example.py](https://github.com/pytorch/pytorch/blob/main/torch/distributed/checkpoint/examples/stateful_example.py) and saw the same discrepancies between saved and loaded state.\n\nEither (1) I'm setting up HSDP + distributed checkpointing incorrectly or (2) there is a bug with distributed checkpointing. Assuming (1), what is the correct way to set up HSDP + distributed checkpointing?\n\n### Versions\n\n```\nmy_vm:/workspace# python collect_env.py\n/usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using `python3 -m pip install --upgrade 'optree>=0.13.0'`.\n warnings.warn(\nCollecting environment information...\nPyTorch version: 2.6.0+cu124\nIs debug build: False\nCUDA used to build PyTorch: 12.4\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.4 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: Could not collect\nCMake version: version 3.30.0\nLibc version: glibc-2.35\n\nPython version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)\nPython platform: Linux-6.6.44+-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 12.5.82\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA H100 80GB HBM3\nGPU 1: NVIDIA H100 80GB HBM3\nGPU 2: NVIDIA H100 80GB HBM3\nGPU 3: NVIDIA H100 80GB HBM3\nGPU 4: NVIDIA H100 80GB HBM3\nGPU 5: NVIDIA H100 80GB HBM3\nGPU 6: NVIDIA H100 80GB HBM3\nGPU 7: NVIDIA H100 80GB HBM3\n\nNvidia driver version: 550.90.07\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 208\nOn-line CPU(s) list: 0-207\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 52\nSocket(s): 2\nStepping: 8\nBogoMIPS: 5399.99\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced 
fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 4.9 MiB (104 instances)\nL1i cache: ", "url": "https://github.com/pytorch/pytorch/issues/145978", "state": "open", "labels": [ "oncall: distributed", "triaged", "release notes: distributed (checkpoint)", "oncall: distributed checkpointing" ], "created_at": "2025-01-29T22:24:11Z", "updated_at": "2025-04-08T15:58:03Z", "user": "gkroiz" }, { "repo": "pytorch/torchtitan", "number": 811, "title": "FSDP checkpoints don't load when run is restarted with greater world size", "body": "A checkpoint is saved from an 8-GPU run with `dp_shard ` set to 8 and all other parallelisms set to 1. My understanding is that this is configured as an FSDP run.\n\nThe checkpoint is resumed from 16 GPUs with `dp_shard` now set to 16. When loading the checkpoint, we get this error:\n\n```\n[rank0]: Traceback (most recent call last): (RANK 15) [rank0]: File \"/app/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py\", line 164, in reduce_scatter [rank0]: local_data = map_fun() [rank0]: File \"/app/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/logger.py\", line 83, in wrapper \n[rank0]: result = func(*args, **kwargs) \n[rank0]: File \"/app/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_loader.py\", line 211, in local_step \n[rank0]: local_plan = planner.create_local_plan() \n[rank0]: File \"/app/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/default_planner.py\", line 233, in create_local_plan \n[rank0]: return create_default_local_load_plan( \n[rank0]: File \"/app/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/default_planner.py\", line 354, in create_default_local_load\n[rank0]: raise RuntimeError(f\"Missing key in checkpoint state_dict: {fqn}.\") \n[rank0]: RuntimeError: Missing key in checkpoint state_dict: dataloader.dp_rank_15. \n```\n\nMy understanding is that torch distributed checkpoints are supposed to support dynamic resharding at load time. Does this not work with torchtitan?\n\nI was able to successfully resume a checkpoint going down from 32 GPUs to 16.\n", "url": "https://github.com/pytorch/torchtitan/issues/811", "state": "closed", "labels": [ "bug", "documentation", "enhancement", "module: fsdp" ], "created_at": "2025-01-28T21:38:09Z", "updated_at": "2025-02-07T01:22:26Z", "comments": 4, "user": "darkmirage" }, { "repo": "pytorch/xla", "number": 8642, "title": "Make Mixtral pallas kernels Dynamo/AOTAutograd traceable", "body": "Similar to https://github.com/pytorch/xla/issues/8633, we'll need to refactor pallas kernels needed by Mixtral (e.g. 
GMM) into PyTorch custom ops in order to use scan in Mixtral.", "url": "https://github.com/pytorch/xla/issues/8642", "state": "open", "labels": [ "enhancement", "pallas" ], "created_at": "2025-01-28T19:29:33Z", "updated_at": "2025-02-13T13:15:27Z", "comments": 1, "user": "tengyifei" }, { "repo": "pytorch/xla", "number": 8632, "title": "[scan] Avoid re-tracing the combine function on every call", "body": "## \ud83d\ude80 Feature\n\nIt should be possible to somehow cache the traced graphs in `torch_xla.experimental.scan` so we don't trace on every call.\n\n## Motivation\n\nToday `torch_xla.experimental.scan` and `scan_layers` traces the user function with both AOTAutograd (to get the backward) and with LazyTensor (to lower them to HLO). AOTAutograd is very slow and we can easily become tracing bound. For example, `python3 examples/train_decoder_only_base.py` takes 1min30s but `python3 examples/train_decoder_only_base.py scan.decoder_with_scan.DecoderWithScan` takes 4min.\n\n## Pitch\n\nWe could wait for `torch.scan` to support autograd (c.f. https://github.com/pytorch/xla/pull/7901#issuecomment-2546903424) which will take a long time. In the meantime, we can implement some simple caching based on the `id` of the input function/module.\n\nThe caching should be opt-in because it's only sound if the function is pure. We can add a `assume_pure=True` argument to `scan` so that it only uses the caching when the user confirms that their function is pure.", "url": "https://github.com/pytorch/xla/issues/8632", "state": "closed", "labels": [ "enhancement", "good first issue", "performance" ], "created_at": "2025-01-27T06:30:47Z", "updated_at": "2025-06-19T20:02:13Z", "comments": 21, "user": "tengyifei" }, { "repo": "pytorch/text", "number": 2282, "title": "combining TEXT.build_vocab with BERT Embedding", "body": "## \u2753 Questions and Help\n\n**Description**\n\nHi, we can use glove embedding when building vocab, using\nsomething like:\n\n```\nMIN_FREQ = 2\n\nTEXT.build_vocab(train_data, \n min_freq = MIN_FREQ,\n vectors = \"glove.6B.300d\",\n unk_init = torch.Tensor.normal_)\n```\n\n<!-- Please send questions or ask for help here. -->\n\nHowever, I want to use BERT embedding because I need a sophisticated model to compare the performance of multiple embeddings. How can I use BERT in build_vocab?", "url": "https://github.com/pytorch/text/issues/2282", "state": "open", "labels": [], "created_at": "2025-01-27T02:11:21Z", "updated_at": "2025-01-27T02:11:21Z", "comments": 0, "user": "muhalfian" }, { "repo": "pytorch/torchtitan", "number": 803, "title": "Gradient Scaling With Pipeline Parallelism", "body": "The idiomatic way to perform gradient scaling is something like this:\n```python\npreds = model(inputs)\nloss = loss_fn(preds, targets)\nscaler.scale(loss).backward()\n```\n\nGiven that the current PyTorch PP API handles the backward pass *internally*, I find it difficult to do gradient scaling under a PP regime.\n\n```python\nif is_first_stage:\n pp_schedule.step(inputs) # bwd performed internally\nelif is_last_stage:\n losses = []\n pp_schedule.step(target=targets, losses=losses) # bwd performed internally\nelse:\n pp_schedule.step() # bwd performed internally\n\nloss = (\n torch.mean(torch.stack(losses)).to(device)\n if is_last_stage\n else torch.tensor([-1.0], device=device)\n)\n\n# scaler.scale(loss).backward() <-- !? backward pass has already been performed\n```\n\nIs there currently a good way to do gradient scaling with Pipeline Parallelism? 
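For example, is the intended workaround to fold the scaling into the `loss_fn` that the schedule calls internally, along the lines of this rough, unverified sketch (`build_pipeline_schedule` and the loss shapes are placeholders)?

```python
import torch
import torch.nn.functional as F

scaler = torch.amp.GradScaler()

def scaled_loss_fn(output, target):
    # The schedule runs backward on whatever this returns, so scaling here means the
    # gradients it produces internally are already scaled.
    loss = F.cross_entropy(output.flatten(0, 1), target.flatten(0, 1))
    return scaler.scale(loss)

# pp_schedule = build_pipeline_schedule(stages, loss_fn=scaled_loss_fn)  # placeholder
# scaler.step(optimizer); scaler.update()  # would still run after pp_schedule.step()
```
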
And if not, will the Pipeline Parallelism API support gradient scaling in the near-term future?\n", "url": "https://github.com/pytorch/torchtitan/issues/803", "state": "open", "labels": [ "question", "module: pipelining" ], "created_at": "2025-01-24T12:16:16Z", "updated_at": "2025-02-06T23:28:00Z", "user": "windsornguyen" }, { "repo": "pytorch/xla", "number": 8617, "title": "Single core of TPU gives inference results different than the CPU results", "body": "# Description\nI encountered an issue when using PyTorch XLA to train a model on TPU. My main code gives a different results than training with CPU or GPU so I decided to check using a toy example and found that prediction using pytorch XLA gives results different than prediction using CPU.\nI also tried to check using pytorch lightning but it gives the same result like CPU so how to setup pytorch xla to give identical results like lightning?\n[Notebook](https://www.kaggle.com/code/saadsallam/tpu-cpu)\n", "url": "https://github.com/pytorch/xla/issues/8617", "state": "closed", "labels": [ "duplicate", "xla:tpu" ], "created_at": "2025-01-23T21:47:15Z", "updated_at": "2025-02-06T14:39:41Z", "comments": 1, "user": "mohamedamara7" }, { "repo": "pytorch/tutorials", "number": 3254, "title": "How to download pretrained word language quantized model?", "body": "In the word language quantized model tutorial, we assume we already have pretrained model.\nBut where can we download the model?\n\nhttps://github.com/pytorch/tutorials/blob/main/advanced_source/dynamic_quantization_tutorial.py#L151-L157", "url": "https://github.com/pytorch/tutorials/issues/3254", "state": "closed", "labels": [ "easy", "docathon-h1-2025" ], "created_at": "2025-01-23T20:29:10Z", "updated_at": "2025-06-04T21:05:05Z", "user": "Achilles718611" }, { "repo": "pytorch/torchtitan", "number": 801, "title": "[Possible Bug] RoPE here is GPT-J style instead of NeoX/Llama style?", "body": "I might miss something so please let me know if I do, and in this case I will close the issue.\n\nAs we know, GPT-J and NeoX/Llama apply RoPE slightly differently (per hugging face implementation):\n- the way GPT-J treats `q, k` as \"complex tensor\" is an interleaving style: `[q_0_real, q_0_imaginary, q_1_real, q_1_imaginary, ...]`\n- the way NeoX/Llama and almost all other RoPE based models treat them by \"rotating half\": `[q_0_real, q_1_real, ..., q_0_imaginary, q_1_imaginary, ...]` (see [here](https://github.com/huggingface/transformers/blob/2c3a44f9a769e98597d62ecdc7383785318be5a2/src/transformers/models/llama/modeling_llama.py#L150))\n\nThe way written here seems interesting:\nhttps://github.com/pytorch/torchtitan/blob/d9898423ecef131825d13c6c8b521a24e889785f/torchtitan/models/llama/model.py#L108\nIf I'm not mistaken, it is actually an interleaving style because `view_as_complex` uses the last axis as real and imaginary parts which are entries next to each other? I'm able to confirm this by spinning up a notebook session and compare it with hugging face's attention layer side-by-side. 
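To make the layout difference concrete, here is the per-head index permutation relating the two conventions (a small self-contained check, not code from either repo):

```python
import torch

D = 8  # head_dim
# Interleaved (GPT-J / view_as_complex) layout:  [r0, i0, r1, i1, ...]
# Rotate-half (NeoX / HF Llama) layout:          [r0, r1, ..., i0, i1, ...]
perm = torch.cat([torch.arange(0, D, 2), torch.arange(1, D, 2)])

x = torch.arange(D)
print(x.tolist())        # [0, 1, 2, 3, 4, 5, 6, 7]
print(x[perm].tolist())  # [0, 2, 4, 6, 1, 3, 5, 7]

# Applying this permutation per head to the output rows of wq / wk is the usual way the
# two conventions are reconciled during checkpoint conversion; without it, the two
# apply-RoPE implementations operate on differently ordered channels.
```
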
After fixing `apply_rotary_emb` it will be possible to match the attention outputs to a very good degree (though I haven't been able to match the outputs of the entire model with hugging face).\n\nThese two ways can be unified by carefully rearrange the columns in the weights of `wq` and `wk`, but I don't see it done in the model conversion script https://github.com/pytorch/torchtitan/blob/main/scripts/convert_llama_to_dcp.py \n\nIs it an oversight or did I miss something in the code?", "url": "https://github.com/pytorch/torchtitan/issues/801", "state": "closed", "labels": [], "created_at": "2025-01-22T23:32:36Z", "updated_at": "2025-01-22T23:58:48Z", "comments": 1, "user": "honglu2875" }, { "repo": "pytorch/text", "number": 2279, "title": "Could we have Android (Termux) Support?", "body": "# \u0628\u0633\u0645 \u0627\u0644\u0644\u0647 \u0627\u0644\u0631\u062d\u0645\u0627\u0646 \u0627\u0644\u0631\u062d\u064a\u0645 \u0627\u0645\u0649 \u0628\u0639\u062f \u0641\u0627\u0644\u0635\u0644\u0627\u0629 \u0648 \u0627\u0644\u0633\u0644\u0627\u0645 \u0639\u0644\u0649 \u0633\u064a\u062f\u0646\u0627 \u0645\u062d\u0645\u062f \u0648\u0639\u0644\u0649 \u0622\u0644\u0647 \u0627\u062c\u0645\u0639\u064a\u0646\n\n## Feature/Issue\n\n* building this project on mobile is pretty hard cuz of using ninja witch tries to build everything concurrently and this is got my phone to hang for a few minutes then OOM killed the process.\n* also it tries the way the build works is by rebuilding `third-party/*` even if they are already installed on the host device.\n\n## Related\n\n* [Termux Open Issue](https://github.com/termux/termux-packages/issues/19405)", "url": "https://github.com/pytorch/text/issues/2279", "state": "open", "labels": [], "created_at": "2025-01-22T05:22:38Z", "updated_at": "2025-01-22T08:45:23Z", "comments": 0, "user": "TunifyBasic" }, { "repo": "pytorch/vision", "number": 8871, "title": "SE module is missing in 'class FusedMBConv', 'efficientnet.py'. Is there a reason for it?", "body": "According to the paper, the FusedMBConv block has an SE module. But I can't find it in the code.", "url": "https://github.com/pytorch/vision/issues/8871", "state": "closed", "labels": [], "created_at": "2025-01-21T05:38:16Z", "updated_at": "2025-01-30T11:34:06Z", "comments": 5, "user": "Morris-Chen007" }, { "repo": "pytorch/vision", "number": 8868, "title": "torchvision version 0.14.0 with cuda version 116 support wheel file suddendly disappeard in download.pytorch.org", "body": "Dear Commnunity team. \n\nI have been using pytorch 1.13.0 and torchvision version 0.14.0 with cuda version 11.6 for my application(pytorch 2.x is not working for my app and torchvision 0.15 does not support pytorch 1.x)\n\nI was embarrased to find out that torchvision version 0.14.0 with cuda 11.6 has been disappeared all of sudden today. \n\nI have been downloading and installing the packages by the following command from old archives in https://download.pytorch.org/whl/cu116\n\npip install torch==1.13.0+cu116 torchvision==0.14.0+cu116 --extra-index-url https://download.pytorch.org/whl/cu116\n\nbut today, torchvision installation doesn't work. it seems many torchvision files has been missing which support pytorch 1.0\n\nIs there anyway I can get the torchvision==0.14.0+cu116 back or any info you know about why this happened?\n\nAny advice will be big help to me. 
It seems many torchvision wheel\n\nThanks in advance ", "url": "https://github.com/pytorch/vision/issues/8868", "state": "closed", "labels": [], "created_at": "2025-01-19T09:28:21Z", "updated_at": "2025-01-20T00:40:31Z", "comments": 0, "user": "chulminkw" }, { "repo": "pytorch/torchtitan", "number": 797, "title": "what is the point of first part of this assertion", "body": "why we need to `assert 0 <= 1`\n\nhttps://github.com/pytorch/torchtitan/blob/d9898423ecef131825d13c6c8b521a24e889785f/torchtitan/models/llama/model.py#L79", "url": "https://github.com/pytorch/torchtitan/issues/797", "state": "closed", "labels": [], "created_at": "2025-01-19T07:05:24Z", "updated_at": "2025-01-19T15:30:25Z", "user": "gameofdimension" }, { "repo": "pytorch/executorch", "number": 7732, "title": "Be able install ET where torch is compiled from source instead of prebuilt (e.g., nightly, release)", "body": "Be able install ET where torch is compiled from source instead of prebuilt (e.g., nightly, release)\n\nThere are a few use-cases why this is useful: \n\n- If there are cross-dependencies between core vs ET and need to progress in lock steps, then we need to be able to install ET and test against locally compiled core.\n\n- Sometimes prebuilt are not available for torch, for example, Intel Mac. In those cases, users are compiling torch from source. In those cases, we should provide an easy way to integrate into ET.\n\ncc @byjlw", "url": "https://github.com/pytorch/executorch/issues/7732", "state": "closed", "labels": [ "triaged", "module: user experience" ], "created_at": "2025-01-17T18:31:51Z", "updated_at": "2025-07-28T11:34:10Z", "user": "mergennachin" }, { "repo": "pytorch/xla", "number": 8588, "title": "Run XLA container with DDP in Vertex AI", "body": "## \u2753 Questions and Help\nHey there! I prepared a Docker container that trains a model using DDP, which works fine in a TPU VM. However, when I run the training job in Vertex AI, it fails. I suspect it's because the `--privileged --net host --shm-size=16G` parameters are not available for the container in Vertex AI. 
Is there a way to run the container without these parameters, or is there a workaround for Vertex AI?\n\nI also prepared a minimal example.\n`run.py`:\n```Python\nimport torch_xla\n\ndef mp_fn(index):\n print(str(index) + ' is ready.')\n\nif __name__ == '__main__':\n torch_xla.launch(\n mp_fn,\n args=()\n )\n```\n\n`Dockerfile`:\n```\nFROM us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.5.0_3.10_tpuvm\n\nCOPY run.py /app/run.py\nWORKDIR /app/\n\nRUN export PJRT_DEVICE=TPU\n\nENTRYPOINT [\"python\"]\nCMD [\"/app/run.py\"]\n```\n\nI create v5litepod-8 TPU VM according to [docs](https://cloud.google.com/tpu/docs/run-in-container#train_a_pytorch_model_in_a_docker_container) and run the container as:\n`sudo docker run --rm --privileged --net host --shm-size=16G -it us-central1-docker.pkg.dev/my_registry/tpu_fail_example:latest` it works alright.\n\nNow to run the same in Vertex AI\n`train-job-spec.yaml`:\n```yaml\nworkerPoolSpecs:\n machineSpec:\n machineType: ct5lp-hightpu-8t\n tpuTopology: 2x4\n\n replicaCount: 1\n containerSpec:\n imageUri: us-central1-docker.pkg.dev/my_registry/tpu_fail_example:latest\n```\n\nAnd run it:\n```bash\ngcloud ai custom-jobs create \\\n --region=us-central1 \\\n --display-name=$HOSTNAME-tpu-fail \\\n --config=train-job-spec.yaml\n```\n\nIt results in error:\n```\nERROR 2025-01-15T11:03:07.776877384Z [resource.labels.taskName: workerpool0-0] concurrent.futures.process._RemoteTraceback:\nERROR 2025-01-15T11:03:07.776892524Z [resource.labels.taskName: workerpool0-0] \"\"\"\nERROR 2025-01-15T11:03:07.776899374Z [resource.labels.taskName: workerpool0-0] Traceback (most recent call last):\nERROR 2025-01-15T11:03:07.776904664Z [resource.labels.taskName: workerpool0-0] File \"/usr/local/lib/python3.10/concurrent/futures/process.py\", line 246, in _process_worker\nERROR 2025-01-15T11:03:07.776919484Z [resource.labels.taskName: workerpool0-0] r = call_item.fn(*call_item.args, **call_item.kwargs)\nERROR 2025-01-15T11:03:07.776924384Z [resource.labels.taskName: workerpool0-0] File \"/usr/local/lib/python3.10/concurrent/futures/process.py\", line 205, in _process_chunk\nERROR 2025-01-15T11:03:07.776928944Z [resource.labels.taskName: workerpool0-0] return [fn(*args) for args in chunk]\nERROR 2025-01-15T11:03:07.776935634Z [resource.labels.taskName: workerpool0-0] File \"/usr/local/lib/python3.10/concurrent/futures/process.py\", line 205, in <listcomp>\nERROR 2025-01-15T11:03:07.776940274Z [resource.labels.taskName: workerpool0-0] return [fn(*args) for args in chunk]\nERROR 2025-01-15T11:03:07.776945034Z [resource.labels.taskName: workerpool0-0] File \"/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py\", line 58, in _run_thread_per_device\nERROR 2025-01-15T11:03:07.776951384Z [resource.labels.taskName: workerpool0-0] initializer_fn(local_rank, local_world_size)\nERROR 2025-01-15T11:03:07.776955894Z [resource.labels.taskName: workerpool0-0] File \"/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py\", line 121, in initialize_multiprocess\nERROR 2025-01-15T11:03:07.776960434Z [resource.labels.taskName: workerpool0-0] devices = xm.get_xla_supported_devices()\nERROR 2025-01-15T11:03:07.776972114Z [resource.labels.taskName: workerpool0-0] File \"/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_model.py\", line 93, in get_xla_supported_devices\nERROR 2025-01-15T11:03:07.776977254Z [resource.labels.taskName: workerpool0-0] devices = torch_xla._XLAC._xla_get_devices()\nERROR 2025-01-15T11:03:07.776981934Z 
[resource.labels.taskName: workerpool0-0] RuntimeError: Bad StatusOr access: UNKNOWN: TPU initialization failed: Failed to establish SliceBuilder grpc channel to localhost:8482.\nERROR 2025-01-15T11:03:07.776987123Z [resource.labels.taskName: workerpool0-0] \"\"\"\nERROR 2025-01-15T11:03:07.776993474Z [resource.labels.taskName: workerpool0-0] {\"levelname\":\"ERROR\", \"message\":\"\"}\nERROR 2025-01-15T11:03:07.776998343Z [resource.labels.taskName: workerpool0-0] The above exception was the direct cause of the following exception:\nERROR 2025-01-15T11:03:07.777002583Z [resource.labels.taskName: workerpool0-0] {\"levelname\":\"ERROR\", \"message\":\"\"}\nERROR 2025-01-15T11:03:07.777008234Z [resource.labels.taskName: workerpool0-0] Traceback (most recent call last):\nERROR 2025-01-15T11:03:07.777013183Z [resource.labels.taskName: workerpool0-0] File \"/app/tpu_minimal_fail/run.py\", line 11, in <module>\nERROR 2025-01-15T11:03:07.777017814Z [resource.labels.taskName: workerpool0-0] torch_xla.launch(\nERROR 2025-01-15T11:03:07.777023334Z [resource.labels.taskName: workerpool0-0] File \"/usr/local/lib/python3.10/site-packages/torch_xla/torch_xla.py\", line 233, in launch\nERROR 2025-01-15T11:03:07.777027923Z [resource.labe", "url": "https://github.com/pytorch/xla/issues/8588", "state": "closed", "labels": [], "created_at": "2025-01-17T11:22:13Z", "updated_at": "2025-01-27T09:55:30Z", "comments": 1, "user": "SteshinSS" }, { "repo": "pytorch/xla", "number": 8587, "title": "[torch_xla2] Wire `torch_xla2.compile`d function with torch `AutogradFunction`", "body": "## \ud83d\ude80 Feature\n<!-- A clear and concise description of the feature proposal -->\nCurrently if we wrap with model with `torch_xla2.compile` and want to train the model using the traditional torch training loop similar to https://github.com/pytorch/xla/blob/master/experimental/torch_xla2/examples/basic_training.py\n\nYou would notice that it doesn't work.\n\nThe reason is because the compile wrapper [`JittableModule`](https://github.com/pytorch/xla/blob/master/experimental/torch_xla2/torch_xla2/interop.py#L50) will eventuall call a `jax.jit`d callable, and torch doesn't know how to compute gradient of that callable.\n\nThe solution is to create a `torch.autograd.Function` subclass on the fly, with backward defined to call `jax.vjp` similar to this tutorial: https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html\n\nThe result would be that wrapping a model with `torch_xla2.compile` it is still trainable.\n\n## Motivation\n\nHaving the forward and backward compiled with jax jit is faster to run.\n\n", "url": "https://github.com/pytorch/xla/issues/8587", "state": "open", "labels": [ "enhancement", "torchxla2" ], "created_at": "2025-01-17T01:18:27Z", "updated_at": "2025-02-11T12:19:27Z", "comments": 0, "user": "qihqi" }, { "repo": "pytorch/kineto", "number": 1028, "title": "Needs help, how to write trace files to remote storage", "body": "Recently, we have deployed dynolog in our gpu cluster to collect trace files via kineto on-demand profiling. It needs extra efforts to collect trace files dumped to local storage via `kineto` for distributed applications. We saw that kineto supports dumping traces files to remote storage in https://github.com/facebookincubator/dynolog/blob/main/docs/pytorch_profiler.md, which is exactly what we want. But there's no other docs or tutorials introduce how to use remote storage. 
Could you provide an introduction or a tip on how to configure kineto to write trace files to remote storage? ", "url": "https://github.com/pytorch/kineto/issues/1028", "state": "open", "labels": [], "created_at": "2025-01-16T03:52:48Z", "updated_at": "2025-03-11T20:39:30Z", "user": "staugust" }, { "repo": "pytorch/torchtitan", "number": 790, "title": "should we have an extension point for model transforms out of tree?", "body": "In [torchao](https://github.com/pytorch/ao), we have various low precision training features which are in prototype: MX, int8, bitnet. While we expect most of these to eventually end up in the main torchao APIs, it often takes ~months for a prototype to graduate.\n\ntorchtitan is extremely useful for helping us test low precision prototypes in real-world settings. For now, we've been creating unlanded PRs to test functionality (examples: https://github.com/pytorch/torchtitan/pull/614, https://github.com/pytorch/torchtitan/pull/778). Would torchtitan consider building an extension point to support this kind of experimentation fully out-of-tree?\n\nAn example of how this could look like:\n1. torchtitan provides a \"model transformation\" hook that it calls at a specified point in the initialization stage (for quantization, that should be after model init and before parallelization / torch.compile)\n2. user can provide a custom pass to transform the model (such as a prototype low precision training conversion pass)\n\nI'm not entirely sure on how this hook would be implemented since the current interface of torchtitan is CLI based, but wanted to share the request and start the discussion.", "url": "https://github.com/pytorch/torchtitan/issues/790", "state": "closed", "labels": [ "enhancement" ], "created_at": "2025-01-15T19:26:32Z", "updated_at": "2025-02-26T06:45:52Z", "comments": 17, "user": "vkuzo" }, { "repo": "pytorch/pytorch", "number": 144847, "title": "torch.compile() In my use case of calling torch.compile(), I have found that the model's data outputs are inconsistent. I suspect that using Triton for operator fusion may have introduced precision deviations. I am unsure how to locate and fix this issue.", "body": "### \ud83d\udc1b Describe the bug\n\n\"My Torch environment is as follows:\n2.2.2+cu121\n\nMy goal is to use functions related to torch.compile() to optimize the inference time of our model. 
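(For anyone reproducing this, a minimal way to quantify the eager-vs-compiled mismatch is sketched below; `model` and `example_input` are placeholders for the real diffusion model and its inputs, and the inductor flag at the end is only an A/B-test suggestion, not a confirmed fix.)

```python
import torch

model_eager = model.eval()
model_opt = torch.compile(model)  # default mode

with torch.no_grad():
    ref = model_eager(example_input)
    out = model_opt(example_input)

print("max abs diff:", (ref - out).abs().max().item())
torch.testing.assert_close(out, ref, rtol=1e-3, atol=1e-3)

# To A/B test the fusion hypothesis, inductor fusion can be dialed back, e.g.:
# import torch._inductor.config as inductor_config
# inductor_config.epilogue_fusion = False   # flag name may differ across versions
```
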
In fact, it does work and achieves over a 50% reduction in inference time in the default mode.\n\nThe model code is as follows:\n`\"\"\"\ncopy from https://github.com/alimama-tech/NeurIPS_Auto_Bidding_AIGB_Track_Baseline/blob/main/bidding_train_env/baseline/dd/DFUSER.py\n\"\"\"\nfrom torch.optim import Adam\nimport os\nfrom typing import Optional, Tuple, List\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport gin\n\nfrom .temporal import TemporalUnet\nfrom .basic import (\n cosine_beta_schedule,\n Losses,\n extract,\n apply_conditioning,\n apply_conditioning_with_fix,\n)\n\n\nclass ReduceSum(nn.Module):\n def forward(self, x):\n return torch.sum(x, dim=-1)\n \n\n@gin.configurable\nclass GaussianInvDynDiffusion(nn.Module):\n def __init__(self, model, horizon, observation_dim, action_dim, n_timesteps=1000,\n clip_denoised=False, predict_epsilon=True, hidden_dim=256,\n loss_discount=1.0, returns_condition=False,\n condition_guidance_w=0.1,\n inv_bias=True,\n ):\n super().__init__()\n\n self.horizon = horizon\n self.observation_dim = observation_dim\n self.action_dim = action_dim\n self.transition_dim = observation_dim + action_dim\n self.model = model\n self.inv_model = nn.Sequential(\n nn.Linear(4 * self.observation_dim, hidden_dim, bias=inv_bias),\n nn.ReLU(),\n nn.Linear(hidden_dim, hidden_dim, bias=inv_bias),\n nn.ReLU(),\n nn.Linear(hidden_dim, hidden_dim, bias=inv_bias),\n nn.ReLU(),\n # ReduceSum(),\n nn.Linear(hidden_dim, self.action_dim, bias=inv_bias),\n )\n self.returns_condition = returns_condition\n self.condition_guidance_w = condition_guidance_w\n\n betas = cosine_beta_schedule(n_timesteps)\n alphas = 1. - betas\n alphas_cumprod = torch.cumprod(alphas, axis=0)\n alphas_cumprod_prev = torch.cat([torch.ones(1), alphas_cumprod[:-1]])\n\n self.n_timesteps = int(n_timesteps)\n self.clip_denoised = clip_denoised\n self.predict_epsilon = predict_epsilon\n\n self.register_buffer('betas', betas)\n self.register_buffer('alphas_cumprod', alphas_cumprod)\n self.register_buffer('alphas_cumprod_prev', alphas_cumprod_prev)\n\n # calculations for diffusion q(x_t | x_{t-1}) and others\n self.register_buffer('sqrt_alphas_cumprod', torch.sqrt(alphas_cumprod))\n self.register_buffer('sqrt_one_minus_alphas_cumprod', torch.sqrt(1. - alphas_cumprod))\n self.register_buffer('log_one_minus_alphas_cumprod', torch.log(1. - alphas_cumprod))\n self.register_buffer('sqrt_recip_alphas_cumprod', torch.sqrt(1. / alphas_cumprod))\n self.register_buffer('sqrt_recipm1_alphas_cumprod', torch.sqrt(1. / alphas_cumprod - 1))\n\n # calculations for posterior q(x_{t-1} | x_t, x_0)\n posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)\n self.register_buffer('posterior_variance', posterior_variance)\n\n self.register_buffer('posterior_log_variance_clipped',\n torch.log(torch.clamp(posterior_variance, min=1e-20)))\n self.register_buffer('posterior_mean_coef1',\n betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))\n self.register_buffer('posterior_mean_coef2',\n (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. 
- alphas_cumprod))\n\n loss_weights = self.get_loss_weights(loss_discount)\n self.loss_fn = Losses['state_l2'](loss_weights)\n\n def get_loss_weights(self, discount):\n\n self.action_weight = 1\n dim_weights = torch.ones(self.observation_dim, dtype=torch.float32)\n\n discounts = discount ** torch.arange(self.horizon, dtype=torch.float)\n discounts = discounts / discounts.mean()\n loss_weights = torch.matmul(discounts[:, None], dim_weights[None, :])\n\n if self.predict_epsilon:\n loss_weights[0, :] = 0\n\n return loss_weights\n\n # ------------------------------------------ sampling ------------------------------------------#\n\n def predict_start_from_noise(self, x_t, t, noise):\n\n if self.predict_epsilon:\n return (\n extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -\n extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise\n )\n else:\n return noise\n\n def q_posterior(self, x_start, x_t, t):\n posterior_mean = (\n extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +\n extract(self.posterior_mean_coef2, t, x_t.shape) * x_t\n )\n posterior_variance = extract(self.poste", "url": "https://github.com/pytorch/pytorch/issues/144847", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: inductor" ], "created_at": "2025-01-15T07:35:30Z", "updated_at": "2025-04-22T11:18:54Z", "user": "liangshaopeng" }, { "repo": "pytorch/vision", "number": 8854, "title": "Local Windows Torchvision Build fails", "body": "I am trying to locally build torchvision in a conda environment on my cpu-only windows laptop and even if the build seems to be successful, when I try to import the torchvision package, it fails with this error: **RuntimeError: operator torchvision::nms does not exist**. I tried multiple times ( with different versions of python (3.8 and latest 3.12) in fresh conda environments and the result is the same. What can I do to fix this?", "url": "https://github.com/pytorch/vision/issues/8854", "state": "closed", "labels": [], "created_at": "2025-01-14T10:06:38Z", "updated_at": "2025-02-19T11:58:25Z", "comments": 1, "user": "alinpahontu2912" }, { "repo": "pytorch/vision", "number": 8848, "title": "ValueError for Image size: Height 480 , Width 854 in RAFT", "body": "### \ud83d\udc1b Describe the bug\n\n...\r\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\nraft_model = raft_small(pretrained=True, progress=False).to(device)\r\nraft_model = raft_model.eval()\r\ntransform = transforms.ToTensor()\r\nwith torch.no_grad():\r\n list_of_flows = raft_model(old_batch.to(device), new_batch.to(device))\r\n...\r\n\r\n\n\n### Versions\n\nHi there, \r\n\r\nI am testing the orchvision.models.optical_flow module raft_small, the code is running ok for image size (480, 752), (800,848)..\r\nHowever, when I test it on Image size: Height 480 , Width 854. 
The code throw \r\n\r\n```\r\nValueError: The feature encoder should downsample H and W by 8\r\n```\r\n\r\nI debug the code on [https://github.com/pytorch/vision/blob/d3beb52a00e16c71e821e192bcc592d614a490c0/torchvision/models/optical_flow/raft.py#L494](url)\r\n\r\n```\r\nfmaps = self.feature_encoder(torch.cat([image1, image2], dim=0))\r\nfmap1, fmap2 = torch.chunk(fmaps, chunks=2, dim=0)\r\nif fmap1.shape[-2:] != (h // 8, w // 8):\r\n raise ValueError(\"The feature encoder should downsample H and W by 8\")\r\n```\r\n**Image size: Height 480 , Width 854**\r\nwhere `fmap1.shape[-2:]` is `torch.Size([60, 107])`, `h // 8 = 60`, but `w // 8 = 106` which triggered the ValueError.\r\n\r\nI think this issue is related to output dimension of self.feature_encoder. Looking for help, thx~\r\n\r\n\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/8848", "state": "closed", "labels": [], "created_at": "2025-01-11T18:24:13Z", "updated_at": "2025-03-18T12:20:48Z", "comments": 1, "user": "Neoyning" }, { "repo": "pytorch/torchtitan", "number": 785, "title": "Why use RowwiseParallel for nn.Embedding instead of ColwiseParallel?", "body": "Colwise makes the logic a bit more clear. Rowwise splits on the token dimension, leading to confusion on how the different shards handle tokens that are not present within their shard. From a bit of debugging it seems like there is a special case for this somewhere deep in pytorch source code, but I could not find it.\r\n\r\nWith colwise, the embedding weight matrix is split on the model dim dimension, so all shards have all the tokens, just different parts of the model dim.\r\n\r\nhttps://github.com/pytorch/torchtitan/blob/main/torchtitan/parallelisms/parallelize_llama.py#L133\r\n\r\n```\r\n parallelize_module(\r\n model,\r\n tp_mesh,\r\n {\r\n \"tok_embeddings\": RowwiseParallel(\r\n input_layouts=Replicate(),\r\n output_layouts=Shard(1),\r\n ),\r\n```\r\n\r\nCan someone provide some insight?", "url": "https://github.com/pytorch/torchtitan/issues/785", "state": "open", "labels": [ "question" ], "created_at": "2025-01-10T15:16:34Z", "updated_at": "2025-08-21T03:04:35Z", "user": "ghost" }, { "repo": "pytorch/TensorRT", "number": 3351, "title": "\u2753 [Question] How to install torch_tensorrt corresponding to pytorch tensorrt version", "body": "For example, I am using pytorch2.2.1, tensorrt10.2.0, how can I install torch_tensorrt (without changing pytorch, tensorrt versions)", "url": "https://github.com/pytorch/TensorRT/issues/3351", "state": "open", "labels": [ "question" ], "created_at": "2025-01-10T07:12:50Z", "updated_at": "2025-01-15T23:47:47Z", "user": "swearirh" }, { "repo": "pytorch/audio", "number": 3870, "title": "SQUIM running in real-time", "body": "I applied SQUIM to assess speech quality as a way to correct the direction-of-arrival of a location-based speech enhancement system. [More info here](https://www.sciencedirect.com/science/article/pii/S1051200424005840).\r\n\r\nI'm feeding the last 3-second window of the input to SQUIM, every 0.1 seconds. It is able to respond in less than that time: it featured a maximum response time of 0.0704 seconds. Thus, in terms of response time, SQUIM seems to be able to run in real-time.\r\n\r\nHowever, it does seem to struggle in providing a constant speech quality assessment throughout. I'm using the SI-SDR metric from the objective model. 
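Roughly, the windowed scoring looks like the sketch below (simplified; it assumes 16 kHz input, which is what the objective bundle expects, and is not the exact code used in the paper):

```python
import torch
import torchaudio

model = torchaudio.pipelines.SQUIM_OBJECTIVE.get_model().eval()

SR = 16_000
WINDOW = 3 * SR       # score the last 3 seconds
HOP = int(0.1 * SR)   # the enhancement loop calls score_window every HOP samples

def score_window(audio_1xN: torch.Tensor) -> float:
    """SI-SDR estimate for the most recent 3-second window (audio shaped [1, N])."""
    with torch.no_grad():
        _stoi, _pesq, si_sdr = model(audio_1xN[:, -WINDOW:])
    return si_sdr.item()
```
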
With the a speech recording with no enhancement or spatial variation carried out, the ideal behavior would be that SQUIM provided the same SI-SDR measurement through time, but, as it can be seen in Figure 2 of the aforementioned paper, it does not. It varies wildly, which required some smoothing to work well with the rest of the system.\r\n\r\nSo here are my questions:\r\n\r\n- Is it possible to modify SQUIM for this type of real-time application? I'm assuming it would need some sort of causalness built into it. Or not? I was actually impressed it was able to provide a workable result without any modification. Maybe a fine-tuning would be enough?\r\n- If so, what are the steps you would reccomend that I partake in fine-tuning SQUIM? I've taken a look at [this paper](https://arxiv.org/pdf/2206.12285) that @nateanl provided to another user inquired about it (in #3424), but it is still not clear to me how I should proceed.\r\n- Is SQUIM the best alternative for this? I've looked at other techniques for non-reference speech quality assessment, and it seems SQUIM is up there with the best of them for offline applications. But for real-time scenarios, I'm not sure.\r\n\r\nThank you in advance for any help/guidance you can provide. I'm open to help out in any way, if need be, to make SQUIM work better in real-time applications.", "url": "https://github.com/pytorch/audio/issues/3870", "state": "open", "labels": [], "created_at": "2025-01-09T19:43:35Z", "updated_at": "2025-01-09T19:43:35Z", "comments": 0, "user": "balkce" }, { "repo": "pytorch/TensorRT", "number": 3348, "title": "\u2753 [Question] How to save tensorrt engine ? ", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## I had already save torch.jit model and infer with pytorch backend successful, but I had tried find some example in project and issue, but I can not find any case, code, example, tutorial to show how to save a tensorrt engine for running by tensorrt backend, can you help me?\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3348", "state": "open", "labels": [ "question" ], "created_at": "2025-01-08T12:24:23Z", "updated_at": "2025-01-08T15:10:27Z", "user": "lzcchl" }, { "repo": "pytorch/torchchat", "number": 1453, "title": "Unabled to import torchao experimental quant_api ", "body": "### \ud83d\udc1b Describe the bug\n\nSo i try to export my model and quantize it into .pte file using this command :\r\npython3 torchchat.py export llama3.2-1b-instruct --quantize torchchat/quant_config/mobile.json --output-pte-path llama3.2_1b_instruct.pte\r\n\r\nBefore I do this, I already activate venv and executorch env, \r\nBut i got error :\r\n\r\nPyTorch version 2.6.0.dev20241218+cpu available.\r\nUnabled to import torchao experimental quant_api with error: [Errno 2] No such file or directory: '/home/-/torchchat/torchao-build/src/ao/torchao/experimental/quant_api.py'\r\nUsing device=cpu\r\nSetting max_seq_length to 128 for ExecuTorch export.\r\nLoading model...\r\nTime to load model: 1.25 seconds\r\nQuantizing the model with: {'embedding': {'bitwidth': 4, 'groupsize': 32}, 'linear:a8w4dq': {'groupsize': 256}}\r\nKilled\r\n\r\nI try to find torchao :\r\nName: torchao\r\nVersion: 0.8.0+git2e032c6b\r\nSummary: Package for applying ao techniques to GPU models\r\nHome-page: https://github.com/pytorch-labs/ao\r\nAuthor:\r\nAuthor-email:\r\nLicense:\r\nLocation: /home/-/.pyenv/versions/3.10.0/lib/python3.10/site-packages\r\nRequires:\r\nRequired-by:\r\n\r\nI think maybe this is the problem. 
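A quick diagnostic to confirm which torchao the interpreter actually resolves:

```python
import os
import torchao

print(torchao.__version__)                # 0.8.0+git2e032c6b in my case
print(os.path.dirname(torchao.__file__))  # the site-packages location listed above
```
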
I want to know how can I change torchchat to find the path in /home/-/.pyenv/versions/3.10.0/lib/python3.10/site-packages instead of /home/-/torchchat/torchao-build/src/ao/torchao/experimental/quant_api.py\n\n### Versions\n\nPyTorch version: 2.6.0.dev20241218+cpu\r\nIs debug build: False\r\nCUDA used to build PyTorch: Could not collect\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.5 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.31.2\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.0 (default, Jan 4 2025, 09:08:08) [GCC 11.4.0] (64-bit runtime)\r\nPython platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35\r\nIs CUDA available: False\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 4050 Laptop GPU\r\nNvidia driver version: 555.99\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 39 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 20\r\nOn-line CPU(s) list: 0-19\r\nVendor ID: GenuineIntel\r\nModel name: 13th Gen Intel(R) Core(TM) i7-13700H\r\nCPU family: 6\r\nModel: 186\r\nThread(s) per core: 2\r\nCore(s) per socket: 10\r\nSocket(s): 1\r\nStepping: 2\r\nBogoMIPS: 5836.80\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities\r\nVirtualization: VT-x\r\nHypervisor vendor: Microsoft\r\nVirtualization type: full\r\nL1d cache: 480 KiB (10 instances)\r\nL1i cache: 320 KiB (10 instances)\r\nL2 cache: 12.5 MiB (10 instances)\r\nL3 cache: 24 MiB (1 instance)\r\nVulnerability Gather data sampling: Not affected\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Reg file data sampling: Vulnerable: No microcode\r\nVulnerability Retbleed: Mitigation; Enhanced IBRS\r\nVulnerability Spec rstack overflow: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\n\r\nVersions of rele", "url": "https://github.com/pytorch/torchchat/issues/1453", "state": "closed", "labels": [], "created_at": "2025-01-08T11:05:43Z", "updated_at": "2025-01-10T12:43:55Z", "comments": 1, "user": "Arthamna" }, { "repo": "pytorch/torchchat", "number": 1452, "title": "Why Torchchat 
uses MATH as SDPA backend?", "body": "### \ud83d\udc1b Describe the bug\n\nHi maintainers,\r\n\r\nI find that, Torchchat uses MATH as SDPA backend in https://github.com/pytorch/torchchat/blob/main/torchchat/generate.py#L542. However, for other libs like vllm, they all accept flash attention as default backend.\r\n\r\nSo why Torchchat uses MATH as a default backend? Is this required for accuracy? If not, I can help to add an argument to let user set the backend. Thanks!\n\n### Versions\n\n*", "url": "https://github.com/pytorch/torchchat/issues/1452", "state": "closed", "labels": [ "enhancement", "triaged" ], "created_at": "2025-01-08T08:40:03Z", "updated_at": "2025-01-22T01:57:41Z", "comments": 8, "user": "yanbing-j" }, { "repo": "pytorch/pytorch", "number": 144324, "title": "FSDP: How to support w8a8 quantization?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI replaced nn.Linear with QuantLinear, substituting the nn.Linear operator with an int8 quantized operator.\r\n\r\n\r\nact_tensor_int8, pertoken_scale = torch_npu.npu_dynamic_quant(x)\r\n\r\nquant_out = torch_npu.npu_quant_matmul(act_tensor_int8, \r\n self.weight.to(torch.int8),\r\n self.weight_scale, # weight scale \r\n offset=None, \r\n bias=self.bias, \r\n pertoken_scale=pertoken_scale,\r\n output_dtype=torch.bfloat16)\r\n\r\n\r\n\r\nThis change has achieved performance gains on a single GPU. However, when wrapped with FSDP (Fully Sharded Data Parallel) on multiple GPUs,\r\n\r\nmodel_fsdp = FullyShardedDataParallel(model, **settings)\r\nit fails to run because FSDP performs parameter sharding and cannot handle this quantized operator. The error message is as follows:\r\n\r\n[rank4]: RuntimeError: call aclnnQuantMatmulV4 failed, detail:E69999: Inner Error!\r\n[rank4]: E69999: [PID: 1182939] 2025-01-07-17:15:19.281.742 op[QuantBatchMatmulV3], [InferShape] dimensions a(12608) and b(128) must be equal[FUNC:InferNDimWithBias][FILE:matmul_infer_fns.cc][LINE:322]\r\n\r\n\r\nDo you have any good solutions for this issue?\r\n\r\n\n\ncc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/144324", "state": "closed", "labels": [ "triaged", "module: fsdp", "oncall: pt2" ], "created_at": "2025-01-07T13:17:02Z", "updated_at": "2025-07-02T08:19:36Z", "user": "Lenan22" }, { "repo": "pytorch/xla", "number": 8541, "title": "Slow XLA training performance.", "body": "## \u2753 Questions and Help\r\nI'm evaluating PyTorch-XLA for training, but noticed that there is a big degradation in performance compared to the native pytorch device. Is it a known problem, or is there a problem with the way I use PyTorch-XLA? I tested a simple MNIST training example, comparing the performance between PyTorch CUDA device and XLA CUDA device. 
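(One methodological note for anyone reproducing the comparison: each timed step should be forced to completion, since XLA's lazy execution can otherwise skew per-step timings. A minimal sketch of the timing wrapper, where `step_fn` is a placeholder for one forward/backward/optimizer step:)

```python
import time
import torch
import torch_xla.core.xla_model as xm

def timed_step(step_fn, use_xla: bool) -> float:
    start = time.perf_counter()
    step_fn()  # forward + backward + optimizer.step() for one batch
    if use_xla:
        xm.mark_step()
        xm.wait_device_ops()      # block until the traced graph has actually executed
    else:
        torch.cuda.synchronize()  # equivalent barrier for the native CUDA device
    return time.perf_counter() - start
```
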
The native CUDA device is twice faster.\r\nAppreciate any thoughts, suggestions or links to known performance issues, thanks!\r\n\r\n### Environment\r\nnote: there is no difference in performance measurements with the latest 2.5.0\r\n \r\n- torch 2.4.0\r\n- torch-xla 2.4.0\r\n- torch_xla_cuda_plugin 2.4.0.dev20240902\r\n- torchvision 0.19.0\r\n\r\n### How To Reproduce\r\n\r\nRun the test program with `xla = True` and `xla = False`\r\n\r\n``` python\r\nimport os\r\nfrom tqdm import tqdm\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.optim as optim\r\nfrom torchvision import datasets, transforms\r\nfrom torch.utils.data import DataLoader\r\nimport torch_xla.core.xla_model as xm\r\n\r\ndef get_device(xla):\r\n if xla:\r\n os.environ[\"PJRT_DEVICE\"] = \"CUDA\"\r\n os.environ[\"GPU_NUM_DEVICES\"] = \"1\"\r\n import torch_xla_cuda_plugin\r\n from torch_xla.experimental import plugins\r\n import torch_xla.runtime as xr\r\n plugins.use_dynamic_plugins()\r\n plugins.register_plugin('CUDA', torch_xla_cuda_plugin.CudaPlugin())\r\n xr.set_device_type('CUDA')\r\n device = xm.xla_device(devkind=\"CUDA\")\r\n else:\r\n device = torch.device('cuda:0')\r\n os.environ[\"PJRT_DEVICE\"] = \"CUDA\"\r\n os.environ[\"GPU_NUM_DEVICES\"] = \"1\"\r\n return device\r\n\r\nxla = True\r\ndevice = get_device(xla)\r\nprint(f\"Using device: {device}\")\r\n\r\nclass SimpleNN(nn.Module):\r\n def __init__(self):\r\n super(SimpleNN, self).__init__()\r\n self.fc1 = nn.Linear(28 * 28, 512) # number of neurons\r\n self.fc2 = nn.Linear(512, 256) # number of neurons\r\n self.fc3 = nn.Linear(256, 10) # Output layer (10 classes for digits 0-9)\r\n def forward(self, x):\r\n x = x.view(-1, 28 * 28) # Flatten the image\r\n x = torch.relu(self.fc1(x)) # Apply ReLU activation\r\n x = torch.relu(self.fc2(x))\r\n x = self.fc3(x)\r\n return x\r\n\r\n# Load the MNIST dataset and apply transformations\r\ntransform = transforms.Compose([\r\n transforms.ToTensor(),\r\n transforms.Normalize((0.5,), (0.5,)) # Normalize to [-1, 1]\r\n])\r\n\r\ntrain_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)\r\ntest_dataset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)\r\ntrain_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)\r\ntest_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)\r\n\r\n# Initialize the model and move it to the device\r\nmodel = SimpleNN().to(device)\r\n\r\n# Define the loss function and optimizer\r\ncriterion = nn.CrossEntropyLoss()\r\noptimizer = optim.Adam(model.parameters(), lr=0.001)\r\n\r\n# Training loop, 20 epochs\r\nfor epoch in tqdm(range(20)):\r\n model.train() # Set the model to training mode\r\n running_loss = 0.0\r\n\r\n for data, target in tqdm(train_loader):\r\n data, target = data.to(device), target.to(device) # Move data to the device\r\n\r\n optimizer.zero_grad() # Zero the gradients\r\n output = model(data) # Get model predictions\r\n loss = criterion(output, target) # Compute the loss\r\n loss.backward() # Backpropagate the gradients\r\n optimizer.step() # Update model parameters\r\n\r\n running_loss += loss.item()\r\n if xla:\r\n xm.mark_step()\r\n print(f'Epoch {epoch + 1}, Loss: {running_loss / len(train_loader)}')\r\n\r\n# Test the model\r\nmodel.eval()\r\ncorrect = 0\r\ntotal = 0\r\nwith torch.no_grad():\r\n for data, target in test_loader:\r\n data, target = data.to(device), target.to(device) # Move data to CUDA device\r\n output = model(data)\r\n _, predicted = torch.max(output, 1)\r\n total += 
target.size(0)\r\n correct += (predicted == target).sum().item()\r\n\r\nprint(f'Accuracy: {100 * correct / total}%')\r\n```\r\n", "url": "https://github.com/pytorch/xla/issues/8541", "state": "open", "labels": [ "performance", "xla:gpu" ], "created_at": "2025-01-07T09:49:12Z", "updated_at": "2025-02-11T13:50:46Z", "comments": 4, "user": "tzstoyanov" }, { "repo": "pytorch/vision", "number": 8836, "title": "Question: Modify Resnet File structure and how to import it", "body": "Hi, I would like to modify the structure of the model [Resnet50 ](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). My goal is neither to add nor to remove layers, only to replace the convolutions that in the code are made by the pytorch nn.Conv function by convolutions made by the Nvidia CUTLASS library (https://github.com/NVIDIA/cutlass/blob/main/examples/python/02_pytorch_extension_grouped_gemm.ipynb).\r\n\r\nI don't intend either to retrain or to modify weights, only to substitute the call to the convolutions with the call to a convolution of cutlass in a similar way to how I describe it in the pytorch forum: https://discuss.pytorch.org/t/using-diffetent-conv2d-ops-with-pre-trained-models/214367\r\n\r\nMy question is if it is possible, within the guidelines of the repository and then how can I import the file [Resnet50 ](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py) from another file or pytorch as it would be done with https://pytorch.org/vision/main/models.html. \r\n\r\nThanks.", "url": "https://github.com/pytorch/vision/issues/8836", "state": "closed", "labels": [], "created_at": "2025-01-03T12:43:50Z", "updated_at": "2025-04-08T15:45:32Z", "user": "IzanCatalan" }, { "repo": "pytorch/tutorials", "number": 3211, "title": "\ud83d\udca1 [REQUEST] - Making the tutorial more coherent", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\n\nThe 3-series tutorial set (linked in existing tutorial set) is disconnected in term of concepts being introduced and reused; like the\r\n- \"Dataset\" which is introduced in first tutorial but is not leveraged in next;\r\n- Intricate details like explanation of use of `torch.LongTensor` is skipped in part 2 (generating)\r\n\r\nI wish to modify the tutorials content by:\r\n- adding a linear flow of concepts and then updating the code in follow up concepts such that the end-user is aware of what is different from last time.\r\n- Add details in explanation of what we are doing and why\r\n- Add pictures that reinforce what is we are doing and how is it related to big picture we wish to do. 
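On the ResNet question above (pytorch/vision#8836): the model can be imported exactly as the torchvision docs describe and the convolutions swapped after the pretrained weights are loaded, with no changes to the torchvision sources. A minimal sketch; the CUTLASS-backed convolution is left as a placeholder (plain `F.conv2d` here) since its exact Python entry point depends on how the extension was built:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

class SwappedConv2d(nn.Module):
    """Reuses the pretrained nn.Conv2d parameters but re-routes the conv call."""
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv  # keeps weight/bias, stride, padding, dilation, groups

    def forward(self, x):
        # Replace this call with the CUTLASS grouped-GEMM convolution.
        return F.conv2d(x, self.conv.weight, self.conv.bias, self.conv.stride,
                        self.conv.padding, self.conv.dilation, self.conv.groups)

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1).eval()
for parent in list(model.modules()):
    for name, child in list(parent.named_children()):
        if isinstance(child, nn.Conv2d):
            setattr(parent, name, SwappedConv2d(child))

with torch.no_grad():
    print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```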
\n\n### Existing tutorials on this topic\n\nTutorials with the issue\r\n- https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html\r\n- https://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html\r\n- https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html\n\n### Additional context\n\nHey, \r\nI love tech-writing and wish to make tech adoption easier for all.\r\n\r\nBit of my works you can find.\r\n- https://github.com/LunaticMaestro/Content-Based-Book-Recommender\r\n- I am author of the AI Core tutorials (m ex-SAP employee): https://developers.sap.com/group.ai-core-get-started-basics.html", "url": "https://github.com/pytorch/tutorials/issues/3211", "state": "open", "labels": [ "nlp" ], "created_at": "2025-01-03T08:46:30Z", "updated_at": "2025-04-16T18:11:36Z", "comments": 1, "user": "LunaticMaestro" }, { "repo": "pytorch/torchtitan", "number": 770, "title": "How many H100 GPUs should I use to train Llama-3.1-70B models with Torchtitan?", "body": "I am planning to train the Llama-3.1-70B model using the Torchtitan framework and need advice on the optimal number of NVIDIA H100 GPUs required. My goal is to ensure efficient training in terms of time and cost, while maintaining a balance between hardware usage and model convergence. I\u2019d appreciate insights on batch size considerations, GPU memory utilization, and any recommended configurations for Torchtitan with this model. Additionally, if there are any benchmarks or past experiences with similar setups, please share them.\r\n\r\nThanks!", "url": "https://github.com/pytorch/torchtitan/issues/770", "state": "closed", "labels": [], "created_at": "2025-01-03T02:21:50Z", "updated_at": "2025-01-04T04:46:32Z", "user": "jacklanda" }, { "repo": "pytorch/executorch", "number": 7486, "title": "How to run ExecuTorch on Linux with aarch64-oe-linux-gcc11.2?", "body": "Hi, I am new to ExecuTorch and currently trying to build and run it on a Linux-based Qualcomm board (QCS/QCM8550). The board's specifications are:\r\n\r\nOS: Linux\r\nCompiler: aarch64-oe-linux-gcc11.2\r\nSOC Model: 66\r\nHexagon Arch: V73\r\nI noticed that most guides are focused on Android environments. Could you please provide any hints or suggestions for building and running ExecuTorch on Linux with this setup?\r\nAny help or resources would be greatly appreciated!\r\nThank you in advance!\n\ncc @mergennachin @byjlw", "url": "https://github.com/pytorch/executorch/issues/7486", "state": "closed", "labels": [ "module: doc", "need-user-input", "triaged" ], "created_at": "2025-01-03T00:28:56Z", "updated_at": "2025-02-04T02:42:53Z", "user": "suhyun01150" }, { "repo": "pytorch/executorch", "number": 7467, "title": "How to run Qwen using Executorch?", "body": "### \ud83d\udcda The doc issue\n\nHi! I just wanted to how, how would I go about running Qwen using executorch? I was able to create the .pte file for Qwen. The example for Llama had a step 'Create a llama runner for android'. Do we have to do something similar for Qwen by creating a custom runner? Also the Qwen repository on Hugging Face Hub does not have a 'tokenizer.model' file, but the Llama example requires it for running inference using the adb shell. 
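Back to the GPU-count question above (pytorch/torchtitan#770): a rough lower bound comes from adding up the persistent per-parameter state; activations, batch size, sequence length, and activation checkpointing then decide how much headroom is needed on top. A back-of-envelope sketch assuming bf16 weights/grads with fp32 Adam state (the usual ~16 bytes per parameter in mixed precision):

```python
params = 70e9                         # Llama-3.1-70B
bytes_per_param = 2 + 2 + 4 + 4 + 4   # bf16 weight + bf16 grad + fp32 master + Adam m and v
state_gb = params * bytes_per_param / 1e9    # ~1120 GB of model + optimizer state
h100_hbm_gb = 80

min_gpus = state_gb / h100_hbm_gb     # ~14 H100s just to hold the sharded state
# In practice you provision well above this (activations, comm buffers, fragmentation),
# so multi-node counts in the 16-64 H100 range are the realistic starting point.
print(f"{state_gb:.0f} GB of state -> at least {min_gpus:.1f} H100s before activations")
```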
How to navigate around this?\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @mergennachin @cccclai @helunwencser @dvorjackz", "url": "https://github.com/pytorch/executorch/issues/7467", "state": "closed", "labels": [ "triaged", "module: llm" ], "created_at": "2025-01-02T07:16:56Z", "updated_at": "2025-08-28T21:17:24Z", "user": "Arya-Hari" }, { "repo": "pytorch/torchtitan", "number": 765, "title": "Can I load from non-FSDP optimizer state with FSDP2?", "body": "I have been running training on a different framework with FSDP1, where I saved the states with FULL_STATE_DICT - leading to optimizer states that are in a normal `torch.save` format. I'd love to resume from this checkpoint - is this currently supported by FSDP2 / DCP? When I naively try `dcp.load` it resulted in a shard index out of range error.", "url": "https://github.com/pytorch/torchtitan/issues/765", "state": "closed", "labels": [ "question" ], "created_at": "2024-12-31T15:52:59Z", "updated_at": "2025-01-28T18:47:26Z", "user": "syncdoth" }, { "repo": "pytorch/pytorch", "number": 143988, "title": "Add a knob to control how many blocks are used by persistent matmul/attn kernels", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nWe train a transformer-style model using FSDP, and we have a very good overlap between the matmul kernels (from cuBLAS) and the NCCL operation in the background. However, when profiling, we have observed that the **matmuls take 2x as long** to complete when they are overlapped with a NCCL kernel!\r\n\r\nWe believe this is easily explained: we're running on H100 GPUs and, upon inspection, all the matmuls look like they are using \"persistent\" kernels. That is, they launch as many CUDA blocks as there are SMs on the GPU (i.e., 132) and each of these blocks will process several tiles in a row. What we're observing is thus a form of \"wave quantization\" where, due to NCCL occupying some SMs, not all blocks of the matmuls can be scheduled at once, thus breaking them into two waves, which thus take twice as long to complete.\r\n\r\nSince NCCL only occupies ~10% of the SMs, it would be much more efficient if the matmuls tried to launch a number of blocks that corresponds to ~90% of the SMs. This would allow the two kernels to run simultaneously in a single wave, with the matmuls only being ~10% slower, not ~50%!\r\n\r\nFor that, however, we need PyTorch to add a new knob allowing us to control such a value, and to forward that knob when launching its cuBLAS kernels (and others).\n\n### Alternatives\n\nNone. 
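For the FSDP2 resume question above (pytorch/torchtitan#765): `dcp.load` expects a checkpoint already in DCP format, so a plain `torch.save`-style FULL_STATE_DICT has to be loaded through the `torch.distributed.checkpoint.state_dict` helpers, which reshard a full dict onto the DTensor parameters. A hedged sketch assuming `model` and `optimizer` are already wrapped with `fully_shard`; the file names are placeholders and the exact `StateDictOptions` fields vary a little between PyTorch releases:

```python
import torch
from torch.distributed.checkpoint.state_dict import (
    StateDictOptions,
    set_model_state_dict,
    set_optimizer_state_dict,
)

# Full (unsharded) dicts produced by the old FSDP1 run, loaded on CPU.
full_model_sd = torch.load("model_full.pt", map_location="cpu")   # hypothetical path
full_optim_sd = torch.load("optim_full.pt", map_location="cpu")   # hypothetical path

opts = StateDictOptions(full_state_dict=True, broadcast_from_rank0=True)
set_model_state_dict(model, full_model_sd, options=opts)
set_optimizer_state_dict(model, optimizer, optim_state_dict=full_optim_sd, options=opts)
```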
We couldn't find any environment variable provided by cuBLAS that allows us to override the number of blocks launched.\n\n### Additional context\n\nWith longer NCCL kernels, matmuls take a long time:\r\n<img width=\"1555\" alt=\"Screenshot 2024-12-30 at 17 29 23\" src=\"https://github.com/user-attachments/assets/d91d192e-16e9-4108-9d8e-5cb7caef80f6\" />\r\n\r\nWith shorter NCCL kernels, the non-overlapped matmuls now take less time:\r\n<img width=\"1439\" alt=\"Screenshot 2024-12-30 at 17 29 42\" src=\"https://github.com/user-attachments/assets/6e1fff67-b1a8-4b3b-a582-6648fc8b00bf\" />\r\n\n\ncc @ptrblck @msaroufim @eqy @csarofeen @xwang233 @jianyuh @nikitaved @pearu @mruberry @walterddr @Lezcano", "url": "https://github.com/pytorch/pytorch/issues/143988", "state": "closed", "labels": [ "module: cuda", "triaged", "module: cublas", "module: linear algebra" ], "created_at": "2024-12-30T16:31:05Z", "updated_at": "2025-07-10T11:20:38Z", "user": "lw" }, { "repo": "pytorch/torchtitan", "number": 764, "title": "FSDP 2 doesn't pad tensors?", "body": "Hi, I ran my model with FSDP 2, one of the linear layers has a dim that's not divisible by the world size (128), and so I got the following error:\r\n```\r\ntorch.Size([...]) is not divisible by FSDP world size 128.\r\n```\r\n\r\nFSDP 1 circumvents this issue by padding the tensors. Is this not supported by FSDP 2? If not, will it be supported?", "url": "https://github.com/pytorch/torchtitan/issues/764", "state": "open", "labels": [ "question", "module: fsdp" ], "created_at": "2024-12-29T21:55:50Z", "updated_at": "2025-02-13T01:51:43Z", "user": "cassanof" }, { "repo": "pytorch/torchchat", "number": 1446, "title": "Supply Local Weights to an LLM instead of Downloading Weights from HuggingFace", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI am having local copy of llama weights and i want to supply those weights to create a chat application.Please include a CLI flag to do so\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_", "url": "https://github.com/pytorch/torchchat/issues/1446", "state": "closed", "labels": [ "documentation", "triaged" ], "created_at": "2024-12-29T20:14:26Z", "updated_at": "2025-01-06T01:54:19Z", "comments": 2, "user": "sgupta1007" }, { "repo": "pytorch/data", "number": 1418, "title": "torch.node datawriter", "body": "### \ud83d\udcda The doc issue\n\nCan we add in the example/migration file related to a `torch.node` datawriter (if already possible with the current API). \r\nSee:\r\nhttps://github.com/pytorch/pytorch/issues/140296#issuecomment-2563190801\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/1418", "state": "open", "labels": [], "created_at": "2024-12-27T13:49:24Z", "updated_at": "2024-12-27T13:49:24Z", "comments": 0, "user": "bhack" }, { "repo": "pytorch/pytorch", "number": 143906, "title": "How to correctly asynchronously copy a GPU tensor to a CPU tensor in another process without introducing blocking?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI am developing a distributed PyTorch application designed to asynchronously transfer data from a GPU process to a CPU process, ensuring that GPU computations remain non-blocking. In my current implementation, I utilize the non-blocking copy_ method to transfer data from a GPU tensor to a CPU tensor and then employ dist.isend to send the data to another rank. 
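To make the wave-quantization arithmetic in the persistent-matmul issue above (pytorch/pytorch#143988) concrete, here is a small sketch; the number of SMs held by the overlapped NCCL kernel is an assumption for illustration, not a measured value:

```python
import math
import torch

num_sms = torch.cuda.get_device_properties(0).multi_processor_count  # 132 on H100 SXM
nccl_sms = 16                      # assumed SMs occupied by the overlapped NCCL kernel
free_sms = num_sms - nccl_sms

# A persistent matmul that launches one block per SM (132 blocks) needs
# ceil(132 / 116) = 2 waves when only 116 SMs are free -> roughly 2x slower.
waves_full_launch = math.ceil(num_sms / free_sms)

# Launching only `free_sms` blocks (the requested knob) fits in a single wave,
# trading ~12% more work per block for not paying a second wave.
waves_reduced_launch = math.ceil(free_sms / free_sms)

print(num_sms, free_sms, waves_full_launch, waves_reduced_launch)
```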
However, under certain conditions, this setup leads to a deadlock.\r\n```python\r\nimport torch\r\nimport torch.distributed as dist\r\nimport os\r\n\r\ndef gpu_to_cpu_and_send(rank, size):\r\n tensor = torch.randn(4096, 8192).cuda(rank) # On specific GPU\r\n print(tensor[-1][-1])\r\n print(f\"Rank {rank}: Created tensor on GPU\")\r\n cpu_tensor = torch.zeros(4096, 8192)\r\n cpu_tensor.copy_(tensor, non_blocking=True) # Non-blocking GPU to CPU copy\r\n print(f\"Rank {rank}: Copied tensor to CPU (non-blocking)\")\r\n\r\n if rank == 0:\r\n print(f\"Rank {rank}: Sending tensor to rank 1\")\r\n dist.isend(tensor=cpu_tensor, dst=1) # Sending data to rank 1\r\n print(f\"Rank {rank}: Data sent to rank 1\")\r\n\r\ndef receive_data(rank, size):\r\n received_tensor = torch.zeros(4096, 8192)\r\n print(f\"Rank {rank}: Waiting to receive data\")\r\n dist.recv(tensor=received_tensor, src=0) # Receiving data from rank 0\r\n print(f\"Rank {rank}: Received data from rank 0\")\r\n print(received_tensor[-1][-1])\r\n\r\ndef main():\r\n rank = int(os.environ['RANK'])\r\n size = int(os.environ['WORLD_SIZE'])\r\n dist.init_process_group(backend='gloo', rank=rank, world_size=size)\r\n\r\n if rank == 0:\r\n gpu_to_cpu_and_send(rank, size)\r\n elif rank == 1:\r\n receive_data(rank, size)\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\n### Versions\r\n\r\ntorchrun --nproc_per_node=2 demo.py\r\n\r\nRun with Nvidia GPU.\r\n\r\ncc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o", "url": "https://github.com/pytorch/pytorch/issues/143906", "state": "open", "labels": [ "needs reproduction", "oncall: distributed", "triaged" ], "created_at": "2024-12-27T11:22:11Z", "updated_at": "2025-01-03T18:13:46Z", "user": "zhanghb55" }, { "repo": "pytorch/xla", "number": 8516, "title": "how to release tpu memory after del diffusers pipeline", "body": "## \u2753 Questions and Help\r\ni create a pipeline\r\n`\r\npipeline = DiffusionPipeline.from_pretrained(\"stable-diffusion-v1-5/stable-diffusion-v1-5\", torch_dtype=torch.bfloat16).to(torch_xla.core.xla_model.xla_device())\r\n\r\npipeline.to('cpu')\r\n\r\npipeline = StableAudioPipeline.from_pretrained(\"stabilityai/stable-audio-open-1.0\", torch_dtype=torch.bfloat16).to(torch_xla.core.xla_model.xla_device()) #which cause tpu memory problem\r\n`\r\ni want to ask how to release tpu memory. is there any tpu version of torch.cuda.empty_cache()\uff1f", "url": "https://github.com/pytorch/xla/issues/8516", "state": "closed", "labels": [ "duplicate", "question", "xla:tpu" ], "created_at": "2024-12-22T11:03:38Z", "updated_at": "2025-02-13T13:40:42Z", "user": "ghost" }, { "repo": "pytorch/torchchat", "number": 1436, "title": "If scripts need `bash`, don't say to use `sh`", "body": "### \ud83d\udc1b Describe the bug\n\nOn Debian systems, sh isn't bash, it's [dash](https://en.wikipedia.org/wiki/Almquist_shell#Dash). 
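Regarding the asynchronous copy question above (pytorch/pytorch#143906): two details commonly cause exactly this hang. The destination tensor is not pinned, so the `non_blocking=True` device-to-host copy is not ordered with respect to later CPU reads, and nothing waits for the copy before `isend` hands the buffer to gloo (the `isend` request is also never waited on, so the buffer can be freed while the send is still in flight). A hedged sketch of the sender side:

```python
import torch
import torch.distributed as dist

def gpu_to_cpu_and_send(rank: int):
    tensor = torch.randn(4096, 8192, device=f"cuda:{rank}")

    # Pinned (page-locked) host memory is what makes the D2H copy truly asynchronous.
    cpu_tensor = torch.empty(tensor.shape, dtype=tensor.dtype, pin_memory=True)
    cpu_tensor.copy_(tensor, non_blocking=True)

    copy_done = torch.cuda.Event()
    copy_done.record()            # enqueued after the copy on the current stream

    # ...unrelated GPU work can be queued here without blocking...

    copy_done.synchronize()       # cpu_tensor is only valid after this returns
    if rank == 0:
        req = dist.isend(tensor=cpu_tensor, dst=1)
        req.wait()                # keep the buffer alive until the send completes
```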
I haven't tested every script, but https://github.com/pytorch/torchchat/blob/main/docs/quantization.md says to run `sh torchchat/utils/scripts/build_torchao_ops.sh`, but this script fails unless run with bash on my Raspberry Pi 5.\n\n### Versions\n\nCollecting environment information...\r\nPyTorch version: 2.6.0.dev20241218+cpu\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Debian GNU/Linux 12 (bookworm) (aarch64)\r\nGCC version: (Debian 12.2.0-14) 12.2.0\r\nClang version: Could not collect\r\nCMake version: version 3.31.2\r\nLibc version: glibc-2.36\r\n\r\nPython version: 3.11.2 (main, Sep 14 2024, 03:00:30) [GCC 12.2.0] (64-bit runtime)\r\nPython platform: Linux-6.6.51+rpt-rpi-2712-aarch64-with-glibc2.36\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: aarch64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 4\r\nOn-line CPU(s) list: 0-3\r\nVendor ID: ARM\r\nModel name: Cortex-A76\r\nModel: 1\r\nThread(s) per core: 1\r\nCore(s) per cluster: 4\r\nSocket(s): -\r\nCluster(s): 1\r\nStepping: r4p1\r\nCPU(s) scaling MHz: 100%\r\nCPU max MHz: 2400.0000\r\nCPU min MHz: 1500.0000\r\nBogoMIPS: 108.00\r\nFlags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp\r\nL1d cache: 256 KiB (4 instances)\r\nL1i cache: 256 KiB (4 instances)\r\nL2 cache: 2 MiB (4 instances)\r\nL3 cache: 2 MiB (1 instance)\r\nVulnerability Gather data sampling: Not affected\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Reg file data sampling: Not affected\r\nVulnerability Retbleed: Not affected\r\nVulnerability Spec rstack overflow: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\r\nVulnerability Spectre v1: Mitigation; __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; CSV2, BHB\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.26.4\r\n[pip3] torch==2.6.0.dev20241218+cpu\r\n[pip3] torchao==0.8.0+git2f97b095\r\n[pip3] torchtune==0.5.0.dev20241218+cpu\r\n[pip3] torchvision==0.22.0.dev20241218\r\n[conda] Could not collect", "url": "https://github.com/pytorch/torchchat/issues/1436", "state": "closed", "labels": [ "bug", "documentation", "actionable", "Quantization", "triaged" ], "created_at": "2024-12-22T06:43:48Z", "updated_at": "2024-12-23T06:49:43Z", "comments": 2, "user": "swolchok" }, { "repo": "pytorch/ao", "number": 1456, "title": "[Bug] Unable to Obtain Quantized Weights Independently", "body": "**Description**\r\nThank you so much for your excellent work! I have been trying out a few demos to better understand your project. \r\n\r\nWhile running [this demo](https://github.com/pytorch/ao/tree/main/torchao/quantization#a16w8-int8-weightonly-quantization), I attempted to independently print the quantized weight values, scale, and zero points. I noticed that the latter two can be accessed directly, but the quantized weight values cannot. I wanted to confirm whether this might be a bug. 
\r\n\r\nI\u2019ve attached my code snippet and output log below for your reference: \r\n**Code snippet:** \r\n```python\r\nimport torch\r\nimport torchao\r\nfrom torchao.quantization import quantize_, int8_weight_only\r\nprint(f'Torch version: {torch.__version__}')\r\nprint(f'TorchAO version: {torchao.__version__}')\r\nmodel = torch.nn.Sequential(torch.nn.Linear(2, 4)).cuda().to(torch.bfloat16)\r\nquantize_(model, int8_weight_only())\r\n\r\nfor name, param in model.named_parameters():\r\n if \"weight\" in name:\r\n print('Weight Param Overview')\r\n print(\"parameter shape:\", param.shape)\r\n print(\"parameter values:\\n\", param)\r\n print('Weight detail:')\r\n print('\\nparam.tensor_impl.data:\\n', param.tensor_impl.data)\r\n print('\\nparam.tensor_impl.data.data:\\n', param.tensor_impl.data.data)\r\n print('\\nparam.tensor_impl.data.data.data:\\n', param.tensor_impl.data.data.data)\r\n print('\\nparam.tensor_impl.scale:\\n', param.tensor_impl.scale)\r\n print('\\nparam.tensor_impl.zero_point:\\n', param.tensor_impl.zero_point)\r\n```\r\n**Output log:** \r\n```bash\r\nTorch version: 2.5.1+cu121\r\nTorchAO version: 0.7.0\r\nWeight Param Overview\r\nparameter shape: torch.Size([4, 2])\r\nparameter values:\r\n AffineQuantizedTensor(tensor_impl=PlainAQTTensorImpl(data=tensor([[ 127, -2],\r\n [-127, 6],\r\n [-127, -78],\r\n [ 127, -68]], device='cuda:0', dtype=torch.int8)... , scale=tensor([0.0036, 0.0049, 0.0028, 0.0037], device='cuda:0', dtype=torch.bfloat16)... , zero_point=tensor([0, 0, 0, 0], device='cuda:0')... , _layout=PlainLayout()), block_size=(1, 2), shape=torch.Size([4, 2]), device=cuda:0, dtype=torch.bfloat16, requires_grad=False)\r\nWeight detail:\r\n\r\nparam.tensor_impl.data:\r\n PlainAQTTensorImpl(data=tensor([[ 127, -2],\r\n [-127, 6],\r\n [-127, -78],\r\n [ 127, -68]], device='cuda:0', dtype=torch.int8)... , scale=tensor([0.0036, 0.0049, 0.0028, 0.0037], device='cuda:0', dtype=torch.bfloat16)... , zero_point=tensor([0, 0, 0, 0], device='cuda:0')... , _layout=PlainLayout())\r\n\r\nparam.tensor_impl.data.data:\r\n PlainAQTTensorImpl(data=tensor([[ 127, -2],\r\n [-127, 6],\r\n [-127, -78],\r\n [ 127, -68]], device='cuda:0', dtype=torch.int8)... , scale=tensor([0.0036, 0.0049, 0.0028, 0.0037], device='cuda:0', dtype=torch.bfloat16)... , zero_point=tensor([0, 0, 0, 0], device='cuda:0')... , _layout=PlainLayout())\r\n\r\nparam.tensor_impl.data.data.data:\r\n PlainAQTTensorImpl(data=tensor([[ 127, -2],\r\n [-127, 6],\r\n [-127, -78],\r\n [ 127, -68]], device='cuda:0', dtype=torch.int8)... , scale=tensor([0.0036, 0.0049, 0.0028, 0.0037], device='cuda:0', dtype=torch.bfloat16)... , zero_point=tensor([0, 0, 0, 0], device='cuda:0')... , _layout=PlainLayout())\r\n\r\nparam.tensor_impl.scale:\r\n tensor([0.0036, 0.0049, 0.0028, 0.0037], device='cuda:0', dtype=torch.bfloat16)\r\n\r\nparam.tensor_impl.zero_point:\r\n tensor([0, 0, 0, 0], device='cuda:0')\r\n```\r\nFrom the print output, it can be seen that when I output `param.tensor_impl.data`, the output still includes the `scale` and `zero_point`. However, outputting `param.tensor_impl.scale` and `param.tensor_impl.zero_point` allows me to retrieve their values independently.\r\n\r\nIf you need any additional information from me, please feel free to let me know. 
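On the AffineQuantizedTensor printing above (pytorch/ao#1456): the repeated `.data` access just returns the tensor-subclass wrapper again, which is why scale and zero_point keep appearing in the repr. The raw int8 values live on the layout impl; the sketch below assumes torchao 0.7's plain layout, where the accessor is `get_plain()` (and the buffer attribute `int_data`), names that may differ in other torchao versions:

```python
for name, param in model.named_parameters():
    if "weight" not in name:
        continue
    impl = param.tensor_impl
    # Assumed to return (int_data, scale, zero_point) for PlainLayout in torchao 0.7.
    int_data, scale, zero_point = impl.get_plain()
    print("int8 weights:\n", int_data)
    print("scale:", scale)
    print("zero_point:", zero_point)
    # The raw buffer may also be exposed directly, depending on the version:
    print(getattr(impl, "int_data", "no int_data attribute in this torchao version"))
```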
Thanks again!", "url": "https://github.com/pytorch/ao/issues/1456", "state": "closed", "labels": [ "question", "triaged" ], "created_at": "2024-12-22T02:55:18Z", "updated_at": "2024-12-24T06:53:03Z", "user": "Mingbo-Lee" }, { "repo": "pytorch/xla", "number": 8515, "title": "multi_queries_paged_attention_kernel fails with Llama3 70B on a TPU-v4-16 with sequence length of 256", "body": "I'm running Llama3 70B with vllm on a TPU-v4-16, when using the flash attention kernel i'm able to go up to 16k, but using multi_queries_paged_attention with sequence length 256, it seems that the page table is taking too much smem.\r\n@vanbasten23 @WoosukKwon any idea how to address this (i'm familiar with pallas programming)?\r\nmaybe something along the lines of this? https://github.com/vllm-project/vllm/blob/02222a0256f60319f5bcd56d1d036a943d6334f8/vllm/attention/backends/pallas.py#L260\r\n\r\n\r\n```\r\nLoading safetensors checkpoint shards: 100% Completed | 30/30 [02:03<00:00, 4.13s/it] \r\nINFO 12-21 14:11:07 ray_tpu_executor.py:276] # TPU blocks: 19032, # CPU blocks: 6552 \r\nINFO 12-21 14:11:07 tpu_model_runner.py:274] Compiling the model with different input shapes... \r\n(RayWorkerWrapper pid=777, ip=10.130.0.186) INFO 12-21 14:11:08 tpu_model_runner.py:274] Compiling the model with different input shapes... \r\n(RayWorkerWrapper pid=1005) INFO 12-21 14:07:13 tpu.py:27] Cannot use _Backend.FLASH_ATTN backend on TPU. [repeated 6x across cluster] \r\n(RayWorkerWrapper pid=1005) INFO 12-21 14:07:13 selector.py:163] Using Pallas backend. [repeated 6x across cluster] \r\n(RayWorkerWrapper pid=1005) WARNING 12-21 14:07:13 tpu_worker.py:62] Starting to init distributed environment with config: ParallelConfig(pipeline_parallel_size=1, tensor_parallel_size=8, worker_use_ray=False, max_parallel_loading_workers=None, disable_custom_all_reduce=False, tokenizer_pool_config=None, ray_workers_use_nsight=False, p\r\nlacement_group=<ray.util.placement_group.PlacementGroup object at 0x7f05501350f0>, distributed_executor_backend='ray', worker_cls='vllm.worker.tpu_worker.TPUWorker', sd_worker_cls='auto', world_size=8, rank=3) [repeated 6x across cluster] \r\n(RayWorkerWrapper pid=1005) INFO 12-21 14:07:13 parallel_state.py:954] world_size=8 rank=3 local_rank=3 distributed_init_method=tcp://10.130.0.185:57577 backend=gloo [repeated 6x across cluster] \r\n(RayWorkerWrapper pid=1005) INFO 12-21 14:07:13 parallel_state.py:959] attempting to initialize distributed environment [repeated 6x across cluster] \r\n(RayWorkerWrapper pid=1135, ip=10.130.0.186) init_world_group: local_rank=3 [repeated 12x across cluster] \r\n(RayWorkerWrapper pid=1135, ip=10.130.0.186) init_world_group: backend='gloo' [repeated 6x across cluster] \r\n(RayWorkerWrapper pid=1135, ip=10.130.0.186) init_model_parallel_group bla bla: local_rank=3 [repeated 26x across cluster] \r\n(RayWorkerWrapper pid=1135, ip=10.130.0.186) init_model_parallel_group bla bla: backend='gloo' [repeated 13x across cluster] \r\n(RayWorkerWrapper pid=1005) self.cpu_group=<torch.distributed.distributed_c10d.ProcessGroup object at 0x7f051028d330> [repeated 6x across cluster] \r\nINFO 12-21 14:13:02 tpu_model_runner.py:284] batch_size: 1, seq_len: 16 ", "url": "https://github.com/pytorch/xla/issues/8515", "state": "open", "labels": [ "performance", "pallas", "xla:tpu" ], "created_at": "2024-12-21T14:23:04Z", "updated_at": "2025-02-13T13:43:19Z", "comments": 2, "user": "OhadRubin" }, { "repo": "pytorch/torchtitan", "number": 758, "title": "Checkpoint conversion", "body": 
"Hey,\r\n\r\nI am trying to evaluate a model trained with torchtitan using the lm eval harness. I am using the VLLM backend. Is there any straightforward way to convert a torchtitan model in the pytorch .pt format to, e.g., a huggingface model to be used in VLLM/lm eval harness? Within the torchtune repo, I was able to find [some code for VLMs](https://github.com/pytorch/torchtune/blob/main/recipes/eleuther_eval.py), but (a) that seems to be hardcoded for LLMs, (b) uses a new inference backend instead of e.g. relying on VLLM, and (c) I feel like there might be an easy way to convert torchtitan checkpoints rather than coming up with such an involved solution.\r\n\r\nHow did you evaluate downstream task accuracy with torchtitan models?\r\n\r\nThank you very much for your help.", "url": "https://github.com/pytorch/torchtitan/issues/758", "state": "closed", "labels": [ "question", "module: checkpoint" ], "created_at": "2024-12-20T17:57:58Z", "updated_at": "2025-08-21T02:59:53Z", "user": "MaxiBoether" }, { "repo": "pytorch/xla", "number": 8510, "title": "Input tensor is not an XLA tensor on AWS Trainium instance", "body": "Hi team, I'm currently testing my training job on AWS Trainium instance. I encountered error `Input tensor is not an XLA tensor: torch.FloatTensor` when using pytorch Conv1d/Linear module. I\u2019ve confirmed that the input tensor has been moved to xla as I explicitly called `.to(xm.xla_device())` when passing the input tensor to the module forward method. However, I found out the error was actually caused by the weight and bias generated within those pytorch module, eg here: https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/conv.py#L375, I printed the device location for self.weght and self.bias and they are on cpu. I have to modify the source Conv1d code to resolve the issue, eg:\r\n\r\n```\r\ndef _conv_forward(self, input: Tensor, weight: Tensor, bias: Optional[Tensor]):\r\n input = input.to(self.device)\r\n weight = weight.to(self.device)\r\n if bias is not None:\r\n bias = bias.to(self.device)\r\n\r\n if self.padding_mode != 'zeros':\r\n return F.conv1d(\r\n F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),\r\n weight, bias, self.stride, _single(0), self.dilation, self.groups\r\n )\r\n return F.conv1d(input, weight, bias, self.stride, self.padding, self.dilation, self.groups)\r\n```\r\n\r\n Does anyone know how to make sure those are on the xla device?\r\n", "url": "https://github.com/pytorch/xla/issues/8510", "state": "closed", "labels": [], "created_at": "2024-12-20T17:51:33Z", "updated_at": "2025-01-08T21:59:14Z", "comments": 4, "user": "JmeanJmy" }, { "repo": "pytorch/torchtitan", "number": 757, "title": "[question]can't disable CP for specific (unsupported) SDPA op", "body": "## Problem\r\n\r\ncurrently the API of context parallel have five problems.\r\n\r\n1. only support apply CP to whole model. if we have some cross attn in prep part of model with unsupported shape, it's impossible to apply CP since `_context_parallel` always override all SDPA and need to wrap whole backward.\r\n2. no shard/unshard with gradient support. when I try to apply CP to transformer blocks only and remain other SDPA replicate, the `context_parallel_unshard` in pytorch has `no_grad` decorator.\r\n3. weight gradients inside CP region is divided by size of CP mesh because we reduce them in DP+CP, this may work for optimizer with norm support, but make unit test harder to write, we have to scale them back to get same gradients as model without CP.\r\n4. 
The length of the sequence must be divisible by the number of CP (CP * 2 for robin).\r\n5. replicate input of CP region may contain wrong gradient because its gradient may be `Partial`, we have to check every replicate input and use `to_local(grad_placements=[Partial()])`.\r\n\r\nTo resolve problem 1 above, I remove `context_parallel` context to disable SDPA override, only enable `_enable_cp_dispatcher` context, then we can enable CP SDPA iff all inputs are converted to DTensor. problem 2 is easy to resolve, just write some auto grad functions.\r\n\r\nhere is my questions:\r\n1. is there a better way to support `CP region`?\r\n2. do you have any plan to support `CP region` officially and resolve issues above?", "url": "https://github.com/pytorch/torchtitan/issues/757", "state": "open", "labels": [ "enhancement", "module: context parallel" ], "created_at": "2024-12-20T11:00:23Z", "updated_at": "2025-03-12T10:30:52Z", "comments": 3, "user": "FindDefinition" }, { "repo": "pytorch/ao", "number": 1437, "title": "Segmentation Fault Running Int8 Quantized Model on GPU", "body": "Hi! We got into segmentation fault error when trying to run model inference on gpu. Below is a minimal example from the tutorial ([link](https://pytorch.org/docs/stable/quantization.html#post-training-static-quantization)):\r\n\r\n```\r\nimport torch\r\nimport time\r\n\r\n# define a floating point model where some layers could be statically quantized\r\nclass M(torch.nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n # QuantStub converts tensors from floating point to quantized\r\n self.quant = torch.ao.quantization.QuantStub()\r\n self.conv = torch.nn.Conv2d(1, 1, 1)\r\n self.relu = torch.nn.ReLU()\r\n # DeQuantStub converts tensors from quantized to floating point\r\n self.dequant = torch.ao.quantization.DeQuantStub()\r\n\r\n def forward(self, x):\r\n # manually specify where tensors will be converted from floating\r\n # point to quantized in the quantized model\r\n x = self.quant(x)\r\n x = self.conv(x)\r\n x = self.relu(x)\r\n # manually specify where tensors will be converted from quantized\r\n # to floating point in the quantized model\r\n x = self.dequant(x)\r\n return x\r\n\r\n# create a model instance\r\nmodel_fp32 = M()\r\n\r\n# model must be set to eval mode for static quantization logic to work\r\nmodel_fp32.eval()\r\ninput_fp32 = torch.randn(4, 1, 1024, 1024)\r\n\r\ntime_s = time.time()\r\nwith torch.no_grad():\r\n out = model_fp32(input_fp32)\r\ntime_e = time.time()\r\n\r\nmodel_fp32.qconfig = torch.ao.quantization.get_default_qconfig('fbgemm')\r\nmodel_fp32_fused = torch.ao.quantization.fuse_modules(model_fp32, [['conv', 'relu']])\r\nmodel_fp32_prepared = torch.ao.quantization.prepare(model_fp32_fused)\r\n\r\nmodel_fp32_prepared(input_fp32)\r\n\r\nmodel_int8 = torch.ao.quantization.convert(model_fp32_prepared)\r\n\r\n# run the model, relevant calculations will happen in int8\r\nres = model_int8(input_fp32)\r\n\r\nmodel_int8 = model_int8.to('cuda:0')\r\ninput_fp32 = input_fp32.to('cuda:0')\r\n\r\nwith torch.no_grad():\r\n out = model_int8(input_fp32)\r\n```\r\n\r\nOutput:\r\n```\r\nSegmentation fault (core dumped)\r\n```\r\n\r\nInference on CPU is fine for the int8 model. Could someone please advise on the potential reason? 
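For the segmentation fault above (pytorch/ao#1437): the eager-mode `torch.ao.quantization` convert path lowers to fbgemm/qnnpack kernels that exist only for CPU, so moving `model_int8` to CUDA and calling it is outside the supported path (consistent with CPU inference working fine). Keeping the quantized model and its input on CPU avoids the crash; for GPU int8 inference the torchao tensor-subclass route (`quantize_` with `int8_weight_only`, which targets `nn.Linear`) is the usual alternative. Minimal CPU-side sketch:

```python
# The converted model only has quantized CPU kernels, so run it on CPU.
model_int8 = model_int8.to("cpu")
input_cpu = input_fp32.to("cpu")
with torch.no_grad():
    out = model_int8(input_cpu)
print(out.shape)
```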
Thank you!", "url": "https://github.com/pytorch/ao/issues/1437", "state": "closed", "labels": [ "question", "triaged" ], "created_at": "2024-12-18T19:51:48Z", "updated_at": "2025-01-23T19:16:09Z", "user": "wendywangwwt" }, { "repo": "pytorch/TensorRT", "number": 3331, "title": "\u2753 [Question] Jetson AGX Orin build and install torch_tensorrt wheel file Failed", "body": "## \u2753 Question\r\n\r\nI follow this [tutorial](https://pytorch.org/TensorRT/getting_started/installation.html) to install Torch-TensorRT, but in the last step:\r\n\r\n```\r\ncuda_version=$(nvcc --version | grep Cuda | grep release | cut -d ',' -f 2 | sed -e 's/ release //g')\r\nexport TORCH_INSTALL_PATH=$(python -c \"import torch, os; print(os.path.dirname(torch.__file__))\")\r\nexport SITE_PACKAGE_PATH=${TORCH_INSTALL_PATH::-6}\r\nexport CUDA_HOME=/usr/local/cuda-${cuda_version}/\r\n# replace the MODULE.bazel with the jetpack one\r\ncat toolchains/jp_workspaces/MODULE.bazel.tmpl | envsubst > MODULE.bazel\r\n# build and install torch_tensorrt wheel file\r\npython setup.py --use-cxx11-abi install --user\r\n```\r\nsome errors happened:\r\n```\r\nRun this command to start an interactive shell in an identical sandboxed environment:\r\n(exec env - \\\r\n LD_LIBRARY_PATH=/usr/lib/gcc/aarch64-linux-gnu/11:/usr/local/cuda-12.6/lib64: \\\r\n PATH=/home/lab223/.cache/bazelisk/downloads/sha256/5a4cc979353671e438b9469b833924c2361e25a580cc278a75877aedc27c1c53/bin:/usr/lib/gcc/aarch64-linux-gnu/11:/home/lab223/anaconda3/envs/rnw/bin:/home/lab223/anaconda3/condabin:/usr/local/cuda-12.6/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin \\\r\n PWD=/proc/self/cwd \\\r\n TMPDIR=/tmp \\\r\n /home/lab223/.cache/bazel/_bazel_lab223/install/128438993754f9753a1e4f56fdd76124/linux-sandbox -t 15 -w /dev/shm -w /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/46/execroot/_main -w /tmp -M /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/46/_hermetic_tmp -m /tmp -S /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/46/stats.out -D /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/46/debug.out -- /bin/sh -i)\r\nERROR: /home/lab223/TensorRT/core/conversion/var/BUILD:20:11: Compiling core/conversion/var/Var.cpp failed: I/O exception during sandboxed execution: /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/58/execroot/_main/bazel-out/aarch64-opt/bin/external/_main~_repo_rules~libtorch/_virtual_includes/ATen/ATen/core/DeprecatedTypePropertiesRegistry.h (???????)\r\nERROR: /home/lab223/TensorRT/core/conversion/converters/BUILD:59:11: Compiling core/conversion/converters/NodeConverterRegistry.cpp failed: I/O exception during sandboxed execution: /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/57/execroot/_main/bazel-out/aarch64-opt/bin/external/_main~_repo_rules~libtorch/_virtual_includes/ATen/ATen/ops/cudnn_batch_norm_ops.h (???????)\r\nERROR: /home/lab223/TensorRT/core/conversion/converters/BUILD:39:11: Compiling core/conversion/converters/converter_util.cpp failed: I/O exception during sandboxed execution: /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/56/execroot/_main/external/_main~_repo_rules~libtorch/include/ATen/ops/native_dropout_backward_cpu_dispatch.h 
(???????)\r\nTarget //:libtorchtrt failed to build\r\nINFO: Elapsed time: 1000.299s, Critical Path: 574.06s\r\nINFO: 7984 processes: 7938 internal, 46 linux-sandbox.\r\nERROR: Build did NOT complete successfully\r\n\r\n```\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): **2.5.0**\r\n - CPU Architecture: **arm64(Jetson AGX Orin)**\r\n - OS (e.g., Linux): **Linux**\r\n - How you installed PyTorch: **pip**\r\n - Build command you used (if compiling from source): **python setup.py --use-cxx11-abi install --user**\r\n - Are you using local sources or building from archives: **building from archives**\r\n - Python version: **3.10.15**\r\n - CUDA version: **12.6**\r\n - GPU models and configuration: -\r\n - Any other relevant information: Install torch_tensorrt in the model's anaconda virtual environment\r\n\r\n## Additional context\r\n\r\nIt seems a I/O exception.But Jetson still has 11GB of space.please help me!thanks!!!!\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3331", "state": "open", "labels": [ "question" ], "created_at": "2024-12-18T18:55:56Z", "updated_at": "2024-12-18T20:30:20Z", "user": "breknddone" }, { "repo": "pytorch/xla", "number": 8497, "title": "API guide code snippets don't work", "body": "## \ud83d\udcda Documentation\r\n\r\nTrying to follow the example here: https://github.com/pytorch/xla/blob/master/API_GUIDE.md#running-on-a-single-xla-device\r\n\r\nThe Python code snippet doesn't work, as `MNIST()`, `nn`, and `optim` are all undefined.\r\n", "url": "https://github.com/pytorch/xla/issues/8497", "state": "closed", "labels": [ "bug", "documentation" ], "created_at": "2024-12-17T23:14:45Z", "updated_at": "2025-05-20T15:55:40Z", "comments": 6, "user": "richardsliu" }, { "repo": "pytorch/serve", "number": 3375, "title": "503 InternalServerException, prediction failed", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nHello, my inference request is returning a 503 InternalServerException, prediction failed. How can I resolve this issue? Below are the specific request, inference response, and torchserve logs. Additional note: I am using Docker to run the service, and the inference works fine with the gRPC API, but not with the HTTP request.\r\n\r\n### Error logs\r\n\r\n![image](https://github.com/user-attachments/assets/e740baf6-684c-42af-a42d-db0bb33c9eeb)\r\n![image](https://github.com/user-attachments/assets/f32e6aa4-31a3-463b-a33e-e55bdb2b705d)\r\n\r\n\r\n### Installation instructions\r\n\r\ndocker\r\n\r\n### Model Packaging\r\n\r\n![image](https://github.com/user-attachments/assets/69c370d9-6cb0-418d-a554-2529d12b6fbc)\r\n\r\n\r\n### config.properties\r\n\r\n_No response_\r\n\r\n### Versions\r\n\r\n![image](https://github.com/user-attachments/assets/98da46e0-8033-4ccc-a9b7-a78caa65b1c9)\r\n\r\n\r\n### Repro instructions\r\n\r\n![image](https://github.com/user-attachments/assets/6254a0a9-e5e8-4d14-948a-2e432d51a29f)\r\n\r\n\r\n### Possible Solution\r\n\r\n_No response_", "url": "https://github.com/pytorch/serve/issues/3375", "state": "closed", "labels": [], "created_at": "2024-12-17T04:02:49Z", "updated_at": "2024-12-17T08:43:24Z", "comments": 1, "user": "Jax29" }, { "repo": "pytorch/torchtitan", "number": 743, "title": "Model init with HuggingFace model", "body": "I am writing a simple script to run FSDP2 (`fully_shard`) on the `pythia-1b` model available on HuggingFace. I am currently running the model on 1 node with 2 devices. 
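For the API-guide report above (pytorch/xla#8497), a self-contained version of the single-device snippet with the missing imports and a stand-in `MNIST` network (the real guide presumably defines its own model class):

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
import torch_xla.core.xla_model as xm

class MNIST(nn.Module):            # stand-in for the guide's model
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x.view(-1, 28 * 28))))

device = xm.xla_device()
model = MNIST().train().to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
loader = torch.utils.data.DataLoader(
    datasets.MNIST("./data", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=64, shuffle=True)

for data, target in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(data.to(device)), target.to(device))
    loss.backward()
    optimizer.step()
    xm.mark_step()                 # materialize the lazily traced graph each step
```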
I was following the meta-device initialisation from the [FSDP2 docs](https://github.com/pytorch/torchtitan/blob/main/docs/fsdp.md). However, I think there is something wrong with my implementation since the peak memory usage with FSDP is same as without FSDP (~ 1GB). Further, I get an OOM on my device when I try with `pythia-2.8b` model. Following is a snippet on how I am initialising the model on a meta device using HuggingFace APIs:\r\n\r\n```\r\nmodel_name = \"EleutherAI/pythia-14m\"\r\n \r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\ntokenizer.pad_token = tokenizer.eos_token\r\nconfig = AutoConfig.from_pretrained(model_name)\r\n with init_empty_weights():\r\n model = AutoModelForCausalLM.from_config(config)\r\n\r\n for module in model.modules():\r\n if isinstance(module, GPTNeoXLayer):\r\n fully_shard(module)\r\n \r\n model = fully_shard(model, reshard_after_forward=True)\r\n\r\n model = load_checkpoint_and_dispatch(\r\n model, path_to_safe_tensors\r\n )\r\n ```\r\n\r\nThis is not very straightforward since the shards expect `DTensors` when the weights are being loaded via `load_checkpoint_and_dispatch`. I am looking for some suggestions on what would be a good way to make FSDP2 work with HuggingFace models. I dont think accelerate supports FSDP2 yet.", "url": "https://github.com/pytorch/torchtitan/issues/743", "state": "open", "labels": [ "bug", "question", "module: checkpoint", "huggingface integration" ], "created_at": "2024-12-16T05:45:04Z", "updated_at": "2025-04-22T18:38:22Z", "user": "neeldani" }, { "repo": "pytorch/torchtitan", "number": 742, "title": "Low bit Optimizers & FA-3", "body": "1. hi have there been any tests with fa-3 and low bit optimizers from torchao like FP8adam for 8bit adam? i see divergence in training when resuming a FA-2 checkpoint with FA-3 or when using 8BITADAMW", "url": "https://github.com/pytorch/torchtitan/issues/742", "state": "open", "labels": [ "bug", "question" ], "created_at": "2024-12-16T03:56:22Z", "updated_at": "2025-01-07T00:55:59Z", "user": "asahni-sc" }, { "repo": "pytorch/audio", "number": 3863, "title": "How to install or download avutil-<VERSION>.dll and others on Windows Python venv not Conda!", "body": "I am reading this page and there is only information for conda\r\n\r\nI am not using conda but using Python venv\r\n\r\nSo how to install or where to get these dll files?\r\n\r\nhttps://pytorch.org/audio/stable/installation.html#optional-dependencies\r\n\r\n`When searching for FFmpeg installation, TorchAudio looks for library files which have names with version numbers. That is, libavutil.so.<VERSION> for Linux, libavutil.<VERSION>.dylib for macOS, and avutil-<VERSION>.dll for Windows. Many public pre-built binaries follow this naming scheme, but some distributions have un-versioned file names. 
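On the HuggingFace + FSDP2 question above (pytorch/torchtitan#743): `load_checkpoint_and_dispatch` is not DTensor-aware, so a more direct route is meta-device init, `fully_shard`, `to_empty`, and then loading a full state dict through the distributed `state_dict` helpers, which reshard it onto the DTensor parameters. A hedged sketch; the `fully_shard` import path and the `StateDictOptions` fields differ a bit across PyTorch versions:

```python
import torch
from torch.distributed.fsdp import fully_shard   # older releases: torch.distributed._composable.fsdp
from torch.distributed.checkpoint.state_dict import StateDictOptions, set_model_state_dict
from transformers import AutoConfig, AutoModelForCausalLM
from transformers.models.gpt_neox.modeling_gpt_neox import GPTNeoXLayer

config = AutoConfig.from_pretrained("EleutherAI/pythia-1b")
with torch.device("meta"):
    model = AutoModelForCausalLM.from_config(config)

for module in model.modules():
    if isinstance(module, GPTNeoXLayer):
        fully_shard(module)
fully_shard(model)

model.to_empty(device="cuda")      # allocate the sharded (DTensor) storages
full_sd = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1b").state_dict()
set_model_state_dict(
    model, full_sd,
    options=StateDictOptions(full_state_dict=True, broadcast_from_rank0=True),
)
```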
If you are having difficulties detecting FFmpeg, double check that the library files you installed follow this naming scheme, (and then make sure that they are in one of the directories listed in library search path.)`\r\n\r\nI can't find anywhere these DLL files are distributed\r\n\r\nThis is causing me to get this error\r\n```\r\n\r\n File \"R:\\MMAudio_v1\\MMAudio\\venv\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 116, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"R:\\MMAudio_v1\\MMAudio\\gradio_demo.py\", line 60, in video_to_audio\r\n clip_frames, sync_frames, duration = load_video(video, duration)\r\n File \"R:\\MMAudio_v1\\MMAudio\\mmaudio\\eval_utils.py\", line 178, in load_video\r\n reader = StreamingMediaDecoder(video_path)\r\n File \"R:\\MMAudio_v1\\MMAudio\\venv\\lib\\site-packages\\torio\\io\\_streaming_media_decoder.py\", line 526, in __init__\r\n self._be = ffmpeg_ext.StreamingMediaDecoder(os.path.normpath(src), format, option)\r\n File \"R:\\MMAudio_v1\\MMAudio\\venv\\lib\\site-packages\\torio\\_extension\\utils.py\", line 25, in __getattr__\r\n self._import_once()\r\n File \"R:\\MMAudio_v1\\MMAudio\\venv\\lib\\site-packages\\torio\\_extension\\utils.py\", line 39, in _import_once\r\n self.module = self.import_func()\r\n File \"R:\\MMAudio_v1\\MMAudio\\venv\\lib\\site-packages\\torio\\_extension\\utils.py\", line 143, in _init_ffmpeg\r\n ext = _find_ffmpeg_extension(ffmpeg_vers)\r\n File \"R:\\MMAudio_v1\\MMAudio\\venv\\lib\\site-packages\\torio\\_extension\\utils.py\", line 122, in _find_ffmpeg_extension\r\n raise ImportError(\r\nImportError: Failed to intialize FFmpeg extension. Tried versions: ['6', '5', '4', '']. Enable DEBUG logging to see more details about the error.\r\n```\r\n\r\n\r\n", "url": "https://github.com/pytorch/audio/issues/3863", "state": "closed", "labels": [], "created_at": "2024-12-14T13:15:01Z", "updated_at": "2024-12-14T13:48:42Z", "user": "FurkanGozukara" }, { "repo": "pytorch/tutorials", "number": 3186, "title": "Writing a gradient tutorial, focused on leaf vs non leaf tensors.", "body": "There is no tutorial that specifically talks about requires_grad, retain_grad, and leaf tensor/ non-leaf tensors and how they interact with each other. Can I write a tutorial specifically talking about this topic? This will be useful when gradients are used in unusual places, as is the case for the deep dream algorithm. \r\n\r\ncc: @albanD ", "url": "https://github.com/pytorch/tutorials/issues/3186", "state": "closed", "labels": [ "advanced", "tutorial-proposal", "docathon-h1-2025", "hard" ], "created_at": "2024-12-14T06:44:48Z", "updated_at": "2025-08-20T23:30:53Z", "comments": 5, "user": "JitheshPavan" }, { "repo": "pytorch/torchchat", "number": 1424, "title": "Misaligned AOTI input; potential perf gains by fixing?", "body": "### \ud83d\udc1b Describe the bug\n\nPicked up in https://github.com/pytorch/torchchat/pull/1367, and worked around via https://github.com/pytorch/pytorch/pull/143236, it appears the input to the torchchat AOTI runner is not 16 byte aligned. 
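For the proposed gradient tutorial above (pytorch/tutorials#3186), the leaf vs non-leaf distinction and `retain_grad` fit in a few lines; a minimal illustration that could anchor the write-up:

```python
import torch

x = torch.ones(3, requires_grad=True)   # leaf: created directly by the user
y = x * 2                                # non-leaf: produced by an operation
y.retain_grad()                          # opt in to keeping .grad on a non-leaf
z = (y ** 2).sum()
z.backward()

print(x.is_leaf, y.is_leaf)   # True False
print(x.grad)                 # tensor([8., 8., 8.]): leaves get .grad by default
print(y.grad)                 # tensor([4., 4., 4.]): only because of retain_grad()
```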
\r\n\r\nWhile the PR from pytorch/pytorch eases this constraint, this may be indicative of potential perf losses (common of misalignment)\r\n\r\nhattip to @malfet for suggesting line of investigation\n\n### Versions\n\nhttps://github.com/pytorch/torchchat/commit/bb72b096b14f0c9753070f3523e43ed58aa55178", "url": "https://github.com/pytorch/torchchat/issues/1424", "state": "open", "labels": [ "bug", "actionable", "Compile / AOTI", "triaged" ], "created_at": "2024-12-14T01:11:30Z", "updated_at": "2024-12-17T23:35:29Z", "comments": 1, "user": "Jack-Khuu" }, { "repo": "pytorch/xla", "number": 8492, "title": "How to do multi-machine SPMD/FSDPv2 training with TPU\uff1f", "body": "## \u2753 Questions and Help\r\n\r\nI saw https://github.com/pytorch/xla/issues/6362 but there's no example training script found? For example, if I have multiple TPU v3-8 VMs, how would I achieve this with SPMD/FSDPv2?\r\n\r\nI'm currently sending the commands to all TPU VMs this way:\r\n```\r\npython3.10 podrun --include-local -- hostname\r\n```", "url": "https://github.com/pytorch/xla/issues/8492", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2024-12-13T18:47:39Z", "updated_at": "2025-05-05T12:34:29Z", "user": "radna0" }, { "repo": "pytorch/torchtitan", "number": 735, "title": "[question]FSDP2 have more peak active memory/reserved memory than FSDP1", "body": "## Environment\r\nOS: Ubuntu\r\nGPU: 8x GPU\r\ntorch: torch-2.6.0.dev20241212+cu124\r\nDDP: 4-way Tensor Parallel * 2-way FSDP\r\n\r\n## Problem\r\nI'm using FSDP+TP in my model and follow torchtitan code. when I switch fsdp1 to fsdp2, the memory usage showed by `nvidia-smi` increases by 10GB, also the peak active memory is greatly larger than fsdp1. is this expected? Which metric should be cared in `memory_summary` to avoid OOM?\r\n\r\nhere is the result from `torch.cuda.memory_summary()`. 
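On the multi-host SPMD/FSDPv2 question above (pytorch/xla#8492): the same single-program script is launched on every TPU VM (e.g. via `podrun` or `gcloud ... ssh --worker=all`), and under SPMD the runtime exposes all chips across all hosts as one logical mesh, so there is no per-host rank logic in the model code. A hedged sketch of the per-host script; the FSDPv2 import path and constructor reflect recent torch_xla releases and may differ by version, and `MyModel` is a placeholder:

```python
import numpy as np
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs
from torch_xla.experimental.spmd_fully_sharded_data_parallel import (
    SpmdFullyShardedDataParallel as FSDPv2,
)

xr.use_spmd()                                      # enable SPMD execution mode
num_devices = xr.global_runtime_device_count()     # all chips across all hosts
mesh = xs.Mesh(np.arange(num_devices), (num_devices, 1), ("fsdp", "model"))

model = MyModel().to(xm.xla_device())
model = FSDPv2(model, mesh=mesh)                   # shards parameters along the "fsdp" axis
```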
Following tables are generated when **first step is end**.\r\n\r\n* fsdp2\r\n```\r\n|===========================================================================|\r\n| PyTorch CUDA memory summary, device ID 0 |\r\n|---------------------------------------------------------------------------|\r\n| CUDA OOMs: 0 | cudaMalloc retries: 0 |\r\n|===========================================================================|\r\n| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |\r\n|---------------------------------------------------------------------------|\r\n| Allocated memory | 13975 MiB | 18803 MiB | 2142 GiB | 2128 GiB |\r\n| from large pool | 13959 MiB | 18790 MiB | 2140 GiB | 2127 GiB |\r\n| from small pool | 16 MiB | 17 MiB | 1 GiB | 1 GiB |\r\n|---------------------------------------------------------------------------|\r\n| Active memory | 13975 MiB | 39454 MiB | 2142 GiB | 2128 GiB |\r\n| from large pool | 13959 MiB | 39437 MiB | 2140 GiB | 2127 GiB |\r\n| from small pool | 16 MiB | 18 MiB | 1 GiB | 1 GiB |\r\n|---------------------------------------------------------------------------|\r\n| Requested memory | 13792 MiB | 39306 MiB | 2138 GiB | 2125 GiB |\r\n| from large pool | 13775 MiB | 39289 MiB | 2137 GiB | 2124 GiB |\r\n| from small pool | 16 MiB | 18 MiB | 1 GiB | 1 GiB |\r\n|---------------------------------------------------------------------------|\r\n| GPU reserved memory | 45590 MiB | 45590 MiB | 45590 MiB | 0 B |\r\n| from large pool | 45566 MiB | 45566 MiB | 45566 MiB | 0 B |\r\n| from small pool | 24 MiB | 24 MiB | 24 MiB | 0 B |\r\n|---------------------------------------------------------------------------|\r\n| Non-releasable memory | 377331 KiB | 7818 MiB | 1017 GiB | 1017 GiB |\r\n| from large pool | 375788 KiB | 7813 MiB | 1016 GiB | 1016 GiB |\r\n| from small pool | 1543 KiB | 10 MiB | 1 GiB | 1 GiB |\r\n|---------------------------------------------------------------------------|\r\n| Allocations | 4735 | 4738 | 34212 | 29477 |\r\n| from large pool | 1504 | 1507 | 15954 | 14450 |\r\n| from small pool | 3231 | 3348 | 18258 | 15027 |\r\n|---------------------------------------------------------------------------|\r\n| Active allocs | 4735 | 4738 | 34212 | 29477 |\r\n| from large pool | 1504 | 1507 | 15954 | 14450 |\r\n| from small pool | 3231 | 3348 | 18258 | 15027 |\r\n|---------------------------------------------------------------------------|\r\n| GPU reserved segments | 304 | 304 | 304 | 0 |\r\n| from large pool | 292 | 292 | 292 | 0 |\r\n| from small pool | 12 | 12 | 12 | 0 |\r\n|---------------------------------------------------------------------------|\r\n| Non-releasable allocs | 15 | 135 | 15054 | 15039 |\r\n| from large pool | 13 | 89 | 9160 | 9147 |\r\n| from small pool | 2 | 65 | 5894 | 5892 |\r\n|---------------------------------------------------------------------------|\r\n| Oversize allocations | 0 | 0 | 0 | 0 |\r\n|---------------------------------------------------------------------------|\r\n| Oversize GPU segments | 0 | 0 | 0 | 0 |\r\n|===========================================================================|\r\n```\r\n\r\n* fsdp1\r\n```\r\n|===========================================================================|\r\n| PyTorch CUDA memory summary, device ID 0 |\r\n|---------------------------------------------------------------------------|\r\n| CUDA OOMs: 0 | cudaMalloc retries: 0 |\r\n|===========================================================================|\r\n| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed 
|\r\n|---------------------------------------------------------------------------|\r\n| Allocated memory | 13947 MiB | 18561 MiB | 2156 GiB | 2142 GiB |\r\n| from large pool | 13937 MiB | 18556 MiB | 2155 GiB | 2141 GiB |\r\n|", "url": "https://github.com/pytorch/torchtitan/issues/735", "state": "closed", "labels": [ "question" ], "created_at": "2024-12-13T08:42:49Z", "updated_at": "2024-12-18T11:31:23Z", "user": "FindDefinition" }, { "repo": "pytorch/torchtitan", "number": 734, "title": "using fsdp2 wrapper Flux(text to image) model , gradient is inconsistent with fsdp1", "body": "i use register_full_backward_hook print grad when backward like this way:\r\n```\r\ndef print_grad_hook(name):\r\n def hook(module, grad_input, grad_output):\r\n print(f\"Layer Name: {name},Grad input: {grad_input},Grad output: {grad_output}\")\r\n return hook\r\nfor name, layer in model.named_children():\r\n layer.register_full_backward_hook(print_grad_hook(name))\r\n```\r\nbut i discover last layer's grad is inconsistent between fsdp1 and fsdp2.\uff08'Grad output ' is consistent\uff09 \r\n```\r\nfsdp1 grad:\r\nLayer Name: proj_out,Grad input: (tensor([[[-1.4901e-08, 2.2445e-07, 5.4250e-08, ..., 3.7812e-07,\r\n 4.0606e-07, -3.8184e-07]]], device='cuda:0'),),Grad output: (tensor([[[-2.3991e-06, 2.3693e-06, 1.3947e-05, ..., \r\n 4.0233e-07, 8.0466e-07]]], device='cuda:0', dtype=torch.bfloat16),)\r\n\r\nfsdp2 grad:\r\nLayer Name: proj_out,Grad input: (tensor([[[-0.0000e+00, 2.3842e-07, 5.9605e-08, ..., 8.9407e-07,\r\n 4.1723e-07, -3.5763e-07]]], device='cuda:0'),),Grad output: (tensor([[[-2.3991e-06, 2.3693e-06, 1.3947e-05, ..., \r\n 4.0233e-07, 8.0466e-07]]], device='cuda:0', dtype=torch.bfloat16),)\r\n```\r\nBelow is my code to wrapper flux model\uff0cCurrently I'm not using compile and activation checkpointing\r\n```\r\nfor layer_id, transformer_block in model.transformer_blocks.named_children():\r\n if pp_enabled:\r\n # For PP, do not reshard after forward to avoid per-microbatch\r\n # all-gathers, which can be expensive and non-overlapped\r\n reshard_after_forward = False\r\n else:\r\n # As an optimization, do not reshard after forward for the last\r\n # transformer block since FSDP would prefetch it immediately\r\n reshard_after_forward = True\r\n fully_shard(\r\n transformer_block,\r\n **fsdp_config,\r\n reshard_after_forward=reshard_after_forward,\r\n )\r\n for layer_id, transformer_block in model.single_transformer_blocks.named_children():\r\n if pp_enabled:\r\n # For PP, do not reshard after forward to avoid per-microbatch\r\n # all-gathers, which can be expensive and non-overlapped\r\n reshard_after_forward = False\r\n else:\r\n # As an optimization, do not reshard after forward for the last\r\n # transformer block since FSDP would prefetch it immediately\r\n reshard_after_forward = int(layer_id) < len(model.single_transformer_blocks) - 1\r\n fully_shard(\r\n transformer_block,\r\n **fsdp_config,\r\n reshard_after_forward=reshard_after_forward,\r\n )\r\n fully_shard(model, **fsdp_config, reshard_after_forward=not pp_enabled)\r\n```", "url": "https://github.com/pytorch/torchtitan/issues/734", "state": "closed", "labels": [ "question" ], "created_at": "2024-12-13T07:59:32Z", "updated_at": "2025-08-21T02:58:13Z", "user": "yanmj0601" }, { "repo": "pytorch/vision", "number": 8803, "title": "OpenGL interoperability", "body": "### \ud83d\ude80 The feature\n\nZero-copy transfer of data between PyTorch and OpenGL on GPU by including \"OpenGL interoperability\" from CUDA in torchvision.\n\n### Motivation, 
pitch\n\nI am working on a real-time machine learning graphics project which uses OpenGL both as an intermediate processing step in the model and to visualize the output. Right now transfer of data between PyTorch and OpenGL is a problem for both training and inference.\r\nWithout any additional packages i can copy data from PyTorch CUDA to CPU and then back to OpenGL on GPU, this is very simple but slow. \r\nI can instead use some cuda bindings for python and a separate CUDA Toolkit installation to avoid the data transfer but this is quite complex and there are many competing ways and tools for doing this which makes it hard to navigate.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\nThe 2 main ways I have been using OpenGL from python are with the packages `moderngl` and `PyOpenGL`.", "url": "https://github.com/pytorch/vision/issues/8803", "state": "open", "labels": [], "created_at": "2024-12-12T16:04:11Z", "updated_at": "2024-12-12T16:04:11Z", "comments": 0, "user": "cajoek" }, { "repo": "pytorch/xla", "number": 8486, "title": "2 questions for the composite op feature", "body": "## \u2753 Questions and Help\r\nGlad to see that the [composite op feature](https://github.com/pytorch/xla/blob/master/docs/source/features/stablehlo.md#preserving-high-level-pytorch-operations-in-stablehlo-by-generating-stablehlocomposite) is added to Torch-XLA. I have tried this feature and got some questions, hope to get answers/suggestions here:\r\n1. Some redundant IRs (start from `custom_call`) can't be erased after created the composite op, e.g. `Gelu`:\r\n```python\r\nimport torch\r\nimport torch_xla\r\nimport torch_xla.core.xla_model as xm\r\n\r\nfrom torch_xla import stablehlo\r\nfrom torch_xla.experimental.mark_pattern_utils import StableHLOCompositeBuilder\r\n\r\nclass Example(torch.nn.Module):\r\n def __init__(self):\r\n super(Example, self).__init__()\r\n self.gelu = torch.nn.GELU(approximate=\"none\")\r\n self.composite_op = StableHLOCompositeBuilder(\"composite.gelu\", {\"approximate\": \"none\"})\r\n\r\n def forward(self, x):\r\n x = self.composite_op.mark_inputs(x)\r\n y = self.gelu(x)\r\n y = self.composite_op.mark_outputs(y)\r\n return y\r\n\r\nx = torch.randn(10, device=xm.xla_device())\r\nmodel = Example().to(xm.xla_device())\r\nprint(model(x))\r\n\r\ninput_args = (x, )\r\nexported = torch.export.export(model, input_args)\r\n# print(exported.graph)\r\nstablehlo_gm = stablehlo.exported_program_to_stablehlo(exported)\r\nstablehlo = stablehlo_gm.get_stablehlo_text()\r\nprint(stablehlo)\r\n```\r\nThe generated StableHLO is:\r\n```mlir\r\nmodule @IrToHlo.16 attributes {mhlo.cross_program_prefetches = [], mhlo.input_output_alias = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {\r\n func.func @main(%arg0: tensor<10xf32>) -> tensor<10xf32> {\r\n %cst = stablehlo.constant dense<0.707106769> : tensor<10xf32>\r\n %0 = stablehlo.multiply %arg0, %cst : tensor<10xf32>\r\n %1 = stablehlo.custom_call @mhlo.erf(%0) {mhlo.attributes = {}, mhlo.version = 1 : i64} : (tensor<10xf32>) -> tensor<10xf32>\r\n %2 = stablehlo.composite \"composite.gelu\" %arg0 {composite_attributes = {approximate = \"none\"}, decomposition = @composite.gelu.impl} : (tensor<10xf32>) -> tensor<10xf32>\r\n return %2 : tensor<10xf32>\r\n }\r\n func.func private @composite.gelu.impl(%arg0: tensor<10xf32>) -> tensor<10xf32> {\r\n %cst = stablehlo.constant dense<1.000000e+00> : tensor<10xf32>\r\n %cst_0 = stablehlo.constant dense<0.707106769> : tensor<10xf32>\r\n %cst_1 = stablehlo.constant 
dense<5.000000e-01> : tensor<10xf32>\r\n %0 = stablehlo.multiply %arg0, %cst_1 : tensor<10xf32>\r\n %1 = stablehlo.multiply %arg0, %cst_0 : tensor<10xf32>\r\n %2 = stablehlo.custom_call @mhlo.erf(%1) {mhlo.attributes = {}, mhlo.version = 1 : i64} : (tensor<10xf32>) -> tensor<10xf32>\r\n %3 = stablehlo.add %2, %cst : tensor<10xf32>\r\n %4 = stablehlo.multiply %0, %3 : tensor<10xf32>\r\n return %4 : tensor<10xf32>\r\n }\r\n}\r\n```\r\nThe `erf` op in `main` is useless and not erased. I have checked the [composite op pass](https://github.com/pytorch/xla/blob/master/torch_xla/csrc/runtime/stablehlo_composite_helper.cc#L514-L519), it left these useless ops to later `canonicalizer` instead of erasing directly, but the `canonicalizer` didn't handle it... I guess it's caused by the custom call side-effect.\r\n\r\n**The question**: Can the composite op pass erase these ops directly? Is any special reason to avoid the erasing operation here?\r\n\r\n2. Composite op feature can't work in training. Even the proposal of this feature is for inference now (work for export API), I tried to enabled it in training locally, but I found that it reported a warning:\r\n> UserWarning: xla::mark_tensor: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /data4/home/luteng/code/pytorch/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:62.)\r\n return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\r\n\r\nThen the backward graph is not generated.\r\n\r\n**The question**: Is any plan to support composite op feature in training? It seems the missing part is only to add the Autograd for `mark_tensor`, but I'm just a XLA developer and not familiar with PyTorch, I don't know how to add it...", "url": "https://github.com/pytorch/xla/issues/8486", "state": "closed", "labels": [ "question", "stablehlo" ], "created_at": "2024-12-12T02:37:57Z", "updated_at": "2025-05-05T12:32:51Z", "user": "Zantares" }, { "repo": "pytorch/ao", "number": 1403, "title": "ImportError: cannot import name 'weight_only_quant_qconfig' from 'torchao.quantization' (R:\\CogVideoX_v3\\CogVideo\\venv\\Lib\\site-packages\\torchao\\quantization\\__init__.py)", "body": "I am trying to use [CogVideoX1.5-5B-I2V](https://huggingface.co/THUDM/CogVideoX1.5-5B-I2V) with following\r\n\r\nI am on Windows\r\n\r\nEverything installed but still getting this error - version 0.7.0\r\n\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"R:\\CogVideoX_v3\\CogVideo\\inference\\gradio_composite_demo\\app.py\", line 40, in <module>\r\n from torchao.quantization import quantize_, int8_weight_only, weight_only_quant_qconfig\r\nImportError: cannot import name 'weight_only_quant_qconfig' from 'torchao.quantization' (R:\\CogVideoX_v3\\CogVideo\\venv\\Lib\\site-packages\\torchao\\quantization\\__init__.py)\r\nPress any key to continue . . 
.\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n```\r\n\r\nimport torch\r\nfrom diffusers import AutoencoderKLCogVideoX, CogVideoXTransformer3DModel, CogVideoXImageToVideoPipeline\r\nfrom diffusers.utils import export_to_video, load_image\r\nfrom transformers import T5EncoderModel\r\nfrom torchao.quantization import quantize_, int8_weight_only\r\n\r\nquantization = int8_weight_only\r\n\r\ntext_encoder = T5EncoderModel.from_pretrained(\"THUDM/CogVideoX1.5-5B-I2V\", subfolder=\"text_encoder\",\r\n torch_dtype=torch.bfloat16)\r\nquantize_(text_encoder, quantization())\r\n\r\ntransformer = CogVideoXTransformer3DModel.from_pretrained(\"THUDM/CogVideoX1.5-5B-I2V\", subfolder=\"transformer\",\r\n torch_dtype=torch.bfloat16)\r\nquantize_(transformer, quantization())\r\n\r\nvae = AutoencoderKLCogVideoX.from_pretrained(\"THUDM/CogVideoX1.5-5B-I2V\", subfolder=\"vae\", torch_dtype=torch.bfloat16)\r\nquantize_(vae, quantization())\r\n\r\n# Create pipeline and run inference\r\npipe = CogVideoXImageToVideoPipeline.from_pretrained(\r\n \"THUDM/CogVideoX1.5-5B-I2V\",\r\n text_encoder=text_encoder,\r\n transformer=transformer,\r\n vae=vae,\r\n torch_dtype=torch.bfloat16,\r\n)\r\n\r\npipe.enable_model_cpu_offload()\r\npipe.vae.enable_tiling()\r\npipe.vae.enable_slicing()\r\n\r\nprompt = \"A little girl is riding a bicycle at high speed. Focused, detailed, realistic.\"\r\nimage = load_image(image=\"input.jpg\")\r\nvideo = pipe(\r\n prompt=prompt,\r\n image=image,\r\n num_videos_per_prompt=1,\r\n num_inference_steps=50,\r\n num_frames=81,\r\n guidance_scale=6,\r\n generator=torch.Generator(device=\"cuda\").manual_seed(42),\r\n).frames[0]\r\n\r\nexport_to_video(video, \"output.mp4\", fps=8)\r\n```", "url": "https://github.com/pytorch/ao/issues/1403", "state": "closed", "labels": [ "question", "triaged" ], "created_at": "2024-12-11T23:43:15Z", "updated_at": "2024-12-12T01:45:57Z", "user": "FurkanGozukara" }, { "repo": "pytorch/TensorRT", "number": 3317, "title": "\u2753 [Question] Jetson AGX Orin Install in Jetpack 6.1 Build did NOT complete successfully", "body": "## \u2753 Question\r\n\r\nI follow this [tutorial](https://pytorch.org/TensorRT/getting_started/installation.html) to install Torch-TensorRT, but in the last step:\r\n```\r\n# build and install torch_tensorrt wheel file\r\npython setup.py --use-cxx11-abi install --user\r\n```\r\nsome errors happened:\r\n```\r\nusing CXX11 ABI build\r\nJetpack version: 6.1\r\nbuilding libtorchtrt cmd=['/usr/bin/bazel', 'build', '//:libtorchtrt', '--compilation_mode=opt', '--distdir=third_party/dist_dir/x86_64-linux-gnu', '--config=linux', '--platforms=//toolchains:jetpack_6.1']\r\nDEBUG: /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/external/rules_python~/python/private/python.bzl:46:10: WARNING: Ignoring toolchain 'python_3_11' from module 'rules_pkg': Toolchain 'python_3_11' from module 'torch_tensorrt' already registered Python version 3.11 and has precedence\r\nINFO: Analyzed target //:libtorchtrt (127 packages loaded, 13849 targets configured).\r\nERROR: /home/lab223/TensorRT/core/util/BUILD:60:11: Compiling core/util/Exception.cpp failed: (Exit 1): gcc failed: error executing CppCompile command (from target //core/util:exception) /home/lab223/anaconda3/envs/rnw/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG ... 
(remaining 25 arguments skipped)\r\n\r\nUse --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging\r\ngcc: fatal error: cannot execute 'cc1plus': execvp: No such file or directory\r\ncompilation terminated.\r\nTarget //:libtorchtrt failed to build\r\nUse --verbose_failures to see the command lines of failed build steps.\r\nINFO: Elapsed time: 8.444s, Critical Path: 4.05s\r\nINFO: 329 processes: 329 internal.\r\nERROR: Build did NOT complete successfully\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): **2.5.0**\r\n - CPU Architecture: **arm64(Jetson AGX Orin)**\r\n - OS (e.g., Linux): **Linux**\r\n - How you installed PyTorch: **pip**\r\n - Build command you used (if compiling from source): **python setup.py --use-cxx11-abi install --user**\r\n - Are you using local sources or building from archives: **building from archives**\r\n - Python version: **3.10.15**\r\n - CUDA version: **12.6**\r\n - GPU models and configuration: -\r\n - Any other relevant information: Install torch_tensorrt in the model's anaconda virtual environment\r\n\r\n## Additional context\r\n\r\nplease help me!thanks!!!!\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3317", "state": "open", "labels": [ "question" ], "created_at": "2024-12-11T09:21:09Z", "updated_at": "2024-12-18T19:16:46Z", "user": "breknddone" }, { "repo": "pytorch/ao", "number": 1397, "title": "\"Where is the overloaded function for torch.nn.functional.linear(aqt, original_weight_tensor, bias)? \"", "body": "Here is an example\r\n\r\nint8_dynamic_activation_int8_weight\r\n\r\naqt: \r\n\r\nAffineQuantizedTensor(tensor_impl=PlainAQTTensorImpl(data=tensor([[ 5, -2, 24, ..., 17, 73, 54],\r\n [ -30, -19, -53, ..., -9, -33, 55],\r\n [ -7, -20, -28, ..., 47, 71, -15],\r\n ...,\r\n [ 36, 8, 40, ..., 13, -10, 45],\r\n [ -38, -12, 47, ..., -22, 0, -29],\r\n [ 20, -127, 52, ..., 18, 27, -36]], dtype=torch.int8)... , scale=tensor([0.0293, 0.0233, 0.0271, 0.0234, 0.0209, 0.0227, 0.0247, 0.0328, 0.0270,\r\n 0.0215, 0.0245, 0.0209, 0.0325, 0.0232, 0.0238, 0.0267, 0.0237, 0.0202,\r\n 0.0249, 0.0239, 0.0255, 0.0246, 0.0225, 0.0288, 0.0194, 0.0215, 0.0224,\r\n 0.0210, 0.0253, 0.0189, 0.0240, 0.0228, 0.0208, 0.0211, 0.0295, 0.0275,\r\n 0.0200, 0.0250, 0.0202, 0.0269, 0.0266, 0.0203, 0.0223, 0.0246, 0.0212,\r\n 0.0217, 0.0246, 0.0203, 0.0219, 0.0237, 0.0216, 0.0191, 0.0213, 0.0227,\r\n 0.0330, 0.0194, 0.0226, 0.0162, 0.0203, 0.0284, 0.0218, 0.0208, 0.0254,\r\n 0.0220, 0.0357, 0.0288, 0.0290, 0.0235, 0.0218, 0.0188, 0.0279, 0.0232,\r\n 0.0238, 0.0195, 0.0256, 0.0255, 0.0204, 0.0198, 0.0211, 0.0219, 0.0262,\r\n 0.0253, 0.0246, 0.0177, 0.0209, 0.0216, 0.0253, 0.0261, 0.0215, 0.0257,\r\n 0.0240, 0.0197, 0.0206, 0.0270, 0.0243, 0.0218, 0.0261, 0.0350, 0.0238,\r\n 0.0243])... , zero_point=tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\r\n 0., 0., 0., 0.])... 
, _layout=PlainLayout()), block_size=[1, 200], shape=torch.Size([100, 200]), device=cpu, dtype=torch.float32, requires_grad=False)\r\n \r\n \r\noriginal_weight_tensor: \r\n\r\nAffineQuantizedTensor(tensor_impl=PlainAQTTensorImpl(data=tensor([[ 127, 0, 0, ..., 0, 0, 0],\r\n [ 127, 0, 0, ..., 0, 0, 0],\r\n [ 127, 0, 0, ..., 0, 0, 0],\r\n ...,\r\n [ 47, 36, -70, ..., 49, 71, 5],\r\n [ 117, -2, -91, ..., -112, 9, -81],\r\n [ -67, -91, 114, ..., 51, 11, -126]], dtype=torch.int8)... , scale=tensor([7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,\r\n 2.3313e-02, 2.3492e-02, 2.3277e-02, 2.3458e-02, 2.3438e-02, 2.3528e-02,\r\n 2.3352e-02, 2.3522e-02, 2.3500e-02, 2.3332e-02, 2.3376e-02, 2.3481e-02,\r\n 2.3275e-02, 2.3509e-02, 2.3453e-02, 2.3460e-02, 2.3525e-02, 2.3489e-02,\r\n 2.3482e-02, 2.3436e-02, 2.3499e-02, 2.3523e-02, 2.3519e-02, 2.3320e-02,\r\n 2.3503e-02, 2.3453e-02, 2.3514e-02, 2.3496e-02, 2.3330e-02, 2.3444e-02,\r\n 2.3483e-02, 2.3428e-02, 2.3495e-02, 2.3445e-02, 2.3437e-02, 2.3505e-02,\r\n 2.3338e-02, 2.3517e-0", "url": "https://github.com/pytorch/ao/issues/1397", "state": "open", "labels": [], "created_at": "2024-12-10T10:05:42Z", "updated_at": "2024-12-11T06:41:30Z", "user": "Lenan22" }, { "repo": "pytorch/torchtitan", "number": 724, "title": "Issue: Loss Discrepancy Between FSDP1 and FSDP2 with AdamW Optimizer", "body": "We observed a loss discrepancy between FSDP1 and FSDP2 while training with the AdamW optimizer. 
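\r\n\r\nTo make the comparison concrete, the kind of run where we see the gap looks roughly like this (illustrative sketch only, not our actual training script: `model_fsdp1` / `model_fsdp2` stand for identical copies of the model, `batches` for our dataloader, and the process group is assumed to be initialized already):\r\n```python\r\nimport torch\r\nfrom torch.distributed.fsdp import FullyShardedDataParallel as FSDP  # FSDP1\r\nfrom torch.distributed._composable.fsdp import fully_shard  # FSDP2\r\n\r\ndef train_steps(model, batches, steps=100):\r\n    # optimizer is created after wrapping in both cases\r\n    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)\r\n    losses = []\r\n    for _, (x, y) in zip(range(steps), batches):\r\n        opt.zero_grad()\r\n        loss = torch.nn.functional.cross_entropy(model(x), y)\r\n        loss.backward()\r\n        opt.step()\r\n        losses.append(loss.item())\r\n    return losses\r\n\r\nlosses_fsdp1 = train_steps(FSDP(model_fsdp1), batches)\r\nfully_shard(model_fsdp2)\r\nlosses_fsdp2 = train_steps(model_fsdp2, batches)\r\n# the two loss curves start to drift apart after a few steps\r\n```\r\n\r\n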
Are you aware of any known issues with the AdamW optimizer and FSDP2 that might contribute to this behavior?", "url": "https://github.com/pytorch/torchtitan/issues/724", "state": "closed", "labels": [ "question" ], "created_at": "2024-12-09T19:45:45Z", "updated_at": "2025-08-21T02:57:39Z", "user": "Teng-xu" }, { "repo": "pytorch/torchtitan", "number": 723, "title": "Context parallelism understanding", "body": "Hi \r\n\r\nWe are recently testing the CP parallelism strategy, for a 2D configuration: FSDP+CP. \r\nFrom what we know, CP is to slice the sequence length, as attention kernel needs to compute the attention for the whole sequence, which means each GPU needs to gather all the sharded KV cache using some collective communication kernels. \r\n\r\nHowever, we didn't see any such kind of kernels, only found the All-Gather for parameters in pre-forward phase. \r\n![image](https://github.com/user-attachments/assets/23d89f06-ef01-4a58-b713-5864be99487e)\r\n\r\nIs there anything that we misunderstood? please add your comments for better understanding. \r\n\r\nThanks.", "url": "https://github.com/pytorch/torchtitan/issues/723", "state": "open", "labels": [ "question", "module: context parallel" ], "created_at": "2024-12-09T03:07:27Z", "updated_at": "2024-12-20T21:45:48Z", "user": "jinsong-mao" }, { "repo": "pytorch/ao", "number": 1390, "title": "AO and Automated Mixed Precision", "body": "Can we clarify in the readme what are the best practices to use ao at inference with a pytorch AMP trainer model/checkpoint?", "url": "https://github.com/pytorch/ao/issues/1390", "state": "open", "labels": [ "topic: documentation", "question" ], "created_at": "2024-12-08T13:52:15Z", "updated_at": "2025-03-17T20:46:24Z", "user": "bhack" }, { "repo": "pytorch/xla", "number": 8466, "title": "Useful Q8 Kernels For TPUs/XLA Support", "body": "## \u2753 Questions and Help\r\nI'm looking at this repo here [KONAKONA666/q8_kernels](https://github.com/KONAKONA666/q8_kernels).\r\n\r\nThe Q8 functions are being used [is located here](https://github.com/KONAKONA666/q8_kernels/tree/main/q8_kernels/functional), the [cuda kernels here](https://github.com/KONAKONA666/q8_kernels/tree/main/csrc), and I was curious if any of these have already been implemented or integrated elsewhere? I'm not particularly familiar with porting custom kernels", "url": "https://github.com/pytorch/xla/issues/8466", "state": "open", "labels": [ "question", "fp8" ], "created_at": "2024-12-06T07:01:59Z", "updated_at": "2025-02-13T15:17:36Z", "user": "radna0" }, { "repo": "pytorch/xla", "number": 8454, "title": "how to auto convert back to bfloat16 after conv1 and conv2 ", "body": "## \u2753 Questions and Help\r\nI have an tensor with dtype torch.bfloat16, in kaggle v3-8, after the conv1 and conv2 operation the return type is torch.float32. Any way (environent varable or so) to convert the return type back to torch.bfloat16?", "url": "https://github.com/pytorch/xla/issues/8454", "state": "open", "labels": [ "question" ], "created_at": "2024-12-05T09:59:35Z", "updated_at": "2025-02-13T14:35:46Z", "user": "ghost" }, { "repo": "pytorch/xla", "number": 8451, "title": "Is it possible to execute jax code in torch_xla?", "body": "## Is it possible to execute jax code in torch_xla?\r\nAfter reading the docs, I realized that customized kernels via Jax Pallas can be adopted as kernels. I wonder if it is possible to execute jax code in torch_xla. It seems torch_xla._XLAC._xla_tpu_custom_call only accept custom kernels. 
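\r\n\r\nThe closest thing I have found so far is wrapping a Pallas kernel so it can be called on XLA tensors, roughly like this (sketch based on my reading of the custom-kernel docs, so the exact API may have drifted):\r\n```python\r\nimport jax\r\nfrom jax.experimental import pallas as pl\r\n\r\nimport torch\r\nimport torch_xla.core.xla_model as xm\r\nfrom torch_xla.experimental.custom_kernel import make_kernel_from_pallas\r\n\r\ndef add_vectors_kernel(x_ref, y_ref, o_ref):\r\n    o_ref[...] = x_ref[...] + y_ref[...]\r\n\r\ndef add_vectors(x: jax.Array, y: jax.Array) -> jax.Array:\r\n    return pl.pallas_call(add_vectors_kernel, out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype))(x, y)\r\n\r\n# the second argument maps input shapes/dtypes to the output shapes/dtypes\r\npt_kernel = make_kernel_from_pallas(add_vectors, lambda x, y: [(x.shape, x.dtype)])\r\n\r\ndevice = xm.xla_device()  # needs an actual TPU device\r\nx = torch.arange(8, dtype=torch.int32, device=device)\r\ny = torch.arange(8, dtype=torch.int32, device=device)\r\nprint(pt_kernel(x, y))\r\n```\r\nBut this still requires hand-writing the kernel in Pallas rather than reusing arbitrary jax functions. 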
Is there a way to execute jax ir code?\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/8451", "state": "closed", "labels": [], "created_at": "2024-12-04T17:54:55Z", "updated_at": "2024-12-08T12:24:51Z", "comments": 2, "user": "lime-j" }, { "repo": "pytorch/gloo", "number": 399, "title": "How to specify ai_family explicitly ", "body": "we note that gloo supports ipv4 and ipv6 by setting ai_family = AF_UNSPEC and deciding a real one at runtime. However, in our cluster, we got an exception about ai_family mismatching. Our cluster contains both ipv4 and ipv6 network stacks. How can we specify ai_family explicitly?\n\nWe run pyroch, and get below exception.\nRuntimeError: [enforce fail at ../third_party/gloo/gloo/transport/tcp/[device.cc:276](http://device.cc:276/)] ss1.ss_family == ss2.ss_family. 2 vs 10", "url": "https://github.com/pytorch/gloo/issues/399", "state": "open", "labels": [], "created_at": "2024-12-04T08:30:50Z", "updated_at": "2025-02-10T09:06:52Z", "user": "NEWPLAN" }, { "repo": "pytorch/tutorials", "number": 3174, "title": "\ud83d\udca1 [REQUEST] - Tutorial for exporting popular class of models, showing the unique challenges faced and how to address them", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\r\n\r\nThe gaming community cares about certain classes of models like pose estimation, instance segmentation, video classification. When we try to export OSS implementations of these models, we run into unique challenges with `torch.export`\r\n\r\nCurrently, we have tutorials showing usage of export and talking about the core export-related concepts to keep in mind with simple examples. We also have `ExportDB` which has information on unsupported constructs with simple examples. However, practically, when running export on many models, its not very clear how does once go about addressing the issues.\r\n\r\nThis tutorial aims to do the reverse. Pick 4 models which are popular, try to export them, show the errors we run into and how do we solve them. The problems being solved are generic enough to be applicable to a range of models.\r\n\r\n### Existing tutorials on this topic\r\n\r\nhttps://pytorch.org/docs/stable/export.html\r\nhttps://pytorch.org/tutorials/intermediate/torch_export_tutorial.html\r\nhttps://pytorch.org/docs/stable/generated/exportdb/index.html\r\n\r\n### Additional context\r\n\r\nhttps://github.com/pytorch/pytorch/issues/138111\r\nhttps://github.com/pytorch/pytorch/issues/138120\r\n\r\n_No response_\r\n\r\ncc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4", "url": "https://github.com/pytorch/tutorials/issues/3174", "state": "closed", "labels": [ "module: export" ], "created_at": "2024-12-03T20:35:42Z", "updated_at": "2025-01-21T18:22:54Z", "user": "agunapal" }, { "repo": "pytorch/xla", "number": 8430, "title": "Request for Wheel with Older GLIBC", "body": "## \u2753 Questions and Help\r\nHi, I have installed torch-xla from https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.5.0-cp311-cp311-manylinux_2_28_x86_64.whl. \"manylinux_2_28\" indicates that it is compiled with GLIBC 2.28. However, when I installed and tried to import torch_xla, it said GLIBC 2.29 was not found. Upgrading GLIBC on the server is not possible. I kindly request any help on this. It will be very helpful if there is a pre-compiled wheel that can run on a server with GLIBC 2.28 (e.g., RedHat 8)\r\n\r\nI have pytorch-2.5.1-cu118 installed in python 3.11. 
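\r\n\r\nFor reference, the libc the interpreter links against on this host (quick check):\r\n```python\r\nimport platform\r\nprint(platform.libc_ver())  # ('glibc', '2.28') on this machine\r\n```\r\n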
My system is RHEL 8.\r\n", "url": "https://github.com/pytorch/xla/issues/8430", "state": "open", "labels": [ "question", "build" ], "created_at": "2024-12-03T17:50:24Z", "updated_at": "2025-02-13T14:47:34Z", "user": "ASU-ScopeX-Lab" }, { "repo": "pytorch/vision", "number": 8777, "title": "Documentation for the expected input dimension of the model class", "body": "### \ud83d\udcda The doc issue\n\nThe built-in models are really convenient. However, the documentation usually did not specified the expected input dimension, I always find it troublesome to confirm what is the correct input dimension for the model class that i want to use. \r\n\r\nFor example:\r\nhttps://pytorch.org/vision/main/models/generated/torchvision.models.resnet18.html\r\nhttps://pytorch.org/vision/main/models/generated/torchvision.models.swin_t.html\r\nhttps://pytorch.org/vision/main/models/generated/torchvision.models.video.swin3d_b.html\r\n\r\nIs there clear documentation for this issue? Or is there a simple and clear rule that i can use (e.g., a rule that were used to develop these model class in pytorch that are consistent throughout?)\r\n\r\n\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/8777", "state": "closed", "labels": [], "created_at": "2024-12-02T17:55:40Z", "updated_at": "2024-12-03T10:30:23Z", "comments": 2, "user": "hzhz2020" }, { "repo": "pytorch/torchtitan", "number": 709, "title": "First Shard Group Save and Load Checkpoint for HSDP", "body": "Based on my understanding, current strategy:\r\n\t1.\tAll ranks currently read and load the checkpoint.\r\n\t2.\tAll ranks also save and write the checkpoint.\r\n\r\nI have a question regarding the HSDP case:\r\nIf different shard groups write data to storage, could this lead to data corruption?\r\nIdeally, should only the first shard group read the data, broadcast it, and handle writing to ensure consistency?", "url": "https://github.com/pytorch/torchtitan/issues/709", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-29T22:20:42Z", "updated_at": "2025-01-08T07:52:58Z", "user": "qsh-zh" }, { "repo": "pytorch/rl", "number": 2618, "title": "[Feature Request] Provide documentation on how to use CatFrames with a data collector and replay buffer for images", "body": "## Motivation\r\nUsing CatFrames for inference is fairly straightforward and is already well documented. \r\nThat being said, using CatFrames to reconstruct a stack of frames when sampling from the replay buffer is not so straightforward I find (subjective) and is not explicitly documentd for images (objective).\r\nUsing frame stacking for Visual RL is very common practice so I feel like the community would benefit from getting a better documentation on how to use CatFrames **for images**. \r\n\r\n## Solution\r\nProvide a clear documentation that explains everything in details (no magic flags/magic values) on how to use CatFrames with a data collector and replay buffer (both extend and sample() method should be shown) for images.\r\n\r\nI have created a gist of my attempt to use CatFrames for images and while the inference part works, the stack frames retrieved from the replay buffer do not make sense. \r\n\r\nhttps://gist.github.com/AlexandreBrown/fe378f26a87bdc40c5995dcc7d42f482 \r\n\r\nAny help on how to make the last part where we sample from the replay buffer return the correct CatFrames data is appreciated. 
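\r\n\r\nFor context, the inference-side pattern that does work for me looks roughly like the following (simplified; the Atari env is just an example and assumes gymnasium with the Atari ROMs installed):\r\n```python\r\nfrom torchrl.envs import GymEnv, TransformedEnv\r\nfrom torchrl.envs.transforms import CatFrames, Compose, ToTensorImage\r\n\r\nenv = TransformedEnv(\r\n    GymEnv('ALE/Breakout-v5', from_pixels=True, pixels_only=True),\r\n    Compose(\r\n        ToTensorImage(in_keys=['pixels']),\r\n        CatFrames(N=4, dim=-3, in_keys=['pixels']),\r\n    ),\r\n)\r\ntd = env.reset()\r\nprint(td['pixels'].shape)  # 4 frames concatenated along the channel dim\r\n```\r\nIt is the replay-buffer side of this that I cannot get right.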
\r\n\r\n## Contributions \r\nI am willing to work on the PR for the documentation update if someone can help me get the [MVP script](https://gist.github.com/AlexandreBrown/fe378f26a87bdc40c5995dcc7d42f482) working.\r\n\r\n## Checklist\r\n\r\n- [x] I have checked that there is no similar issue in the repo (**required**)\r\n", "url": "https://github.com/pytorch/rl/issues/2618", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-11-29T16:57:42Z", "updated_at": "2024-11-29T16:58:06Z", "user": "AlexandreBrown" }, { "repo": "pytorch/TensorRT", "number": 3307, "title": "\u2753 [Question] TensorRT Export Failure with Large Input Sizes", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\nI'm trying to export a torch model that processes large inputs (e.g., 8192x2048). I have noticed that `torch_tensorrt.compile` fails with inputs greater than 4096x2048 (I haven't tried them all, only powers of 2). Specifically, the conversion fails for convolution and ReLU operations with a \"No valid tactics\" and \"Illegal memory access\" error:\r\n```\r\n[1A2024-11-29 16:56:42,307 - torch_tensorrt [TensorRT Conversion Context] - ERROR - [scopedCudaResources.cpp::~ScopedCudaStream::55] Error Code 1: Cuda Runtime (an illegal memory access was encountered)\r\n2024-11-29 16:56:42,311 - torch_tensorrt [TensorRT Conversion Context] - ERROR - IBuilder::buildSerializedNetwork: Error Code 10: Internal Error (Could not find any implementation for node [CONVOLUTION]-[aten_ops.convolution.default]-[teacher.3/convolution_5] + [RELU]-[aten_ops.relu.default]-[teacher.4/relu_4].)\r\n2024-11-29 16:56:42,312 - [MODEL EXPORT] - ERROR - TensorRT export failed: \r\nTraceback (most recent call last):\r\n File \"/nfs/home/bragagnolo/qinstinct-fabric-inspection/tools/launchers.py\", line 398, in <module>\r\n export(\r\n File \"/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/nfs/home/bragagnolo/qinstinct-fabric-inspection/tools/launchers.py\", line 298, in export\r\n trt_model = torch_tensorrt.compile(model, **compile_spec)\r\n File \"/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/_compile.py\", line 269, in compile\r\n trt_graph_module = dynamo_compile(\r\n File \"/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py\", line 288, in compile\r\n trt_gm = compile_module(\r\n File \"/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py\", line 464, in compile_module\r\n trt_module = convert_module(\r\n File \"/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py\", line 142, in convert_module\r\n interpreter_result = interpret_module_to_result(\r\n File \"/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py\", line 121, in interpret_module_to_result\r\n interpreter_result = interpreter.run()\r\n File \"/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 635, in run\r\n assert serialized_engine\r\nAssertionError\r\n```\r\n\r\n<!-- A clear and concise description of what you have already done. 
-->\r\n\r\nHere attached is the script and full output log: [issue.zip](https://github.com/user-attachments/files/17961259/issue.zip)\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): 2.5.1+cu121\r\n - TorchTensorRT Version: 2.5.0\r\n - CPU Architecture: AMD EPYC 7543 32-Core Processor\r\n - OS (e.g., Linux): Ubuntu 22.04.5 LTS\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Python version: 3.10.12\r\n - CUDA version: Cuda compilation tools, release 12.1, V12.1.66 Build cuda_12.1.r12.1/compiler.32415258_0\r\n - GPU models and configuration: NVIDIA A100-SXM4-80GB, on SLURM with MIG enabled.\r\n\r\nIs there any limit to the input size when converting using torch_tensorrt? Any solution to this problem?\r\n\r\nThanks.\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3307", "state": "open", "labels": [ "question" ], "created_at": "2024-11-29T16:01:14Z", "updated_at": "2024-12-04T15:53:40Z", "user": "AndreaBrg" }, { "repo": "pytorch/pytorch", "number": 141746, "title": "How to specify the port for processes with rank > 1 in the Gloo communication backend?", "body": "In Pytorch, when performing distributed training using gloo as the communication backend, you only need to specify master_addr and master_port; other processes will actively connect and use random ports for initialization. May I ask if it is possible for other processes to perform initialization by specifying the port?\n\ncc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o", "url": "https://github.com/pytorch/pytorch/issues/141746", "state": "open", "labels": [ "oncall: distributed", "triaged" ], "created_at": "2024-11-28T02:07:49Z", "updated_at": "2024-12-19T03:52:52Z", "user": "tecaccc" }, { "repo": "pytorch/torchtitan", "number": 700, "title": "Is `autocast` needed with FSDP2?", "body": "Hi, is it necessary to wrap the forward pass in `autocast` when using FSDP2? I noticed that the `torchtitan` training loop does not.\r\n\r\nIf I wrap in `torch.autocast(device_type=\"cuda\", dtype=torch.bfloat16)` my matmuls will be `bfloat16`, but my softmaxes (say) will be in `float32`. This behavior requires the autocast wrapper:\r\n```python\r\nt = torch.randn(100, device=\"cuda\", dtype=torch.bfloat16)\r\n\r\nwith torch.autocast(device_type=\"cuda\", dtype=torch.bfloat16):\r\n out = t.softmax(dim=-1)\r\n\r\nout.dtype # torch.float32\r\n\r\n# Without autocast:\r\nt.softmax(dim=-1).dtype # torch.bfloat16\r\n``` \r\nThis is the usual way to do DDP or non-distributed mixed-precision training.\r\n\r\nIt seems to me that this behavior is lost in the `torchtitan` training loop which doesn't use the `autocast` [context manager](https://github.com/garrett361/torchtitan/blob/3247841423429faf37bdf6918204350db293e482/train.py#L308-L314). Is this not true? Does FSDP2 somehow still perform the upcast for the usual upcasted amp ops like softmax? Not seeing how it might do so, and can't test easily at the moment.\r\n\r\nI believe I correctly understand that `MixedPrecisionPolicy` controls the `dtype`s that weights are held in, reductions are performed in, and whether to cast a given module's outputs to a certain `dtype`, but that is all orthogonal to the dispatcher flags that `autocast` controls, IIUC.\r\n\r\nRelates to #600 and #591. 
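\r\n\r\nFor concreteness, the two variants I am comparing look roughly like this (sketch only: `model` and `batch` are placeholders, the process group / device mesh is assumed to be initialized, and the FSDP2 import path is the `_composable` one current builds expose):\r\n```python\r\nimport torch\r\nfrom torch.distributed._composable.fsdp import fully_shard, MixedPrecisionPolicy\r\n\r\n# (a) FSDP2 mixed precision only: params all-gathered in bf16, gradients reduced in fp32\r\nfully_shard(\r\n    model,\r\n    mp_policy=MixedPrecisionPolicy(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),\r\n)\r\nloss = model(batch).mean()  # softmax etc. run in bf16 here, since params/activations are bf16\r\n\r\n# (b) additionally wrapping the forward in autocast, as in DDP mixed-precision training\r\nwith torch.autocast(device_type='cuda', dtype=torch.bfloat16):\r\n    loss = model(batch).mean()  # ops on autocast's fp32 list (e.g. softmax) are upcast\r\n```\r\nIf variant (a) is already expected to reproduce the fp32 upcasts that autocast applies in (b), that would answer my question.\r\n\r\n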
Also, I believe [OLMo uses autocast with FSDP](https://github.com/allenai/OLMo/blob/9c677c90cc881c37787c71373836d6889ad4de4a/olmo/train.py#L799-L809), but that is FSDP1 last time I checked.\r\n\r\nCC @awgu ", "url": "https://github.com/pytorch/torchtitan/issues/700", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-25T22:32:13Z", "updated_at": "2024-12-05T15:51:06Z", "user": "garrett361" }, { "repo": "pytorch/vision", "number": 8749, "title": "Pretrained weights for ResNet[18, 34, 50, 101] are incorrect", "body": "### \ud83d\udc1b Describe the bug\n\nHi,\r\n\r\nI have been trying to run the pretrained ResNet models. The model weights seem to be incorrect. Below is a code to reproduce the erroneous results:\r\n\r\n```\r\nimport torch\r\nfrom torchvision.models import resnet18, ResNet18_Weights\r\nfrom PIL import Image\r\n\r\nresnet = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)\r\npreprocess = ResNet18_Weights.IMAGENET1K_V1.transforms()\r\n\r\n# !wget \"https://github.com/pytorch/hub/raw/master/images/dog.jpg\"\r\ninput_image = Image.open('dog.jpg')\r\n\r\n# !wget https://upload.wikimedia.org/wikipedia/commons/b/b6/Felis_catus-cat_on_snow.jpg -O cat.jpg\r\n# input_image = Image.open('cat.jpg')\r\n\r\ninput_tensor = preprocess(input_image)\r\ninput_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model\r\n\r\nwith torch.no_grad():\r\n output = resnet(input_batch)\r\n\r\nprobabilities = torch.nn.functional.softmax(output[0], dim=0)\r\n# !wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt\r\nwith open(\"imagenet_classes.txt\", \"r\") as f:\r\n categories = [s.strip() for s in f.readlines()]\r\n\r\n# Show top categories per image\r\ntop5_prob, top5_catid = torch.topk(probabilities, 5)\r\nfor i in range(top5_prob.size(0)):\r\n print(categories[top5_catid[i]], top5_prob[i].item())\r\n```\r\n\r\nOutput is the same for both \"cat.jpg\" and \"dog.jpg\" for ResNet18:\r\n\r\n```\r\nbucket 0.008743884041905403\r\nplunger 0.006772771943360567\r\nhook 0.005883160978555679\r\npaper towel 0.005243286956101656\r\nashcan 0.005110109690576792\r\n```\r\n\r\nThese predictions are clearly incorrect. 
Through a noncomprehensive testing, the garbage output occurs for the model weights:\r\n\r\n```\r\nResNet18_Weights.IMAGENET1K_V1\r\nResNet34_Weights.IMAGENET1K_V1\r\nResNet50_Weights.IMAGENET1K_V1\r\nResNet101_Weights.IMAGENET1K_V1\r\n```\r\n\r\nwhile the output for the following model weights are correct:\r\n\r\n```\r\nResNet50_Weights.IMAGENET1K_V2\r\nResNet101_Weights.IMAGENET1K_V2\r\n```\r\n\r\nMy guess is that the pretrained weight files are linked incorrectly for the V1 models.\n\n### Versions\n\nCollecting environment information...\r\nPyTorch version: 2.5.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.4\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Rocky Linux 9.4 (Blue Onyx) (x86_64)\r\nGCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.34\r\n\r\nPython version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:38:13) [GCC 12.3.0] (64-bit runtime)\r\nPython platform: Linux-5.14.0-427.42.1.el9_4.x86_64-x86_64-with-glibc2.34\r\nIs CUDA available: True\r\nCUDA runtime version: 12.4.131\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: \r\nGPU 0: NVIDIA A10\r\nGPU 1: NVIDIA A10\r\n\r\nNvidia driver version: 550.54.15\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 46 bits physical, 57 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 96\r\nOn-line CPU(s) list: 0-95\r\nVendor ID: GenuineIntel\r\nModel name: Intel(R) Xeon(R) Gold 6342 CPU @ 2.80GHz\r\nCPU family: 6\r\nModel: 106\r\nThread(s) per core: 2\r\nCore(s) per socket: 24\r\nSocket(s): 2\r\nStepping: 6\r\nCPU(s) scaling MHz: 98%\r\nCPU max MHz: 3500.0000\r\nCPU min MHz: 800.0000\r\nBogoMIPS: 5600.00\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities\r\nVirtualization: VT-x\r\nL1d cache: 2.3 MiB (48 instances)\r\nL1i cache: 1.5 MiB (48 instances)\r\nL2 cache: ", "url": "https://github.com/pytorch/vision/issues/8749", "state": "closed", "labels": [], "created_at": "2024-11-25T22:17:58Z", "updated_at": "2024-11-27T18:24:38Z", "comments": 3, "user": "longyuxi" }, { "repo": "pytorch/xla", "number": 8413, "title": "Review documentation in the docs/source/contribute directory", "body": "## \ud83d\udcda Documentation\r\n\r\nReview content in the docs/source/learn directory to improve readability and ensure it aligns with Google documentation standards.\r\n", "url": 
"https://github.com/pytorch/xla/issues/8413", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-11-25T17:13:51Z", "updated_at": "2025-06-02T21:59:49Z", "comments": 2, "user": "mikegre-google" }, { "repo": "pytorch/pytorch", "number": 141473, "title": "How to use torch.compile + HF model?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nProblem: There seem to be 2 ways of using torch compile with a HF model, both of which don't work for all the ways a model inference is called, which is one of 3 possible methods: `generate()`, `forward()` and `__call__()`.\r\n\r\n## Option 1: `model = torch.compile(model)`\r\n\r\nThis works if we use either `forward()` or the `__call__()` methods. But, if we try to call the `.generate()` method (which is the more popular API for inferencing and calls `forward()` internally), we notice that we DON'T seem to be using the compiled model (ex. `TORCH_LOGS=\"dynamo\"` gives no output).\r\n\r\nSimple reproducible example (custom class with `generate` and `forward` like implementations):\r\n```\r\nimport torch\r\n\r\nclass MyModule(torch.nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n\r\n def forward(self, input):\r\n # return average of the inputs\r\n return torch.Tensor(torch.sum(input)/len(input))\r\n\r\n def generate(self, max_tokens, input):\r\n for i in range(max_tokens):\r\n output = self(input) # Doesn't work with either call or forward\r\n input = torch.cat((input, output.view(1)))\r\n return input\r\n\r\nmodel = MyModule()\r\nmodel = torch.compile(model)\r\ninput = torch.rand(4)\r\noutput = model.generate(input=input, max_tokens=3) # THIS DOES NOT WORK!!!\r\n#output = model.forward(input=input) # THIS WORKS\r\n``` \r\n\r\nor use any HF model compile followed by generate:\r\n```\r\nmodel = AutoModelForCausalLM.from_pretrained(\"facebook/opt-125m\")\r\nmodel = torch.compile(model)\r\noutput = model.generate(input_ids, max_new_tokens=100)\r\n```\r\n\r\nThe problem is that the output of `torch.compile(model)` is an `OptimizedModule` object with the `__call__()` set to the compiled forward and `orig_mod` set to `model` itself. \r\nWhen `compiled_model.generate()` is called, this accesses the generate through the `__getattr__()` function which gets the model's generate. That `generate` calls `self()`, which calls the original model's forward instead of the compiled forward.\r\n\r\n## Option 2: `model.compile()`\r\nThe other option is to use the `torch.nn.Module`'s compile, which does an inplace modification where the compiled forward is stored in `_compiled_call_impl` variable and used when `__call__()` is done. But, this only works with the `__call__()` method and does NOT work with the `forward()` method. If the `generate()` internally uses call, then generate works.\r\n\r\n```\r\nmodel.compile()\r\noutput = model.generate(input=input, max_tokens=3) # Works\r\n#output = model.forward(input_ids) # DOES NOT WORK\r\n```\r\n\r\nProblem is that neither of these approaches works with both `generate()` and `forward()` methods. \r\n\r\nAs an aside, I tried a couple of unsuccessful possible fixes:\r\n- Tried if Option 1 could be fixed somehow by setting the `orig_mod.forward` to the compiled forward but that causes infinite recursion because of the circular dependency\r\n- I also tried changing `TorchDynamoContext.__call__()` (in `eval_frame.py`) in the nn.Module case, to internally do `model.compile` instead of creating an OptimizedModule. This fixes things slightly, for ex. 
Option 1 works if it generate uses `call` instead of `forward`, but obviously, not really a solution.\r\n\r\ncc: @chanderg\r\n\r\n### Error logs\r\n\r\n_No response_\r\n\r\n### Versions\r\n\r\nCollecting environment information...\r\nPyTorch version: 2.6.0.dev20241103+cu124\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.4\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.4 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.30.2\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)\r\nPython platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: 12.6.68\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: \r\nGPU 0: NVIDIA A100-SXM4-80GB\r\nGPU 1: NVIDIA A100-SXM4-80GB\r\nGPU 2: NVIDIA A100-SXM4-80GB\r\nGPU 3: NVIDIA A100-SXM4-80GB\r\nGPU 4: NVIDIA A100-SXM4-80GB\r\nGPU 5: NVIDIA A100-SXM4-80GB\r\nGPU 6: NVIDIA A100-SXM4-80GB\r\nGPU 7: NVIDIA A100-SXM4-80GB\r\n\r\nNvidia driver version: 525.105.17\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 48 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): ", "url": "https://github.com/pytorch/pytorch/issues/141473", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: dynamo" ], "created_at": "2024-11-25T05:19:31Z", "updated_at": "2024-11-26T04:21:05Z", "user": "SilverSoldier" }, { "repo": "pytorch/pytorch", "number": 141422, "title": "What is \"recompilation profiler\" in doc? (Seems to have a dangling link)", "body": "### \ud83d\udcda The doc issue\n\nhttps://pytorch.org/docs/stable/torch.compiler_faq.html says:\r\n\r\n![image](https://github.com/user-attachments/assets/83bd3f29-6ed2-4ce3-b93f-65cc25ed412c)\r\n\r\nBut by clicking on it, it jumps to nowhere. I would appreciate it if I could know how to debug this excessive recompilation issue.\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames", "url": "https://github.com/pytorch/pytorch/issues/141422", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: dynamo" ], "created_at": "2024-11-23T06:01:44Z", "updated_at": "2024-11-26T23:22:21Z", "user": "fzyzcjy" }, { "repo": "pytorch/torchtitan", "number": 696, "title": "[question] Need clarification on the purpose and performance benefits of GarbageCollection class", "body": "For the [impl](https://github.com/pytorch/torchtitan/blob/5525d7723175a1b4477bde3034a96f803b6c3fae/torchtitan/utils.py#L104)\r\n\r\nI have several questions about the motivation and use cases for this class:\r\n\r\nCould you provide examples of scenarios where this class can improves performance? 
compare against default Python GC?\r\n\r\nTo my understanding, during backward, activation cuda memory should be released timely when we run backward in computational graph, will the GarbageCollection affect how we release cuda memory?\r\n\r\nWhat are the tradeoffs of disabling automatic GC (gc.disable())?", "url": "https://github.com/pytorch/torchtitan/issues/696", "state": "closed", "labels": [ "documentation", "question" ], "created_at": "2024-11-23T04:39:20Z", "updated_at": "2024-11-26T00:25:12Z", "user": "qsh-zh" }, { "repo": "pytorch/executorch", "number": 7030, "title": "how to build a llama2 runner binary with vulkan backends in the server with intel x86 server", "body": "### \ud83d\udcda The doc issue\n\nhttps://pytorch.org/executorch/stable/native-delegates-executorch-vulkan-delegate.html\r\nhttps://pytorch.org/executorch/stable/build-run-vulkan.html\r\ndear helper, above documentation descripe how to build the LLaMA runner binary on Android with VULKAN backend. however I can't find how to build the LLaMA runner binary onthe server with intel x86 server with vulkan backends. Could you help me about the issue? thank you in advanced.\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @SS-JIA @manuelcandales", "url": "https://github.com/pytorch/executorch/issues/7030", "state": "closed", "labels": [ "module: vulkan", "triaged" ], "created_at": "2024-11-22T03:16:40Z", "updated_at": "2025-12-18T21:39:49Z", "user": "l2002924700" }, { "repo": "pytorch/xla", "number": 8405, "title": "Einsum is not added to the supported list for autocast", "body": "We noticed that einsum is not added to the supported ops list for low precision policy in autocast, is there a reason for that? Does this op have some issues in the support? \r\n", "url": "https://github.com/pytorch/xla/issues/8405", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-11-21T17:25:01Z", "updated_at": "2025-02-17T14:31:09Z", "comments": 3, "user": "avizon-aws" }, { "repo": "pytorch/torchtitan", "number": 687, "title": "Question about FSDP2 + FP8 all gather", "body": "Does FSDP2 work with both FP8 allgather and FP8 linear?", "url": "https://github.com/pytorch/torchtitan/issues/687", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-21T17:13:39Z", "updated_at": "2024-11-21T23:52:06Z", "user": "sbhavani" }, { "repo": "pytorch/xla", "number": 8402, "title": "Kaggle Notebook: model return loss None on TPU", "body": "## \u2753 Questions and Help\r\n\r\nHi, I recieved loss None when training model. 
Anyone can help?\r\n\r\nSimple reproduct kaggle notebook [link](https://www.kaggle.com/code/liondude/notebook548442067d)\r\n\r\n```\r\nimport os\r\nimport time\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\nfrom tqdm import tqdm\r\n\r\nimport datasets\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.optim as optim\r\nimport torch_xla as xla\r\nimport torch_xla.core.xla_model as xm\r\nimport torch_xla.distributed.xla_multiprocessing as xmp\r\nfrom torch_xla.distributed.fsdp.utils import apply_xla_patch_to_nn_linear\r\nimport torch_xla.distributed.parallel_loader as pl\r\nimport torch_xla.core.xla_env_vars as xenv\r\nimport torch_xla.debug.metrics as met\r\nimport torch_xla.distributed.spmd.xla_sharding as xs\r\nfrom torch_xla.distributed.spmd.xla_sharding import Mesh\r\nimport torch_xla.runtime as xr\r\n\r\nimport re\r\nfrom datasets import Dataset, load_dataset\r\nimport transformers\r\nfrom transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy\r\nfrom transformers import AutoConfig, AutoProcessor, AutoTokenizer, AutoModelForCausalLM, DataCollatorWithPadding\r\nfrom peft import PeftModel, PeftConfig, get_peft_model, LoraConfig, TaskType\r\nfrom transformers import logging as hf_logging\r\n\r\nhf_logging.set_verbosity_error()\r\n\r\nos.environ[\"PJRT_DEVICE\"] = \"TPU\"\r\n\r\nclass CFG:\r\n NUM_EPOCHS = 1\r\n BATCH_SIZE = 24\r\n DROPOUT = 0.05\r\n MODEL_NAME = 'unsloth/Qwen2.5-7B-Instruct'\r\n SEED = 2024\r\n MAX_LENGTH = 4096\r\n NUM_WARMUP_STEPS = 128\r\n LR_MAX = 2e-4\r\n NUM_LABELS = 3\r\n LORA_RANK = 16\r\n LORA_ALPHA = 16\r\n LORA_MODULES = ['o_proj', 'v_proj',\"q_proj\", \"k_proj\"]\r\n\r\nFLAGS = {'MAX_INPUT': 64,\r\n 'LOGGING_STEPS': 10,\r\n 'NUM_EPOCHS': 3,\r\n 'BATCH_SIZE': 24,\r\n }\r\n\r\nMAX_INPUT=128\r\nMODEL = \"unsloth/Qwen2.5-7B-Instruct\"\r\n\r\n\r\ndef get_dataset():\r\n tokenizer = AutoTokenizer.from_pretrained(CFG.MODEL_NAME)\r\n tokenizer.pad_token = tokenizer.eos_token\r\n tokenizer.padding_side = 'right'\r\n tokenizer.add_eos_token = True\r\n\r\n # save tokenizer to load offline during inference\r\n tokenizer.save_pretrained('tokenizer')\r\n max_seq_length = 4096\r\n tokenizer_x = AutoTokenizer.from_pretrained(CFG.MODEL_NAME, max_seq_length=max_seq_length)\r\n tokenizer_x.pad_token_id = tokenizer.eos_token_id\r\n df = datasets.load_dataset('stanfordnlp/imdb', split='train')\r\n # df = df['train']\r\n df = df.remove_columns(['label'])\r\n \r\n def preprocess(tasks, train_mode=True):\r\n return {\"text\": 'this is test'}\r\n df = df.map(preprocess, batched = False, remove_columns=df.column_names)\r\n print(df)\r\n def preprocess_function(example):\r\n x = tokenizer(example[\"text\"], truncation=True, max_length=4096, padding='max_length')\r\n \r\n return {\r\n \"input_ids\": x.input_ids,\r\n \"labels\": 0,\r\n \"attention_mask\": x.attention_mask\r\n }\r\n\r\n data_train = df.map(preprocess_function, batched=False, num_proc=4).remove_columns(['text'])\r\n\r\n return data_train, tokenizer, FLAGS\r\n\r\n##############################################################################################################################################\r\ndef train(data_train, tokenizer, FLAGS):\r\n# print('rank', rank)\r\n N_SAMPLES = len(data_train)\r\n STEPS_PER_EPOCH = N_SAMPLES // CFG.BATCH_SIZE\r\n METRICS = {\r\n 'loss': [],\r\n 'accuracy': {'y_true': [], 'y_pred': [] }}\r\n device = xm.xla_device()\r\n print('device', device)\r\n num_devices = xr.global_runtime_device_count() #8\r\n model_axis = 1\r\n mesh_shape = (1, 
num_devices // model_axis, model_axis) # 2x4 on v3-8, 2x2 on v4-8\r\n\r\n device_ids = np.array(range(num_devices))\r\n\r\n mesh = Mesh(device_ids, mesh_shape, ('dcn', 'data', 'model'))\r\n\r\n print('world_size:', xm.xrt_world_size())\r\n rng = torch.Generator().manual_seed(42)\r\n training_loader = torch.utils.data.DataLoader(data_train,\r\n batch_size=FLAGS['BATCH_SIZE'],\r\n collate_fn=DataCollatorWithPadding(tokenizer=tokenizer),\r\n# sampler=train_sampler,\r\n drop_last=True, generator=rng)\r\n\r\n\r\n sharding_spec = xs.ShardingSpec(mesh, (('dcn', 'data'), None))\r\n xla_train_loader = pl.MpDeviceLoader(training_loader,\r\n device = xm.xla_device(),\r\n input_sharding=sharding_spec,\r\n device_prefetch_size=16\r\n )\r\n\r\n base_model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)\r\n\r\n base_model.config.pretraining_tp = 1\r\n\r\n tokenizer.pad_token = tokenizer.eos_token # If pad_token is not set\r\n base_model.config.pad_token_id = tokenizer.pad_token_id # Ensure the model respects the pad_token\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/8402", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-20T09:50:51Z", "updated_at": "2025-02-17T14:32:56Z", "user": "hiwamk" }, { "repo": "pytorch/pytorch", "number": 141118, "title": "Dynamo: how to deal with multiple inheritance (nn.Module/MutableMapping)?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nTensorDict is a MutableMapping object, and is treated as such by torch.compile:\r\n```python\r\nimport torch\r\nfrom tensordict import TensorDict\r\n\r\ntd = TensorDict(a=1, b=2, c=True)\r\n\r\n@torch.compile(fullgraph=True)\r\ndef add1(td):\r\n return TensorDict(**td)+1\r\n\r\nadd1(td)\r\n```\r\n\r\nWe also have a `TensorDictParams` primitive that acts a bit like ParameterList: it is a TensorDict but also an nn.Module. That's useful when you want to set a TensorDict in an nn.Module have have the leaf tensors included in the state_dict, or dispatch ops like `module.to(...)` to the tensors it contains. However, `_dynamo` looks at it like an nn.Module and not a MutableMapping\r\n```python\r\nimport torch\r\nfrom tensordict import TensorDictParams, TensorDict\r\n\r\ntd = TensorDictParams(TensorDict(a=1, b=2, c=True))\r\n\r\n@torch.compile(fullgraph=True)\r\ndef add1(td):\r\n return TensorDict(**td)+1\r\n\r\nadd1(td)\r\n```\r\nbreaks with\r\n```\r\n File \"/Users/vmoens/venv/rl/lib/python3.10/site-packages/torch/_dynamo/variables/dicts.py\", line 357, in call_method\r\n dict_vt = BuiltinVariable.call_custom_dict(tx, dict, args[0])\r\n File \"/Users/vmoens/venv/rl/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py\", line 1432, in call_custom_dict\r\n unimplemented(f\"{user_cls.__name__}(): {args} {kwargs}\")\r\n File \"/Users/vmoens/venv/rl/lib/python3.10/site-packages/torch/_dynamo/exc.py\", line 313, in unimplemented\r\n raise Unsupported(msg, case_name=case_name)\r\ntorch._dynamo.exc.Unsupported: dict(): (UnspecializedNNModuleVariable(TensorDictParams),) {}\r\n```\r\n\r\nMy understanding is that `call_custom_dict` looks at the arg an in one case it's a `variables.MutableMappingVariable` which is fine but in the other it's a `UnspecializedNNModuleVariable` which isn't a mutable mapping.\r\n\r\nSo I guess my question is (other than how can we fix this) how does dynamo look at multiple inheritance? Shouldn't there be a way to tell \"look, this isn't a bird or a fish but a fish that can fly\"? 
\r\n\r\n(note that in this specific case, `smth(**obj)` will call `obj.keys()` followed by `obj.__getitem__` which are ops that compile is happy about - maybe that's what `call_custom_dict` should be doing?)\r\n\r\nHere is a MRE:\r\n```python\r\nimport torch\r\nfrom torch import nn\r\nimport collections\r\n\r\n# class MyWeirdDict(collections.abc.MutableMapping): # Works\r\nclass MyWeirdDict(collections.abc.MutableMapping, nn.Module): # breaks\r\n def __init__(self, **kwargs):\r\n super().__init__()\r\n self._items = kwargs\r\n def keys(self):\r\n return self._items.keys()\r\n def __getitem__(self, item):\r\n return self._items[item]\r\n def __setitem__(self, key, value):\r\n self._items[key] = value\r\n def __delitem__(self, item):\r\n del self._items[item]\r\n def __len__(self):\r\n return len(self._items)\r\n def __iter__(self):\r\n yield from self._items\r\n def __hash__(self):\r\n return hash(id(self))\r\n def items(self):\r\n for k, v in self._items.items():\r\n yield (k, v)\r\n\r\n@torch.compile(fullgraph=True)\r\ndef to_weird_dict(td):\r\n return MyWeirdDict(**td)\r\n\r\nd = MyWeirdDict(a=1, b=2, c=3)\r\nto_weird_dict(d)\r\n```\r\n\r\n### Error logs\r\n\r\nSee above\r\n\r\n### Versions\r\n\r\nnightlies\r\n\r\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames", "url": "https://github.com/pytorch/pytorch/issues/141118", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: dynamo", "dynamo-dicts", "dynamo-nn-modules" ], "created_at": "2024-11-20T09:01:58Z", "updated_at": "2024-12-10T19:22:18Z", "user": "vmoens" }, { "repo": "pytorch/pytorch", "number": 141116, "title": "How to fuse batchnorm to conv2d in the graph exported by torch.export", "body": "I used the torch.export to export my CNN model in eval mode,but the op batchnorm still exists. how to eliminate it. Is there some options in torch.export.export function or I should write a fusion pass by myself. 
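\r\n\r\nOne workaround I am considering is to fold the BatchNorm into the Conv2d before calling export, using the existing eval-mode fusion helper (sketch; `CNN` is the module from the code below):\r\n```python\r\nimport torch\r\nfrom torch.fx.experimental.optimization import fuse\r\n\r\nmodel = CNN().eval()\r\nfused_model = fuse(model)  # symbolically traces and folds BatchNorm2d into the preceding Conv2d\r\nep = torch.export.export(fused_model, (torch.randn(3, 16, 224, 224),))\r\nprint(ep.graph)  # expect no _native_batch_norm_legit_no_training node\r\n```\r\nAlternatively `torch.nn.utils.fusion.fuse_conv_bn_eval(conv, bn)` seems usable to fuse pairs by hand, but an option on the export side would still be preferable if one exists.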
\r\nThanks.\r\ncode:\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\nclass CNN(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.conv1 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3, stride=1, padding=1)\r\n self.bn1 = nn.BatchNorm2d(16)\r\n\r\n def forward(self, x):\r\n x = self.conv1(x) \r\n x = self.bn1(x)\r\n return x\r\n\r\ntorch.manual_seed(0)\r\nmodel=CNN().eval()\r\ninput=torch.randn(3,16,224,224)\r\nep=torch.export.export(model,(input,))\r\nprint(ep.graph)\r\n```\r\ngraph:\r\n```\r\ngraph():\r\n %p_conv1_weight : [num_users=1] = placeholder[target=p_conv1_weight]\r\n %p_conv1_bias : [num_users=1] = placeholder[target=p_conv1_bias]\r\n %p_bn1_weight : [num_users=1] = placeholder[target=p_bn1_weight]\r\n %p_bn1_bias : [num_users=1] = placeholder[target=p_bn1_bias]\r\n %b_bn1_running_mean : [num_users=1] = placeholder[target=b_bn1_running_mean]\r\n %b_bn1_running_var : [num_users=1] = placeholder[target=b_bn1_running_var]\r\n %b_bn1_num_batches_tracked : [num_users=0] = placeholder[target=b_bn1_num_batches_tracked]\r\n %x : [num_users=1] = placeholder[target=x]\r\n %conv2d : [num_users=1] = call_function[target=torch.ops.aten.conv2d.default](args = (%x, %p_conv1_weight, %p_conv1_bias, [1, 1], [1, 1]), kwargs = {})\r\n %_native_batch_norm_legit_no_training : [num_users=1] = call_function[target=torch.ops.aten._native_batch_norm_legit_no_training.default](args = (%conv2d, %p_bn1_weight, %p_bn1_bias, %b_bn1_running_mean, %b_bn1_running_var, 0.1, 1e-05), kwargs = {})\r\n %getitem : [num_users=1] = call_function[target=operator.getitem](args = (%_native_batch_norm_legit_no_training, 0), kwargs = {})\r\n return (getitem,)\r\n```\n\ncc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4", "url": "https://github.com/pytorch/pytorch/issues/141116", "state": "open", "labels": [ "oncall: pt2", "oncall: export" ], "created_at": "2024-11-20T07:46:28Z", "updated_at": "2024-11-20T19:06:46Z", "user": "TingfengTang" }, { "repo": "pytorch/ao", "number": 1315, "title": "How to trigger torchao unit tests?", "body": "We plan to run unit tests when we switch to different torch versions and triton versions.\r\nHow should we leverage with torchao's unit tests to make sure new torch version and triton versions are working?\r\nThanks!", "url": "https://github.com/pytorch/ao/issues/1315", "state": "closed", "labels": [], "created_at": "2024-11-19T22:50:34Z", "updated_at": "2024-12-05T01:43:54Z", "user": "goldhuang" }, { "repo": "pytorch/benchmark", "number": 2543, "title": "How to get benchmark statistics?", "body": "I'm building a CI to test some models on certain types of devices. I want get benchmark statistics like which model cases failed? which tests were skipped and why? 
These statistics will be used to generate a table like this:\r\n\r\n<table>\r\n\t<tr>\r\n\t <th rowspan=\"2\">Devices</th>\r\n\t <th colspan=\"2\">BERT_pytorch</th>\r\n\t <th colspan=\"2\">hf_GPT2</th>\r\n\t</tr>\r\n\t<tr>\r\n\t <th>train</th>\r\n\t <th>eval</th>\r\n\t <th>train</th>\r\n\t <th>eval</th>\r\n\t</tr>\r\n\t<tr>\r\n\t <th>CPU</th>\r\n\t <th>\u2705</th>\r\n\t <th>\u2705</th>\r\n\t <th>\u2705</th>\r\n\t <th>\u2705</th>\r\n\t</tr>\r\n\t<tr>\r\n\t <th>CUDA</th>\r\n\t <th>\u2705</th>\r\n\t <th>\u2705</th>\r\n\t <th>\u2705</th>\r\n\t <th>\u2705</th>\r\n\t</tr>\r\n\t<tr>\r\n\t <th>Foo</th>\r\n\t <th>\u274c (failed)</th>\r\n\t <th>\u2705</th>\r\n\t <th>\u26a0\ufe0f (skipped)</th>\r\n\t <th>\u2705</th>\r\n\t</tr>\r\n</table>\r\n\r\nSo how can I get benchmark statistics? Is there a recommended way to do this? Can anyone give suggestions? Thanks so much!\r\n\r\n", "url": "https://github.com/pytorch/benchmark/issues/2543", "state": "closed", "labels": [], "created_at": "2024-11-19T09:36:22Z", "updated_at": "2025-02-11T08:15:40Z", "user": "shink" }, { "repo": "pytorch/torchchat", "number": 1388, "title": "eval doc does not pass test", "body": "### \ud83d\udc1b Describe the bug\n\nhttps://github.com/pytorch/torchchat/pull/1383 enables `run-docs evaluation` to extract a test script from eval documentation,\r\nto run evaluation script. In turn, this extracts the command\r\n\r\n```\r\npython3 torchchat.py eval stories15M --tasks wikitext --limit 10\r\n```\r\n\r\nfrom the eval doc as a test to ensure that the doc is in fact correct. This appears to be a correct use of eval to me, yet it fails when running as follows:\r\n\r\nhttps://hud.pytorch.org/pr/pytorch/torchchat/1383#33154706429\r\n\r\n```\r\n2024-11-18T18:13:35.1710781Z + python3 torchchat.py eval stories15M --tasks wikitext --limit 10\r\n2024-11-18T18:13:35.1711201Z NumExpr defaulting to 16 threads.\r\n2024-11-18T18:13:35.1711531Z PyTorch version 2.6.0.dev20241002+cu121 available.\r\n2024-11-18T18:13:35.1711768Z \r\n2024-11-18T18:13:35.1711939Z Downloading builder script: 0% 0.00/5.67k [00:00<?, ?B/s]\r\n2024-11-18T18:13:35.1712401Z Downloading builder script: 100% 5.67k/5.67k [00:00<00:00, 37.1MB/s]\r\n2024-11-18T18:13:35.1712808Z Traceback (most recent call last):\r\n2024-11-18T18:13:35.1713182Z File \"/pytorch/torchchat/torchchat.py\", line 100, in <module>\r\n2024-11-18T18:13:35.1713552Z eval_main(args)\r\n2024-11-18T18:13:35.1713905Z File \"/pytorch/torchchat/torchchat/usages/eval.py\", line 238, in main\r\n2024-11-18T18:13:35.1714340Z builder_args = BuilderArgs.from_args(args)\r\n2024-11-18T18:13:35.1714667Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n2024-11-18T18:13:35.1715101Z File \"/pytorch/torchchat/torchchat/cli/builder.py\", line 169, in from_args\r\n2024-11-18T18:13:35.1715520Z return cls(\r\n2024-11-18T18:13:35.1715827Z run_cmd_or_die(f\"docker exec -t {container_name} /exec\")\r\n2024-11-18T18:13:35.1716580Z File \"/home/ec2-user/actions-runner/_work/torchchat/torchchat/test-infra/.github/scripts/run_with_env_secrets.py\", line 39, in run_cmd_or_die\r\n2024-11-18T18:13:35.1717388Z raise RuntimeError(f\"Command {cmd} failed with exit code {exit_code}\")\r\n2024-11-18T18:13:35.1718153Z RuntimeError: Command docker exec -t c2e4cff2805edb5848301b09ed712578d726414222642162007e0e16e7c48ba1 /exec failed with exit code 1\r\n2024-11-18T18:13:35.1718786Z ^^^^\r\n2024-11-18T18:13:35.1719026Z File \"<string>\", line 24, in __init__\r\n2024-11-18T18:13:35.1719475Z File \"/pytorch/torchchat/torchchat/cli/builder.py\", line 76, in 
__post_init__\r\n2024-11-18T18:13:35.1719926Z raise RuntimeError(\r\n2024-11-18T18:13:35.1720431Z RuntimeError: need to specified a valid checkpoint path, checkpoint dir, gguf path, DSO path, or PTE path\r\n```\r\n\r\n\r\n\r\n\n\n### Versions\n\ngithub runner, environment as configured by pytorch test infra", "url": "https://github.com/pytorch/torchchat/issues/1388", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-11-19T05:38:54Z", "updated_at": "2024-12-10T04:41:51Z", "comments": 2, "user": "mikekgfb" }, { "repo": "pytorch/ao", "number": 1310, "title": "[NF4] Various bugs in how NF4 handles `.to()` to move to a different device", "body": "Reproduction\r\n\r\n```python\r\nimport torch\r\nfrom torch import nn\r\nfrom torchao.dtypes.nf4tensor import to_nf4\r\n\r\nx = torch.randn(1024, 1024)\r\nx_nf4 = to_nf4(x)\r\nprint(x_nf4.cuda()) # this will dequantize NF4 -> unwanted\r\nprint(x_nf4.to(device=\"cuda\")) # this will raise error\r\nprint(x_nf4.to(\"cuda\")) # this will do the right thing\r\n\r\n# .cpu() does not move .nf4 to CPU, because call_from_inner_tensors does not call the method on .nf4\r\nx = torch.randn(1024, 1024).cuda()\r\nx_nf4 = to_nf4(x).cpu()\r\nprint(x_nf4.quantized_data.device) # cpu\r\nprint(x_nf4.nf4.device) # cuda:0\r\nprint(x_nf4.to(torch.float32)) # error due to device mismatch\r\n\r\n# not working with nn.Module\r\nlinear = nn.Linear(1024, 1024)\r\nlinear.weight = nn.Parameter(to_nf4(linear.weight.detach()), requires_grad=False)\r\nlinear.cuda() # NF4 weight is not moved to CUDA\r\n# linear.to(\"cuda\") # same problem\r\n\r\nprint(linear.weight.device) # cuda:0\r\nprint(linear.weight.quantized_data.device) # cpu\r\nprint(linear.weight.to(torch.float32).device) # cpu\r\n```\r\n\r\nSummary:\r\n1. `NF4Tensor.cuda()` will dequantize -> this is unwanted\r\n2. `NF4Tensor.to(device=\"cuda\")` will raise `IndexError`, since `args[1]` does not exist\r\n3. `NF4Tensor.cpu()` does not move `.nf4` attribute -> cannot dequantize\r\n4. Does not work with `nn.Module.to(device)`\r\n\r\n- IMO, the semantics `NF4Tensor.to(torch.float32)` will dequantize is the culprit that causes these troubles + it is not consistent with AQT behavor. If `.to(dtype)` does not dequantize (only change appearance dtype), we only need to implement `aten._to_copy` instead of `Tensor.cpu`, `Tensor.to` and myriad of others. 
Though I understand this design is to make NF4 feels more like a true dtype.\r\n- I think it makes more sense to designate `NF4Tensor.dequantize()` as the method to dequantize the tensor (also consistent with plain Tensor behavior, though plain `Tensor.dequantize()` will always return FP32), instead of the current situation (`NF4Tensor.dequantize()` is a static method for lookup table, while `NF4Tensor.get_original_weight()` does dequant)\r\n- Changing this is BC, so we probably leave it as is.", "url": "https://github.com/pytorch/ao/issues/1310", "state": "closed", "labels": [ "bug" ], "created_at": "2024-11-19T04:31:35Z", "updated_at": "2024-11-26T06:19:03Z", "user": "gau-nernst" }, { "repo": "pytorch/torchchat", "number": 1385, "title": "Update dead link in https://github.com/pytorch/torchchat/blob/main/docs/quantization.md", "body": "### \ud83d\udc1b Describe the bug\n\nThere is a dead link https://github.com/pytorch/torchchat/blob/main/torchchat/utils/quantize.py#L1260-L1266 in https://github.com/pytorch/torchchat/blob/main/docs/quantization.md like `See the available quantization schemes [here](https://github.com/pytorch/torchchat/blob/main/torchchat/utils/quantize.py#L1260-L1266).`. Could you please help update it to show the quantization schemes examples?\n\n### Versions\n\n#", "url": "https://github.com/pytorch/torchchat/issues/1385", "state": "closed", "labels": [ "documentation", "Quantization" ], "created_at": "2024-11-19T01:34:54Z", "updated_at": "2024-12-09T22:37:22Z", "comments": 4, "user": "yanbing-j" }, { "repo": "pytorch/xla", "number": 8390, "title": "[TPU][torch.compile] How to introduce in-place custome Ops through Pallas ?", "body": "## \u2753 Questions and Help\r\n\r\nHi torch.xla team, thank you so much for the great work on making pytoch available on XLA devices! We have had great experience with it so far. \r\n\r\nWe are exploring the idea of adding custome Pallas kernels in the graph and using it along with `torch.compile(..., backend='openxla')` for TPUs. However, we have hit a limitation that the operator cannot be in-place, which is very important for performance reasons. 
\r\n\r\nI have stripped down a minimal reproduceable example, happy to provide more details:\r\n\r\n```\r\nfrom typing import List, Callable\r\nimport jax\r\nimport jax.numpy as jnp\r\nfrom jax.experimental import pallas as pl\r\nfrom jax.experimental.pallas import tpu as pltpu\r\nimport torch\r\nimport torch_xla\r\nfrom torch_xla.experimental import custom_kernel\r\nfrom functools import partial\r\n\r\n\r\ndef plus_one_kernel(x_ref, o_ref):\r\n o_ref[:] = o_ref[:] + 1\r\n\r\n@partial(jax.jit, donate_argnums=[0])\r\ndef plus_one_pallas(x: jax.Array):\r\n size = x.shape[0]\r\n return pl.pallas_call(\r\n plus_one_kernel,\r\n grid=(1, 1),\r\n out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),\r\n input_output_aliases={0:0}\r\n )(x)\r\n\r\n@torch.library.custom_op(\"xla::plus_one_\", mutates_args=(\"x\", ))\r\ndef plus_one_(x: torch.Tensor) -> None:\r\n plus_one_pt = torch_xla.experimental.custom_kernel.make_kernel_from_pallas(\r\n plus_one_pallas, output_shape_dtype_fn = lambda x: [(x.shape, x.dtype)]\r\n )\r\n plus_one_pt(x)\r\n\r\ndef fn(x):\r\n torch.ops.xla.dynamo_set_buffer_donor_(x, True)\r\n return plus_one_(x)\r\n\r\nfn = torch.compile(fn, backend=\"openxla\")\r\n\r\nx = torch.ones(4, dtype=torch.bfloat16, device='xla')\r\n\r\nfn(x)\r\nprint(x)\r\n```\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/8390", "state": "closed", "labels": [], "created_at": "2024-11-18T19:03:23Z", "updated_at": "2024-11-18T19:08:50Z", "user": "xinli-sw" }, { "repo": "pytorch/xla", "number": 8389, "title": "Prepare a subsection to educate users on the PyTorch workloads on AI-Hypercomputer", "body": "## \ud83d\udcda Documentation\r\n\r\nAI-Hypercomputer is where customers and users can find optimized implementation of representative models.\r\n\r\nPlease add a section in the PyTorchXLA README page (and the html documentation) that introduces this concept and points the users to the following resource: https://github.com/AI-Hypercomputer/tpu-recipes\r\n\r\nKeep in mind that the AI-Hypercomputer tpu-recipe repo is WIP and gradually grows in scope.\r\n\r\nRead more [context on AI-Hypercomputer ](https://cloud.google.com/blog/products/ai-machine-learning/introducing-cloud-tpu-v5p-and-ai-hypercomputer?e=48754805)\r\n\r\nTimeline: would be great to add this documentation to the repo for 2.6 branch cut.\r\n\r\ncc @tengyifei ", "url": "https://github.com/pytorch/xla/issues/8389", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-11-18T18:48:24Z", "updated_at": "2024-12-10T00:24:25Z", "comments": 1, "user": "miladm" }, { "repo": "pytorch/xla", "number": 8388, "title": "Need help validating TPU/XLA devices support for ComfyUI.", "body": "## \u2753 Questions and Help\r\nI'm working on adding initial XLA support to ComfyUI https://github.com/comfyanonymous/ComfyUI/pull/5657 and would greatly appreciate any feedback or validation from the community. Specifically, I'm looking for:\r\n\r\n- Testing across different XLA-compatible hardware (e.g., TPUs or GPUs with XLA support).\r\n- Suggestions for optimizing performance with XLA in this context.\r\n- Identifying any compatibility issues or edge cases that might arise during execution.\r\n\r\nIf you're familiar with integrating XLA into PyTorch workflows or have experience with related pipelines, your input would be invaluable. 
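For reviewers who want a quick sanity check that the XLA device path works before exercising the full ComfyUI pipeline, a minimal smoke test along these lines (standard `torch_xla` API, not part of the PR) may help:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()          # TPU or other XLA-backed device
x = torch.randn(64, 64, device=device)
y = (x @ x).relu().sum()
xm.mark_step()                    # force the lazy graph to compile and execute
print("XLA device OK:", y.cpu().item())
```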
Thank you in advance for your help!", "url": "https://github.com/pytorch/xla/issues/8388", "state": "open", "labels": [ "question" ], "created_at": "2024-11-17T23:09:49Z", "updated_at": "2025-02-17T18:13:57Z", "user": "radna0" }, { "repo": "pytorch/xla", "number": 8387, "title": "Can Triton be used with XLA/TPU devices?", "body": "## \u2753 Questions and Help\r\n\r\nI see that there are docs for triton support but only for GPU? Is it possible for TPU to use triton?\n```[tasklist]\n### Tasks\n```\n", "url": "https://github.com/pytorch/xla/issues/8387", "state": "closed", "labels": [], "created_at": "2024-11-16T09:46:06Z", "updated_at": "2024-12-11T06:21:18Z", "comments": 1, "user": "radna0" }, { "repo": "pytorch/torchchat", "number": 1380, "title": "What is the future plan of model expansion?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI see current torchchat only support a few kinds of model, like llama based(liked) architecture, or pre-defined Transformer architecture models. Is there any plan to support other kinds of model architecture in the future? which kinds of model you're considering to add? If there is a new model whose architecture is not in the supporting list, is there a way to run it?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_", "url": "https://github.com/pytorch/torchchat/issues/1380", "state": "open", "labels": [ "enhancement", "Question", "triaged" ], "created_at": "2024-11-15T23:33:01Z", "updated_at": "2025-03-31T20:39:15Z", "user": "jenniew" }, { "repo": "pytorch/torchtitan", "number": 679, "title": "Question about integration with DeepSpeed-Ulysses", "body": "Hi developers,\r\n\r\nThanks for such a great project that can demonstrate the power of newly released features in torch.\r\n\r\nWhen I want to run llama2 model with 128k long sequence, how can we enable it? I have some experience with DeepSpeed-Ulysses, so the question becomes does torchtitan support sequence parallelism in DeepSpeed-Ulysses?\r\n\r\nThanks!", "url": "https://github.com/pytorch/torchtitan/issues/679", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-15T09:56:38Z", "updated_at": "2024-11-22T00:28:16Z", "user": "zigzagcai" }, { "repo": "pytorch/xla", "number": 8385, "title": "How to write in-place custom ops compatible with torch.compile using pallas", "body": "## \u2753 Questions and Help\r\n\r\nI'm trying to implement an in-place operator using pallas, and wrap it as a torch custom op. However, I found it difficult to make it work with `torch.compile`. More specifically, I\u2019m unclear about how to set donation, input-output aliases, and the op schema. 
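For the schema part specifically, the usual pattern for a mutating custom op (a sketch with illustrative names, independent of the Pallas/XLA specifics) is to list the mutated argument in `mutates_args` and return `None`, so functionalization knows the op writes into its input:

```python
import torch

@torch.library.custom_op("mylib::plus_one_", mutates_args=("x",))
def plus_one_(x: torch.Tensor) -> None:
    x.add_(1)  # eager reference implementation; a backend kernel would go here

@plus_one_.register_fake
def _(x: torch.Tensor) -> None:
    return None  # no outputs: the op only mutates x
```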
It seems having an output aliased with the input will leads to functionalization problems in torch compiler.\r\n\r\nThanks!\r\n\r\nMy script is like this:\r\n\r\n```python\r\nfrom typing import List, Callable\r\n\r\nimport os\r\n\r\nimport jax\r\nimport jax.numpy as jnp\r\nfrom jax.experimental import pallas as pl\r\nfrom jax.experimental.pallas import tpu as pltpu\r\nimport torch\r\nimport torch_xla\r\nfrom torch_xla.experimental import custom_kernel\r\nfrom functools import partial\r\nimport torch_xla.debug.profiler as xp\r\n\r\nserver = xp.start_server(9012)\r\nprofile_logdir = \"./profile\" \r\nxp.trace_detached('localhost:9012', profile_logdir)\r\n\r\nos.environ[\"XLA_SAVE_TENSORS_FILE\"] = \"./graph.txt\"\r\nos.environ[\"XLA_FLAGS\"] = \"--xla_dump_to=./graph_hlo/\"\r\nos.environ[\"XLA_DUMP_HLO_GRAPH\"]=\"1\"\r\n\r\nM = 4096\r\nN = 1024\r\n\r\ndef plus_one_kernel(x_ref, o_ref):\r\n o_ref[...] = x_ref[...] + 1\r\n\r\ndef plus_one_pallas(x: jax.Array):\r\n return pl.pallas_call(\r\n plus_one_kernel,\r\n grid=[2, 2],\r\n in_specs=[pl.BlockSpec([M, N], lambda i, j: (i, j))],\r\n out_specs=pl.BlockSpec([M, N], lambda i, j: (i, j)),\r\n out_shape=jax.ShapeDtypeStruct(x.shape, dtype=jnp.int32),\r\n input_output_aliases={0:0}\r\n )(x)\r\n\r\n@torch.library.custom_op(\"xla::plus_one_\", mutates_args={})\r\ndef plus_one_(x: torch.Tensor) -> torch.Tensor:\r\n plus_one_pt = torch_xla.experimental.custom_kernel.make_kernel_from_pallas(\r\n plus_one_pallas, output_shape_dtype_fn = lambda x: [(x.shape, x.dtype)]\r\n )\r\n return plus_one_pt(x)\r\n\r\n@plus_one_.register_fake\r\ndef plus_one_fake(x: torch.Tensor) -> torch.Tensor:\r\n return x\r\n\r\ndef fn(x):\r\n torch.ops.xla.dynamo_set_buffer_donor_(x, True)\r\n ret = plus_one_(x)\r\n return ret\r\n\r\nfn = torch.compile(fn, backend=\"openxla\")\r\nx = torch.ones([M * 2, N * 2], dtype=torch.int32, device='xla')\r\n\r\nret = fn(x)\r\nprint(ret)\r\n``` \r\n\r\nAnd it seems it does not change the value of `x`.\r\n", "url": "https://github.com/pytorch/xla/issues/8385", "state": "open", "labels": [ "pallas" ], "created_at": "2024-11-15T08:34:05Z", "updated_at": "2025-02-15T05:43:45Z", "user": "soodoshll" }, { "repo": "pytorch/torchtitan", "number": 678, "title": "Any suggestion for Llama-3.1-70b(128k seq len) deploy mesh with torchtian?", "body": "Under the 128k long sequence, the activation value memory increases significantly. \r\nCP8 + TP8 seems necessary (they reduce the activation value memory almost linearly), but there is still as much as 50G of activation value memory. 
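For reference, a minimal sketch of recomputing only the MLP inside a block with `torch.utils.checkpoint` (a generic transformer block for illustration, not torchtitan's actual selective-AC configuration):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    def __init__(self, dim: int = 4096, hidden: int = 11008, heads: int = 32):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))

    def forward(self, x):
        a, _ = self.attn(x, x, x, need_weights=False)
        x = x + a
        # only the MLP activations are recomputed in backward; attention is kept
        return x + checkpoint(self.mlp, x, use_reentrant=False)
```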
\r\nReccompute the activations of the MLP can reduce it by about 9G, while the recalculation of the ATTENTION layer or MLP up linear seems rather costly.I noticed that the article at https://arxiv.org/pdf/2410.06511 mentioned Full checkpoint was applied to address the activation memory issue\uff0cwhich seems to significantly increase the execution time of recomputation\uff1f\r\nDoes TorchTitan plan to offload the activation values and reload them during the backward calculation to reduce the activation value memory?\r\n", "url": "https://github.com/pytorch/torchtitan/issues/678", "state": "closed", "labels": [ "enhancement", "question" ], "created_at": "2024-11-15T03:36:20Z", "updated_at": "2025-02-26T06:40:07Z", "user": "medivh-xp" }, { "repo": "pytorch/xla", "number": 8380, "title": "How are PJRT asynchronous executions throttled by torch_xla?", "body": "## \ud83d\udc1b Bug\r\n\r\nHere at AWS we have a single PJRT device plugin for both PyTorch and JAX, and recently we've made implements to our device plugin to make it work better with JAX. I.e. now `PJRT_LoadedExecutable_Execute()` is fully asynchronous, we queue up an execution and return immediately, and expect the caller to wait on the `returned_future`, whereas before, execution was synchronous and is completed when `PJRT_LoadedExecutable_Execute()` returns.\r\n\r\nAs soon as we switched to the new implementation, we noticed that now torch_xla queues up as many executions it can without any throttling in PJRT or torch_xla, which causes us to easily exhaust device memory. It appears that now that there are no internal throttling mechanisms, and only explicit ones which needs to be triggered by user code:\r\n1. when `xm.wait_device_ops()` is called, which calls down to `WaitDeviceOps()`\r\n2. when tensor is read, which internally calls `WaitDeviceOps()`\r\nHowever, `WaitDeviceOps()` is a heavy hammer because it pauses the world until the entire pipeline is drained. Ideally we do not want to rely on this mechanism for throttling. Also we do not want the user to have to guess when to insert these calls to avoid running out of memory. Some sensible internal throttling mechanism is needed.\r\n\r\nThe main issue here is that [pjrt_computation_client.cc ](https://github.com/pytorch/xla/blob/master/torch_xla/csrc/runtime/pjrt_computation_client.cc#L744) does not await on the `returned_future` from PJRT. It simply throws it away.\r\n\r\nHowever, according to torch's [lazy_graph_executor](https://github.com/pytorch/pytorch/blob/main/torch/csrc/lazy/core/lazy_graph_executor.h#L164), \"only one asynchronous operation can execute at the same time, on a given device.\" This is controlled by a device lock, which is supposed to be held for the entire duration of the asynchronous execution. However, in torch_xla's [xla_graph_executor.cpp](https://github.com/pytorch/xla/blob/master/torch_xla/csrc/xla_graph_executor.cpp#L826), the device locks acquired by torch are released as soon as `ExecuteComputation()` returns, and `ExecuteComputaton()` does not actually wait for the actual computation to complete. 
Therefore, torch lazy_graph_executor's throttling mechanism is defeated here.\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/8380", "state": "closed", "labels": [], "created_at": "2024-11-14T18:39:43Z", "updated_at": "2024-11-27T17:59:21Z", "comments": 7, "user": "mcuiaws" }, { "repo": "pytorch/torchtitan", "number": 677, "title": " Fine-Tuning Llama Model with Large Context and Customized Dataset Using Torchtitan", "body": "Hi,\r\n\r\nI am trying to fine-tune a Llama model with a large context size, and I found that to efficiently shard activations across multiple GPUs, I need to use Torchtitan. Here are some questions related to my setup:\r\n\r\nSee related issue: [meta-llama/llama-recipes#785](https://github.com/meta-llama/llama-recipes/issues/785)\r\n\r\n1. **Custom Dataset Usage** \r\n I created a custom dataset using parquet files and a `custom_dataset.py` file, which is compatible with `llama-recipes`. I'm also using the `DEFAULT_CHATML_CHAT_TEMPLATE`. Could you please provide guidance on how to integrate and use this custom dataset effectively with Torchtitan?\r\n\r\n2. **Fine-Tuning with Pretrained Model** \r\n Is it possible to fine-tune the model starting from a pretrained checkpoint? If so, are there specific steps or configurations needed to achieve this with Torchtitan?\r\n\r\n3. **Model Support (Llama-3.2-1B)** \r\n I noticed that Torchtitan currently supports training Llama 3 models (8B, 70B) out of the box. What steps would I need to take if I wanted to train `meta-llama/Llama-3.2-1B` specifically?\r\n\r\n4. **Large Context and FSDP Limitation** \r\n I am unable to use FSDP because of the large context sizes I\u2019m working with. Any additional guidance on handling large contexts effectively with Torchtitan would be appreciated.\r\n\r\nThank you for your help!", "url": "https://github.com/pytorch/torchtitan/issues/677", "state": "closed", "labels": [ "enhancement", "question" ], "created_at": "2024-11-14T17:29:52Z", "updated_at": "2024-12-17T16:11:20Z", "user": "Amerehei" }, { "repo": "pytorch/executorch", "number": 6846, "title": "How to Apply Different Quantization Settings Per Layer in ExecuTorch?", "body": "Dear @kimishpatel @jerryzh168 @shewu-quic \r\n\r\nI want to split a model(eg, Llama-3.2-3B) into multiple layers and apply different quantization settings(qnn_8a8w, qnn_16a4w...) to each layer.\r\nHas such a method been tested in ExecuTorch?\r\nIf not, could you suggest how this can be achieved?\r\n\r\nThank you", "url": "https://github.com/pytorch/executorch/issues/6846", "state": "open", "labels": [ "partner: qualcomm", "triaged", "module: quantization" ], "created_at": "2024-11-14T02:48:39Z", "updated_at": "2024-12-23T19:32:53Z", "user": "crinex" }, { "repo": "pytorch/torchtitan", "number": 676, "title": "Very low wps with H200 Gpus", "body": "Hello, I am running the multinode_trainer.slurm (llama3_70b.toml) on 4 nodes that have 32 H200 Gpus. However, wps is only around ~200. 
Any ideas what can cause this slowness?\r\n\r\n[output.txt](https://github.com/user-attachments/files/17740634/output.txt)\r\n\r\n[multinode_trainer.slurm.txt](https://github.com/user-attachments/files/17740601/multinode_trainer.slurm.txt)\r\n", "url": "https://github.com/pytorch/torchtitan/issues/676", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-13T23:59:00Z", "updated_at": "2025-02-26T04:16:21Z", "user": "aniltrkkn" }, { "repo": "pytorch/xla", "number": 8379, "title": "Confusing text in bazel.md", "body": "## \ud83d\udcda Documentation\r\n\r\nThe bazil.md file contains the following text:\r\n\r\nBazel brings in [pybind11](https://github.com/pybind/pybind11) embeded python and links against it to provide libpython to the plugin using this mechanism. Python headers are also sourced from there instead of depending on the system version. These are satisfied from the \"@pybind11//:pybind11_embed\", which sets up compiler options for linking with libpython transitively.\r\n\r\nFrom what I can determine:\r\n\r\n- `pybind` is a library of headers that defines an API for C++ and Python code to interact\r\n- libpython is a library that provides the core implementation of the Python interpreter\r\n\r\nThe text above says \"Bazel ... links against pybind to provide libpython to the plugin...\" \r\n\r\n- What does this mean? \r\n- To what plugin does this refer?\r\n- How does Bazel \"provide libpython to the plugin\"? Does this mean that Bazel uses the libpython library when building the plugin and the plugin uses the API defined in pybind to call into libpython? Why is it important to state how the plugin communicates with libpython?\r\n\r\nThe text says: \"Python headers are also sourced from there instead of depending on the system version. \"\r\n\r\n- To where does \"there\" refer?\r\n\r\nThe text says: \"These are satisfied from the \"@pybind11//:pybind11_embed\", which sets up compiler options for linking with libpython transitively.\"\r\n\r\n- What does \"these\" refer to?\r\n- What is \"@pybind11//:pybind11_embed\"?\r\n- What does it mean to link with libpython transitively?\r\n", "url": "https://github.com/pytorch/xla/issues/8379", "state": "open", "labels": [ "documentation", "build" ], "created_at": "2024-11-13T23:11:00Z", "updated_at": "2025-11-13T00:46:46Z", "comments": 3, "user": "mikegre-google" }, { "repo": "pytorch/executorch", "number": 6813, "title": "How to convert tokenizer of SmolLM model as accepted by executorch", "body": "Hi,\r\nI am trying to convert [SmolLm-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct) model to .pte format and then run on an android device.\r\nI have been successful in converting the model but executorch requires the tokenizer in either .bin format or .model format which can then be converted into .bin format. But on huggingface tokenizer.model or tokenizer.bin files are not present. 
\r\n\r\nHow would I go about converting the tokenizer.json file into the appropriate format.\n\ncc @mergennachin @byjlw", "url": "https://github.com/pytorch/executorch/issues/6813", "state": "open", "labels": [ "triaged", "module: extension", "module: user experience" ], "created_at": "2024-11-13T11:19:13Z", "updated_at": "2025-12-18T20:16:46Z", "user": "Arpit2601" }, { "repo": "pytorch/xla", "number": 8371, "title": "TPU Trillium Base Docker Image cannot initialize ", "body": "## TPU initialization is failed\r\n\r\nWhen I started tpu v6e-4 TPU Vm with v2-alpha-tpuv6e base image, with pip enviroment and xla updates I can clearly initialized tpus. However when I start to dockerize my pipelie, it fails to initialize TPUs. I tried so much tpu xla base images but I could not achieve to initialize. This happens everytime get device from torch_xla.core.xla_model.xla_device().\r\n\r\nI have checked this base images. I guess v2-alpha-tpuv6e configuration is crucial, is there any related base docker image?\r\n\r\n> us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.10_tpuvm_20241028\r\n\r\n> us-central1-docker.pkg.dev/deeplearning-images/reproducibility/pytorch-tpu-diffusers:v4\r\n\r\n## To Reproduce\r\n# DevDockerfile\r\n```dockerfile\r\nFROM us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.10_tpuvm_20241028\r\n\r\n# Set environment variables to avoid prompts during installation\r\nENV DEBIAN_FRONTEND=noninteractive\r\nENV PYTHONUNBUFFERED=1\r\n\r\nRUN apt-get update && apt-get install -y \\\r\n vim \\\r\n curl \\\r\n git \\\r\n bash \\\r\n wget \\\r\n libopenblas-base \\\r\n && rm -rf /var/lib/apt/lists/*\r\n\r\nRUN pip3 install --no-cache-dir --pre torch==2.6.0.dev20241028+cpu torchvision==0.20.0.dev20241028+cpu --index-url https://download.pytorch.org/whl/nightly/cpu\r\nRUN pip install \"torch_xla[tpu] @ https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.6.0.dev20241028-cp310-cp310-linux_x86_64.whl\" -f https://storage.googleapis.com/libtpu-releases/index.html\r\nRUN pip install torch_xla[pallas] -f https://storage.googleapis.com/jax-releases/jax_nightly_releases.html -f https://storage.googleapis.com/jax-releases/jaxlib_nightly_releases.html\r\nCOPY . .\r\nCMD [\"python3\", \"app.py\"]\r\n``` \r\n\r\n\r\n#app.py \r\n```python\r\n# Quite simple to reproduce\r\nimport torch_xla.core.xla_model as xm\r\n\r\n#Hangs in here not initilize tpu.\r\ndevice = xm.xla_device()\r\n``` \r\n\r\n\r\nBoth file are in same directory. Generate docker with \r\n`docker build -f DevDockerfile -t tpu .`\r\nThen run with privileged.\r\n`docker run -ti --rm -p 5000:5000 --privileged tpu`\r\n\r\n\r\n## Expected behavior\r\n\r\n<!-- Tpu cores cannot initialized in docker enviroment. 
It should initialized as not docker ones-->\r\n\r\n## Environment\r\n\r\n - Reproducible on XLA backend [CPU/TPU/CUDA]:\r\n - torch_xla version: torch_xla-2.6.0.dev20241028-cp310\r\n", "url": "https://github.com/pytorch/xla/issues/8371", "state": "open", "labels": [ "bug", "xla:tpu" ], "created_at": "2024-11-12T07:38:53Z", "updated_at": "2025-02-18T12:43:11Z", "comments": 9, "user": "hsebik" }, { "repo": "pytorch/vision", "number": 8721, "title": "make processing of arbitrary inputs to transforms.v2 public and document it", "body": "### \ud83d\ude80 The feature\n\nSupporting arbitrary input structures in custom transforms is very important in the case of transform compositions:\r\n```python\r\ntr = Compose([RandomCrop((128,128), CustomTransform])\r\n```\r\nThis can be done by inheriting from `torchvision.transforms.v2.Transform` and implementing the **private** `._transform` method, which avoids having to unravel the data structure on your own (since this is done anyway in the `.forward` method). \r\n```python\r\nclass CustomTransform(Transform):\r\n def __init__(self, *kwargs):\r\n pass\r\n def _transform(self, inpt, params):\r\n if isinstance(inpt, Image):\r\n pass\r\n elif isinstance(inpt, BoundingBoxes):\r\n pass\r\n else:\r\n pass\r\n return transformed_inpt\r\n```\r\nThe method has also been described in this blog post [How to Create Custom Torchvision V2 Transforms](https://christianjmills.com/posts/torchvision-custom-v2-transform-tutorial/index.html), but the official torchvision docs do not yet describe it and instead suggest hard-coding the input structure.\r\n\r\nHaving to implement a **private** method for this (even though the class `Transform` is public) feels very wrong this means that things could break on our side any time. I would appreciate if the `._transform` method was made public -> `.transform` and the `Transform` class would receive proper documentation on how this method should be implemented for custom transforms.\n\n### Motivation, pitch\n\nThe `torchvision.transforms.v2` API has now been around for quite some time already and it would be nice to give developers the chance to develop transforms of the same quality and flexibility as the originally implemented ones!\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/8721", "state": "closed", "labels": [], "created_at": "2024-11-11T13:48:03Z", "updated_at": "2024-12-09T12:39:09Z", "comments": 3, "user": "liopeer" }, { "repo": "pytorch/audio", "number": 3852, "title": "Can anyone provide a real-time pretrain model for Visual Speech Recognition?", "body": "### \ud83d\udcda The doc issue\n\nI don't have the LRS3 dataset, I can't use the author's real time recipe, I would like to ask if I can directly request the trained MODEL? 
I would like to ask the author if he can provide the trained mods directly, or if there is anyone who has the download point of LRS3, thank you!\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/audio/issues/3852", "state": "open", "labels": [], "created_at": "2024-11-11T06:19:57Z", "updated_at": "2024-11-11T06:19:57Z", "comments": 0, "user": "bernie-122" }, { "repo": "pytorch/vision", "number": 8722, "title": "The link of **Multi-view Stereo Correspondence** doesn't exist in the doc", "body": "### \ud83d\udcda The doc issue\n\n[The link](http://matthewalunbrown.com/patchdata/patchdata.html) of **Multi-view Stereo Correspondence** doesn't exist in [the doc](https://pytorch.org/vision/stable/datasets.html#image-pairs) as shown below:\r\n\r\n![Screenshot 2024-11-10 102207](https://github.com/user-attachments/assets/a279a8a3-831b-4d03-8993-96caabdd5e4b)\r\n\r\n\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/8722", "state": "open", "labels": [ "module: documentation" ], "created_at": "2024-11-10T01:31:15Z", "updated_at": "2024-11-27T17:56:47Z", "comments": 3, "user": "hyperkai" }, { "repo": "pytorch/serve", "number": 3362, "title": "Trying to find a doc explaining how the scaling works (min_worker to max_worker)", "body": "### \ud83d\udcda The doc issue\n\nCan anyone help out?\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/3362", "state": "open", "labels": [], "created_at": "2024-11-09T22:01:02Z", "updated_at": "2024-11-09T22:01:02Z", "user": "lschaupp" }, { "repo": "pytorch/xla", "number": 8366, "title": "Export training model to StableHlo", "body": "## \u2753 Questions and Help\r\nThe export API only supports `torch.nn.module` as input, is any method to export a training model with **step_fn** to StableHlo?\r\n\r\nHere is a simple training case from [example](https://github.com/pytorch/xla/blob/6454b42fd404d13f2008730ed4ad33b3a91723e3/examples/train_resnet_base.py#L16):\r\n```python\r\n def __init__(self):\r\n ...\r\n self.device = torch_xla.device()\r\n self.model = torchvision.models.resnet50().to(self.device)\r\n self.optimizer = optim.SGD(self.model.parameters(), weight_decay=1e-4)\r\n self.loss_fn = nn.CrossEntropyLoss()\r\n ...\r\n\r\n def run_optimizer(self):\r\n self.optimizer.step()\r\n\r\n def step_fn(self, data, target):\r\n self.optimizer.zero_grad()\r\n output = self.model(data)\r\n loss = self.loss_fn(output, target)\r\n loss.backward()\r\n self.run_optimizer()\r\n return loss\r\n```\r\n\r\nThe guidance https://pytorch.org/xla/master/features/stablehlo.html#torch-export-to-stablehlo only introduced how to export the original `self.model`, but it didn't tell how to export the model with Optimizer and Loss functions.", "url": "https://github.com/pytorch/xla/issues/8366", "state": "closed", "labels": [], "created_at": "2024-11-08T08:02:01Z", "updated_at": "2025-01-09T02:00:38Z", "comments": 3, "user": "Zantares" }, { "repo": "pytorch/torchchat", "number": 1358, "title": "Create doc and tests for distributed inference", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nOnce distributed inference integration into torchchat is functional, let's add a docs/distributed.md with an example, and plumb that example into `.ci/scripts/run-docs distributed`. (updown.py extracts all commands between triple backticks into a test script.) 
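For illustration, the extraction step could look roughly like this (a sketch of the mechanism, not the actual `updown.py`):

```python
import re
import sys
from pathlib import Path

def fenced_blocks_to_script(md_path: str) -> str:
    text = Path(md_path).read_text()
    # grab the body of every triple-backtick block, ignoring the language tag
    blocks = re.findall(r"```[^\n]*\n(.*?)```", text, flags=re.DOTALL)
    return "#!/bin/bash\nset -euo pipefail\n\n" + "\n".join(blocks)

if __name__ == "__main__":
    print(fenced_blocks_to_script(sys.argv[1]))
```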
\r\n\r\ntorchchat has the same runners as pytorch/pytorch, so at least a minimal 2 or 4 GPU setup on a single node would be great. Not sure whether we can run multi-node testing, you can suppress commands from tests with `[skip default]: begin` and `[skip default]: end` around those commands. \r\n\r\ncc: @mreso @lessw2020 @kwen2501 \n\n### Alternatives\n\nNone\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_", "url": "https://github.com/pytorch/torchchat/issues/1358", "state": "closed", "labels": [ "documentation", "actionable", "Distributed", "triaged" ], "created_at": "2024-11-08T02:08:33Z", "updated_at": "2025-01-18T06:15:01Z", "comments": 2, "user": "mikekgfb" }, { "repo": "pytorch/FBGEMM", "number": 3338, "title": "how to add -r in build instructions ? ", "body": "<img width=\"1053\" alt=\"image\" src=\"https://github.com/user-attachments/assets/63c8565c-55b6-4ee0-a209-60862c51fe68\">\r\n", "url": "https://github.com/pytorch/FBGEMM/issues/3338", "state": "open", "labels": [], "created_at": "2024-11-07T02:06:21Z", "updated_at": "2024-11-07T06:03:40Z", "user": "zhaozheng09" }, { "repo": "pytorch/xla", "number": 8359, "title": "Query regarding using 1 chip (2 cores of TPU v3) for Inference", "body": "## \u2753 Questions and Help\r\nHello,\r\nI am trying to benchmark the performance of TPU v3 for inference. However, I would like to use 2 cores (1 chip).\r\nPlease point me to any documentation that I can get started on. \r\nAlso, is it possible to launch 2 inferences on 2 cores as separate independent processes? (This would just give 2x the performance of one core) \r\nThanks again,\r\nDeepak \r\n\r\n", "url": "https://github.com/pytorch/xla/issues/8359", "state": "open", "labels": [ "question", "xla:tpu" ], "created_at": "2024-11-06T18:03:21Z", "updated_at": "2025-02-18T12:45:15Z", "user": "deepakkumar2440" }, { "repo": "pytorch/vision", "number": 8713, "title": "`torchvision.ops.boxes.batched_nms` slow on large box numbers", "body": "### \ud83d\udc1b Describe the bug\n\n## Description\r\n\r\n`torchvision.ops.boxes.batched_nms` on CUDA GPU slows down considerably when then number of bounding boxes involved increases.\r\n\r\nThe slow down is associated with Device -> Host transfer, and is linked to the iterative part of the Non Maximum Suppression (NMS) algorithm. In a nutshell the IoU map is computed on the device, then the mask is copied to the CPU to perform the iterative unwrap, which result is copied back to the device (from [here and below](https://github.com/pytorch/vision/blob/868a3b42f4bffe29e4414ad7e4c7d9d0b4690ecb/torchvision/csrc/ops/cuda/nms_kernel.cu#L136)).\r\n\r\nThe mask size grows quadratically with the number of input bounding boxes and we see a large TX rate when running on 30_000+ boxes.\r\n\r\nIn comparison the [OpenLabs mmcv](https://github.com/open-mmlab/mmcv) solution does the same thing for the IoU map but runs a custom kernel to do the unwrap directly on the device. 
The[ implemented kernel](https://github.com/open-mmlab/mmcv/blob/71437a361cc8918fc398ae408267cf019f4ca03f/mmcv/ops/csrc/common/cuda/nms_cuda_kernel.cuh#L76) is not very efficient compute wise but save the data transfer cost, which is the main bottleneck.\r\n\r\nI benchmarked `torchvision` batched_nms against `mmcv`'s on `V100` and `A100` GPUs.\r\n![A100_bench_rel_loglog](https://github.com/user-attachments/assets/12fbc0c7-e883-446d-8e3d-c753072abd5b)\r\n![V100_bench_rel_loglog](https://github.com/user-attachments/assets/15fa6971-1f70-4355-93ea-094f3b9d9509)\r\nBoth figures show the speed factor when comparing a solution to `torchvision.ops.boxes._batched_nms_vanilla` (there is 2 nms in torchvision, selected based on the number of elements. Here , `torchvision.ops.boxes._batched_nms_vanilla` is used a base comparison and we compare `torchvision.ops.boxes._batched_nms_coordinate_trick` and `mmcv` batched_nms). From 30k boxes and above `mmcv` NMS is x20+ faster.\r\n\r\nIs there a reason why we keep this GPU -> CPU transfer ?\r\nCould we improve the scalability by having a similar on-device additional kernel ?\r\n\r\n## Additional informations\r\n\r\n* All boxes are from the same class\r\n* Benchmark has been done using `torch.utils.benchmark.Timer` on 100 examples for each NMS.\r\n* I did not know if this should be put as Bug report or Feature request.\n\n### Versions\n\n```\r\nPyTorch version: 2.5.0+cu124\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.4\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.5 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: 14.0.0-1ubuntu1.1\r\nCMake version: version 3.24.1\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.14 (main, May 14 2024, 06:11:20) [GCC 11.4.0] (64-bit runtime)\r\nPython platform: Linux-5.10.219-208.866.amzn2.x86_64-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: 12.1.105\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB\r\nNvidia driver version: 535.183.01\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 46 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 96\r\nOn-line CPU(s) list: 0-95\r\nVendor ID: GenuineIntel\r\nModel name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz\r\nCPU family: 6\r\nModel: 85\r\nThread(s) per core: 2\r\nCore(s) per socket: 24\r\nSocket(s): 2\r\nStepping: 7\r\nBogoMIPS: 5999.99\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd 
avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke\r\nHypervisor vendor: KVM\r\nVirtualization type: full\r\nL1d cache: 1.5 MiB (48 instances)\r\nL1i cache", "url": "https://github.com/pytorch/vision/issues/8713", "state": "closed", "labels": [], "created_at": "2024-11-06T12:58:13Z", "updated_at": "2025-02-20T17:16:10Z", "comments": 1, "user": "Ghelfi" }, { "repo": "pytorch/ao", "number": 1230, "title": "How to skip decomposition of dequantize_affine and quantize_affine custom ops in inductor?", "body": "I want to use the `torch.ops.quant.quantize_affine` (Q) and `torch.ops.quant.dequantize_affine` (DQ) to represent a quant model DAG in QDQ style, and do quant fusion using inductor's [pattern matcher](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/pattern_matcher.py), for instance:\r\n```\r\nx(i8) w(i8) b(i32) x(i8) w(i8) b(i32)\r\n | | | | | |\r\nDQ DQ DQ | | |\r\n \\ | / \\ | / \r\ntorch.ops.aten.linear.default -> my_q_linear_triton_impl\r\n | |\r\n Q |\r\n | |\r\n y(i8) y(i8)\r\n```\r\nHowever, since `torch.ops.quant.quantize_affine` and `torch.ops.quant.dequantize_affine` are registered to inductor's decomposition table, as well as with `CompositeImplicitAutograd` flag, they are decomposed in aot_autograd.\r\n\r\nI wonder how to preserve the original Q-DQ ops after aot_autograd? I noticed that the torch's built-in custom Q-DQ ops, such as `torch.ops.quantized_decomposed.quantize_per_tensor` and `torch.ops.quantized_decomposed.dequantize_per_tensor`, can be preserved after aot_autograd, and there are [pattern rewrites](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/fx_passes/quantization.py) based on these Q-DQ ops. (BTW, what's the relationship between torchao and torch.ao module, will torchao be merged into torch.ao in the future?)", "url": "https://github.com/pytorch/ao/issues/1230", "state": "closed", "labels": [], "created_at": "2024-11-06T08:01:46Z", "updated_at": "2024-11-12T05:35:06Z", "user": "Nullkooland" }, { "repo": "pytorch/executorch", "number": 6655, "title": "How To Building and Running Llama 3.2 1B Instruct with Qualcomm AI Engine Direct Backend\uff1f", "body": "### Right Case\r\nWhen I follow the doc : https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#enablement,\r\nI export the Llama3.2-1B-Instruct:int4-spinquant-eo8 model to xnnpack backend pte successfully, and working alright on cpu.\r\n\r\n[\r\n![SpinQuant_XNNPACK](https://github.com/user-attachments/assets/4a9da7f9-e68b-4682-8fde-88bae0b4800f)\r\n](url)\r\n\r\n\r\n### Bad Case\r\nBut as the link: https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md, when I export to the qnn backend using mode Llama3.2-1B-Instruct, I can get the out pte file, but when I make it running on the android device, it not working right.\r\n\r\n**I export pte file like this:**\r\n\r\npython -m examples.models.llama.export_llama --checkpoint \"${MODEL_DIR}/consolidated.00.pth\" -p \"${MODEL_DIR}/params.json\" -kv --disable_dynamic_shape --qnn --pt2e_quantize qnn_16a4w -d fp32 --metadata '{\"get_bos_id\":128000, \"get_eos_ids\":[128009, 128001]}' --soc_model SM8550 --output_name=\"llama3_2_ptq_qnn_.pte\"\r\n\r\n**This is the part of output when I export**\r\n\r\nINFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_permute_copy_default_979, aten.permute_copy.default\r\nINFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_squeeze_copy_dims_175, 
aten.squeeze_copy.dims\r\nINFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_add_tensor_79, aten.add.Tensor\r\nINFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_select_copy_int_512, aten.select_copy.int\r\nINFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_rms_norm_default_32, aten.rms_norm.default\r\nINFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_view_copy_default_288, aten.view_copy.default\r\nINFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_permute_copy_default_980, aten.permute_copy.default\r\nINFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_convolution_default_112, aten.convolution.default\r\nINFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_permute_copy_default_981, aten.permute_copy.default\r\nINFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_view_copy_default_289, aten.view_copy.default\r\nINFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: quantized_decomposed_dequantize_per_tensor_tensor, quantized_decomposed.dequantize_per_tensor.tensor\r\n[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters\r\n[INFO] [Qnn ExecuTorch]: Destroy Qnn context\r\n[INFO] [Qnn ExecuTorch]: Destroy Qnn device\r\n[INFO] [Qnn ExecuTorch]: Destroy Qnn backend\r\n/home/hebaotong/AI/Executorch/executorch_new/executorch/exir/emit/_emitter.py:1512: UserWarning: Mutation on a buffer in the model is detected. ExecuTorch assumes buffers that are mutated in the graph have a meaningless initial state, only the shape and dtype will be serialized.\r\n warnings.warn(\r\nINFO:root:Required memory for activation in bytes: [0, 17552384]\r\nmodelname: llama3_2_ptq_qnn_\r\noutput_file: llama3_2_ptq_qnn_.pte\r\nINFO:root:Saved exported program to llama3_2_ptq_qnn_.pte\r\n\r\n**Screenshot of run status**\r\n\r\n![PTQ_QNN](https://github.com/user-attachments/assets/c2959707-51cc-4f9d-982f-41186ee3ddfe)\r\n", "url": "https://github.com/pytorch/executorch/issues/6655", "state": "open", "labels": [ "partner: qualcomm", "triaged", "module: qnn", "module: llm" ], "created_at": "2024-11-05T08:00:19Z", "updated_at": "2025-12-19T19:15:57Z", "user": "baotonghe" }, { "repo": "pytorch/serve", "number": 3357, "title": "413 Request Entity Too Large", "body": "### \ud83d\udcda The doc issue\n\nWhen making a request, sometimes 413 Request Entity Too Large is reported. Is there any configuration for torchserve that can increase the threshold of request size?\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/3357", "state": "open", "labels": [], "created_at": "2024-11-05T02:38:59Z", "updated_at": "2025-01-12T05:21:34Z", "comments": 1, "user": "pengxin233" }, { "repo": "pytorch/tutorials", "number": 3143, "title": "New Search Engine should link to right branch (stable/main/preview pr branch)", "body": "The search feature should match the branch that the docs loaded in. Why? The use case I often have is I use the search bar to quickly navigate to the page I had just edited in my PR to see how it'd render in prod. The new search engine produces results that always directs to stable, though, so there's no easy way to navigate to the page I wanted to check. This is a regression from the previous search experience.\r\n\r\nFor example, in the following preview, I'm looking for the custom operators page, which I had modified in the PR. I search for it in the preview docs, but all the results point to stable. 
Ideally, these would point to the docs built for my PR branch, which was the old behavior.\r\n\r\n![image](https://github.com/user-attachments/assets/640c00d2-e6ec-4413-a81a-530aedd0f447)\r\n\r\n\r\nIt would also be good for those look at docs on main to stay in docs on main (vs be redirected to stable).\r\n\r\n\r\n## Alternatives\r\n\r\nAllow the old search engine", "url": "https://github.com/pytorch/tutorials/issues/3143", "state": "closed", "labels": [ "regression" ], "created_at": "2024-11-04T19:42:26Z", "updated_at": "2024-11-19T19:19:34Z", "comments": 0, "user": "janeyx99" }, { "repo": "pytorch/xla", "number": 8355, "title": "Offer user guide instructions to users to leverage various `libtpu` versions", "body": "## \ud83d\udcda Documentation\r\n\r\nOffer user guide instructions to users to leverage various `libtpu` versions. We want users to have a clear understanding of how to set their expectations when choosing between different libtpu options.\r\n\r\nHere is a snippet of various libtpu version. I will add more details (as needed) to this bug.\r\n\r\n```\r\n# Install latest libtpu release\r\n$ pip install libtpu -f https://storage.googleapis.com/libtpu-wheels/index.html\r\n\r\n# Install specific libtpu release\r\n$ pip install libtpu==x.y.z -f https://storage.googleapis.com/libtpu-wheels/index.html\r\n\r\n# Install latest libtpu nightly build\r\n$ pip install libtpu --pre -f https://storage.googleapis.com/libtpu-wheels/index.html\r\n\r\n# Install specific libtpu nightly build\r\n$ pip install libtpu==0.0.3.dev20241029 -f https://storage.googleapis.com/libtpu-wheels/index.html\r\n```\r\n\r\nasking @mikegre-google to help with adding this information to the READM\r\ncc @tengyifei to assist", "url": "https://github.com/pytorch/xla/issues/8355", "state": "closed", "labels": [ "usability", "documentation" ], "created_at": "2024-11-04T18:12:13Z", "updated_at": "2025-03-03T18:32:33Z", "comments": 15, "user": "miladm" }, { "repo": "pytorch/vision", "number": 8714, "title": "I am using the torchvision-0.13.1+cu113 version, but it seems that it does not have the datapoints package. How can I solve this issue?", "body": "### \ud83d\udc1b Describe the bug\n\nI am using the torchvision-0.13.1+cu113 version, but it seems that it does not have the datapoints package. How can I solve this issue?\n\n### Versions\n\nI am using the torchvision-0.13.1+cu113 version, but it seems that it does not have the datapoints package. How can I solve this issue?", "url": "https://github.com/pytorch/vision/issues/8714", "state": "closed", "labels": [], "created_at": "2024-11-04T07:23:48Z", "updated_at": "2024-12-11T09:35:34Z", "comments": 5, "user": "jiangsu415" }, { "repo": "pytorch/torchchat", "number": 1338, "title": "can't build AOTI runner", "body": "### \ud83d\udc1b Describe the bug\n\n`torchchat/utils/scripts/build_native.sh aoti`\r\n\r\nFails with \r\n```\r\nBuilding aoti native runner...\r\nDefaulting TORCHCHAT_ROOT to /home/warden/source/torchchat/torchchat/utils/scripts/../../.. 
since it is unset.\r\n~/source/torchchat ~/source/torchchat\r\nSynchronizing submodule url for 'tokenizer/third-party/abseil-cpp'\r\nSynchronizing submodule url for 'tokenizer/third-party/re2'\r\nSynchronizing submodule url for 'tokenizer/third-party/sentencepiece'\r\n~/source/torchchat\r\n-- VERSION: 0.2.1\r\n-- Not Found TCMalloc: TCMALLOC_LIB-NOTFOUND\r\n-- Using ET BUILD DIR: --[et-build]--\r\n-- TORCHCHAT_ROOT=\"/home/warden/source/torchchat\"\r\n-- Looking for excutorch in /home/warden/source/torchchat/et-build/install\r\n-- Could NOT find executorch (missing: executorch_DIR)\r\n-- Caffe2: CUDA detected: 12.0\r\n-- Caffe2: CUDA nvcc is: /usr/bin/nvcc\r\n-- Caffe2: CUDA toolkit directory: /usr\r\n-- Caffe2: Header version is: 12.0\r\n-- Could NOT find nvtx3 (missing: nvtx3_dir) \r\n-- USE_CUDNN is set to 0. Compiling without cuDNN support\r\n-- USE_CUSPARSELT is set to 0. Compiling without cuSPARSELt support\r\n-- USE_CUDSS is set to 0. Compiling without cuDSS support\r\n-- USE_CUFILE is set to 0. Compiling without cuFile support\r\n-- Autodetected CUDA architecture(s): 8.9 8.6\r\n-- Added CUDA NVCC flags for: -gencode;arch=compute_89,code=sm_89;-gencode;arch=compute_86,code=sm_86\r\n-- Configuring done (0.3s)\r\n-- Generating done (0.1s)\r\n-- Build files have been written to: /home/warden/source/torchchat/cmake-out\r\n[1/4] Linking CXX static library tokenizer/third-party/sentencepiece/src/libsentencepiece.a\r\n[2/4] Building CXX object tokenizer/CMakeFiles/tokenizer.dir/tiktoken.cpp.o\r\nFAILED: tokenizer/CMakeFiles/tokenizer.dir/tiktoken.cpp.o \r\n/usr/bin/c++ -I/home/warden/source/torchchat/tokenizer -I/home/warden/source/torchchat/tokenizer/third-party/sentencepiece/src -I/home/warden/source/torchchat/tokenizer/third-party/re2 -I/home/warden/source/torchchat/tokenizer/third-party/abseil-cpp -D_GLIBCXX_USE_CXX11_ABI=0 -MD -MT tokenizer/CMakeFiles/tokenizer.dir/tiktoken.cpp.o -MF tokenizer/CMakeFiles/tokenizer.dir/tiktoken.cpp.o.d -o tokenizer/CMakeFiles/tokenizer.dir/tiktoken.cpp.o -c /home/warden/source/torchchat/tokenizer/tiktoken.cpp\r\nIn file included from /home/warden/source/torchchat/tokenizer/tiktoken.cpp:18:\r\n/home/warden/source/torchchat/tokenizer/base64.h:37:11: error: \u2018uint32_t\u2019 does not name a type\r\n 37 | constexpr uint32_t DECODE_TABLE[] = {\r\n | ^~~~~~~~\r\n/home/warden/source/torchchat/tokenizer/base64.h:29:1: note: \u2018uint32_t\u2019 is defined in header \u2018<cstdint>\u2019; did you forget to \u2018#include <cstdint>\u2019?\r\n 28 | #include <string>\r\n +++ |+#include <cstdint>\r\n 29 | #include <string_view>\r\n/home/warden/source/torchchat/tokenizer/base64.h:57:13: error: variable or field \u2018validate\u2019 declared void\r\n 57 | inline void validate(uint32_t v) {\r\n | ^~~~~~~~\r\n/home/warden/source/torchchat/tokenizer/base64.h:57:22: error: \u2018uint32_t\u2019 was not declared in this scope\r\n 57 | inline void validate(uint32_t v) {\r\n | ^~~~~~~~\r\n/home/warden/source/torchchat/tokenizer/base64.h:57:22: note: \u2018uint32_t\u2019 is defined in header \u2018<cstdint>\u2019; did you forget to \u2018#include <cstdint>\u2019?\r\n/home/warden/source/torchchat/tokenizer/base64.h: In function \u2018void base64::detail::decode(const std::string_view&, std::string&)\u2019:\r\n/home/warden/source/torchchat/tokenizer/base64.h:70:3: error: \u2018uint32_t\u2019 was not declared in this scope\r\n 70 | uint32_t val = 0;\r\n | ^~~~~~~~\r\n/home/warden/source/torchchat/tokenizer/base64.h:70:3: note: \u2018uint32_t\u2019 is defined in header 
\u2018<cstdint>\u2019; did you forget to \u2018#include <cstdint>\u2019?\r\n/home/warden/source/torchchat/tokenizer/base64.h:72:3: error: \u2018uint8_t\u2019 was not declared in this scope\r\n 72 | uint8_t c = input[0];\r\n | ^~~~~~~\r\n/home/warden/source/torchchat/tokenizer/base64.h:72:3: note: \u2018uint8_t\u2019 is defined in header \u2018<cstdint>\u2019; did you forget to \u2018#include <cstdint>\u2019?\r\n/home/warden/source/torchchat/tokenizer/base64.h:73:12: error: \u2018DECODE_TABLE\u2019 was not declared in this scope\r\n 73 | auto v = DECODE_TABLE[c];\r\n | ^~~~~~~~~~~~\r\n/home/warden/source/torchchat/tokenizer/base64.h:73:25: error: \u2018c\u2019 was not declared in this scope\r\n 73 | auto v = DECODE_TABLE[c];\r\n | ^\r\n/home/warden/source/torchchat/tokenizer/base64.h:74:3: error: \u2018validate\u2019 was not declared in this scope\r\n 74 | validate(v);\r\n | ^~~~~~~~\r\n/home/warden/source/torchchat/tokenizer/base64.h:75:3: error: \u2018val\u2019 was not declared in this scope\r\n 75 | val = v;\r\n | ^~~\r\n/home/warden/source/torchchat/tokenizer/base64.h: In function \u2018void base64::detail::decode_1_padding(const std::string_view&, std::string&)\u2019:\r\n/home/warden/source/torchchat/tokenizer/base64.h:105:3: error: \u2018uint32_t\u2019 was not declared in this scope\r\n 105 | uint32_t val = 0;\r\n | ^~~~~~~~\r\n/home/warden/source/torchchat/tokenizer/base64.h:105:3: note: \u2018uint32_t\u2019 is defined in", "url": "https://github.com/pytorch/torchchat/issues/1338", "state": "closed", "labels": [], "created_at": "2024-11-01T17:52:21Z", "updated_at": "2024-11-01T21:36:12Z", "comments": 1, "user": "byjlw" }, { "repo": "pytorch/xla", "number": 8342, "title": "Instructions in CONTRIBUTING.md for using VS Code don't seem to work", "body": "## \ud83d\udcda Documentation\r\nI've followed the instructions in CONTRIBUTING.md to set up a dev environment using VS Code. Next I run python and then tried to import torch_xla as xla and I get an error:\r\n\r\n```\r\n>>> import torch_xla as xla\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/workspaces/xla_vs_files/pytorch/xla/torch_xla/__init__.py\", line 259, in <module>\r\n from .stablehlo import save_as_stablehlo, save_torch_model_as_stablehlo\r\n File \"/workspaces/xla_vs_files/pytorch/xla/torch_xla/stablehlo.py\", line 18, in <module>\r\n from torch_xla._dynamo import dynamo_bridge\r\n File \"/workspaces/xla_vs_files/pytorch/xla/torch_xla/_dynamo/dynamo_bridge.py\", line 20, in <module>\r\n from torch._inductor.fx_passes.post_grad import ConstructorMoverPass\r\n File \"/usr/local/lib/python3.10/site-packages/torch/_inductor/fx_passes/post_grad.py\", line 22, in <module>\r\n from .. import config, ir, pattern_matcher\r\n File \"/usr/local/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py\", line 96, in <module>\r\n from .lowering import fallback_node_due_to_unsupported_type\r\n File \"/usr/local/lib/python3.10/site-packages/torch/_inductor/lowering.py\", line 6639, in <module>\r\n from . import kernel\r\n File \"/usr/local/lib/python3.10/site-packages/torch/_inductor/kernel/__init__.py\", line 1, in <module>\r\n from . 
import mm, mm_common, mm_plus_mm, unpack_mixed_mm\r\n File \"/usr/local/lib/python3.10/site-packages/torch/_inductor/kernel/mm.py\", line 16, in <module>\r\n from torch._inductor.codegen.cpp_gemm_template import CppPackedGemmTemplate\r\n File \"/usr/local/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_gemm_template.py\", line 14, in <module>\r\n from ..kernel.mm_common import mm_args\r\n File \"/usr/local/lib/python3.10/site-packages/torch/_inductor/kernel/mm_common.py\", line 10, in <module>\r\n from torch._inductor.select_algorithm import realize_inputs\r\n File \"/usr/local/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py\", line 22, in <module>\r\n from filelock import FileLock\r\nModuleNotFoundError: No module named 'filelock'\r\n```\r\nSo it appears something isn't configured correctly. If I follow the instructions for directly using a container, everything works as expected. \r\n\r\n", "url": "https://github.com/pytorch/xla/issues/8342", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-10-30T18:16:38Z", "updated_at": "2024-10-30T18:36:37Z", "comments": 1, "user": "mikegre-google" }, { "repo": "pytorch/TensorRT", "number": 3267, "title": "\u2753 [Question] How do you properly deploy a quantized model with tensorrt", "body": "## \u2753 Question\r\nI have a PTQ model and a QAT model trained with the official pytorch API following the quantization tutorial, and I wish to deploy them on TensorRT for inference. The model is metaformer-like using convolution layers as token mixer. One part of the quantized model looks like this:\r\n![image](https://github.com/user-attachments/assets/8efe5705-9044-4609-98ad-74e4be1c5ba0)\r\n\r\n\r\n## What you have already tried\r\n\r\nI have tried different ways to make things work:\r\n1. the package torch2trt: there's huge problem with dynamic input. The dataset consists of different inputs (B,C,H,W) where H and W are not necessarily the same. There's a torch2trt-dynamic package but I think there are bugs in the plugins. The code basically looks like this:\r\n`model_trt = torch2trt(\r\n model_fp32, \r\n [torch.randn(1, 11, 64, 64).to('cuda')], \r\n max_batch_size=batch_size,\r\n fp16_mode=False, \r\n int8_mode=True, \r\n calibrator= trainLoader,\r\n input_shapes=[(None, 11, None, None)]\r\n )`\r\n3. torch.compile() with backends=tensorrt. When I was trying to compile the PTQ model, there's RuntimeError: quantized::conv2d (ONEDNN): data type of input should be QUint8. And when I was trying to use the QAT model, there's W1029 14:21:17.640402 139903289382080 torch/_dynamo/utils.py:1195] [2/0] Unsupported: quantized nyi in meta tensors with fake tensor propagation. \r\nHere's the code I used:\r\n`trt_gm = torch.compile(\r\n model,\r\n dynamic= True,\r\n backend=\"tensorrt\",)\r\n`\r\n4. try to convert the torch model to an onnx model, then convert it into the trt engine. There are several problems in this case:\r\n- The onnx model is runs weirdly slow with onnx runtime. Furthermore, the loss calculated is extremely high. 
Here's an example:\r\n![image](https://github.com/user-attachments/assets/fb0f1f3a-2c5c-4f8d-8bf5-c6a3b5aac6ac)\r\n\r\n- I tried to visualize the quantized ONNX model with Netron because converting the quantized ONNX model to TRT engine always raise \r\n![image](https://github.com/user-attachments/assets/b09bf68a-9a8a-4ce0-8fbb-04c5bc30bf71)\r\nThis is the problematic part of the graph\r\n![image](https://github.com/user-attachments/assets/9c11d90d-8880-4e1f-9716-342edb1c4864)\r\nThe rightmost DequantizeLinear node is causing problem. I checked the x and found that it's an in32 constant array and the x_scale is a float32 constant array. The output of this node turned out to be the bias passed into the Conv layer.\r\nThere must be something wrong in the behavior of the conversion. When doing quantization with the pytorch API, only activations and weights were observed by the defined observer, so I was expecting only the leftmost and the middle DequantizeLinear Nodes while bias should be stored in fp32 and directly passed into the Conv layer. Using onnx_simplified is not able to get rid of the node. With the incompatibility between the conversion of quantized torch model to ONNX model, I'm not able to further convert the model into trt engine. I've considered using the onnx API for quantization, but the performance drop thing from unquantized original torch model to ONNX model is quite concerning.\r\nThe converting code looks like this:\r\n`torch.onnx.export(\r\n quantized_model, \r\n dummy_input,\r\n args.onnx_export_path, \r\n input_names=[\"input\"], \r\n output_names=[\"output\"], \r\n opset_version=13, \r\n export_params= True,\r\n keep_initializers_as_inputs=False, \r\n dynamic_axes= {'input': {0:'batch_size', 2: \"h\", 3: \"w\"},\r\n 'output': {0:'batch_size', 2: \"h\", 3: \"w\"}\r\n }\r\n )`\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version: 2.3.1\r\n - CPU Architecture: x86_64\r\n - OS: Ubuntu 20.04.4 LTS\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda\r\n - Are you using local sources or building from archives: No\r\n - Python version: 3.9.19\r\n - CUDA version: 12.1\r\n - GPU models and configuration: \r\n- Torch_TensorRT: 2.3.0\r\n- torch2trt: 0.5.0\r\n- onnx:1.16.1\r\n\r\n## Additional context\r\nPersonally I think the torch.compile() API is the most possible for me to successfully convert the quantized model since there's no performance drop. Does anyone has relevant experience on handling quantized model?\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3267", "state": "open", "labels": [ "question" ], "created_at": "2024-10-29T15:06:54Z", "updated_at": "2025-03-03T22:30:06Z", "user": "Urania880519" }, { "repo": "pytorch/torchtitan", "number": 658, "title": "Questions about FSDP2 support and memory usage.", "body": "What is current support of FSDP2 in main pytorch?\r\nI just see this here https://github.com/pytorch/pytorch/blob/main/torch/distributed/_composable/fully_shard.py#L45\r\n\r\n> \"`torch.distributed._composable.fully_shard` will be removed after PyTorch 2.5.\"\r\n\r\nWill FSDP2 be deprecated? Can FSDP1 work with DTensor as well as TP?\r\n\r\nI tried FSDP2 in my new project, but I got higher GPU Memory usage compared to FSDP1, what might this cause? The model is a 10B DiT-like model with extra embedding layer compared to LLMs. My main concern is that should I need to take more modules warpped with fully_shard to reduce the memory usage? 
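\r\n\r\nConcretely, the finer-grained wrapping I am considering looks roughly like this (just an untested sketch; `world_size`, `dp_mesh`, and the module names are placeholders for my DiT model):\r\n\r\n```python\r\nfrom torch.distributed.device_mesh import init_device_mesh\r\nfrom torch.distributed._composable.fsdp import fully_shard\r\n\r\n# one data-parallel mesh over all ranks (placeholder world_size)\r\ndp_mesh = init_device_mesh(\"cuda\", (world_size,))\r\n\r\n# shard the extra embedding layer and each transformer block separately,\r\n# so each one's parameters are resharded (freed) right after its forward\r\nfully_shard(model.extra_embedding, mesh=dp_mesh, reshard_after_forward=True)\r\nfor block in model.blocks:\r\n    fully_shard(block, mesh=dp_mesh, reshard_after_forward=True)\r\n\r\n# finally shard the root module to cover any remaining parameters\r\nfully_shard(model, mesh=dp_mesh, reshard_after_forward=True)\r\n```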
\r\n\r\nSince the transformer block is quite similar to llama, I use the same fully_sahrd warp with your project.", "url": "https://github.com/pytorch/torchtitan/issues/658", "state": "closed", "labels": [ "question" ], "created_at": "2024-10-29T11:09:01Z", "updated_at": "2025-08-21T02:57:19Z", "user": "tangjiasheng" }, { "repo": "pytorch/torchchat", "number": 1334, "title": "Multimodal Eval Enablement (Looking for Developer to Implement Design)", "body": "### \ud83d\ude80 The feature, motivation and pitch\r\n\r\n***Please note that since the actual implementation is going to be simple, and the design has already been reviewed, the purpose of this GitHub Issue is to look for a developer to implement this feature ASAP.***\r\n\r\nLLM eval stands for the process of assessing the perplexity, performance and capabilities of LLMs, usually by having the model complete one or a series of tasks and assigning them scores. Torchchat is already using EleutherAI\u2019s [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to do eval on text LLM ([code pointer](https://github.com/pytorch/torchchat/blob/11dcbebe6bd2ee933f7302b4e14baa23761abc0c/torchchat/usages/eval.py#L198)). Recently, torchtune has worked with EleutherAI to enable eval on text-image models in the harness, and has integrated this feature into torchtune ([code pointer](https://github.com/pytorch/torchtune/blob/d0c6460b51fc18245b3da0220568e10b3de06b63/recipes/eleuther_eval.py#L40)). Torchchat wants to just copy that solution from torchtune for text-image models.\r\n\r\nWithout the ability to do eval on multimodal LLMs, the enablement of multimodal LLMs on torchchat is incomplete. It\u2019s critical to understand how well torchchat performs with image inputs. \r\n\r\n### Additional context\r\n\r\n## Assumptions\r\n\r\n\r\n\r\n* The eval for text LLMs is already enabled on torchchat. Code pointer to the [core eval function](https://github.com/pytorch/torchchat/blob/11dcbebe6bd2ee933f7302b4e14baa23761abc0c/torchchat/usages/eval.py#L172) and the [main function](https://github.com/pytorch/torchchat/blob/11dcbebe6bd2ee933f7302b4e14baa23761abc0c/torchchat/usages/eval.py#L226).\r\n* The Llama 3.2-11b multimodal model has been onboarded to torchchat, and in the future there will be more multimodal LLMs on torchchat. \r\n* EleutherAI\u2019s [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) has enabled eval on llama3.2-11b, thus we don\u2019t need to make code changes in EleutherAI repo.\r\n\r\n\r\n## The Main Goal\r\nA torchchat user can run eval on the llama 3.2-11b model (which image-text-in, text-out). Note that we don\u2019t need to worry about the internals of how the eval happens because we will only be calling the EleutherAI\u2019s eval libraries and report the metrics it returns. \r\n\r\nThe user interface will be a commandline `python torchchat.py eval <model-name>` with additional arguments specifying detailed requirements for the eval tasks.\r\n\r\nThe result will be printed out on the terminal which include the following metrics:\r\n * Tasks that have been run \r\n * The score to each task \r\n * The time it took to run each task\r\n\r\n\r\n### RFC (Optional)\r\n\r\n# Design\r\n\r\n\r\n## Overview\r\n\r\nIn this design, the multimodal eval in torchchat will borrow from the implementation of multimodal eval in torchtune which utilizes EleutherAI\u2019s [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). 
The reason we can do this is that torchchat uses the same Llama 3.2-11b model definition as torchtune. \r\n\r\n## Details\r\n\r\n\r\n### The Core Eval Implementation\r\n\r\n\r\n#### [Preferred] Approach A: import the implementation of `HFMultimodalLM` from torchtune directly \r\nThe easiest implementation is to import the implementation of <code>HFMultimodalLM </code>directly from torchtune, then call <code>evaluate()</code> with this wrapper class passed in. </em>\r\n\r\nHere\u2019s torchtune\u2019s implementation of `HFMultimodalLM`: [code pointer](https://github.com/pytorch/torchtune/blob/ced1a840300b1ab550dac4fc2054b187f5b45c8c/recipes/eleuther_eval.py#L68).\r\n\r\n*Pseudocode:*\r\n```\r\n# In eval.py\r\nfrom torchtune.recipes.eleuther_eval import _VLMEvalWrapper\r\n\r\nif model is text-based:\r\n do the existing text-based model eval\r\nelif model is text-image-based:\r\n eval_results = evaluate(_VLMEvalWrapper(...))\r\n```\r\n\r\nThe pros and cons of this solution is discussed in the following \u201cAlternatives Discussion\u201d section. This solution should be the one to start with given how quick it can enable multimodal eval on torchchat. If for some unforeseen reason that it doesn\u2019t work, then take the following approach that requires more work.\r\n\r\n\r\n#### Approach B: copy the implementation of `HFMultimodalLM` from torchtune\r\n\r\n\r\n\r\n1. Creating a wrapper class that overrides class <code>[HFMultimodalLM](https://github.com/EleutherAI/lm-evaluation-harness/blob/0845b588303f1f59af98dd1c5bdbd78a9e75a1e2/lm_eval/models/hf_vlms.py#L30)</code>, which is an abstract Hugging Face model class for multimodal models. The implementation of this class can be copied from torchtune, [code pointer](https://github.com/pytorch/torchtune/blob/ced1a840300b1ab550dac4fc2054b187f5b45c8c/recipes/eleuther_eval.py#L68).\r\n2. Then call <code>evaluate()</code> with this wrapper class passed in. \r\n\r\n*Pseudocode:*\r\n```\r\n# In eval.py\r\nfrom lm_eval.models.hf_vlms import HFMultimodalLM\r\nfrom lm_eval.evaluator import evaluate\r\n\r\nclass VLMEvalWrapper(HFMultimodalLM):\r\n ...# implementation\r\n\r\nif model is text-based:\r\n do the existing text-", "url": "https://github.com/pytorch/torchchat/issues/1334", "state": "closed", "labels": [ "enhancement", "good first issue", "actionable", "Llama 3.2- Multimodal", "triaged" ], "created_at": "2024-10-29T01:01:50Z", "updated_at": "2025-03-25T06:24:18Z", "comments": 26, "user": "Olivia-liu" }, { "repo": "pytorch/xla", "number": 8327, "title": "Add documentations for persistent caching", "body": "## \ud83d\udcda Documentation\r\n\r\nAdd documentations for persistent caching; the [current documentation](https://github.com/pytorch/xla/blob/310ff8f41858db7782f97542e76aeb60fa527d14/API_GUIDE.md#compilation-caching) briefly explains how to enable the cache. Though, it does little to \r\n\r\n1. introduce the feature\r\n2. explain what problem it solves\r\n3. how it works\r\n4. how it can be transferred from one VM to another VM\r\n5. 
what it's limitations are\r\n\r\nLet's add the new documentation under https://github.com/pytorch/xla/tree/master/docs\r\n\r\ncc @mikegre-google to help review this documentation", "url": "https://github.com/pytorch/xla/issues/8327", "state": "open", "labels": [ "documentation" ], "created_at": "2024-10-26T01:01:36Z", "updated_at": "2024-10-26T01:01:37Z", "comments": 0, "user": "miladm" }, { "repo": "pytorch/vision", "number": 8696, "title": "PyTorch & Torchvision compatible issue on Jetson Orin", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nPrevious discussion: https://forums.developer.nvidia.com/t/pytorch-torchversion-compatible-issue-on-l4t35-5-0/310929/9\r\n\r\n\r\n```bash\r\ndaniel@daniel-nvidia:~/Work/yolov5$ python detect.py --weights yolov5s.pt --source ../../Videos/Worlds_longest_drone_fpv_one_shot.mp4\r\nWARNING \u26a0\ufe0f Python>=3.10 is required, but Python==3.8.10 is currently installed\r\n/home/daniel/.local/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/daniel/.local/lib/python3.8/site-packages/torchvision/image.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSsb'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?\r\n warn(\r\ndetect: weights=['yolov5s.pt'], source=../../Videos/Worlds_longest_drone_fpv_one_shot.mp4, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_format=0, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1\r\nYOLOv5 \ud83d\ude80 v7.0-378-g2f74455a Python-3.8.10 torch-2.1.0a0+41361538.nv23.06 CUDA:0 (Orin, 7451MiB)\r\n\r\nFusing layers...\r\nYOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients\r\nTraceback (most recent call last):\r\n File \"detect.py\", line 437, in <module>\r\n main(opt)\r\n File \"detect.py\", line 432, in main\r\n run(**vars(opt))\r\n File \"/home/daniel/.local/lib/python3.8/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"detect.py\", line 210, in run\r\n pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)\r\n File \"/home/daniel/Work/yolov5/utils/general.py\", line 1104, in non_max_suppression\r\n i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS\r\n File \"/home/daniel/.local/lib/python3.8/site-packages/torchvision/ops/boxes.py\", line 40, in nms\r\n _assert_has_ops()\r\n File \"/home/daniel/.local/lib/python3.8/site-packages/torchvision/extension.py\", line 46, in _assert_has_ops\r\n raise RuntimeError(\r\nRuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. 
Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.\r\n```\r\n\r\n### Versions\r\n```bash\r\ndaniel@daniel-nvidia:~/Work/yolov5$ python -c \"import torch; import torchvision; print(f'PyTorch version: {torch.__version__}'); print(f'Torchvision version: {torchvision.__version__}')\"\r\n/home/daniel/.local/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/daniel/.local/lib/python3.8/site-packages/torchvision/image.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSsb'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?\r\n warn(\r\nPyTorch version: 2.1.0a0+41361538.nv23.06\r\nTorchvision version: 0.16.1+fdea156\r\n```\r\n\r\n\r\n```\r\ndaniel@daniel-nvidia:~/Work$ python collect_env.py\r\nCollecting environment information...\r\nPyTorch version: 2.1.0a0+41361538.nv23.06\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.4\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.6 LTS (aarch64)\r\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.16.3\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.8.10 (default, Sep 11 2024, 16:02:53) [GCC 9.4.0] (64-bit runtime)\r\nPython platform: Linux-5.10.192-tegra-aarch64-with-glibc2.29\r\nIs CUDA available: True\r\nCUDA runtime version: 11.4.315\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0\r\nHIP runtime ver", "url": "https://github.com/pytorch/vision/issues/8696", "state": "open", "labels": [], "created_at": "2024-10-25T07:12:11Z", "updated_at": "2024-10-25T07:28:44Z", "comments": 0, "user": "lida2003" }, { "repo": "pytorch/pytorch", "number": 138888, "title": "How to Implement multi-card parallel Inference by torchrun?", "body": "Hello everyone, I'm trying to achieve a goal of using trochrun for dual-card parallel inference. Then I have two questions. First, I found that torchrun is mainly used for model training, so can it be used for model inference? If can, my inference process is divided into two parts: model loading and inference. I only want to load the model once and then infer multiple times. How can I implement it? 
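\n\nFor concreteness, one pattern that would fit what I describe (each rank loads the model once and then serves many inference calls) is roughly the following untested sketch; the checkpoint path and `my_batches_for_rank` are placeholders:\n\n```python\n# infer.py, launched with: torchrun --nproc_per_node=2 infer.py\nimport torch\nimport torch.distributed as dist\n\ndef main():\n    dist.init_process_group(backend=\"nccl\")\n    rank = dist.get_rank()\n    torch.cuda.set_device(rank)\n    device = torch.device(f\"cuda:{rank}\")\n\n    # load the model once per process\n    model = torch.load(\"model.pt\", map_location=device)  # placeholder checkpoint\n    model.eval()\n\n    # then reuse the same model for many inference calls\n    with torch.no_grad():\n        for batch in my_batches_for_rank(rank):  # placeholder data source\n            output = model(batch.to(device))\n            # ... gather / save output ...\n\n    dist.destroy_process_group()\n\nif __name__ == \"__main__\":\n    main()\n```\n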
Thank you.\n\ncc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o", "url": "https://github.com/pytorch/pytorch/issues/138888", "state": "closed", "labels": [ "oncall: distributed" ], "created_at": "2024-10-25T03:52:20Z", "updated_at": "2024-11-27T01:05:31Z", "user": "lcf2610" }, { "repo": "pytorch/tutorials", "number": 3113, "title": "\ud83d\udca1 [REQUEST] - Update tutorials with device-generic APIs", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\n\nWe should use the latest device-generic APIs when they come out in 2.6 in all tutorials to improve readability.\n\n### Existing tutorials on this topic\n\nhttps://pytorch.org/tutorials/beginner/basics/buildmodel_tutorial is an example of one we should update. There is most likely more.\n\n### Additional context\n\ncc @guangyey that might be a good follow up to consider before 2.6", "url": "https://github.com/pytorch/tutorials/issues/3113", "state": "closed", "labels": [], "created_at": "2024-10-24T17:33:29Z", "updated_at": "2025-01-29T09:35:10Z", "comments": 3, "user": "albanD" }, { "repo": "pytorch/serve", "number": 3352, "title": "GPU not detected inside torchserve docker container", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI am trying to create a Docker image for my custom handler of diffusers. I can create the Docker image and then a Docker container from it, but the Docker container is not able to detect the GPU. I have used the official TorchServe Docker image from Docker Hub, but it still cannot use the GPU inside the container. I have also added --gpus all in the Docker container run command, but it still does not work. \r\n\r\nHow can I enable the GPU inside the container so that my custom handler can use it?\r\n\r\n### Error logs\r\n\r\n```\r\nWARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.\r\n2024-10-23T06:11:55,474 [DEBUG] main org.pytorch.serve.util.ConfigManager - xpu-smi not available or failed: Cannot run program \"xpu-smi\": error=2, No such file or directory\r\n2024-10-23T06:11:55,498 [WARN ] main org.pytorch.serve.util.ConfigManager - Your torchserve instance can access any URL to load models. 
When deploying to production, make sure to limit the set of allowed_urls in config.properties\r\n2024-10-23T06:11:55,513 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...\r\n2024-10-23T06:11:55,560 [INFO ] main org.pytorch.serve.metrics.configuration.MetricConfiguration - Successfully loaded metrics configuration from /home/venv/lib/python3.9/site-packages/ts/configs/metrics.yaml\r\n2024-10-23T06:11:55,750 [INFO ] main org.pytorch.serve.ModelServer -\r\nTorchserve version: 0.12.0\r\nTS Home: /home/venv/lib/python3.9/site-packages\r\nCurrent directory: /home/model-server\r\nTemp directory: /home/model-server/tmp\r\nMetrics config path: /home/venv/lib/python3.9/site-packages/ts/configs/metrics.yaml\r\nNumber of GPUs: 1\r\nNumber of CPUs: 12\r\nMax heap size: 1966 M\r\nPython executable: /home/venv/bin/python\r\nConfig file: /home/model-server/config.properties\r\nInference address: http://0.0.0.0:8080\r\nManagement address: http://0.0.0.0:8081\r\nMetrics address: http://0.0.0.0:8082\r\nModel Store: /home/model-server/model-store\r\nInitial Models: all\r\nLog dir: /home/model-server/logs\r\nMetrics dir: /home/model-server/logs\r\nNetty threads: 0\r\nNetty client threads: 0\r\nDefault workers per model: 1\r\nBlacklist Regex: N/A\r\nMaximum Response Size: 6553500\r\nMaximum Request Size: 6553500\r\nLimit Maximum Image Pixels: true\r\nPrefer direct buffer: false\r\nAllowed Urls: [file://.*|http(s)?://.*]\r\nCustom python dependency for model allowed: true\r\nEnable metrics API: true\r\nMetrics mode: LOG\r\nDisable system metrics: false\r\nWorkflow Store: /home/model-server/model-store\r\nCPP log config: N/A\r\nModel config: {\"text-to-image\": {\"1.0\": {\"defaultVersion\": true,\"marName\": \"text-to-image.mar\",\"minWorkers\": 1,\"maxWorkers\": 1,\"batchSize\": 4,\"maxBatchDelay\": 5000,\"responseTimeout\": 120}}}\r\nSystem metrics command: default\r\nModel API enabled: true\r\n2024-10-23T06:11:55,762 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Loading snapshot serializer plugin...\r\n2024-10-23T06:11:55,763 [DEBUG] main org.pytorch.serve.ModelServer - Loading models from model store: text-to-image.mar\r\n2024-10-23T06:12:10,680 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model text-to-image\r\n2024-10-23T06:12:10,681 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model text-to-image\r\n2024-10-23T06:18:40,296 [INFO ] main org.pytorch.serve.wlm.ModelManager - Installed custom pip packages for model text-to-image\r\n2024-10-23T06:18:40,297 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model text-to-image loaded.\r\n2024-10-23T06:18:40,297 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: text-to-image, count: 1\r\n2024-10-23T06:18:40,329 [DEBUG] W-9000-text-to-image_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.9/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9000, --metrics-config, /home/venv/lib/python3.9/site-packages/ts/configs/metrics.yaml]\r\n2024-10-23T06:18:40,334 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.\r\n2024-10-23T06:18:40,443 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080\r\n2024-10-23T06:18:40,444 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: 
EpollServerSocketChannel.\r\n2024-10-23T06:18:40,446 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://0.0.0.0:8081\r\n2024-10-23T06:18:40,446 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: EpollServerSocketChannel.\r\n2024-10-23T06:18:40,458 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://0.0.0.0:8082\r\nModel server started.\r\n2024-10-23T06:18:40,741 [WARN ] pool-3-thread-1 org.pytorch.serve.metrics.MetricCollector - worker pid is not available yet.\r\n2024-10-23T06:18:41,407 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:0.0|#Level:Hos", "url": "https://github.com/pytorch/serve/issues/3352", "state": "closed", "labels": [], "created_at": "2024-10-23T06:47:13Z", "updated_at": "2024-10-23T10:36:06Z", "comments": 1, "user": "dummyuser-123" }, { "repo": "pytorch/xla", "number": 8301, "title": "Provide debugging and troubleshooting tips to Pallas developer", "body": "## \ud83d\udcda Documentation\r\n\r\nPlease provide documentation on how to troubleshoot pallas issues. One place we can put this information is in this [Pallas doc](https://github.com/pytorch/xla/blob/master/docs/source/features/pallas.md)\r\n\r\ncc @mikegre-google to help review the upcoming PR", "url": "https://github.com/pytorch/xla/issues/8301", "state": "open", "labels": [ "documentation" ], "created_at": "2024-10-22T22:50:15Z", "updated_at": "2024-10-25T21:58:56Z", "comments": 0, "user": "miladm" }, { "repo": "pytorch/torchtitan", "number": 639, "title": "How to load previous distributed checkpoint after using FP8Linear + torch.compile?", "body": "FP8Linear + torch.compile is changing the parameters's name. \r\n\r\nIf I do convert to FP8Linear -> torch.compile -> fsdp2 wrapping -> load distributed ckpt, the parameters's names do not match with the ckpt we want to resume from. And it's not straightforward to change the parameters's names in the distributed ckpt.\r\n\r\nThus, my question is what's the expected solution for this workflow?\r\n\r\nThanks a lot! ", "url": "https://github.com/pytorch/torchtitan/issues/639", "state": "closed", "labels": [], "created_at": "2024-10-21T23:27:33Z", "updated_at": "2024-10-25T18:35:40Z", "user": "goldhuang" }, { "repo": "pytorch/ao", "number": 1132, "title": "What is the expected inference steps after I apply torchao in training?\u2028", "body": "Hello, I have integrated torchao to my training. But I don't think it's 100% clear what the inference should be like.\r\n\r\nShould I use the converted FP8 linear layer to do inference? Is delayed scaling supposed to work in inference?\r\nOr, should I use the original linear layer to do inference?\r\n\r\nThanks a lot in advance if you can help to clarify! ", "url": "https://github.com/pytorch/ao/issues/1132", "state": "closed", "labels": [ "float8" ], "created_at": "2024-10-21T22:19:57Z", "updated_at": "2024-12-09T18:59:50Z", "user": "goldhuang" }, { "repo": "pytorch/torchtitan", "number": 638, "title": "What is the expected inference steps after I apply torchao in training?", "body": "Hello, I have integrated torchao to my training. But I think it's not very clear what the inference should be like.\r\nShould I use the converted FP8 linear layer to do inference? Is delayed scaling supposed to work in inference?\r\nOr, should I use the original linear layer to do inference?\r\n\r\nThanks in advance if you can help to clarify! 
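\r\n\r\nTo make the two options I am asking about concrete, this is roughly what I mean (a plain-PyTorch sketch with placeholder names; whether either path is the intended torchao workflow is exactly my question):\r\n\r\n```python\r\n# Option A: run inference with the same converted (FP8) modules used in training\r\nconverted_model.eval()\r\nwith torch.no_grad():\r\n    out_a = converted_model(x)\r\n\r\n# Option B: load the trained weights back into an unconverted bf16/fp16 copy\r\nplain_model = build_model()  # placeholder constructor for the original model\r\n# strict=False because the converted model may carry extra buffers\r\n# (e.g. amax history when delayed scaling is enabled)\r\nplain_model.load_state_dict(converted_model.state_dict(), strict=False)\r\nplain_model.eval()\r\nwith torch.no_grad():\r\n    out_b = plain_model(x)\r\n```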
", "url": "https://github.com/pytorch/torchtitan/issues/638", "state": "open", "labels": [ "question" ], "created_at": "2024-10-21T22:19:06Z", "updated_at": "2024-10-22T03:33:39Z", "user": "goldhuang" }, { "repo": "pytorch/xla", "number": 8295, "title": "litepod and tpu sample not working anymore: https://cloud.google.com/tpu/docs/pytorch-pods", "body": "## \ud83d\udc1b Bug\r\n\r\nSample located here doesn't seem to work on tpu v5e16 pod (previous did as of 3 days ago) https://cloud.google.com/tpu/docs/pytorch-pods\r\n\r\n## To Reproduce\r\n\r\nFollowing the steps here: https://cloud.google.com/tpu/docs/pytorch-pods\r\n\r\nBefore running the example:\r\n1. set up SSH key pair using: ssh-keygen -t rsa -f .ssh/google_compute_engine -C user\r\n2. added SSH to project meta via gcp console\r\n3. propagate key to tpu vm: \r\neval `ssh-agent -s`\r\nssh-add ~/.ssh/google_compute_engine\r\n\r\nOnly other change than what is in the sample is changing tpu to v5litepod-16.\r\n\r\nThe vm is created and all looks correct, but the process hangs. This occurs when getting the xla device. Output on the error is below. Thank you very much for the help! Exact same procedure was working consistently until yesterday. \r\n\r\ngcloud compute tpus tpu-vm ssh tpu-vm-sample --zone=us-central1-a --project=sample_tpu_project --worker=all --command=\"PJRT_DEVICE=TPU python3 ~/xla/test/test_train_mp_imagenet.py \\\r\n --fake_data \\\r\n --model=resnet50 \\\r\n --num_epochs=1 2>&1 | tee ~/logs.txt\"\r\nUsing ssh batch size of 1. Attempting to SSH into 1 nodes with a total of 4 workers.\r\nSSH: Attempting to connect to worker 0...\r\nSSH: Attempting to connect to worker 1...\r\nSSH: Attempting to connect to worker 2...\r\nSSH: Attempting to connect to worker 3...\r\nconcurrent.futures.process._RemoteTraceback:\r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.10/concurrent/futures/process.py\", line 246, in _process_worker\r\n r = call_item.fn(*call_item.args, **call_item.kwargs)\r\n File \"/usr/lib/python3.10/concurrent/futures/process.py\", line 205, in _process_chunk\r\n return [fn(*args) for args in chunk]\r\n File \"/usr/lib/python3.10/concurrent/futures/process.py\", line 205, in <listcomp>\r\n return [fn(*args) for args in chunk]\r\n File \"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/runtime.py\", line 95, in wrapper\r\n return fn(*args, **kwargs)\r\n File \"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py\", line 59, in _run_thread_per_device\r\n initializer_fn(local_rank, local_world_size)\r\n File \"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/runtime.py\", line 95, in wrapper\r\n return fn(*args, **kwargs)\r\n File \"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py\", line 125, in initialize_multiprocess\r\n devices = xm.get_xla_supported_devices()\r\n File \"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/core/xla_model.py\", line 99, in get_xla_supported_devices\r\n devices = torch_xla._XLAC._xla_get_devices()\r\nRuntimeError: Bad StatusOr access: UNKNOWN: TPU initialization failed: Worker failed to join a slice within 15m\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/temp_user/xla/test/test_train_mp_imagenet.py\", line 381, in <module>\r\n xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=FLAGS.num_cores)\r\n File 
\"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/runtime.py\", line 95, in wrapper\r\n return fn(*args, **kwargs)\r\n File \"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 38, in spawn\r\n return pjrt.spawn(fn, nprocs, start_method, args)\r\n File \"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py\", line 214, in spawn\r\n run_multiprocess(spawn_fn, start_method=start_method)\r\n File \"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/runtime.py\", line 95, in wrapper\r\n return fn(*args, **kwargs)\r\n File \"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py\", line 174, in run_multiprocess\r\n replica_results = list(\r\n File \"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py\", line 175, in <genexpr>\r\n itertools.chain.from_iterable(\r\n File \"/usr/lib/python3.10/concurrent/futures/process.py\", line 570, in _chain_from_iterable_of_lists\r\n for element in iterable:\r\n File \"/usr/lib/python3.10/concurrent/futures/_base.py\", line 621, in result_iterator\r\n yield _result_or_cancel(fs.pop())\r\n File \"/usr/lib/python3.10/concurrent/futures/_base.py\", line 319, in _result_or_cancel\r\n return fut.result(timeout)\r\n File \"/usr/lib/python3.10/concurrent/futures/_base.py\", line 458, in result\r\n return self.__get_result()\r\n File \"/usr/lib/python3.10/concurrent/futures/_base.py\", line 403, in __get_result\r\n raise self._exception\r\nRuntimeError: Bad StatusOr access: UNKNOWN: TPU initialization failed: Worker failed to join a slice within 15m\r\nconcurrent.futures.process._RemoteTraceback:\r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.10/concurrent/futures/process.py\", line 246, in _process_worker\r\n r = call_item.fn(*call_item.args, **call_item.kwargs)\r\n File \"/usr/lib/python3.10/concurrent/futures/process.py\", line 205, in _process_chunk\r\n return [fn(*args) fo", "url": "https://github.com/pytorch/xla/issues/8295", "state": "closed", "labels": [], "created_at": "2024-10-21T16:43:31Z", "updated_at": "2024-10-22T00:54:24Z", "comments": 8, "user": "ttdd11" }, { "repo": "pytorch/torchtitan", "number": 636, "title": "DDP + Pipeline parallelism", "body": "For fine tuning/training with `PP + DDP`, is there documentation or modification that can be done to achieve this using torchtitan? \r\n\r\nThe following check in `parallelize_llama.py` was the point of error when trying the configuration on my end.\r\n`if world_mesh.ndim > 1:\r\n raise RuntimeError(\"DDP has not supported > 1D parallelism\")`\r\n\r\nThe use case I am imagining is: for a host with multiple GPUs that is responsible for a particular pipeline stage (part of model), as long as there is enough memory `DDP` might be a viable option.\r\n", "url": "https://github.com/pytorch/torchtitan/issues/636", "state": "closed", "labels": [ "question" ], "created_at": "2024-10-20T12:36:55Z", "updated_at": "2024-11-08T00:03:05Z", "user": "prathameshtd" }, { "repo": "pytorch/torchtitan", "number": 635, "title": "data shuffling", "body": "I understand that the current version of the code doesn't shuffle the data during training, _i.e._ examples are consumed in order in each rank (in fact, there's a note to that effect [here](https://github.com/pytorch/torchtitan/blob/0edd2fb36c8c3468086986efd049e9bb0ff3414e/torchtitan/datasets/hf_datasets.py#L99)). 
I'm kind of new to large-scale LLM training, so I was just wondering if this is common practice in LLM training. It seems not ideal potentially, since consecutive gradients will likely be more correlated than under random shuffling.\r\n\r\nIf I wanted to randomly shuffle the data during training, how could I go about doing that? I thought about using `ds.shuffle()` before splitting the dataset by node [here](https://github.com/pytorch/torchtitan/blob/0edd2fb36c8c3468086986efd049e9bb0ff3414e/torchtitan/datasets/hf_datasets.py#L101C22-L101C43), but that would (pseudo-)shuffle the data rows, which doesn't seem quite right, since I think we really want to shuffle concatenated `seq_len` long chunks of text instead.\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/torchtitan/issues/635", "state": "closed", "labels": [ "question" ], "created_at": "2024-10-20T03:39:35Z", "updated_at": "2024-10-24T02:08:43Z", "user": "eminorhan" }, { "repo": "pytorch/tutorials", "number": 3100, "title": "\ud83d\udca1 [REQUEST] - Add minGRU Tutorial for Efficient Sequence Modeling ", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\r\n\r\nI propose adding a tutorial on implementing and using minGRU (minimal Gated Recurrent Unit) to the PyTorch tutorials. This addition would provide valuable insights into efficient sequence modeling techniques for the PyTorch community.\r\n\r\n\r\n- Efficiency: Up to 1324x faster than standard GRU for 4096-token sequences, with comparable accuracy.\r\n- Competitive Performance: Matches state-of-the-art models like Mamba in language modeling and reinforcement learning.\r\n- Learning Tool: Bridges simple RNNs and complex attention-based models, aiding learner progression.\r\n\r\n### Benefits for PyTorch users:\r\n\r\n- Efficient Sequence Processing: Implement and train RNNs for long sequences, crucial for modern NLP and time series analysis.\r\n- Parallel Training Skills: Learn to leverage parallel computing for RNN training, applicable to various deep learning tasks.\r\n- Versatile Solution: Practical alternative to traditional RNNs and complex models, balancing efficiency and performance.\r\n\r\n\r\n### Paper \r\n[were rnns all we need](https://arxiv.org/pdf/2410.01201)\r\n\r\n\r\n\r\n### Existing tutorials on this topic\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\nIf you guys like this idea, I'm ready to jump in! I could have a PR ready as soon as tomorrow. \r\nI'm thinking of contributing a tutorial on how to use or train minGRU for language modeling \r\n\r\n@svekars @albanD ", "url": "https://github.com/pytorch/tutorials/issues/3100", "state": "closed", "labels": [], "created_at": "2024-10-19T16:35:32Z", "updated_at": "2025-04-16T22:02:23Z", "comments": 1, "user": "dame-cell" }, { "repo": "pytorch/data", "number": 1344, "title": "Delete datapipes and dataloader 2 documentation", "body": "### \ud83d\udcda The doc issue\n\nSince these are gone on main, we should delete nightly documentation as well. 
Basically they need to disappear from here: https://pytorch.org/data/main/ \n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/1344", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-10-18T23:14:59Z", "updated_at": "2024-10-19T20:29:46Z", "comments": 0, "user": "andrewkho" }, { "repo": "pytorch/pytorch", "number": 138280, "title": "Refactor FlexibleLayout to separate out \"this stride can be changed\" and \"how this buffer is allocated can be changed\" ", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nCurrently, we have two layouts:\r\n- FixedLayout\r\n- FlexibleLayout\r\n\r\nWhere FixedLayout basically means \"We already decided the layout, don't change it\" while FlexibleLayout means \"we are free to change this layout\".\r\n\r\nHowever, I think there are actually two different components of \"decided this layout\":\r\n\r\n1. What is the output **stride** of this layout?\r\n2. Who allocates the actual buffer for this tensor?\r\n\r\nI believe conflating these causes some problems:\r\n\r\n- For inductor template tuning, we care about the **stride** of the output layout, but we don't care who allocated the buffer (e.g. if it's just a view into a larger concat buffer). And Elias points out that he noticed this too here: https://github.com/pytorch/pytorch/pull/132554#issue-2445835622\r\n- For Yifu's recent PR (https://github.com/pytorch/pytorch/pull/138029), he cares about \"who allocates the buffer for this layout\", but he doesn't care about \"what is the actual stride of this layout\".\r\n\r\nMy proposal is that we scrap our current subclasses of Layout and refactor it into:\r\n```\r\nclass Layout:\r\n stride: FlexibleStride or FixedStride\r\n allocator: NonOwningAllocator (view into another allocation) or Flexible or SymmMem\r\n```\r\n\r\ncc: @eellison @yifuwang @shunting314 @jansel \n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @aakhundov @coconutruben @jataylo @ezyang @yf225 @chenyang78 @ColinPeppler @desertfire", "url": "https://github.com/pytorch/pytorch/issues/138280", "state": "open", "labels": [ "triaged", "oncall: pt2", "module: inductor", "internal ramp-up task" ], "created_at": "2024-10-17T23:10:36Z", "updated_at": "2025-12-02T17:11:15Z", "user": "Chillee" }, { "repo": "pytorch/pytorch", "number": 138179, "title": "How to resolve the libfmt.a conflict in React Native.", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI want to develop a React Native module that primarily integrates LibTorch and includes some methods for loading models and making predictions.\r\n\r\nI created the module using `npx create-expo-module` and then proceeded with the development. \r\n\r\nWhen I run `pod install `in ios, it prompts me that `\"The 'Pods-expoptexample' target has libraries with conflicting names: libfmt.a.\" `This issue does not occur when I build without installing LibTorch. 
I would like to know what I should do to avoid this problem.\n\n### Alternatives\n\nI tried the following methods, but none of them resolved the issue:\r\n\r\n1.I added the following code to the Podfile to exclude libfmt.a:\r\n\r\n```ruby\r\ninstaller.pods_project.targets.each do |target|\r\n target.build_configurations.each do |config|\r\n config.build_settings['EXCLUDED_SOURCE_FILE_NAMES'] ||= []\r\n config.build_settings['EXCLUDED_SOURCE_FILE_NAMES'] << 'libfmt.a'\r\n end\r\n end\r\n```\r\n2.I tried `pod deintegrate` and then `pod install`.\r\n\r\n\n\n### Additional context\n\n**this is my Podfile**\r\n```ruby\r\nrequire File.join(File.dirname(`node --print \"require.resolve('expo/package.json')\"`), \"scripts/autolinking\")\r\nrequire File.join(File.dirname(`node --print \"require.resolve('react-native/package.json')\"`), \"scripts/react_native_pods\")\r\n\r\nrequire 'json'\r\npodfile_properties = JSON.parse(File.read(File.join(__dir__, 'Podfile.properties.json'))) rescue {}\r\n\r\nENV['RCT_NEW_ARCH_ENABLED'] = podfile_properties['newArchEnabled'] == 'true' ? '1' : '0'\r\nENV['EX_DEV_CLIENT_NETWORK_INSPECTOR'] = podfile_properties['EX_DEV_CLIENT_NETWORK_INSPECTOR']\r\n\r\nuse_autolinking_method_symbol = ('use' + '_native' + '_modules!').to_sym\r\norigin_autolinking_method = self.method(use_autolinking_method_symbol)\r\nself.define_singleton_method(use_autolinking_method_symbol) do |*args|\r\n if ENV['EXPO_UNSTABLE_CORE_AUTOLINKING'] == '1'\r\n Pod::UI.puts('Using expo-modules-autolinking as core autolinking source'.green)\r\n config_command = [\r\n 'node',\r\n '--no-warnings',\r\n '--eval',\r\n 'require(require.resolve(\\'expo-modules-autolinking\\', { paths: [require.resolve(\\'expo/package.json\\')] }))(process.argv.slice(1))',\r\n 'react-native-config',\r\n '--json',\r\n '--platform',\r\n 'ios'\r\n ]\r\n origin_autolinking_method.call(config_command)\r\n else\r\n origin_autolinking_method.call()\r\n end\r\nend\r\n\r\nplatform :ios, podfile_properties['ios.deploymentTarget'] || '13.4'\r\ninstall! 'cocoapods',\r\n :deterministic_uuids => false\r\n\r\nprepare_react_native_project!\r\n\r\ntarget 'expoptexample' do\r\n use_expo_modules!\r\n config = use_native_modules!\r\n\r\n use_frameworks! :linkage => podfile_properties['ios.useFrameworks'].to_sym if podfile_properties['ios.useFrameworks']\r\n use_frameworks! 
:linkage => ENV['USE_FRAMEWORKS'].to_sym if ENV['USE_FRAMEWORKS']\r\n\r\n use_react_native!(\r\n :path => config[:reactNativePath],\r\n :hermes_enabled => podfile_properties['expo.jsEngine'] == nil || podfile_properties['expo.jsEngine'] == 'hermes',\r\n # An absolute path to your application root.\r\n :app_path => \"#{Pod::Config.instance.installation_root}/..\",\r\n :privacy_file_aggregation_enabled => podfile_properties['apple.privacyManifestAggregationEnabled'] != 'false',\r\n )\r\n\r\n post_install do |installer|\r\n react_native_post_install(\r\n installer,\r\n config[:reactNativePath],\r\n :mac_catalyst_enabled => false,\r\n :ccache_enabled => podfile_properties['apple.ccacheEnabled'] == 'true',\r\n )\r\n\r\n # This is necessary for Xcode 14, because it signs resource bundles by default\r\n # when building for devices.\r\n installer.target_installation_results.pod_target_installation_results\r\n .each do |pod_name, target_installation_result|\r\n target_installation_result.resource_bundle_targets.each do |resource_bundle_target|\r\n resource_bundle_target.build_configurations.each do |config|\r\n config.build_settings['CODE_SIGNING_ALLOWED'] = 'NO'\r\n end\r\n end\r\n end\r\n\r\n # Exclude libfmt.a to avoid naming conflicts\r\n installer.pods_project.targets.each do |target|\r\n target.build_configurations.each do |config|\r\n config.build_settings['EXCLUDED_SOURCE_FILE_NAMES'] ||= []\r\n config.build_settings['EXCLUDED_SOURCE_FILE_NAMES'] << 'libfmt.a'\r\n end\r\n end\r\n end\r\n\r\n post_integrate do |installer|\r\n begin\r\n expo_patch_react_imports!(installer)\r\n rescue => e\r\n Pod::UI.warn e\r\n end\r\n end\r\nend\r\n```\r\n\r\n### this is my .podspec file\r\n```ruby\r\nrequire 'json'\r\n\r\npackage = JSON.parse(File.read(File.join(__dir__, '..', 'package.json')))\r\n\r\nPod::Spec.new do |s|\r\n s.name = 'ExpoPt'\r\n s.version = package['version']\r\n s.summary = package['description']\r\n s.description = package['description']\r\n s.license = package['license']\r\n s.author = package['author']\r\n s.homepage = package['homepage']\r\n s.platforms = { :ios => '13.4',", "url": "https://github.com/pytorch/pytorch/issues/138179", "state": "closed", "labels": [ "triage review" ], "created_at": "2024-10-17T06:37:21Z", "updated_at": "2024-10-21T17:35:00Z", "user": "wangyujiaoflag" }, { "repo": "pytorch/xla", "number": 8270, "title": "Clarify that torch_xla2 is only recommended for inference", "body": "## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content is an issue. -->\r\nMy understanding is that torch_xla2 is only recommended for inference. Address this in the [README](https://github.com/pytorch/xla/tree/master/experimental/torch_xla2)", "url": "https://github.com/pytorch/xla/issues/8270", "state": "closed", "labels": [ "question", "documentation" ], "created_at": "2024-10-17T04:53:36Z", "updated_at": "2025-02-27T13:08:45Z", "user": "cloudchrischan" }, { "repo": "pytorch/pytorch", "number": 138073, "title": "`export()` fails for `full((n,), v)` but succeeds for `ones((n,)) * v` where `v` is dynamic", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nWhen using `torch.full((n,), v)` to create a tensor with a dynamic value, one receives a `Pending unbacked symbols` error. 
A simple workaround is to use `torch.ones((n,)) * v`, but unless I'm missing something the former should work just as well.\r\n\r\nBelow is a minimal example to reproduce the error:\r\n\r\n```python\r\nimport torch\r\nimport torch._dynamo\r\nimport torch.export\r\n\r\nclass FullConstNDynamicV(torch.nn.Module):\r\n def forward(self, x):\r\n n = 7\r\n v = x[0, 0]\r\n out = torch.full((n,), v)\r\n # Replacing the above line with the following will fix export 'Pending unbacked symbols' error:\r\n # out = torch.ones((n,)) * v\r\n\r\n return out\r\n\r\ninput_tensor = torch.ones(1, 100)\r\ntorch.export.export(FullConstNDynamicV(), (input_tensor, ))\r\n```\r\n\r\nNote that an example that uses a dynamic value for `n` but non-dynamic `v` does work. I have [a sample notebook available](https://colab.research.google.com/drive/1L5lNvDs94tLj-qPwUzT52IADWY0WacS_?usp=sharing) to review the following cases:\r\n\r\n```\r\nOK: OnesConstNDynamicV\r\nError: FullConstNDynamicV\r\nOK: OnesDynamicNConstV\r\nOK: FullDynamicNConstV\r\nOK: OnesDynamicNDynamicV\r\nError: FullDynamicNDynamicV\r\n```\r\n\r\nWhere OK means the corresponding code was export OK, and Error if it produced an error... It is expected that all modules should be export OK.\r\n\r\n### Versions\r\n\r\nRan in Google Colab. \r\n\r\n```\r\nPyTorch version: 2.4.1+cu121\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.3 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: 14.0.0-1ubuntu1.1\r\nCMake version: version 3.30.4\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)\r\nPython platform: Linux-6.1.85+-x86_64-with-glibc2.35\r\nIs CUDA available: False\r\nCUDA runtime version: 12.2.140\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 46 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 2\r\nOn-line CPU(s) list: 0,1\r\nVendor ID: GenuineIntel\r\nModel name: Intel(R) Xeon(R) CPU @ 2.20GHz\r\nCPU family: 6\r\nModel: 79\r\nThread(s) per core: 2\r\nCore(s) per socket: 1\r\nSocket(s): 1\r\n```\n\ncc @ezyang @chauhang @penguinwu @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4", "url": "https://github.com/pytorch/pytorch/issues/138073", "state": "closed", "labels": [ "oncall: pt2", "module: dynamic shapes", "module: dynamo", "oncall: export" ], "created_at": "2024-10-16T13:29:52Z", "updated_at": "2025-03-26T17:56:33Z", "user": "kwikwag" }, { "repo": "pytorch/torchtitan", "number": 620, "title": "Is there way to offload training memory to DRAM (using FSDP2?) 
for training Llama3-8B with torchtitan?", "body": "I am training Llama3-8B using 2 RTX A6000ada 48GB, but got OOM. Is there way to offload training memory to DRAM (using FSDP2?) for training Llama3-8B with torchtitan?\r\n\r\nThanks!\r\n\r\n***Error message:\r\ntorch.OutOfMemoryError: CUDA out of memory. Tried to allocate 112.00 MiB. GPU 0 has a total capacity of 47.48 GiB of which 92.81 MiB is free. Including non-PyTorch memory, this process has 46.71 GiB memory in use. Of the allocated memory 45.56 GiB is allocated by PyTorch, and 448.27 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\r\n\r\n***Here is my training config:\r\n\r\n# torchtitan Config.toml\r\n# NOTE: this toml config is a preset for 64 A100 GPUs.\r\n\r\n[job]\r\ndump_folder = \"./outputs\"\r\ndescription = \"Llama 3 8B training\"\r\n\r\n[profiling]\r\nenable_profiling = true\r\nsave_traces_folder = \"profile_trace\"\r\nprofile_freq = 100\r\n\r\n[metrics]\r\nlog_freq = 10\r\nenable_tensorboard = true\r\nsave_tb_folder = \"tb\"\r\n\r\n[model]\r\nname = \"llama3\"\r\nflavor = \"8B\"\r\nnorm_type = \"rmsnorm\" # layernorm / np_layernorm / rmsnorm / fused_rmsnorm\r\ntokenizer_path = \"./torchtitan/datasets/tokenizer/original/tokenizer.model\"\r\n\r\n[optimizer]\r\nname = \"AdamW\"\r\nlr = 3e-4\r\n\r\n[training]\r\nbatch_size = 2 #1\r\nseq_len = 256 #512 #8192\r\nwarmup_steps = 200 # lr scheduler warm up\r\nmax_norm = 1.0 # grad norm clipping\r\nsteps = 1000\r\ndata_parallel_replicate_degree = 1 #1\r\ndata_parallel_shard_degree = -1 #-1\r\ntensor_parallel_degree = 2 #1\r\ncompile = true\r\ndataset = \"c4\"\r\n\r\n[experimental]\r\npipeline_parallel_degree = 1 #1\r\nenable_async_tensor_parallel = true\r\n\r\n[checkpoint]\r\nenable_checkpoint = false #false\r\nfolder = \"checkpoint\"\r\ninterval_type = \"steps\"\r\ninterval = 500\r\nmodel_weights_only = false\r\nexport_dtype = \"bfloat16\" #32\r\nasync_mode = \"disabled\" # [\"disabled\", \"async\", \"async_with_pinned_mem\"]\r\n\r\n[activation_checkpoint]\r\nmode = 'selective' # ['none', 'selective', 'full']\r\nselective_ac_option = 'op' # 'int' = ac every positive int layer or 'op', ac based on ops policy\r\n\r\n[float8]\r\nenable_float8_linear = true\r\nenable_fsdp_float8_all_gather = true\r\nprecompute_float8_dynamic_scale_for_fsdp = true\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/torchtitan/issues/620", "state": "closed", "labels": [ "question" ], "created_at": "2024-10-15T19:54:17Z", "updated_at": "2024-10-28T22:27:50Z", "user": "0781532" }, { "repo": "pytorch/serve", "number": 3348, "title": "Getting started guide client samples broken ?", "body": "### \ud83d\udc1b Describe the bug\n\nfollowing the getting started guide:\r\n\r\nhttps://github.com/pytorch/serve/blob/master/docs/getting_started.md\r\n\r\ni get following error messages when trying to run the client examples.\r\nAm I doing something wrong?\n\n### Error logs\n\n```\r\nserve$ python -m grpc_tools.protoc --proto_path=frontend/server/src/main/resources/proto/ --python_out=ts_scripts --grpc_python_out=ts_scripts frontend/server/src/main/resources/proto/inference.proto frontend/server/src/main/resources/proto/management.proto\r\ngoogle/rpc/status.proto: File not found.\r\ninference.proto:6:1: Import \"google/rpc/status.proto\" was not found or had 
errors.\r\ninference.proto:32:14: \"google.rpc.Status\" is not defined.\r\n```\r\n\r\nand\r\n\r\n```\r\nserve$ python ts_scripts/torchserve_grpc_client.py infer densenet161 examples/image_classifier/kitten.jpg\r\nTraceback (most recent call last):\r\n File \"[..]serve/ts_scripts/torchserve_grpc_client.py\", line 7, in <module>\r\n import inference_pb2\r\nModuleNotFoundError: No module named 'inference_pb2'\r\n\r\n```\n\n### Installation instructions\n\nfollowed the getting started guide:\r\nhttps://github.com/pytorch/serve/blob/master/docs/getting_started.md\n\n### Model Packaging\n\nfrom getting started guide\n\n### config.properties\n\nfrom getting started guide\n\n### Versions\n\n------------------------------------------------------------------------------------------\r\nEnvironment headers\r\n------------------------------------------------------------------------------------------\r\nTorchserve branch: \r\n\r\ntorchserve==0.12.0\r\ntorch-model-archiver==0.12.0\r\n\r\nPython version: 3.10 (64-bit runtime)\r\nPython executable: /home/nikste/workspace-abnoba/serving_test/venv/bin/python\r\n\r\nVersions of relevant python libraries:\r\ncaptum==0.6.0\r\nnumpy==1.24.3\r\nnvgpu==0.10.0\r\npillow==10.3.0\r\npsutil==5.9.8\r\nrequests==2.32.0\r\ntorch==2.4.0+cu121\r\ntorch-model-archiver==0.12.0\r\ntorch-workflow-archiver==0.2.15\r\ntorchaudio==2.4.0+cu121\r\ntorchserve==0.12.0\r\ntorchvision==0.19.0+cu121\r\nwheel==0.42.0\r\ntorch==2.4.0+cu121\r\n**Warning: torchtext not present ..\r\ntorchvision==0.19.0+cu121\r\ntorchaudio==2.4.0+cu121\r\n\r\nJava Version:\r\n\r\n\r\nOS: Ubuntu 22.04.5 LTS\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: 14.0.0-1ubuntu1.1\r\nCMake version: version 3.30.3\r\n\r\nEnvironment:\r\nlibrary_path (LD_/DYLD_): /usr/local/cuda-11.8/lib64:\r\n\r\n\n\n### Repro instructions\n\ngetting started guide\n\n### Possible Solution\n\nmissing packages in the requirements?", "url": "https://github.com/pytorch/serve/issues/3348", "state": "open", "labels": [], "created_at": "2024-10-15T16:47:42Z", "updated_at": "2024-12-26T04:00:44Z", "comments": 1, "user": "nikste" }, { "repo": "pytorch/torchtitan", "number": 619, "title": "Question about torch.compile has better throughput with 128-GPUs than 8-GPUs", "body": "Thank you for publishing the paper. I hope to get your answers to the following questions.\uff1a\r\nNormally, the training speed will decline as the number of GPUs increases. However, in the paper, with the torch.compile technology, the speed with 128 GPUs is better than that with 8 GPUs.\r\n![compile](https://github.com/user-attachments/assets/d6ea4dc3-6dd1-4286-a5d4-aee754b22c55)\r\n", "url": "https://github.com/pytorch/torchtitan/issues/619", "state": "closed", "labels": [ "question" ], "created_at": "2024-10-15T09:14:25Z", "updated_at": "2024-11-19T21:37:23Z", "user": "dz1iang" }, { "repo": "pytorch/torchchat", "number": 1297, "title": "Can torchat call /use the models already downloaded under Ollama?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nCan torchat pick up the models that have already been downloaded by Ollama. 
Is there a way to use them without downloading them again with a hf user id?\r\n\r\n`PS C:\\Users\\siva> ollama list\r\n\r\nNAME ID SIZE \r\nqwen2.5-coder:latest 87098ba7390d 4.7 GB \r\nllama3.2:latest a80c4f17acd5 2.0 GB \r\n`\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_", "url": "https://github.com/pytorch/torchchat/issues/1297", "state": "closed", "labels": [], "created_at": "2024-10-12T16:35:12Z", "updated_at": "2024-10-15T15:22:03Z", "comments": 1, "user": "sivaramn" }, { "repo": "pytorch/torchtitan", "number": 610, "title": "[Compile] Understand why FSDP2 saves both SDPA out and wo in for bwd", "body": "With FSDP2 and transformer block compile, `torch.compile` saves both the SDPA output and the contiguous transposed tensor for backward:\r\nhttps://github.com/pytorch/torchtitan/blob/7e93822e402c3f470bb7ddb925bbc43701bf8573/torchtitan/models/llama/model.py#L210-L213\r\nHowever, with simpleFSDP with full model compile, `torch.compile` only saves the SDPA output. This means that FSDP2 saves an extra `(bs, seq_len, dim)` tensor per transformer block.\r\n\r\nTraditionally, SDPA output is required for SDPA backward, and the input to `wo` is required for the `wo` backward. However, it may be profitable memory-wise to recompute one from the other (e.g. recompute SDPA output from undo-ing the transpose of `wo` input).\r\n\r\nOne question is why the activations saved for backward differ between simple FSDP with full model compile vs. FSDP2 with transformer block compile.", "url": "https://github.com/pytorch/torchtitan/issues/610", "state": "open", "labels": [ "question", "module: torch.compile" ], "created_at": "2024-10-11T15:29:04Z", "updated_at": "2025-12-10T18:30:41Z", "user": "awgu" }, { "repo": "pytorch/ao", "number": 1057, "title": "How to use float8 with SM89 hardware - i.e. 
NVIDIA A6000 ADA?", "body": "I am running torchao: 0.5 and torch: '2.5.0a0+b465a5843b.nv24.09' on an NVIDIA A6000 ADA card (sm89) which supports FP8.\r\n\r\nI ran the generate.py code from the benchmark:\r\n\r\n python generate.py --checkpoint_path $CHECKPOINT_PATH --compile --compile_prefill --write_result /root/benchmark_results__baseline.txt\r\n\r\n> Average tokens/sec: 57.01\r\n> Average Bandwidth: 855.74 GB/s\r\n> Peak Memory Usage: 16.19 GB\r\n> Model Size: 15.01 GB\r\n\r\n> 20241011143042, tok/s= 57.01, mem/s= 855.74 GB/s, peak_mem=16.19 GB, model_size=15.01 GB quant: None, mod: Meta-Llama-3-8B, kv_quant: False, compile: True, compile_prefill: True, dtype: torch.bfloat16, device: cuda repro: python generate.py --checkpoint_path /models/Meta-Llama-3-8B/consolidated.00.pth --device cuda --precision torch.bfloat16 --compile --compile_prefill --num_samples 5 --max_new_tokens 200 --top_k 200 --temperature 0.8\r\n\r\n python generate.py --checkpoint_path $CHECKPOINT_PATH --compile --compile_prefill --quantization float8wo --write_result /root/benchmark_results__float8wo.txt`\r\n\r\n> Average tokens/sec: 57.00\r\n> Average Bandwidth: 855.62 GB/s\r\n> Peak Memory Usage: 16.19 GB\r\n> Model Size: 15.01 GB\r\n\r\n> 20241011143316, tok/s= 57.00, mem/s= 855.62 GB/s, peak_mem=16.19 GB, model_size=15.01 GB quant: float8wo, mod: Meta-Llama-3-8B, kv_quant: False, compile: True, compile_prefill: True, dtype: torch.bfloat16, device: cuda repro: python generate.py\r\n--quantization float8wo --checkpoint_path /models/Meta-Llama-3-8B/consolidated.00.pth --device cuda --precision torch.bfloat16 --compile --compile_prefill --num_samples 5 --max_new_tokens 200 --top_k 200 --temperature 0.8\r\n\r\nThe `float8wo` flag does not appear to be doing anything. Am I missing a step? Thanks!", "url": "https://github.com/pytorch/ao/issues/1057", "state": "closed", "labels": [ "question", "float8" ], "created_at": "2024-10-11T14:40:38Z", "updated_at": "2025-01-24T18:24:46Z", "user": "vgoklani" }, { "repo": "pytorch/pytorch", "number": 137779, "title": "Flex attention with mask depending on queries and keys lengths (or how to implement `causal_lower_right` masking)", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI tried to implement the `causal_lower_right` masking in flex attention. This requires the masking function to know the difference in lengths of keys and queries:\r\n```python\r\nQL = query.size(2)\r\nKL = key.size(2)\r\ndef causal_mask(b, h, q_idx, kv_idx):\r\n return q_idx - QL >= kv_idx - KL\r\n```\r\n\r\nIt is easy to use it with flex attention and it works on the first call to flex attention (regardless of using `torch.compile` on it or not). However, it fails on a call with differently shaped `query` and `key` matrices.\r\n\r\nI don't know if the usage of queries and keys shape is allowed. If it is, then the second call shouldn't fail. 
If it is not allowed, then how can one implement `causal_lower_right` masking, which requires knowing the shapes?\r\n\r\nFull reproduction code:\r\n```python\r\n\r\nimport torch\r\nfrom torch.nn.attention.flex_attention import create_block_mask, flex_attention\r\n\r\ndef causal_attention(\r\n query,\r\n key,\r\n value,\r\n):\r\n # all shapes Bs x Nh x Len x Dim\r\n B = query.size(0)\r\n H = query.size(1)\r\n QL = query.size(2)\r\n KL = key.size(2)\r\n\r\n def causal_mask(b, h, q_idx, kv_idx):\r\n return q_idx - QL >= kv_idx - KL\r\n\r\n block_mask = create_block_mask(causal_mask, B, H, QL, KL, device=query.device)\r\n return flex_attention(\r\n query,\r\n key,\r\n value,\r\n None,\r\n block_mask,\r\n )\r\n\r\n\r\ndef test(ql, kl):\r\n bs = 32\r\n nh = 8\r\n hd = 64\r\n q = torch.rand(\r\n bs, nh, ql, hd, dtype=torch.bfloat16, device=\"cuda\", requires_grad=True\r\n )\r\n k = torch.rand(\r\n bs, nh, kl, hd, dtype=torch.bfloat16, device=\"cuda\", requires_grad=True\r\n )\r\n v = torch.rand(\r\n bs, nh, kl, hd, dtype=torch.bfloat16, device=\"cuda\", requires_grad=True\r\n )\r\n causal_attention(q, k, v)\r\n print(f\"test({ql}, {kl}) worked\")\r\n\r\n\r\nprint(\"torch.__version__\", torch.__version__)\r\n\r\n# First calls always succeed.\r\ntest(512, 512)\r\ntest(512, 512)\r\n# These calls fail, unless the above are commented out. \r\ntest(512, 1024)\r\ntest(512, 1024)\r\ntest(512, 512)\r\n```\r\n\r\nTraceback:\r\n```\r\ntorch.__version__ 2.6.0.dev20241009\r\ntest(512, 512) worked\r\ntest(512, 512) worked\r\nTraceback (most recent call last):\r\n File \"/home/janek/projects/llm_ng/flex_trouble.py\", line 52, in <module>\r\n test(512, 1024)\r\n File \"/home/janek/projects/llm_ng/flex_trouble.py\", line 42, in test\r\n causal_attention(q, k, v)\r\n File \"/home/janek/projects/llm_ng/flex_trouble.py\", line 20, in causal_attention\r\n return flex_attention(\r\n File \"/mnt/scratch/janek/pixi/babydragon-12050631407633866471/envs/nightly/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py\", line 1113, in flex_attention\r\n out, lse = torch.compile(\r\n File \"/mnt/scratch/janek/pixi/babydragon-12050631407633866471/envs/nightly/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 487, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/mnt/scratch/janek/pixi/babydragon-12050631407633866471/envs/nightly/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py\", line 1100, in _flex_attention_hop_wrapper\r\n def _flex_attention_hop_wrapper(*args, **kwargs):\r\n File \"/mnt/scratch/janek/pixi/babydragon-12050631407633866471/envs/nightly/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 654, in _fn\r\n return fn(*args, **kwargs)\r\n File \"<eval_with_key>.9\", line 28, in forward\r\n File \"/mnt/scratch/janek/pixi/babydragon-12050631407633866471/envs/nightly/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py\", line 113, in __call__\r\n raise RuntimeError(\"Other buffers must be tensors.\")\r\nRuntimeError: Other buffers must be tensors.\r\n```\r\n\r\n\r\n### Versions\r\n\r\nCollecting environment information...\r\nPyTorch version: 2.6.0.dev20241009\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.4\r\nROCM used to build PyTorch: N/A\r\n\r\ncc @zou3519 @bdhirsh @penguinwu @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng @ezyang @chauhang @ydwu4", "url": "https://github.com/pytorch/pytorch/issues/137779", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: pt2-dispatcher", "module: flex attention" ], 
"created_at": "2024-10-11T13:21:40Z", "updated_at": "2024-11-12T00:12:28Z", "user": "janchorowski" }, { "repo": "pytorch/benchmark", "number": 2499, "title": "How is TorchBench applied to testing new versions of PyTorch?", "body": "Hello, may I ask what tasks will be used for end-to-end testing before the release of the new version of PyTorch?\r\nWill the test focus on the consistency of metrics between the previous and subsequent versions, such as the loss of training tasks, iteration speed, etc", "url": "https://github.com/pytorch/benchmark/issues/2499", "state": "open", "labels": [], "created_at": "2024-10-10T16:40:53Z", "updated_at": "2024-10-16T20:28:47Z", "user": "HLH13297997663" }, { "repo": "pytorch/torchtitan", "number": 608, "title": "why is xformers not used for attention computation?", "body": "Curious why xformers is not used? Is it for simplicity or is there performance reason.", "url": "https://github.com/pytorch/torchtitan/issues/608", "state": "closed", "labels": [ "question" ], "created_at": "2024-10-09T23:21:23Z", "updated_at": "2024-11-22T00:15:17Z", "user": "jason718" }, { "repo": "pytorch/xla", "number": 8245, "title": "Improve documentation for `get_memory_info`", "body": "## \ud83d\udcda Documentation\r\n\r\nImprove documentation for `get_memory_info`. This feature is lightly defined in [PyTorchXLA documentation page](https://pytorch.org/xla/release/r2.4/index.html#torch_xla.core.xla_model.get_memory_info). Please provide an explanation on what details it pulls and potentially offer examples.\r\n\r\nAdditionally, it's important to draw a documentation that clarifies how `get_memory_info` API works such that users can easily compare/contrast it against [`torch.cuda.mem_get_info`](https://pytorch.org/docs/stable/generated/torch.cuda.mem_get_info.html)\r\n\r\ncc @mikegre-google to help follow up\r\n@JackCaoG", "url": "https://github.com/pytorch/xla/issues/8245", "state": "open", "labels": [ "enhancement", "usability" ], "created_at": "2024-10-09T20:33:18Z", "updated_at": "2025-02-27T13:10:42Z", "comments": 0, "user": "miladm" }, { "repo": "pytorch/TensorRT", "number": 3224, "title": "\u2753 [Question] How to decide if an Op should support dynamic shape or not", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nSince only part of the ops support dynamic shapes, and some are not. What's the criteria to decide if an op supports dynamic shape or not?\r\n\r\nFor some existing ops, which are not marked as `supports_dynamic_shapes=True`, can I write a converter that wraps the existing converter, and mark my own converter with high priority? Is this the recommended way?\r\n\r\nor should I just turn on `assume_dynamic_shape_support`, which seems to be a flag globally for all converters ?\r\n\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.4.1\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.11.9\r\n - CUDA version: 12.1\r\n - GPU models and configuration: Nvidia L4\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3224", "state": "open", "labels": [ "question" ], "created_at": "2024-10-09T16:46:56Z", "updated_at": "2024-10-30T23:52:26Z", "user": "sean-xiang-applovin" }, { "repo": "pytorch/xla", "number": 8240, "title": "XLA2 does not work with jax 0.4.34 (but did work on jax 0.4.33)", "body": "## \ud83d\udc1b Bug\r\n\r\nA toy example of MNIST using XLA2 does not work on the latest version of jax (0.4.34) on Trillium machine of 64 cores (V6e-64) but downgrading to 0.4.33 fixes the issue\r\n\r\n\r\n## To Reproduce\r\n\r\n1. Download the toy training example from [here](https://gist.githubusercontent.com/Chaosruler972/2461fe9d5a7a558ff4cb257ce88ad702/raw/1c354fbdae9dae2ff83917341aea957172897e71/mnist.py)\r\n\r\n2. Allocate a V6e-64 trillium TPU at GCP\r\n\r\n3. copy that file using gcp scp to all the VM machines\r\n\r\n4. prepare an environment containing torch_xla2 (refer to the[ readme here](https://github.com/pytorch/xla/blob/master/experimental/torch_xla2/README.md))\r\n\r\n5. install 0.4.43 jax/lib from pip\r\n```\r\ninstall jax==0.4.33 jaxlib==0.4.33 libtpu-nightly==0.1.dev20241008+nightly -f https://storage.googleapis.com/libtpu-releases/index.html\r\n```\r\n\r\n6. run your training, verify it is working well\r\n\r\n7. upgrade to jax 0.4.44\r\n```\r\ninstall jax==0.4.33 jaxlib==0.4.33 libtpu-nightly==0.1.dev20241008+nightly -f https://storage.googleapis.com/libtpu-releases/index.html\r\n```\r\n8. run your training again, note how the training loop exits without warning/messages after the loss was extracted\r\n\r\n\r\n## Expected behavior\r\n\r\nsmall varying results between the scripts when running on different version of jax\r\n\r\n## Environment\r\n\r\n - Reproducible on XLA backend TPU\r\n - Using Trillum 64 machine \r\n - torch_xla2 version: 0.0.1\r\n\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/8240", "state": "closed", "labels": [ "bug", "torchxla2" ], "created_at": "2024-10-09T14:35:32Z", "updated_at": "2025-03-04T18:22:21Z", "comments": 3, "user": "zmelumian972" }, { "repo": "pytorch/audio", "number": 3838, "title": "How to train a real-time av-asr pretrain model", "body": "### \ud83d\ude80 The feature\n\nThere is an example for hubert training [here](https://github.com/pytorch/audio/tree/main/examples/self_supervised_learning), but has no example about real-time av-asr for other languages.\n\n### Motivation, pitch\n\nI'm woking on lipreading without a pretrained model to continue train the pretrained model like real-time av-asr.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/audio/issues/3838", "state": "open", "labels": [], "created_at": "2024-10-07T12:23:32Z", "updated_at": "2024-10-07T12:23:32Z", "user": "Zhaninh" }, { "repo": "pytorch/torchchat", "number": 1278, "title": "AOTI Export ignores user --device flag - expected behavior? ", "body": "### \ud83d\udc1b Describe the bug\n\nHi all, \r\n\r\nI ran into some confusion when trying to export llama3 on my system. I have a small graphics card (8GB VRAM on an AMD GPU) but a decent amount of RAM (24GB). Obviously, the model won't fit on my GPU un-quantized but it should fit into my RAM + swap.\r\n\r\nI tried running:\r\n```\r\npython3 torchchat.py export llama3 --output-dso-path exportedModels/llama3.so --quantize torchchat/quant_config/desktop.json --device cpu\r\n```\r\n\r\nHowever, I ran into multiple HIP OOM errors (basically equivalent to CUDA). 
Why would we try to allocate CUDA memory if the target device is CPU?\r\n\r\nOn further inspection, during export, the device is replaced with whatever is present in the quantize config:\r\nIn `cli.py`\r\nhttps://github.com/pytorch/torchchat/blob/b21715835ab9f61e23dbcf32795b0c0a2d654908/torchchat/cli/cli.py#L491C10-L494C1\r\n```\r\nargs.device = get_device_str(\r\n args.quantize.get(\"executor\", {}).get(\"accelerator\", args.device)\r\n)\r\n```\r\n\r\nIn this case, the device in `desktop.json` is \"fast\". The `get_device_str` function replaces this with \"cuda\" simply based on `torch.cuda.is_available` without consulting the flag I passed in. \r\n\r\n## Other cases\r\nDoing a quick grep of the repo, I only found one other case in `generate.py` where `torch.cuda.is_available()` is consulted for monitoring memory usage. We should be careful switching based simply on `torch.cuda.is_available()` and make sure to pin to the user's request if we're using ambiguous devices like \"fast\".\r\n\r\nAnother small issue - since I use AMD GPU, the default `install/install_requirements.sh` will download the CPU only version instead of the ROCm version of PyTorch. To use my GPU, I have to re-run the torch installation manually. Luckily, it's quite easy to find this command at https://pytorch.org/get-started/locally/ . Should be straightforward to check of ROCm is available on the system during this script - we can just run `rocminfo` & check if the command is available.\n\n### Versions\n\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n--2024-10-06 12:03:44-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py\r\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8001::154, 2606:50c0:8002::154, 2606:50c0:8003::154, ...\r\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8001::154|:443... connected.\r\nHTTP request sent, awaiting response... 
200 OK\r\nLength: 23357 (23K) [text/plain]\r\nSaving to: \u2018collect_env.py\u2019\r\n\r\ncollect_env.py 100%[===========================================================================================>] 22.81K --.-KB/s in 0.02s \r\n\r\n2024-10-06 12:03:44 (1.10 MB/s) - \u2018collect_env.py\u2019 saved [23357/23357]\r\n\r\nCollecting environment information...\r\nPyTorch version: 2.4.1+rocm6.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: N/A\r\nROCM used to build PyTorch: 6.1.40091-a8dbc0c19\r\n\r\nOS: Ubuntu 22.04.4 LTS (x86_64)\r\nGCC version: (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0\r\nClang version: 14.0.0-1ubuntu1.1\r\nCMake version: version 3.30.4\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)\r\nPython platform: Linux-6.1.4-060104-generic-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: AMD Radeon RX 6700S (gfx1030)\r\nNvidia driver version: Could not collect\r\ncuDNN version: Could not collect\r\nHIP runtime version: 6.1.40091\r\nMIOpen runtime version: 3.1.0\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 48 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 16\r\nOn-line CPU(s) list: 0-15\r\nVendor ID: AuthenticAMD\r\nModel name: AMD Ryzen 9 6900HS with Radeon Graphics\r\nCPU family: 25\r\nModel: 68\r\nThread(s) per core: 2\r\nCore(s) per socket: 8\r\nSocket(s): 1\r\nStepping: 1\r\nFrequency boost: enabled\r\nCPU max MHz: 4933.8862\r\nCPU min MHz: 1600.0000\r\nBogoMIPS: 6587.56\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce", "url": "https://github.com/pytorch/torchchat/issues/1278", "state": "closed", "labels": [ "bug", "good first issue", "actionable" ], "created_at": "2024-10-06T19:06:51Z", "updated_at": "2024-11-16T01:15:38Z", "comments": 5, "user": "vmpuri" }, { "repo": "pytorch/torchchat", "number": 1277, "title": "Android demo app poor model performance", "body": "### \ud83d\udc1b Describe the bug\n\nI wanted to try the new Llama 3.2 1B parameter model on mobile. I downloaded the model and generated the `pte` like so:\r\n\r\n```\r\npython torchchat.py download llama3.2-1b\r\npython torchchat.py export llama3.2-1b --quantize torchchat/quant_config/mobile.json --output-pte-path llama3_2-1b.pte\r\n```\r\n\r\nThen I pushed `llama3_2-1b.pte` file and `tokenizer.model` files to the mobile phone using `adb`. \r\n\r\nI executed the demo app in `torchchat/edge/android/torchchat` using Android Studio with `.aar` file provided on the TorchChat repo readme.\r\n\r\nHowever, when I chat with the AI its responses are very useless and feel quite different than what I get with the same prompt on my computer:\r\n\r\n![example](https://github.com/user-attachments/assets/8e9d7128-6afd-46b5-8c1f-6b03ad3bccbb)\r\n![terminal-interaction](https://github.com/user-attachments/assets/beb9733a-3b23-40e7-9354-43a97cb05fa0)\r\n\r\nIs there a problem with the default quantization parameters? 
I tried to not quantize but then the app crashed when loading the model.\n\n### Versions\n\nCollecting environment information...\r\nPyTorch version: 2.5.0.dev20240901\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: macOS 14.4 (arm64)\r\nGCC version: Could not collect\r\nClang version: 15.0.0 (clang-1500.3.9.4)\r\nCMake version: version 3.30.4\r\nLibc version: N/A\r\n\r\nPython version: 3.10.0 (default, Mar 3 2022, 03:54:28) [Clang 12.0.0 ] (64-bit runtime)\r\nPython platform: macOS-14.4-arm64-arm-64bit\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nApple M2 Pro\r\n\r\nVersions of relevant libraries:\r\n[pip3] executorch==0.5.0a0+286799c\r\n[pip3] numpy==1.26.4\r\n[pip3] torch==2.5.0.dev20240901\r\n[pip3] torchao==0.5.0+git0916b5b\r\n[pip3] torchaudio==2.5.0.dev20240901\r\n[pip3] torchsr==1.0.4\r\n[pip3] torchtune==0.3.0.dev20240928+cpu\r\n[pip3] torchvision==0.20.0.dev20240901\r\n[conda] executorch 0.5.0a0+286799c pypi_0 pypi\r\n[conda] numpy 1.26.4 pypi_0 pypi\r\n[conda] torch 2.5.0.dev20240901 pypi_0 pypi\r\n[conda] torchaudio 2.5.0.dev20240901 pypi_0 pypi\r\n[conda] torchsr 1.0.4 pypi_0 pypi\r\n[conda] torchtune 0.3.0.dev20240928+cpu pypi_0 pypi\r\n[conda] torchvision 0.20.0.dev20240901 pypi_0 pypi", "url": "https://github.com/pytorch/torchchat/issues/1277", "state": "closed", "labels": [ "actionable", "Mobile - Android", "ExecuTorch" ], "created_at": "2024-10-06T15:10:55Z", "updated_at": "2024-10-25T08:19:10Z", "comments": 11, "user": "fran-aubry" }, { "repo": "pytorch/xla", "number": 8223, "title": "how to use torch.float16 in diffusers pipeline with pytorch xla ", "body": "## \u2753 Questions and Help\r\n```\r\nimport diffusers, torch, os\r\nimport torch_xla.core.xla_model as xm\r\n\r\npipeline = diffusers.DiffusionPipeline.from_pretrained(\"runwayml/stable-diffusion-v1-5\", safety_checker=None, use_safetensors=True, torch_dtype=torch.float16)\r\n# Move the model to the first TPU core\r\npipeline = pipeline.to(xm.xla_device())\r\nimage = pipeline(\"a cloud tpu winning a kaggle competition\", num_inference_steps=20).images[0]\r\nimage\r\n```\r\nI run the above code in kaggle\r\nand get\r\n```\r\nRuntimeError Traceback (most recent call last)\r\nCell In[2], line 8\r\n 6 # Move the model to the first TPU core\r\n 7 pipeline = pipeline.to(xm.xla_device())\r\n----> 8 image = pipeline(\"a cloud tpu winning a kaggle competition\", num_inference_steps=20).images[0]\r\n 9 image\r\n\r\nFile /usr/local/lib/python3.8/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.8/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:1000, in StableDiffusionPipeline.__call__(self, prompt, height, width, num_inference_steps, timesteps, sigmas, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, 
**kwargs)\r\n 997 latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)\r\n 999 # predict the noise residual\r\n-> 1000 noise_pred = self.unet(\r\n 1001 latent_model_input,\r\n 1002 t,\r\n 1003 encoder_hidden_states=prompt_embeds,\r\n 1004 timestep_cond=timestep_cond,\r\n 1005 cross_attention_kwargs=self.cross_attention_kwargs,\r\n 1006 added_cond_kwargs=added_cond_kwargs,\r\n 1007 return_dict=False,\r\n 1008 )[0]\r\n 1010 # perform guidance\r\n 1011 if self.do_classifier_free_guidance:\r\n\r\nFile /usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /usr/local/lib/python3.8/site-packages/diffusers/models/unets/unet_2d_condition.py:1169, in UNet2DConditionModel.forward(self, sample, timestep, encoder_hidden_states, class_labels, timestep_cond, attention_mask, cross_attention_kwargs, added_cond_kwargs, down_block_additional_residuals, mid_block_additional_residual, down_intrablock_additional_residuals, encoder_attention_mask, return_dict)\r\n 1164 encoder_hidden_states = self.process_encoder_hidden_states(\r\n 1165 encoder_hidden_states=encoder_hidden_states, added_cond_kwargs=added_cond_kwargs\r\n 1166 )\r\n 1168 # 2. pre-process\r\n-> 1169 sample = self.conv_in(sample)\r\n 1171 # 2.5 GLIGEN position net\r\n 1172 if cross_attention_kwargs is not None and cross_attention_kwargs.get(\"gligen\", None) is not None:\r\n\r\nFile /usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /usr/local/lib/python3.8/site-packages/torch/nn/modules/conv.py:463, in Conv2d.forward(self, input)\r\n 462 def forward(self, input: Tensor) -> Tensor:\r\n--> 463 return self._conv_forward(input, self.weight, self.bias)\r\n\r\nFile /usr/local/lib/python3.8/site-packages/torch/nn/modules/conv.py:459, in Conv2d._conv_forward(self, input, weight, bias)\r\n 455 if self.padding_mode != 'zeros':\r\n 456 return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),\r\n 457 weight, bias, self.stride,\r\n 458 _pair(0), self.dilation, self.groups)\r\n--> 459 return F.conv2d(input, weight, bias, self.str", "url": "https://github.com/pytorch/xla/issues/8223", "state": "open", "labels": [ "bug" ], "created_at": "2024-10-06T00:02:41Z", "updated_at": "2025-02-27T13:17:50Z", "user": "ghost" }, { "repo": "pytorch/xla", "number": 8222, "title": " unsupported operand type(s) for %: 
'int' and 'NoneType'", "body": "## \u2753 Questions and Help\r\nI follow the https://github.com/pytorch/xla/blob/master/contrib/kaggle/pytorch-xla-2-0-on-kaggle.ipynb\r\n\r\nbut the code in image = pipeline(prompt, callback=lambda *args: xm.mark_step(), generator=generator).images[0]\r\nget\r\n```\r\nTypeError Traceback (most recent call last)\r\nCell In[8], line 4\r\n 1 generator = torch.Generator().manual_seed(0)\r\n 2 # xm.mark_step compiles and executes the graph after each iteration.\r\n 3 # The first few steps will be much slower than the rest.\r\n----> 4 image = pipeline(prompt, callback=lambda *args: xm.mark_step(), generator=generator).images[0]\r\n 5 image\r\n\r\nFile /usr/local/lib/python3.8/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.8/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:1035, in StableDiffusionPipeline.__call__(self, prompt, height, width, num_inference_steps, timesteps, sigmas, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, **kwargs)\r\n 1033 if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):\r\n 1034 progress_bar.update()\r\n-> 1035 if callback is not None and i % callback_steps == 0:\r\n 1036 step_idx = i // getattr(self.scheduler, \"order\", 1)\r\n 1037 callback(step_idx, t, latents)\r\n\r\nTypeError: unsupported operand type(s) for %: 'int' and 'NoneType'\r\n```\r\nhow to fix the problem?", "url": "https://github.com/pytorch/xla/issues/8222", "state": "closed", "labels": [ "question", "xla:tpu" ], "created_at": "2024-10-05T12:11:52Z", "updated_at": "2025-02-27T13:20:08Z", "user": "ghost" }, { "repo": "pytorch/xla", "number": 8216, "title": "Random OOM and crashes", "body": "## \u2753 Questions and Help\r\n\r\nI've found that I'm unable to train more than ~20-80K steps without a crash and it's difficult to figure out how to debug this. In a typical PyTorch training run, I would get a clear OOM message at a particular line, or any other error and this would be printed to log/console.\r\n\r\nHowever, about half the time, my training run simply exits with no message on any rank, and the other half the time it's clearly due to memory with a \"Resource Exhausted\" message. The issue is it's not clear where this new allocation happens (I have a fairly standard decoder based transformer, not even any eval batches, and I'm not using any eager modes). I tried to switch to nightly to get a recent dataloader memory fix, but that doesn't seem to fix it.\r\n\r\nI know there are many flags that can be used for debugging, but it's unclear exactly which ones can be used during training without a large performance hit. I've done all the suggested steps including profiling, and making sure there isn't re-compiliation happening, etc. 
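For reference, the cheapest recompilation check I know of is the built-in metrics report (a sketch assuming the standard `torch_xla.debug.metrics` API; the helper name and logging cadence below are illustrative, not from the original report):

```python
import torch_xla.debug.metrics as met

def maybe_log_xla_metrics(step, every=500):
    # A CompileTime count that keeps growing after warm-up points at
    # recompilation; aten::* counters point at ops falling back to CPU.
    if step % every == 0:
        print(f"[step {step}]")
        print(met.metrics_report())
```

Printing the report every few hundred steps is cheap compared to full profiling, so it can stay enabled for long SPMD runs.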
Perhaps it would be good to clarify the impact of the flags somewhere to make it clear which are safe\u2014and any other advice on how to debug this would be great!\r\n\r\nAlso, I should note this occurs with SPMD multi-node training, I have not spent time testing other modes, but this has happened with between 2 and 8 TPUv4 VMs, both in DDP-like configurations and several other mesh configurations", "url": "https://github.com/pytorch/xla/issues/8216", "state": "closed", "labels": [ "question", "distributed", "xla:tpu" ], "created_at": "2024-10-04T18:51:52Z", "updated_at": "2025-02-27T13:21:33Z", "user": "alexanderswerdlow" }, { "repo": "pytorch/xla", "number": 8215, "title": "how to use all tpu core in pytorch xla", "body": "## \u2753 Questions and Help\r\nI follow the code in https://github.com/pytorch/xla/blob/master/contrib/kaggle/distributed-pytorch-xla-basics-with-pjrt.ipynb\r\n\r\nBut use xmp.spawn(print_device, args=(lock,), nprocs=8, start_method='fork')\r\n\r\nthe source code\r\n```\r\nimport os\r\nos.environ.pop('TPU_PROCESS_ADDRESSES')\r\n\r\nimport torch_xla.core.xla_model as xm\r\nimport torch_xla.distributed.xla_multiprocessing as xmp\r\nimport multiprocessing as mp\r\nlock = mp.Manager().Lock()\r\n\r\ndef print_device(i, lock):\r\n device = xm.xla_device()\r\n with lock:\r\n print('process', i, device)\r\n \r\nxmp.spawn(print_device, args=(lock,), nprocs=8, start_method='fork')\r\n```\r\nWARNING:root:Unsupported nprocs (8), ignoring...\r\nprocess 4 xla:0\r\nprocess 5 xla:1\r\nprocess 0 xla:0\r\nprocess 1 xla:1\r\nprocess 2 xla:0\r\nprocess 3 xla:1\r\nprocess 6 xla:0\r\nprocess 7 xla:1\r\n\r\nxla just can see 2 xla device. But when I run xm.get_xla_supported_devices() it list all ['xla:0', 'xla:1', 'xla:2', 'xla:3', 'xla:4', 'xla:5', 'xla:6', 'xla:7'] I want to know how to use all tpu cores?", "url": "https://github.com/pytorch/xla/issues/8215", "state": "closed", "labels": [ "question", "distributed", "xla:tpu" ], "created_at": "2024-10-04T02:54:18Z", "updated_at": "2025-02-27T13:22:25Z", "user": "ghost" }, { "repo": "pytorch/torchchat", "number": 1262, "title": "Support Granite Code 3B/8B", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe `torchchat` framework provides an excellent platform for embedding models into many different edge-centric platforms.\r\n\r\nThe [Granite Code models](https://huggingface.co/collections/ibm-granite/granite-code-models-6624c5cec322e4c148c8b330), specifically the [3B-128k](https://huggingface.co/ibm-granite/granite-3b-code-instruct-128k) and [8B-128k](https://huggingface.co/ibm-granite/granite-8b-code-instruct-128k) variants, are a family of models from IBM that support a wide variety of code-related tasks. The models are released under the Apache-3 license and are therefore well-suited to embedded use-cases where code intelligence is needed. \r\n\r\nThe request here is to extend the model support in `torchchat` to support running the 3B and 8B long-context variants of Granite Code in order to enable usage of these models across embedded use-cases.\n\n### Alternatives\n\nDepending on the goals of the `torchchat` framework, extending support to non-llama models may or may not be a project goal. There are other embedded frameworks out there (notably `llama.cpp` and the many projects that wrap it), so these can be used to run Granite Code in embedded environments. 
Our goal at IBM is to provide users with as many choices as possible on how to run all of our Granite family models, so our hope is that `torchchat` can be a strong piece of this story!\n\n### Additional context\n\nThe 3B and 8B models use the `llama` architecture in `transformers`, so they are _close_ to fully supported as-is. There are a few crucial pieces that are present in the `transformers` implementation that are missing in `torchchat`:\r\n\r\n* Safetensors support: https://github.com/pytorch/torchchat/issues/1249\r\n* Tied word embeddings: https://github.com/pytorch/torchchat/issues/1252\r\n* Bias tensors: https://github.com/pytorch/torchchat/issues/1250\r\n* Non-tiktoken/sentencepiece tokenizers: https://github.com/pytorch/torchchat/issues/1251\n\n### RFC (Optional)\n\nI've worked through the initial steps of solving all of these outstanding issues (see the corresponding issues). Once these are solved, the addition of these Granite Code models should consist of the following steps:\r\n\r\n* Adding new entries to [models.json](https://github.com/pytorch/torchchat/blob/main/torchchat/model_config/models.json)\r\n* Adding the right set of model-specific params to [model_params](https://github.com/pytorch/torchchat/tree/main/torchchat/model_params)", "url": "https://github.com/pytorch/torchchat/issues/1262", "state": "closed", "labels": [], "created_at": "2024-10-03T16:18:08Z", "updated_at": "2024-12-19T10:13:55Z", "comments": 0, "user": "gabe-l-hart" }, { "repo": "pytorch/serve", "number": 3339, "title": "Clarification on minWorkers and maxWorkers parameters", "body": "### \ud83d\udcda The doc issue\n\nI have some questions related to model parameters:\r\n1. I know there is no autoscaling in Torchserve, and looking at code, models will scale `minWorkers` number of workers on startup. `maxWorkers` seems to be only used when downscaling a model, meaning if `currentWorkers > maxWorkers`, it will kill `currentWorkers - maxWorkers` workers (`WorkloadManager.java:151`). Given that we'll only scale/downscale number of workers on `scaleWorkers` API call, is there any practical use case of setting `minWorkers` != `maxWorkers`? For example in `examples/cloud_storage_stream_inference/config.properties` `minWorkers` is set to 10 and `maxWorkers` to 1000, when do we want that?\r\n2. In `docs/getting_started.md` it reads: `If you specify model(s) when you run TorchServe, it automatically scales backend workers to the number equal to available vCPUs (if you run on a CPU instance) or to the number of available GPUs (if you run on a GPU instance).`. 
I can't find any evidence of this behavior in the code, could somebody clarify how if this statement is true and how does it work?\r\n\r\nThank you!\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/3339", "state": "open", "labels": [], "created_at": "2024-10-03T13:07:00Z", "updated_at": "2024-10-03T13:07:00Z", "comments": 0, "user": "krzwaraksa" }, { "repo": "pytorch/serve", "number": 3338, "title": "throughput increase non-linearly with number of workers", "body": "### \ud83d\udc1b Describe the bug\n\nI am hosting a bert-like model using below torchserve config.\r\n```\r\ninference_address=http://localhost:8080\r\nmanagement_address=http://localhost:8081\r\nmetrics_address=http://localhost:8082\r\nload_models=model_name=weights.mar\r\nasync_logging=true\r\njob_queue_size=200\r\n\r\nmodels={ \"model_name\": { \"1.0\": { \"minWorkers\": 8 , \"batchSize\": 8 , \"maxBatchDelay\": 10 } } }\r\n``` \r\nI have 8 GPUs, this setting will give me 1 worker per gpu.\r\n\r\nthen I did load test with both k6 and locust, and below shows the relationship between number of workers(from 1 to 8) and throughput.\r\n![output](https://github.com/user-attachments/assets/d5b2129c-7dee-47a2-b311-0921e3507554)\r\n\r\n\r\nAs can be seen in the chart, gpu usage is dropping when number of workers increased, so it feels like the load balancer in torchserve leads to the inefficiency. Anyone can give me some clues how can I improve the throughput further?\r\n\n\n### Error logs\n\nthroughput increase non-linearly with number of workers\n\n### Installation instructions\n\ntorchserve = \"^0.10.0\"\n\n### Model Packaging\n\ntorchserve = \"0.10.0\"\n\n### config.properties\n\ninference_address=http://localhost:8080\r\nmanagement_address=http://localhost:8081\r\nmetrics_address=http://localhost:8082\r\nload_models=model_name=weights.mar\r\nasync_logging=true\r\njob_queue_size=200\r\n\r\nmodels={ \"model_name\": { \"1.0\": { \"minWorkers\": 8 , \"batchSize\": 8 , \"maxBatchDelay\": 10 } } }\n\n### Versions\n\n$ python serve/ts_scripts/print_env_info.py\r\n------------------------------------------------------------------------------------------\r\nEnvironment headers\r\n------------------------------------------------------------------------------------------\r\nTorchserve branch:\r\n\r\ntorchserve==0.10.0\r\ntorch-model-archiver==0.11.0\r\n\r\nPython version: 3.11 (64-bit runtime)\r\nPython executable: /home/me/.cache/pypoetry/virtualenvs/pre-deploy-j4GApv9r-py3.11/bin/python\r\n\r\nVersions of relevant python libraries:\r\nnumpy==1.24.3\r\nnvgpu==0.10.0\r\npillow==10.4.0\r\npsutil==6.0.0\r\nrequests==2.32.3\r\ntorch==2.3.1+cu121\r\ntorch-model-archiver==0.11.0\r\ntorch_tensorrt==2.3.0+cu121\r\ntorchserve==0.10.0\r\ntorchvision==0.18.1\r\ntransformers==4.44.2\r\nwheel==0.44.0\r\ntorch==2.3.1+cu121\r\n**Warning: torchtext not present ..\r\ntorchvision==0.18.1\r\n**Warning: torchaudio not present ..\r\n\r\nJava Version:\r\n\r\n\r\nOS: Debian GNU/Linux 12 (bookworm)\r\nGCC version: (Debian 12.2.0-14) 12.2.0\r\nClang version: 14.0.6\r\nCMake version: version 3.25.1\r\n\r\nIs CUDA available: Yes\r\nCUDA runtime version: N/A\r\nGPU models and configuration:\r\nGPU 0: NVIDIA A100-SXM4-80GB\r\nGPU 1: NVIDIA A100-SXM4-80GB\r\nGPU 2: NVIDIA A100-SXM4-80GB\r\nGPU 3: NVIDIA A100-SXM4-80GB\r\nGPU 4: NVIDIA A100-SXM4-80GB\r\nGPU 5: NVIDIA A100-SXM4-80GB\r\nGPU 6: NVIDIA A100-SXM4-80GB\r\nGPU 7: NVIDIA A100-SXM4-80GB\r\nNvidia driver version: 550.54.15\r\ncuDNN version: 
None\r\n\r\n\r\nEnvironment:\r\nlibrary_path (LD_/DYLD_):\n\n### Repro instructions\n\nwget http://mar_file.mar\r\ntorch-model-archiver ...\r\ntorchserve --start\n\n### Possible Solution\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/3338", "state": "open", "labels": [], "created_at": "2024-10-03T07:32:22Z", "updated_at": "2024-10-08T10:33:28Z", "comments": 2, "user": "vandesa003" }, { "repo": "pytorch/ao", "number": 1002, "title": "How to calibrate a w8a8 quantized model\uff1f", "body": "I used the following code to quantize an LLM, employing an w8a8 quantization setting:\r\n\r\n```python\r\nmodel = AutoModelForCausalLM.from_pretrained(\"./Qwen1.5-0.5B-Chat\").to(dtype=torch.bfloat16, device='cpu')\r\nquantize_(model, int8_dynamic_activation_int8_weight())\r\n```\r\n\r\nEverything is running smoothly, but the model's accuracy has decreased significantly. How can I calibrate a quantized model to enhance its accuracy?\r\n\r\n---\r\n\r\nI have another question:\r\n\r\nI printed out a parameter and noticed that the weights were quantized using per-channel quantization. What is the purpose of the fp16 AffineQuantizedTensor? Shouldn't the activation only require one scale parameter when using per-tensor quantization?\r\n\r\nI'm not very familiar with the quantization mechanism in PyTorch, and I hope you can give me some tips.\r\n\r\n```plaintxt\r\nParameter Name: model.layers.0.self_attn.q_proj.weight\r\nParameter Shape: torch.Size([1024, 1024])\r\nParameter Values: LinearActivationQuantizedTensor(AffineQuantizedTensor(data=tensor([[ 0.2148, -0.1196, -0.0898, ..., -0.0388, 0.0869, 0.0898],\r\n [ 0.0830, -0.2188, -0.1436, ..., 0.0566, 0.0679, 0.0830],\r\n [ 0.0552, -0.2480, -0.1621, ..., 0.0242, 0.0688, 0.0830],\r\n ...,\r\n [ 0.0742, -0.0417, -0.1641, ..., -0.0356, 0.1543, -0.0566],\r\n [-0.0640, 0.0771, 0.2695, ..., 0.0537, -0.1982, 0.0938],\r\n [-0.1216, 0.1025, -0.1074, ..., -0.0327, 0.1592, -0.1123]],\r\n dtype=torch.bfloat16)..., shape=torch.Size([1024, 1024]), block_size=(1, 1024), device=cpu, dtype=torch.bfloat16, requires_grad=False, layout_tensor=PlainAQTLayout(data=tensor([[ 72, -40, -30, ..., -13, 29, 30],\r\n [ 22, -58, -38, ..., 15, 18, 22],\r\n [ 16, -72, -47, ..., 7, 20, 24],\r\n ...,\r\n [ 25, -14, -55, ..., -12, 52, -19],\r\n [-19, 23, 80, ..., 16, -59, 28],\r\n [-26, 22, -23, ..., -7, 34, -24]], dtype=torch.int8)... , scale=tensor([0.0030, 0.0038, 0.0034, ..., 0.0030, 0.0034, 0.0047],\r\n dtype=torch.bfloat16)... , zero_point=tensor([0, 0, 0, ..., 0, 0, 0])... 
, layout_type=PlainLayoutType())), <function _int8_symm_per_token_reduced_range_quant at 0x751a4815fe20>)\r\n```", "url": "https://github.com/pytorch/ao/issues/1002", "state": "closed", "labels": [], "created_at": "2024-10-03T03:55:31Z", "updated_at": "2024-10-04T01:26:58Z", "user": "chenghuaWang" }, { "repo": "pytorch/vision", "number": 8669, "title": "performance degradation in to_pil_image after v0.17", "body": "### \ud83d\udc1b Describe the bug\r\n\r\n`torchvision.transforms.functional.to_pil_image `is much slower when converting torch.float16 image tensors to PIL Images based on my benchmarks (serializing 360 images):\r\n\r\nDependencies: \r\n```\r\nPython 3.11\r\nPillow 10.4.0\r\n```\r\nBefore (torch 2.0.1, torchvision v0.15.2, [Code here](https://github.com/pytorch/vision/blob/fa99a5360fbcd1683311d57a76fcc0e7323a4c1e/torchvision/transforms/functional.py#L244)): 23 seconds\r\nAfter ( torch 2.2.0, torchvision v0.17, [Code here](https://github.com/pytorch/vision/blob/b2383d44751bf85e58cfb9223bbf4e5961c09fa1/torchvision/transforms/functional.py#L245)): 53 seconds\r\n\r\nHow to reproduce:\r\n```python\r\nimport torch\r\nfrom torchvision.transforms.functional import to_pil_image\r\n\r\nrand_img_tensor = torch.rand(3, 512, 512, dtype=torch.float16)\r\nstart_time = time.time()\r\nfor _ in range(50):\r\n pil_img = to_pil_image(rand_img_tensor)\r\n\r\nend_time = time.time()\r\nprint(end_time - start_time) # seconds\r\n```\r\n\r\nRun the above script with both versions of dependencies listed, and the time difference is apparent.\r\n\r\nThe cause seems to be [this PR](https://github.com/pytorch/vision/commit/15c166ac127db5c8d1541b3485ef5730d34bb68a)", "url": "https://github.com/pytorch/vision/issues/8669", "state": "open", "labels": [], "created_at": "2024-10-02T08:25:01Z", "updated_at": "2024-10-25T13:06:15Z", "comments": 5, "user": "seymurkafkas" }, { "repo": "pytorch/torchchat", "number": 1249, "title": "Support Huggingface models from safetensors", "body": "### \ud83d\ude80 The feature, motivation and pitch\r\n\r\nThere are many models on Huggingface that are published as `safetensors` rather than `model.pth` checkpoints. The request here is to support converting and loading those checkpoints into a format that is usable with `torchchat`.\r\n\r\nThere are several places where this limitation is currently enforced:\r\n\r\n* [_download_hf_snapshot](https://github.com/pytorch/torchchat/blob/main/torchchat/cli/download.py#L36) method explicitly ignores `safetensors` files.\r\n* [convert_hf_checkpoint](https://github.com/pytorch/torchchat/blob/main/torchchat/cli/convert_hf_checkpoint.py#L44) explicitly looks for `pytorch_model.bin.index.json` which would be named differently for models that use `safetensors` (e.g. `model.safetensors.index.json`)\r\n* [convert_hf_checkpoint](https://github.com/pytorch/torchchat/blob/main/torchchat/cli/convert_hf_checkpoint.py#L99) only supports `torch.load` to load the `state_dict` rather than `safetensors.torch.load`\r\n\r\n### Alternatives\r\n\r\nCurrently, this `safetensors` -> `model.pth` can be accomplished manually after downloading a model locally, so this could be solved with documentation instead of code.\r\n\r\n### Additional context\r\n\r\nThis issue is a piece of the puzzle for adding support for Granite Code 3b/8b which use the `llama` architecture in `transormers`, but take advantage several pieces of the architecture that are not currently supported by `torchchat`. 
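For concreteness, the manual `safetensors` -> `model.pth` conversion mentioned under Alternatives is roughly the following (a sketch: it only merges shards into one state dict and omits the HF-to-torchchat parameter-name remapping that a real converter would also need; the function name is illustrative):

```python
import json
from pathlib import Path

import torch
from safetensors.torch import load_file

def merge_safetensors_to_pth(model_dir: str, out_name: str = "model.pth") -> None:
    model_dir = Path(model_dir)
    index_file = model_dir / "model.safetensors.index.json"
    if index_file.exists():
        # Sharded checkpoint: the index maps each tensor name to its shard file.
        weight_map = json.loads(index_file.read_text())["weight_map"]
        shard_names = sorted(set(weight_map.values()))
    else:
        shard_names = sorted(p.name for p in model_dir.glob("*.safetensors"))

    state_dict = {}
    for name in shard_names:
        state_dict.update(load_file(str(model_dir / name)))
    torch.save(state_dict, model_dir / out_name)
```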
The work-in-progress for Granite Code can be found on my fork: https://github.com/gabe-l-hart/torchchat/tree/GraniteCodeSupport\r\n\r\n### RFC (Optional)\r\n\r\nI have a working implementation to support `safetensors` during download and conversion that I plan to submit as a PR. The changes address the three points in code referenced above:\r\n\r\n1. Allow the download of `safetensors` files in `_download_hf_snapshot`\r\n * I'm not yet sure how to avoid double-downloading weights for models that have both `safetensors` and `model.pth`, so will look to solve this before concluding the work\r\n2. When looking for the tensor index file, search for all files ending in `.index.json`, and if a single file is found, use that one\r\n3. When loading the `state_dict`, use the correct method based on the type of file (`torch.load` or `safetensors.torch.load`)", "url": "https://github.com/pytorch/torchchat/issues/1249", "state": "closed", "labels": [], "created_at": "2024-10-01T22:07:59Z", "updated_at": "2024-10-04T19:18:22Z", "comments": 2, "user": "gabe-l-hart" }, { "repo": "pytorch/torchtitan", "number": 594, "title": "Support Gemma2 in torchtitan", "body": "Are there any plans to support Gemma2 in the torchtitan? I tried to use torchtitan to finetune Gemma2 model, but stuck on the following problem: how to parallelize tied layer in Gemma2 model? Maybe somebody kwon the solution for this problem \ud83d\ude04 ", "url": "https://github.com/pytorch/torchtitan/issues/594", "state": "closed", "labels": [ "bug", "question" ], "created_at": "2024-10-01T11:50:15Z", "updated_at": "2025-03-20T18:32:31Z", "user": "pansershrek" }, { "repo": "pytorch/torchchat", "number": 1222, "title": "Clear model download documents", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nFrom the README, its not very clear how to download different flavor/sizes of the models from HF, unless someone go to the next section and find the inventory list https://github.com/pytorch/torchchat#download-weights\r\nmight be helpful to add the inventory list command upper before the the download command.\r\n\r\nAlso as we have 3.2 it would be great to update the docs. 
\r\n\r\n\r\n```\r\n\r\n/torchchat$ python3 torchchat.py list\r\n\r\nModel Aliases Downloaded \r\n-------------------------------------------- ---------------------------------------------------------- -----------\r\nmeta-llama/llama-2-7b-hf llama2-base, llama2-7b \r\nmeta-llama/llama-2-7b-chat-hf llama2, llama2-chat, llama2-7b-chat \r\nmeta-llama/llama-2-13b-chat-hf llama2-13b-chat \r\nmeta-llama/llama-2-70b-chat-hf llama2-70b-chat \r\nmeta-llama/meta-llama-3-8b llama3-base \r\nmeta-llama/meta-llama-3-8b-instruct llama3, llama3-chat, llama3-instruct Yes \r\nmeta-llama/meta-llama-3-70b-instruct llama3-70b \r\nmeta-llama/meta-llama-3.1-8b llama3.1-base \r\nmeta-llama/meta-llama-3.1-8b-instruct llama3.1, llama3.1-chat, llama3.1-instruct \r\nmeta-llama/meta-llama-3.1-70b-instruct llama3.1-70b \r\nmeta-llama/meta-llama-3.1-8b-instruct-tune llama3.1-tune, llama3.1-chat-tune, llama3.1-instruct-tune \r\nmeta-llama/meta-llama-3.1-70b-instruct-tune llama3.1-70b-tune \r\nmeta-llama/meta-llama-3.2-1b llama3.2-1b-base \r\nmeta-llama/meta-llama-3.2-1b-instruct llama3.2-1b, llama3.2-1b-chat, llama3.2-1b-instruct \r\nmeta-llama/llama-guard-3-1b llama3-1b-guard, llama3.2-1b-guard \r\nmeta-llama/meta-llama-3.2-3b llama3.2-3b-base \r\nmeta-llama/meta-llama-3.2-3b-instruct llama3.2-3b, llama3.2-3b-chat, llama3.2-3b-instruct \r\nmeta-llama/llama-3.2-11b-vision llama3.2-11B-base, Llama-3.2-11B-Vision-base \r\nmeta-llama/llama-3.2-11b-vision-instruct llama3.2-11B, Llama-3.2-11B-Vision, Llama-3.2-mm \r\nmeta-llama/codellama-7b-python-hf codellama, codellama-7b \r\nmeta-llama/codellama-34b-python-hf codellama-34b \r\nmistralai/mistral-7b-v0.1 mistral-7b-v01-base \r\nmistralai/mistral-7b-instruct-v0.1 mistral-7b-v01-instruct \r\nmistralai/mistral-7b-instruct-v0.2 mistral, mistral-7b, mistral-7b-instruct \r\nopenlm-research/open_llama_7b open-llama, open-llama-7b \r\nstories15m \r\nstories42m \r\nstories110m \r\n\r\n```\r\n### Versions\r\n\r\n```\r\nCollecting environment information...\r\nPyTorch version: 2.5.0.dev20240901+cu121\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.6 LTS (x86_64)\r\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.30.3\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-1068-aws-x86_64-with-glibc2.31\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nAddress sizes: 46 bits physical, 48 bits virtual\r\nCPU(s): ", "url": "https://github.com/pytorch/torchchat/issues/1222", "state": "closed", "labels": [ "documentation", "actionable" ], "created_at": "2024-09-27T22:16:38Z", "updated_at": "2024-09-30T16:02:55Z", "comments": 4, "user": "HamidShojanazeri" }, { "repo": "pytorch/xla", "number": 8088, "title": "Is this content still relevant?", "body": "## \ud83d\udcda Documentation\r\n\r\nxla/docs/README contains the following text. Is this text still relevant? 
The link to CircleCi is broken and I'm not sure if this information is useful:\r\n-------------------------------\r\n## Publish documentation for a new release.\r\n\r\nCI job `pytorch_xla_linux_debian11_and_push_doc` is specified to run on `release/*` branches, but it was not\r\nrun on release branches due to \"Only build pull requests\" setting. Turning off \"Only build pull requests\" will result\r\nin much larger volumes in jobs which is often unnecessary. We're waiting for [this feature request](https://ideas.circleci.com/ideas/CCI-I-215)\r\nto be implemented so that we could override this setting on some branches.\r\n\r\nBefore the feature is available on CircleCi side, we'll use a manual process to publish documentation for release.\r\n[Documentation for master branch](http://pytorch.org/xla/master/) is still updated automatically by the CI job.\r\nBut we'll need to manually commit the new versioned doc and point http://pytorch.org/xla to the documentation of new\r\nstable release.\r\n\r\nTake 2.3 release as example:\r\n```\r\n# Build pytorch/pytorch:release/2.3 and pytorch/xla:release/2.3 respectively.\r\n# In pytorch/xla/docs\r\n./docs_build.sh\r\ngit clone -b gh-pages https://github.com/pytorch/xla.git /tmp/xla\r\ncp -r build/* /tmp/xla/release/2.3\r\ncd /tmp/xla\r\n# Update `redirect_url` in index.md\r\ngit add .\r\ngit commit -m \"Publish 2.3 documentation.\"\r\ngit push origin gh-pages\r\n```\r\n--------------------------------------\r\nI would suggest we remove this and replace it with instuctions on how to update index.rst to include any new documentation on pytorch.org.", "url": "https://github.com/pytorch/xla/issues/8088", "state": "closed", "labels": [ "question", "documentation" ], "created_at": "2024-09-27T22:02:37Z", "updated_at": "2025-03-06T13:05:38Z", "user": "mikegre-google" }, { "repo": "pytorch/TensorRT", "number": 3192, "title": "\u2753 [Question] When should I use Torch-TensorRT instead of TensorRT ?", "body": "I generally use NVIDIA's TensorRT as the inference framework. I want to know the advantages and disadvantages of Torch-TensorRT compared to TensorRT, so that I can decide when to use Torch-TensorRT. I guess Torch-TensorRT might be simpler and more user-friendly. Also, have you tested and compared their inference speed and GPU memory usage amont?", "url": "https://github.com/pytorch/TensorRT/issues/3192", "state": "closed", "labels": [ "question" ], "created_at": "2024-09-27T15:51:32Z", "updated_at": "2024-10-02T16:22:54Z", "user": "EmmaThompson123" }, { "repo": "pytorch/vision", "number": 8661, "title": "references/segmentation/coco_utils might require merging rles?", "body": "https://github.com/pytorch/vision/blob/6d7851bd5e2bedc294e40e90532f0e375fcfee04/references/segmentation/coco_utils.py#L27-L41 Above seems to assume that objects are not occluded, not merging rles from `frPyObjects`. In such case, i think it must be changed to \r\n```python\r\nrles = coco_mask.frPyObjects(polygons, height, width) \r\nrle = coco_mask.merge(rles)\r\nmask = coco_mask.decode(rle)\r\n```\r\nIs there any specific reason for this, or am I wrong? 
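For reference, a small standalone illustration of the pycocotools semantics in question (made-up polygons, not torchvision code): `decode` on the list returned by `frPyObjects` yields one mask per polygon, while `merge` collapses them into a single object mask, which matters when one object is annotated with several polygons.

```python
import numpy as np
from pycocotools import mask as coco_mask

height, width = 32, 32
# One object annotated with two disjoint polygons (e.g. split by occlusion).
polygons = [
    [2, 2, 10, 2, 10, 10, 2, 10],      # left square
    [20, 20, 28, 20, 28, 28, 20, 28],  # right square
]

rles = coco_mask.frPyObjects(polygons, height, width)

# Without merging: one binary mask per polygon, shape (H, W, num_polygons).
per_polygon = coco_mask.decode(rles)
print(per_polygon.shape)  # (32, 32, 2)

# With merging: a single (H, W) mask covering the whole object.
merged = coco_mask.decode(coco_mask.merge(rles))
print(merged.shape)       # (32, 32)

# Collapsing the per-polygon stack gives the same foreground pixels.
assert np.array_equal(per_polygon.any(axis=2).astype(np.uint8), merged)
```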
", "url": "https://github.com/pytorch/vision/issues/8661", "state": "open", "labels": [], "created_at": "2024-09-26T02:53:47Z", "updated_at": "2024-10-11T13:36:25Z", "comments": 1, "user": "davidgill97" }, { "repo": "pytorch/xla", "number": 8071, "title": "Optimizer Memory in AdamW/Adam vs SGD", "body": "## \u2753 Questions and Help\r\n\r\nIt is to my understanding that Adam should use more memory than SGD because it keeps track of more parameters. However, when I look at my profiles between Adam and SGD optimizers and see that they use roughly the same amount of memory. \r\n\r\nDoes torch XLA somehow do optimizations on the optimizers to reduce the memory usage or something else? Any guidance on how to investigate this would be appreciated!", "url": "https://github.com/pytorch/xla/issues/8071", "state": "closed", "labels": [], "created_at": "2024-09-25T16:01:53Z", "updated_at": "2024-11-16T20:30:20Z", "comments": 1, "user": "dangthatsright" }, { "repo": "pytorch/audio", "number": 3835, "title": "Not building CUDA 12.6", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nIt's not building with last version of cuda 12.6.1 in jetson agx orin\r\n```bash\r\n#!/usr/bin/env bash\r\nset -ex\r\necho \"Building torchaudio ${TORCHAUDIO_VERSION}\"\r\n \r\napt-get update\r\napt-get install -y --no-install-recommends \\\r\n\t\tgit \\\r\n\t\tpkg-config \\\r\n\t\tlibffi-dev \\\r\n\t\tlibsndfile1\r\n\r\nrm -rf /var/lib/apt/lists/*\r\napt-get clean\r\n\r\ngit clone --branch v${TORCHAUDIO_VERSION} --recursive --depth=1 https://github.com/pytorch/audio /opt/torchaudio\r\ncd /opt/torchaudio\r\ngit checkout v${TORCHAUDIO_VERSION}\r\n\r\nBUILD_SOX=1 python3 setup.py bdist_wheel --verbose --dist-dir /opt\r\n\r\ncd ../\r\nrm -rf /opt/torchaudio\r\n\r\npip3 install --no-cache-dir --verbose /opt/torchaudio*.whl\r\npip3 show torchaudio && python3 -c 'import torchaudio; print(torchaudio.__version__);'\r\n\r\ntwine upload --verbose /opt/torchaudio*.whl || echo \"failed to upload wheel to ${TWINE_REPOSITORY_URL}\"\r\n```\r\n```bash\r\nsrc/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/cuda/include -Wall -D_GLIBCXX_USE_CXX11_ABI=1 -O3 -DNDEBUG -std=gnu++17 -fPIC -D_GLIBCXX_USE_CXX11_ABI=1 -MD -MT src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg4.dir/pybind/pybind.cpp.o -MF src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg4.dir/pybind/pybind.cpp.o.d -o src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg4.dir/pybind/pybind.cpp.o -c /opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp\r\nIn file included from /usr/local/lib/python3.10/dist-packages/torch/include/c10/util/Exception.h:5,\r\n from /usr/local/lib/python3.10/dist-packages/torch/include/ATen/BlasBackend.h:3,\r\n from /usr/local/lib/python3.10/dist-packages/torch/include/ATen/Context.h:3,\r\n from /usr/local/lib/python3.10/dist-packages/torch/include/ATen/ATen.h:7,\r\n from /usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,\r\n from /opt/torchaudio/src/libtorio/ffmpeg/ffmpeg.h:3,\r\n from /opt/torchaudio/src/libtorio/ffmpeg/hw_context.h:3,\r\n from /opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp:1:\r\n/opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp: In function \u2018int torio::io::{anonymous}::{anonymous}::read_func(void*, uint8_t*, int)\u2019:\r\n/opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp:125:19: warning: comparison of integer expressions of different signedness: \u2018long unsigned 
int\u2019 and \u2018int\u2019 [-Wsign-compare]\r\n 125 | chunk_len <= request,\r\n | ~~~~~~~~~~^~~~~~~~~~\r\nIn file included from /opt/torchaudio/build/temp.linux-aarch64-cpython-310/_deps/f4-src/include/libavutil/avutil.h:296,\r\n from /opt/torchaudio/build/temp.linux-aarch64-cpython-310/_deps/f4-src/include/libavutil/samplefmt.h:24,\r\n from /opt/torchaudio/build/temp.linux-aarch64-cpython-310/_deps/f4-src/include/libavcodec/avcodec.h:31,\r\n from /opt/torchaudio/src/libtorio/ffmpeg/ffmpeg.h:10,\r\n from /opt/torchaudio/src/libtorio/ffmpeg/hw_context.h:3,\r\n from /opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp:1:\r\n/opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp: In function \u2018int torio::io::{anonymous}::read_bytes(void*, uint8_t*, int)\u2019:\r\n/opt/torchaudio/build/temp.linux-aarch64-cpython-310/_deps/f4-src/include/libavutil/common.h:105:25: warning: comparison of integer expressions of different signedness: \u2018std::basic_string_view<char>::size_type\u2019 {aka \u2018long unsigned int\u2019} and \u2018int\u2019 [-Wsign-compare]\r\n 105 | #define FFMIN(a,b) ((a) > (b) ? (b) : (a))\r\n | ~~~~^~~~~\r\n/opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp:202:19: note: in expansion of macro \u2018FFMIN\u2019\r\n 202 | auto num_read = FFMIN(wrapper->src.size() - wrapper->index, buf_size);\r\n | ^~~~~\r\n[82/92] /usr/bin/c++ -DTORIO_FFMPEG_EXT_NAME=_torio_ffmpeg6 -DUSE_C10D_GLOO -DUSE_C10D_MPI -DUSE_C10D_NCCL -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_RPC -DUSE_TENSORPIPE -D_torio_ffmpeg6_EXPORTS -I/opt/torchaudio/src -I/usr/include/python3.10 -I/opt/torchaudio/build/temp.linux-aarch64-cpython-310/_deps/f6-src/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/cuda/include -Wall -D_GLIBCXX_USE_CXX11_ABI=1 -O3 -DNDEBUG -std=gnu++17 -fPIC -D_GLIBCXX_USE_CXX11_ABI=1 -MD -MT src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg6.dir/pybind/pybind.cpp.o -MF src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg6.dir/pybind/pybind.cpp.o.d -o src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg6.dir/pybind/pybind.cpp.o -c /opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp\r\nIn file included from /usr/local/lib/python3.10/dist-packages/torch/include/c10/util/Exception.h:5,\r\n from /usr/local/lib/python3.10/dist-packages/torch/include/ATen/BlasBackend.h:3,\r\n from /usr/loca", "url": "https://github.com/pytorch/audio/issues/3835", "state": "closed", "labels": [], "created_at": "2024-09-25T10:10:21Z", "updated_at": "2025-01-08T12:54:20Z", "comments": 2, "user": "johnnynunez" }, { "repo": "pytorch/examples", "number": 1289, "title": "Does torchrun + FSDP create multiple copies of the same dataset and model?", "body": "In the [example T5 training code](https://github.com/pytorch/examples/blob/cdef4d43fb1a2c6c4349daa5080e4e8731c34569/distributed/FSDP/T5_training.py#L77C24-L77C35), the main function creates a copy of the model and dataset regardless of the worker rank before passing it to FSDP. Does this mean that there are n copies of the model and dataset when running the script with torchrun and n processes? 
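To make the question concrete, here is a minimal sketch of the pattern I am asking about (placeholder model and dataset, not the actual T5_training.py code; assumes a single node launched with `torchrun --nproc_per_node=N`):

```python
# Minimal sketch, not the example script: every rank builds the full model and
# dataset object, FSDP only shards the parameters after wrapping, and a
# DistributedSampler is what keeps each rank reading a distinct data subset.
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

dist.init_process_group("nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 1024)      # full copy built on every rank
model = FSDP(model.cuda())               # parameters are sharded only from here on

dataset = TensorDataset(torch.randn(512, 1024))   # dataset object exists on every rank
sampler = DistributedSampler(dataset)             # ...but each rank samples its own shard
loader = DataLoader(dataset, batch_size=8, sampler=sampler)
```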
", "url": "https://github.com/pytorch/examples/issues/1289", "state": "open", "labels": [], "created_at": "2024-09-25T03:59:24Z", "updated_at": "2024-09-25T04:25:55Z", "comments": 1, "user": "tsengalb99" }, { "repo": "pytorch/xla", "number": 8059, "title": "Poor performance with 1 GPU?", "body": "Hello, I am trying to evaluate the impact of XLA in our models but before that I want to be sure that I know how to adapt our code and execute XLA models without problem.\r\n\r\nGPU: Nvidia 4090 GTX 24GB\r\nCuda 12.2\r\n```bash\r\n$ pip freeze | grep torch\r\ntorch==2.4.0\r\ntorch-xla==2.4.0\r\ntorch_xla_cuda_plugin @ https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla_cuda_plugin-2.4.0-py3-none-any.whl#sha256=208085526f67739c2ea2ab15f1707935b2cfee7c1501116a524cfaa8d7b252d2\r\ntorchvision==0.19.0\r\n```\r\n\r\nI have been trying a simple model with MNIST\r\n\r\n```python\r\nimport numpy as np\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nimport torchvision\r\nimport torch_xla.core.xla_model as xm\r\nimport torch_xla.runtime as xr\r\nfrom tqdm import tqdm\r\nimport random\r\nfrom torch_xla.amp import syncfree, GradScaler, autocast\r\n\r\nimport torch_xla.debug.metrics as met\r\n\r\n\r\ndef random_seed(seed_value, use_cuda):\r\n np.random.seed(seed_value) # cpu vars\r\n torch.manual_seed(seed_value) # cpu vars\r\n random.seed(seed_value) # Python\r\n if use_cuda:\r\n torch.cuda.manual_seed(seed_value)\r\n torch.cuda.manual_seed_all(seed_value) # gpu vars\r\n torch.backends.cudnn.deterministic = True #needed\r\n torch.backends.cudnn.benchmark = False\r\n\r\nrandom_seed(42,True)\r\n\r\nXLA = True\r\n\r\n# Enable XLA SPMD execution mode.\r\n# xr.use_spmd()\r\nif XLA:\r\n device = xm.xla_device()\r\nelse:\r\n device = \"cuda\"\r\n\r\nclass ToyModel(nn.Module):\r\n def __init__(self):\r\n super(ToyModel, self).__init__()\r\n self.conv1 = nn.Conv2d(1, 32, 3, 1)\r\n self.conv2 = nn.Conv2d(32, 64, 3, 1)\r\n self.dropout1 = nn.Dropout(0.25)\r\n self.dropout2 = nn.Dropout(0.5)\r\n self.fc1 = nn.Linear(9216, 128)\r\n self.fc2 = nn.Linear(128, 10)\r\n\r\n def forward(self, x):\r\n x = self.conv1(x)\r\n x = F.relu(x)\r\n x = self.conv2(x)\r\n x = F.relu(x)\r\n x = F.max_pool2d(x, 2)\r\n x = self.dropout1(x)\r\n x = torch.flatten(x, 1)\r\n x = self.fc1(x)\r\n x = F.relu(x)\r\n x = self.dropout2(x)\r\n x = self.fc2(x)\r\n output = F.log_softmax(x, dim=1)\r\n return output\r\n\r\n\r\nmodel = ToyModel()\r\nmodel.to(device)\r\n\r\ntransform = torchvision.transforms.Compose([\r\n torchvision.transforms.ToTensor(),\r\n torchvision.transforms.Normalize((0.1307,), (0.3081,))\r\n])\r\n\r\ntrain_dataset = torchvision.datasets.MNIST(\r\n '.', train=True, download=True, transform=transform)\r\n\r\ntrain_loader = torch.utils.data.DataLoader(\r\n train_dataset, batch_size=32, shuffle=False\r\n)\r\n\r\nn_epochs = 10\r\ncriterion = torch.nn.MSELoss()\r\nif XLA:\r\n optimizer = syncfree.SGD(model.parameters(), lr=0.1) # torch_xla\r\nelse:\r\n optimizer = torch.optim.SGD(model.parameters(), lr=0.1)\r\n\r\nif XLA:\r\n scaler = GradScaler(use_zero_grad=True) # torch_xla\r\nelse:\r\n scaler = torch.amp.GradScaler()\r\n\r\nfor epoch in tqdm(range(n_epochs)):\r\n xm.mark_step()\r\n for i, (images, labels) in tqdm(enumerate(train_loader), leave=False):\r\n if not XLA:\r\n optimizer.zero_grad()\r\n if i >= 2000:\r\n break\r\n images = images.to(device)\r\n labels = labels.to(device)\r\n # Forward pass\r\n if XLA:\r\n autoamp = autocast(device, dtype=torch.bfloat16)\r\n else:\r\n 
autoamp = torch.autocast(device)\r\n \r\n with autoamp:\r\n outputs = model(images)\r\n loss = F.nll_loss(outputs, labels)\r\n # Backward\r\n scaler.scale(loss).backward()\r\n if XLA:\r\n gradients = xm._fetch_gradients(optimizer)\r\n xm.all_reduce('sum', gradients, scale=1.0 / xr.world_size())\r\n scaler.step(optimizer)\r\n scaler.update()\r\n xm.mark_step()\r\n\r\n print(loss)\r\n```\r\n\r\nAnd I haven't see any performance improvement, at best the execution time is the same. I thought that maybe the model was being recompiled too many times or something, so I followed https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md\r\n\r\nMetrics are\r\n```\r\nMetric: DeviceLockWait\r\n TotalSamples: 37520\r\n Accumulator: 113ms908.380us\r\n ValueRate: 475.217us / second\r\n Rate: 159.174 / second\r\n Percentiles: 1%=000.972us; 5%=000.989us; 10%=000.999us; 20%=001.010us; 50%=004.627us; 80%=004.978us; 90%=005.046us; 95%=005.112us; 99%=005.205us\r\nMetric: InputOutputAliasCount\r\n TotalSamples: 2\r\n Accumulator: 42.00\r\n ValueRate: 21.95 / second\r\n Rate: 1.04547 / second\r\n Percentiles: 1%=8.00; 5%=8.00; 10%=8.00; 20%=8.00; 50%=34.00; 80%=34.00; 90%=34.00; 95%=34.00; 99%=34.00\r\nMetric: IrValueTensorToXlaData\r\n TotalSamples: 37508\r\n Accumulator: 02s925ms072.075us\r\n ValueRate: 007ms438.792us / second\r\n Rate: 159.175 / second\r\n Percentiles: 1%=030.320us; 5%=030.752us; 10%=030.926us; 20%=031.205us; 50%=059.240us; 80%=061.600us; 90%=062.326us; 95%=062.728us; 99%=067.959us\r\nMetric: LazyTracing\r\n TotalSamples: 3525066\r\n Accumulator: 46s352ms512.571us\r\n ValueRate: 216ms224.1", "url": "https://github.com/pytorch/xla/issues/8059", "state": "closed", "labels": [], "created_at": "2024-09-24T13:24:42Z", "updated_at": "2024-11-17T19:39:48Z", "comments": 3, "user": "Patataman" }, { "repo": "pytorch/xla", "number": 8057, "title": "PjRtComputationClient::ExecuteReplicated core dump when encountering a scalar", "body": "## \u2753 Questions and Help\r\nIn my test code, I found that there might be PjRtData as the type argument(the argument is a scalar), and then the core dump.\r\nhttps://github.com/pytorch/xla/blob/master/torch_xla/csrc/runtime/pjrt_computation_client.cc#L806\r\nI wrote a test function earlier that tried to transform all arguments manually, but core dumped.\r\n![image](https://github.com/user-attachments/assets/982c4200-f708-42c4-9865-3c4f5f4b3488)\r\n![image](https://github.com/user-attachments/assets/19f0b9a1-f3b5-406b-a87e-fe3b4d610442)\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/8057", "state": "open", "labels": [ "question", "distributed" ], "created_at": "2024-09-24T10:35:31Z", "updated_at": "2025-03-31T21:30:22Z", "user": "mars1248" }, { "repo": "pytorch/audio", "number": 3834, "title": "Ability to build manylinux2014 compliant wheels for other archs (ppc64le)", "body": "### \ud83d\ude80 The feature\n\nI'd like to have the possibility to create manylinux2014 compliant wheels for ppc64le. 
Is there a documentation for this?\n\n### Motivation, pitch\n\nPowerPC has in-core accelerator engines (MMA, Matrix-mulitply assist) which focused on AI inferencing and packages such as torch/audio/vision are preferred to have prebuilt manylinux wheels.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/audio/issues/3834", "state": "open", "labels": [], "created_at": "2024-09-23T21:59:39Z", "updated_at": "2024-09-23T21:59:39Z", "comments": 0, "user": "mgiessing" }, { "repo": "pytorch/xla", "number": 8049, "title": "How to run XLA with CPU offloaded models", "body": "## \u2753 Questions and Help\r\n\r\nHow do you run models that are offloaded to the CPU, Trying to work with ```enable_sequential_cpu_offload``` or ```enable_model_cpu_offload```, when running ```torch_xla.sync()/xm.mark_step() ```, the graph seems to not exclude such factor, and in turn takes much more memory than when only running the model on CPU. For example, reportedly running maximum at 25GB on the CPU but takes up 170GB on XLA devices, this is tested with EasyAnimate V4 model generating a 960x1680 24fps video. If needed, I can provide code if this has not been implemented.\r\n\r\n```RuntimeError: Bad StatusOr access: RESOURCE_EXHAUSTED: Compilation failure: Aborting compilation early because it's unlikely to have enough device memory. Requires 170.73G, has 14.71G available. If more detailed logging is desired, set --xla_tpu_impure_oom_fast_exit_threshold=-1```", "url": "https://github.com/pytorch/xla/issues/8049", "state": "open", "labels": [ "enhancement", "performance" ], "created_at": "2024-09-23T10:59:06Z", "updated_at": "2025-03-31T15:42:09Z", "user": "radna0" }, { "repo": "pytorch/TensorRT", "number": 3173, "title": "\u2753 [Question] torchscript int8 quantization degradation in recent versions", "body": "TS INT8 degradation later versions\r\n\r\nHi all, I get a degradation in results after an INT8 quantization with torchscript, after updating my torch_tensorrt, torch and tensorrt versions. 
I have listed the dependencies for both cases below, is this expected?\r\n\r\nEarlier Version (Works Well):\r\nTorch: 2.0.1\r\nCUDA: 11.8\r\ntorch_tensorrt: 1.4.0\r\nTensorrt: 8.5.3.1\r\nGPU: A100\r\nPython: 3.9\r\n\r\nLater Version (Degradation in Results): Torch 2.4.0\r\nCUDA 12.1\r\ntorch_tensorrt: 2.4.0\r\nTensorrt: 10.1.0\r\nGPU: A100\r\nPython: 3.11\r\n\r\nScript (Approximately, as I can't submit the model):\r\n```\r\nimport torch\r\nimport time\r\nfrom pathlib import Path\r\nimport PIL\r\nimport PIL.Image\r\nimport torch_tensorrt\r\n\r\nimport torch_tensorrt.ptq\r\nfrom torchvision.transforms.functional import to_tensor, center_crop\r\nfrom torch.utils.data import Dataset, DataLoader\r\n\r\nclass CalibrationDataset(Dataset):\r\n def __init__(self, tile_size: int, model: torch.nn.Module, dtype: torch.dtype) -> None:\r\n self._tile_size = tile_size\r\n self._images = [f for f in Path(\"images\").glob(\"**/*\")]\r\n self._length = len(self._images)\r\n print(\"Dataset size:\", self._length)\r\n self._model = model\r\n self._dtype = dtype\r\n\r\n def __len__(self) -> int:\r\n return self._length\r\n\r\n def _to_tensor(self, img_path: Path) -> torch.Tensor:\r\n pil_img = PIL.Image.open(img_path).convert(\"RGB\")\r\n return to_tensor(pil_img).to(device=\"cuda\", dtype=self._dtype).unsqueeze(0)\r\n\r\n def __getitem__(self, idx: int) -> tuple[torch.Tensor, torch.Tensor]:\r\n print(f\"CalibrationDataset called with {idx=}\")\r\n input_file = self._images[idx]\r\n input_tensor = center_crop(self._to_tensor(input_file), output_size=self._tile_size)\r\n return input_tensor, self._model(input_tensor)\r\n\r\n\r\n\r\ndef compile_to_tensort_and_quantize() -> None:\r\n HALF = True\r\n dtype = torch.float16\r\n batch_size, tile_size = 1, 538\r\n\r\n model = ImageToImageModel.create(checkpoint = \"base\", half=HALF, device=torch.device(\"cuda\"))# Proprietary upscaling model, cannot submit code\r\n with torch.no_grad():\r\n calibration_dataset = CalibrationDataset(tile_size=tile_size, model=model, dtype=dtype)\r\n testing_dataloader = DataLoader(\r\n calibration_dataset, batch_size=4, shuffle=True, num_workers=0,)\r\n\r\n calibrator = torch_tensorrt.ptq.DataLoaderCalibrator(\r\n testing_dataloader,\r\n cache_file=\"./calibration.cache\",\r\n use_cache=False,\r\n algo_type=torch_tensorrt.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,\r\n device=torch.device(\"cuda\"),\r\n )\r\n dummy_input = torch.randn(1, 3, tile_size, tile_size, device=torch.device(\"cuda\"), dtype=dtype)\r\n inputs = torch.randn(1, 3, tile_size, tile_size, device=torch.device(\"cuda\"), dtype=dtype)\r\n torch_script_module = torch.jit.trace(model, example_inputs=inputs)\r\n\r\n with torch_tensorrt.logging.debug():\r\n trt_ts_module = torch_tensorrt.compile(\r\n torch_script_module,\r\n truncate_long_and_double=True,\r\n inputs=[dummy_input],\r\n enabled_precisions={torch.int8},\r\n calibrator=calibrator,\r\n device={\r\n \"device_type\": torch_tensorrt.DeviceType.GPU,\r\n \"gpu_id\": 0,\r\n \"dla_core\": 0,\r\n \"allow_gpu_fallback\": False,\r\n \"disable_tf32\": False\r\n },\r\n )\r\n\r\n\r\n torch.jit.save(trt_ts_module, \"trt_OLD.ts\")\r\n\r\n print(\"Benchmark\")\r\n times = []\r\n for _ in range(5):\r\n t1 = time.monotonic()\r\n out = trt_ts_module(inputs)\r\n print(out)\r\n torch.cuda.synchronize()\r\n times.append(time.monotonic() - t1)\r\n\r\n print(times)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n compile_to_tensort_and_quantize()\r\n\r\n```\r\nNote: In the later version, need to switch `import torch_tensorrt.ptq` to `import 
torch_tensorrt.ts.ptq`, the rest of the script is identical\r\n\r\n\r\nWhile the previous versions work well (I get a quantized model that produces close-enough results to the original model), for the later version, I get garbage outputs (I can see there is something wrong with the calibration as the output tensor values is always within a small range 0.18-0.21, whereas it should take any value between -1,1). I'm posting the quantization script approximately, however, I cannot post the model details unfortunately, as it's proprietary. \r\n\r\nWould appreciate all forms of help :), also would love to submit a fix for the underlying issue (if one is present).", "url": "https://github.com/pytorch/TensorRT/issues/3173", "state": "open", "labels": [ "question" ], "created_at": "2024-09-22T14:46:00Z", "updated_at": "2024-09-23T16:44:03Z", "user": "seymurkafkas" }, { "repo": "pytorch/serve", "number": 3325, "title": "Kserve management api for registering new models", "body": "I have a setup where the Kserve endpoint is mounted to PVC, which reads model files on startup and loads them.\r\n\r\nIs it possible to register a new version of the model (after I added it to PVC) without restarting whole Kserve endpoints with other models and expanding config.properties?\r\n\r\nTorchserve supports this use case but I can't find documentation to do it on Kserve.", "url": "https://github.com/pytorch/serve/issues/3325", "state": "open", "labels": [ "question" ], "created_at": "2024-09-20T10:47:44Z", "updated_at": "2024-09-20T19:28:03Z", "user": "matej14086" }, { "repo": "pytorch/xla", "number": 8022, "title": "Add documentation for `pip install[pallas]`", "body": "## \ud83d\udcda Documentation\r\n\r\nPlease add installation documentation for `pip install[pallas]` to the landing page README instructions: https://github.com/pytorch/xla/blob/master/setup.py#L318\r\n\r\nAccordingly, this documentation should clearly explain how users choose between the two: https://pypi.org/project/torch-xla/\r\n\r\ncc @JackCaoG @ManfeiBai @jiawenliu64 @zpcore ", "url": "https://github.com/pytorch/xla/issues/8022", "state": "open", "labels": [ "documentation" ], "created_at": "2024-09-16T15:50:14Z", "updated_at": "2024-09-16T15:50:15Z", "comments": 0, "user": "miladm" }, { "repo": "pytorch/torchchat", "number": 1147, "title": "[distributed][perf] ensure that all decoding ops are happening on gpu with no cpu sync", "body": "### \ud83d\udc1b Describe the bug\n\nper @kwen2501 - when we are doing decoding step:\r\n~~~\r\nnext_token = torch.tensor([decode_results[0][0]], device=device)\r\n~~~\r\n\"nit: I am not sure if the use of torch.tensor here would cause a sync from GPU to CPU (to get the scalar) then move to the GPU again (to create the tensor).\r\nIf there is no use of next_token in CPU domain, better to just use index op here.\r\n\r\nOr, is decode_results already on CPU? Hmm, then we'd need to think about how to arrange these CPU ops and GPU ops. 
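(For illustration, a hedged sketch of the two variants being discussed, assuming `decode_results` is a tensor of token ids already on `device`; the shape is an assumption made up for the example.)

```python
# Illustration only: torch.tensor([...]) pulls the scalar back to the CPU
# (forcing a device sync) and then builds a new GPU tensor, while plain
# indexing stays on the GPU and queues asynchronously on the stream.
import torch

device = "cuda"
decode_results = torch.randint(0, 32000, (1, 1), device=device)  # stand-in shape

next_token_sync = torch.tensor([decode_results[0][0]], device=device)  # current pattern
next_token_async = decode_results[0, 0:1]                              # index op, no CPU round-trip
```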
Ideally, you would like to fire the send right after step().\"\r\n\r\n\n\n### Versions\n\nn/a", "url": "https://github.com/pytorch/torchchat/issues/1147", "state": "open", "labels": [ "performance", "Distributed" ], "created_at": "2024-09-15T00:09:56Z", "updated_at": "2024-09-17T22:57:11Z", "comments": 0, "user": "lessw2020" }, { "repo": "pytorch/PiPPy", "number": 1142, "title": "How to train a model with pippy", "body": "It seems that the examples here are all examples of inference, where are the examples of training\uff1f", "url": "https://github.com/pytorch/PiPPy/issues/1142", "state": "open", "labels": [], "created_at": "2024-09-14T09:27:38Z", "updated_at": "2024-11-20T07:18:01Z", "user": "sunkun1997" }, { "repo": "pytorch/data", "number": 1317, "title": "StatefulDataloader is slower than Dataloader. Is there any best practice of StatefulDataloader?", "body": "### \ud83d\udcda The doc issue\n\nHello,\r\n\r\nThank you for your awesome implementation of StatefulDataloader.\r\n\r\nI use the compare the speed of Dataloader and StatefulDataloader, the StatefulDataloader is much slower than Dataloader. For example, Dataloader costs 10ms per iter, but StatefulDataloader costs about 2s per iter.\r\n\r\nIs there any best practice of StatefulDataloader?\r\n\r\ncc @andrewkho \n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/1317", "state": "closed", "labels": [], "created_at": "2024-09-13T09:43:45Z", "updated_at": "2024-09-13T09:50:08Z", "comments": 1, "user": "by2101" }, { "repo": "pytorch/torchtitan", "number": 577, "title": "DDP (replicate) + TP?", "body": "Currently, when there are two device meshes (`tp` and `dp`), torchtitan should choose FSDP as the **only** backend for DP. Ref:\r\nhttps://github.com/pytorch/torchtitan/blob/d2a4904f58accc683c17c66a360026cb3c8109af/torchtitan/parallelisms/parallelize_llama.py#L97-L98\r\n\r\nHowever, the `replicate` should support >1D mesh and be used with TP enabled. [Ref](https://github.com/pytorch/pytorch/blob/7dc1788396fc9e2860c0c236e0c0e108e96b83c8/torch/distributed/_composable/replicate.py#L218-L237).\r\n\r\n**Q1:** Why does torchtitan not support DDP (replicate) + TP? Is it only an implementation choice?\r\n\r\nI have [handwritten DDP + TP in torchtitan](https://github.com/pytorch/torchtitan/compare/main...yzs981130:torchtitan:yzs/ddp_tp) and surprisingly found that the loss never goes down. 
It seems there are no gradients after `loss.backward()`.\r\n\r\n![image](https://github.com/user-attachments/assets/9af2b7e6-a866-4883-b493-9d206f22d378)\r\n\r\nTo reproduce, use the branch above and run `run_llama_train.sh` on an 8-GPU machine.\r\n\r\n**Q2:** Is it a bug or an intended feature that DDP+TP is not used, and that results in missing gradients?\r\n\r\nAnd collect_env:\r\n```\r\nCollecting environment information...\r\nPyTorch version: 2.5.0.dev20240903+cu118\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.8\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Debian GNU/Linux 9.13 (stretch) (x86_64)\r\nGCC version: (Debian 6.3.0-18+deb9u1) 6.3.0 20170516\r\nClang version: Could not collect\r\nCMake version: version 3.21.2\r\nLibc version: glibc-2.24\r\n\r\nPython version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-5.4.56.bsk.2-amd64-x86_64-with-glibc2.24\r\nIs CUDA available: True\r\nCUDA runtime version: 12.6.20\r\nCUDA_MODULE_LOADING set to: LAZY\r\n...\r\n\r\nNvidia driver version: 560.28.03\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n...\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.26.4\r\n[pip3] optree==0.12.1\r\n[pip3] pytorch-triton==3.0.0+dedb7bdf33\r\n[pip3] torch==2.5.0.dev20240903+cu118\r\n[pip3] torchaudio==2.5.0.dev20240903+cu118\r\n[pip3] torchdata==0.8.0\r\n[pip3] torchvision==0.20.0.dev20240903+cu118\r\n[conda] numpy 1.26.4 pypi_0 pypi\r\n[conda] optree 0.12.1 pypi_0 pypi\r\n[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi\r\n[conda] torch 2.5.0.dev20240903+cu118 pypi_0 pypi\r\n[conda] torchaudio 2.5.0.dev20240903+cu118 pypi_0 pypi\r\n[conda] torchdata 0.8.0 pypi_0 pypi\r\n[conda] torchvision 0.20.0.dev20240903+cu118 pypi_0 pypi\r\n```\r\n\r\nP.S. \r\n- Torch 2.4.0 shares the similar abnormal results\r\n- Using `DistributedDataParallel` (class) rather than `replicate` behaves well\r\n\r\nThanks in advance! ", "url": "https://github.com/pytorch/torchtitan/issues/577", "state": "closed", "labels": [ "question" ], "created_at": "2024-09-13T08:10:05Z", "updated_at": "2025-03-19T21:22:12Z", "user": "yzs981130" }, { "repo": "pytorch/xla", "number": 8000, "title": "[RFC] `torch_xla` Backward Compatibility Proposal", "body": "Recently, we have started the process to reduce the torch_xla API footprint in favor of torch API to improve the usability. This RFC focuses on the process to deprecate any functions.\r\n\r\n## Backward compatibility\r\nWe propose to offer a 6 months (2 releases) grace period before completely removing the deprecated API. As is shown in the graph below:\r\n\r\n<img width=\"1052\" alt=\"Screenshot 2024-09-12 at 1 47 03\u202fPM\" src=\"https://github.com/user-attachments/assets/9d91f784-8915-4908-9778-eed28a3ecd22\">\r\n\r\nDevelopers should follow the illustrated timeline with the following action:\r\n- Before version X-1 branch cut, developers check in API changes and wrap the function to be deprecated with the warning message. The API to be deprecated should still be usable but it should print out the warning message once if any code is calling into the function. In this way, starting from version X and version X+1, we should see the deprecated message that mentions `API xxx will be deprecated in release X+2`. \r\n- Before version X+2 branch cut, developers completely delete the deprecated functions along with the warning deprecated message. 
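For illustration only, the "warn once, keep working" behavior described above could look roughly like the generic wrapper below; the concrete torch_xla helpers (`deprecated` / `@mark_deprecated`) are shown further down in this RFC, so this is not their implementation:

```python
# Generic warn-once sketch, not the torch_xla implementation.
import functools
import warnings


def warn_once_deprecated(new_fn, msg):
    def decorator(old_fn):
        warned = False

        @functools.wraps(old_fn)
        def wrapper(*args, **kwargs):
            nonlocal warned
            if not warned:
                warnings.warn(msg, DeprecationWarning)
                warned = True
            return new_fn(*args, **kwargs)  # deprecated API stays usable by delegating

        return wrapper

    return decorator
```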
\r\n\r\nIf we follow the timeline, the deprecated API should still be usable for two releases, in which we guarantee backward compatibility. \r\n\r\nFor each deprecated API, mention it in the release X\u2019s release note including what\u2019s the suggested new APIs and when to completely deprecate the old one.\r\n## Actions to take for deprecation: \r\n### Github actions for API deprecation \r\nBefore deprecate any APIs, create a github issue to include the following details:\r\n- Function to be deprecated and whether we have a new API as a replacement.\r\n- Proposed timeline before completely deprecating the function. We need to guarantee the deprecated message lasts for at least 2 releases. \r\n### How to mark function to be deprecated \r\nHere is the example on the code changes if we want to deprecate `torch_xla/core/xla_model.py:xrt_world_size()` with ` torch_xla/runtime.py:world_size()`. There are two ways to mark a function as deprecated:\r\n- Use deprecated function (full example [PR](https://github.com/pytorch/xla/pull/7679)):\r\n```python\r\n# In torch_xla/core/xla_model.py:\r\nfrom torch_xla.experimental.deprecation import deprecated\r\nfrom . import xla_model as this_module\r\nxrt_world_size = deprecated(this_module, torch_xla.runtime.world_size,\r\n 'xrt_world_size() will be removed in release 2.7.')\r\n# Remember to comment out or remove the original xrt_world_size in the file.\r\n\"\"\"\r\ndef xrt_world_size():\r\n ...\r\n\"\"\"\r\n\r\n# In torch_xla/runtime.py\r\ndef world_size():\r\n ...\r\n```\r\n\r\n- Use @mark_deprecated decorator:\r\n```python\r\n# In torch_xla/core/xla_model.py:\r\nfrom torch_xla.experimental.deprecation import mark_deprecated\r\n\r\n@mark_deprecated(torch_xla.runtime.world_size, extra_msg='xrt_world_size() will be removed in release 2.7.')\r\ndef xrt_world_size():\r\n ...\r\n\r\n\r\n# In torch_xla/[runtime.py](http://runtime.py/), define the new function:\r\ndef world_size():\r\n ...\r\n```", "url": "https://github.com/pytorch/xla/issues/8000", "state": "open", "labels": [ "documentation", "2.5 release" ], "created_at": "2024-09-12T20:58:55Z", "updated_at": "2025-07-11T17:38:19Z", "comments": 4, "user": "zpcore" }, { "repo": "pytorch/torchchat", "number": 1134, "title": "Failures when using PyTorch local build vs. binaries", "body": "### \ud83d\udc1b Describe the bug\n\nI ran into an issue with loading the tokenizer, which was root caused to me using my local PyTorch build.\r\n\r\nAfter building the aoti runner, I ran the following command: `cmake-out/aoti_run exportedModels/stories15M.so -z /home/angelayi/.torchchat/model-cache/stories15M/tokenizer.model -i \"Once upon a time\u201d`\r\n\r\nWith my local build, the above command ran into the error: `couldn't load /home/angelayi/.torchchat/model-cache/stories15M/tokenizer.model` which is from the sentencepiece tokenizer. Specifying `-l 2` doesn't change anything as this is the default setting. 
\r\n\r\nChanging to `-l 3` results in the following error:\r\n```\r\nterminate called after throwing an instance of 'std::invalid_argument'\r\n what(): invalid encoder line: \r\nzsh: IOT instruction (core dumped) cmake-out/aoti_run ../lucy_stories15M.so -z ../tokenizer.model -l 3 -i \r\n```\r\n\r\nAfter re-running `./install/install_requirements.sh`, this installs PyTorch version at 08142024, and runs successfully.\r\nSo I tried today's nightly (09112024) using `pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121`, and this also runs successfully.\r\nGoing back to my local PyTorch build, I checked out the commit `26e5572` which corresponds to the cutoff of today's nightly, and built PyTorch locally. This runs into the initial error with the tokenizers. \r\n\r\nI still didn't figure out how to run with my local PyTorch build, but quoting Nikita, this is motivation to create a docker/venv story :P \r\n\r\ncc @malfet @Jack-Khuu \n\n### Versions\n\nmain", "url": "https://github.com/pytorch/torchchat/issues/1134", "state": "open", "labels": [ "bug", "enhancement" ], "created_at": "2024-09-11T23:57:18Z", "updated_at": "2024-09-12T01:01:24Z", "comments": 0, "user": "angelayi" }, { "repo": "pytorch/pytorch", "number": 135645, "title": "[ONNX] How to export the FlashAttention kernel", "body": "### \ud83d\udc1b Describe the bug\r\n\r\n1. code\r\n```\r\n import sys\r\n import torch \r\n from modeling_intern_vit import FlashAttention # FlashAttention of InternVL2-2B model\r\n sys.path.append(\"/home/InternVL2-2B\") \r\n qkv=torch.load(\"/home/qkv.pth\") \r\n falsh=FlashAttention().eval().cuda()\r\n out=falsh(qkv.cuda())\r\n with torch.no_grad(): \r\n torch.onnx.export( \r\n falsh, \r\n (qkv,),\r\n \"/home/qkv.onnx\",\r\n input_names = [\"input0\"],\r\n output_names = [\"qkv_out\"],\r\n opset_version = 11\r\n )\r\n```\r\n\r\n3. output\r\n\r\n```\r\n out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.varlen_fwd(\r\n/usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py:90: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n```\r\n\r\n5. onnx-file image \r\n \r\n![image](https://github.com/user-attachments/assets/f8a36f5d-1772-41ec-8e3e-0cde2b37e791)\r\n\r\n6.needed help\r\n \"My goal is to export an ONNX file for the visual part of the InternVL2-2B model, which uses the Flash-Attention module. The ONNX file I export produces inference results that differ significantly from those of PyTorch. I then tried exporting the ONNX file for Flash-Attention alone and testing it. However, the ONNX file only includes inputs and outputs, while the Flash-Attention includes many operations like reshape, which are missing in the exported ONNX file. This is the issue I\u2019m facing. I hope to export a functional ONNX file where the inference results are similar to those obtained from PyTorch. 
This is my requirement.\"\r\n\r\n \r\n \r\n\r\n\r\n \r\n\r\n\r\n### Versions\r\n\r\nCollecting environment information...\r\nPyTorch version: 2.4.0+cu121\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.4 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: 12.4.131\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090\r\nNvidia driver version: 550.107.02\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 39 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 16\r\nOn-line CPU(s) list: 0-15\r\nVendor ID: GenuineIntel\r\nModel name: 11th Gen Intel(R) Core(TM) i7-11700F @ 2.50GHz\r\nCPU family: 6\r\nModel: 167\r\nThread(s) per core: 2\r\nCore(s) per socket: 8\r\nSocket(s): 1\r\nStepping: 1\r\nCPU max MHz: 4900.0000\r\nCPU min MHz: 800.0000\r\nBogoMIPS: 4992.00\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities\r\nVirtualization: VT-x\r\nL1d cache: 384 KiB (8 instances)\r\nL1i cache: 256 KiB (8 instances)\r\nL2 cache: 4 MiB (8 instances)\r\nL3 cache: 16 MiB (1 instance)\r\nNUMA node(s): 1\r\nNUMA node0 CPU(s): 0-15\r\nVulnerability Gather data sampling: Mitigation; Microcode\r\nVulnerability Itlb multihit: Not aff", "url": "https://github.com/pytorch/pytorch/issues/135645", "state": "closed", "labels": [ "module: onnx", "triaged", "onnx-triaged" ], "created_at": "2024-09-11T01:40:30Z", "updated_at": "2024-09-27T01:46:09Z", "user": "scuizhibin" }, { "repo": "pytorch/tutorials", "number": 3050, "title": "Improve example by adding missing import", "body": "The [example \"Creating a Custom Dataset for your files\"](https://github.com/pytorch/tutorials/blob/8a8331eb2796c05113c8a98bc03a7a164407fcbf/beginner_source/basics/data_tutorial.py#L123) is missing the import `from torch.utils.data import Dataset`. 
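For illustration, the example's class cannot even be defined without it (minimal skeleton below, not the tutorial's exact `CustomImageDataset` code):

```python
# Minimal skeleton only; the tutorial's real class reads images and labels.
from torch.utils.data import Dataset  # the missing import


class MyDataset(Dataset):  # NameError: name 'Dataset' is not defined, without the import above
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]
```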
Since other imports are shown and the purpose of this example is to show how to create a custom dataset, this import is crucial and should be added.\r\n", "url": "https://github.com/pytorch/tutorials/issues/3050", "state": "closed", "labels": [], "created_at": "2024-09-10T09:04:01Z", "updated_at": "2025-04-14T18:43:31Z", "comments": 0, "user": "avitase" }, { "repo": "pytorch/xla", "number": 7987, "title": "Speeding up computation while using SPMD on large TPU pod", "body": "## \u2753 Questions and Help\r\nWhen running on vp-128 TPU pod (even when sharding only by batch dimension) we are experiencing very low performance comparing to the same pod without SPMD.\r\n\r\nDo you have any tips how to increase the performance? some SPMD arguments? things we need to think about when using it? anything that might help because right now the performance is lower than regular in a factor.\r\n@JackCaoG ", "url": "https://github.com/pytorch/xla/issues/7987", "state": "closed", "labels": [ "question", "performance" ], "created_at": "2024-09-10T07:59:14Z", "updated_at": "2025-03-31T15:57:15Z", "user": "dudulightricks" }, { "repo": "pytorch/torchtitan", "number": 572, "title": "How to calculate the total batchsize", "body": "Hi, it is me again~ I have a quick simple question: I am using the following training config with 4 GPUs. What is the total number of tokens per optimizer step? Is it 2 * 2048 or 2 * 2048 * 4?\r\n\r\n```\r\n[training]\r\nbatch_size = 2\r\nseq_len = 2048 \r\nwarmup_steps = 2000 # lr scheduler warm up, normally 20% of the train steps\r\nmax_norm = 1.0 # grad norm clipping\r\nsteps = 10000\r\ndata_parallel_degree = -1\r\ntensor_parallel_degree = 1\r\nfp8_linear = \"\"\r\ncompile = false\r\n```", "url": "https://github.com/pytorch/torchtitan/issues/572", "state": "closed", "labels": [ "question" ], "created_at": "2024-09-09T09:47:50Z", "updated_at": "2024-09-10T05:43:47Z", "user": "zyushun" }, { "repo": "pytorch/xla", "number": 7972, "title": "Registering CUDA custom calls with the C++ FFI", "body": "## \u2753 Questions and Help\r\n\r\nCurious how to build and register a CUDA custom call with XLAC - have followed https://jax.readthedocs.io/en/latest/ffi.html and read https://openxla.org/xla/custom_call and wondering what the equivalent process is for torch / whether it is currently supported.", "url": "https://github.com/pytorch/xla/issues/7972", "state": "open", "labels": [ "question" ], "created_at": "2024-09-07T01:27:35Z", "updated_at": "2025-03-31T16:08:31Z", "user": "skrider" }, { "repo": "pytorch/torchchat", "number": 1114, "title": "What is the future plan of this torchchat project?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nTorchchat provides a solution of running LLM with PyTorch optimization on servers, desktop and mobile.\r\n\r\nMay I know what is the future plan of this project? Is there any new features to finish to encourage users to use Torchchat as a solution?\r\n", "url": "https://github.com/pytorch/torchchat/issues/1114", "state": "closed", "labels": [], "created_at": "2024-09-06T06:03:17Z", "updated_at": "2024-09-09T15:40:39Z", "user": "yanbing-j" }, { "repo": "pytorch/pytorch", "number": 135098, "title": "How to gracefully mask CompositeImplicitAutograd for different backends", "body": "### \ud83d\udc1b Describe the bug\n\nI implemented torch.compile\u2019s backend for my hardware via privateUserOne. I also found that torch.compile by default decomposes upsample_nearest2d into a bunch of small operators, just like _upsample_nearest does. 
But on my hardware, the _unsafe_index operator doesn\u2019t perform well, so I\u2019d like to be able to call the custom upsample_nearest2d operator directly for better performance. I don't know if this is a bug or if there could be a better implementation.\n\n### Error logs\n\n_No response_\n\n### Minified repro\n\n_No response_\n\n### Versions\n\nIt is irrelevant to the execution environment and is related to code implementation.\n\ncc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4", "url": "https://github.com/pytorch/pytorch/issues/135098", "state": "closed", "labels": [ "oncall: pt2", "oncall: export" ], "created_at": "2024-09-04T09:11:28Z", "updated_at": "2024-11-01T06:20:49Z", "user": "yangxiaorun" }, { "repo": "pytorch/vision", "number": 8626, "title": "Better decoder docs", "body": "Our decoding docs are poor, disorganized, and don't have any example. \r\nWe should improve those to clarify what is supported, how, and encourage users to rely on those.", "url": "https://github.com/pytorch/vision/issues/8626", "state": "closed", "labels": [], "created_at": "2024-09-03T14:47:11Z", "updated_at": "2024-10-01T12:19:14Z", "comments": 0, "user": "NicolasHug" }, { "repo": "pytorch/torchtitan", "number": 566, "title": "Multi-node training without AWS EFA clusters", "body": "Thank you so much for releasing code for this great project!\r\n\r\nFor multi-node training, right now I've only found commands in `multinode_trainer.slurm`, which seem to be specific to AWS EFA slurm clusters.\r\n\r\nI'm wondering if it's possible to try multi-node training without ASW setup, say with simply the IPs of 2 nodes instead?\r\n\r\nThank you very much for your help!", "url": "https://github.com/pytorch/torchtitan/issues/566", "state": "closed", "labels": [ "question" ], "created_at": "2024-08-31T22:41:04Z", "updated_at": "2024-09-04T20:55:50Z", "user": "LeoXinhaoLee" }, { "repo": "pytorch/pytorch", "number": 134901, "title": "How to calculate second derivative using PyTorch with GPU (cuda)", "body": "### \ud83d\ude80 The feature, motivation and pitch\r\n\r\nI have a python code segment related to a deep RL algorithm where it calculates the second order optimization and second derivative with Hessian matrix and fisher information matrix. Normally I run the whole code on GPU (cuda), but since I got a computational issue to calculate second derivative in cuda,\r\n```\r\nNotImplementedError: the derivative for '_cudnn_rnn_backward' is not implemented. Double backwards is not supported for CuDNN RNNs due to limitations in the CuDNN API. To run double backwards, please disable the CuDNN backend temporarily while running the forward pass of your RNN. 
For example: \r\nwith torch.backends.cudnn.flags(enabled=False):\r\n output = model(inputs)\r\n```\r\nI had to move to CPU for this code segment, and now the code is executing sequentially instead of in parallel, which takes a long time to run:\r\n```\r\ngrads = torch.autograd.grad(policy_loss, self.policy.Actor.parameters(), retain_graph=True)\r\nloss_grad = torch.cat([grad.view(-1) for grad in grads])\r\n\r\ndef Fvp_fim(v = -loss_grad):\r\n with torch.backends.cudnn.flags(enabled=False):\r\n M, mu, info = self.policy.Actor.get_fim(states_batch)\r\n #pdb.set_trace()\r\n mu = mu.view(-1)\r\n filter_input_ids = set([info['std_id']])\r\n\r\n t = torch.ones(mu.size(), requires_grad=True, device=mu.device)\r\n mu_t = (mu * t).sum()\r\n Jt = compute_flat_grad(mu_t, self.policy.Actor.parameters(), filter_input_ids=filter_input_ids, create_graph=True)\r\n Jtv = (Jt * v).sum()\r\n Jv = torch.autograd.grad(Jtv, t)[0]\r\n MJv = M * Jv.detach()\r\n mu_MJv = (MJv * mu).sum()\r\n JTMJv = compute_flat_grad(mu_MJv, self.policy.Actor.parameters(), filter_input_ids=filter_input_ids, create_graph=True).detach()\r\n JTMJv /= states_batch.shape[0]\r\n std_index = info['std_index']\r\n JTMJv[std_index: std_index + M.shape[0]] += 2 * v[std_index: std_index + M.shape[0]]\r\n return JTMJv + v * self.damping\r\n```\r\nAbove is the main function, where it calculates the second derivative. below are the supportive functions and relevant classes it has used.\r\n```\r\ndef compute_flat_grad(output, inputs, filter_input_ids=set(), retain_graph=True, create_graph=False):\r\n if create_graph:\r\n retain_graph = True\r\n\r\n inputs = list(inputs)\r\n params = []\r\n for i, param in enumerate(inputs):\r\n if i not in filter_input_ids:\r\n params.append(param)\r\n\r\n grads = torch.autograd.grad(output, params, retain_graph=retain_graph, create_graph=create_graph, allow_unused=True)\r\n\r\n j = 0\r\n out_grads = []\r\n for i, param in enumerate(inputs):\r\n if (i in filter_input_ids):\r\n out_grads.append(torch.zeros(param.view(-1).shape, device=param.device, dtype=param.dtype))\r\n else:\r\n if (grads[j] == None):\r\n out_grads.append(torch.zeros(param.view(-1).shape, device=param.device, dtype=param.dtype))\r\n else:\r\n out_grads.append(grads[j].view(-1))\r\n j += 1\r\n grads = torch.cat(out_grads)\r\n\r\n for param in params:\r\n param.grad = None\r\n return grads\r\n\r\n------\r\n\r\nimport torch\r\nimport torch.nn as nn\r\n\r\n\r\nfrom agents.models.feature_extracter import LSTMFeatureExtractor\r\nfrom agents.models.policy import PolicyModule\r\nfrom agents.models.value import ValueModule\r\n\r\n\r\nclass ActorNetwork(nn.Module):\r\n def __init__(self, args):\r\n super(ActorNetwork, self).__init__()\r\n self.FeatureExtractor = LSTMFeatureExtractor(args)\r\n self.PolicyModule = PolicyModule(args)\r\n\r\n def forward(self, s):\r\n lstmOut = self.FeatureExtractor.forward(s)\r\n mu, sigma, action, log_prob = self.PolicyModule.forward(lstmOut)\r\n return mu, sigma, action, log_prob\r\n \r\n def get_fim(self, x):\r\n mu, sigma, _, _ = self.forward(x)\r\n\r\n if sigma.dim() == 1:\r\n sigma = sigma.unsqueeze(0)\r\n\r\n cov_inv = sigma.pow(-2).repeat(x.size(0), 1)\r\n\r\n param_count = 0\r\n std_index = 0\r\n id = 0\r\n std_id = id\r\n for name, param in self.named_parameters():\r\n if name == \"sigma.weight\":\r\n std_id = id\r\n std_index = param_count\r\n param_count += param.view(-1).shape[0]\r\n id += 1\r\n\r\n return cov_inv.detach(), mu, {'std_id': std_id, 'std_index': std_index}\r\n```\r\nIn the bigger picture there 
are large amounts of batches going through this function, since all of 'em have to go sequentially through this function, it highly increases the total running time. Is there a possible way to calculate the second derivative with Pytorch while running on cuda/GPU?\r\n\r\n### Alternatives\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_\n\ncc @csarofeen @ptrblck @xwang233 @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @mikaylagawarecki @zou3519 @Chillee @samdow @kshitij12345", "url": "https://github.com/pytorch/pytorch/issues/134901", "state": "open", "labels": [ "module: double backwards", "module: cudnn", "module: autograd", "module: rnn", "triaged", "module: functorch" ], "created_at": "2024-08-31T04:01:40Z", "updated_at": "2024-09-04T01:48:21Z", "user": "Damika-Anupama" }, { "repo": "pytorch/xla", "number": 7925, "title": "Prepare a documentation to explain the use cases for `torch.compile`, `torch_xla.compile`, torch_xla eager mode, torchxla2", "body": "## \ud83d\udcda Documentation\r\n\r\nAuthor a documentation to explain the use cases for `torch.compile`, `torch_xla.compile`, torch_xla eager mode, torchxla2. Users and customers look for clarity on the \"the utility\" of each option, pros/cons, small example to demonstrate correct use.\r\n\r\ncc @ManfeiBai @JackCaoG @will-cromar @qihqi \r\n", "url": "https://github.com/pytorch/xla/issues/7925", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-08-29T17:01:06Z", "updated_at": "2024-09-24T18:33:39Z", "comments": 2, "user": "miladm" }, { "repo": "pytorch/torchtitan", "number": 562, "title": "Pipeline Parallelism + FSDP", "body": "On `PP + FSDP` and `PP + TP + FSDP`:\r\n- Is there any documentation on how these different parallelisms compose?\r\n- What are the largest training runs these strategies have been tested on?\r\n- Are there benchmarks for how these strategies compare against other distributed training frameworks that expose similar parallelisms?\r\n\r\nParticularly interested in how `PP + FSDP` work together as it seems DeepSpeed explicitly disallows `ZeRO 2/3 + PP` (see [here](https://github.com/microsoft/DeepSpeed/blob/4864991f53bd2e12446198bcc655f919eb9157f9/deepspeed/runtime/pipe/engine.py#L77-L78) specifically, and [here](https://github.com/microsoft/DeepSpeed/issues/1110) for discussion).\r\n\r\n@wconstab @weifengpy @wanchaol ", "url": "https://github.com/pytorch/torchtitan/issues/562", "state": "open", "labels": [ "enhancement", "question", "module: pipelining" ], "created_at": "2024-08-29T14:19:58Z", "updated_at": "2025-10-30T06:21:51Z", "user": "jeromeku" }, { "repo": "pytorch/pytorch", "number": 134760, "title": "How to correctly release the memory of a tensor", "body": "i have fined the memory increase at this line.\r\n[param.copy_(input_param)](https://github.com/pytorch/pytorch/blob/d01a7a9faa5a742a3df7374b97bbc1db1205b6ed/torch/nn/modules/module.py#L2425)\r\nbut the memory cant be released clean after the module use.\r\nwhat happen in it and how to correctly release the memory of a tensor.\r\n\r\n[more detail](https://github.com/comfyanonymous/ComfyUI/issues/4655#issuecomment-2317354203)\n\ncc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki", "url": "https://github.com/pytorch/pytorch/issues/134760", "state": "closed", "labels": [ "module: nn", "module: memory usage", "triaged" ], "created_at": "2024-08-29T11:39:12Z", "updated_at": "2024-08-30T08:24:23Z", "user": "huangqiaobo" }, { "repo": "pytorch/TensorRT", "number": 3124, "title": 
"\u2753 [Question] dynamo conversion failing w/ TRTInterpreter", "body": "## \u2753 Question\r\n\r\nim able to `torch.export` and generate an ExportedProgram with no issues for my model. upon compiling with `torch_tensorrt`... \r\n```python\r\nep = torch.export.load(\"...\")\r\nexample_inputs = ep.example_inputs[0]\r\nmodel = ep.module().to(\"cuda\")\r\n\r\ncompile_spec = {\r\n \"ir\": \"torch_compile\",\r\n \"inputs\": example_inputs,\r\n \"enabled_precisions\": enabled_precisions,\r\n \"workspace_size\": workspace_size,\r\n \"min_block_size\": min_block_size,\r\n \"torch_executed_ops\": {},\r\n \"sparse_weights\": True,\r\n}\r\n\r\noptimized_model = torch_tensorrt.compile(model, **compile_spec)\r\n```\r\n\r\n... i run into this error:\r\n\r\n```\r\nERROR:torch_tensorrt [TensorRT Conversion Context]:INetworkDefinition::addConstant: Error Code 3: API Usage Error (Parameter check failed, condition: !weights.values == !weights.count. )\r\nTraceback (most recent call last):\r\n...\r\n File \".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 479, in run\r\n self._construct_trt_network_def()\r\n File \".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 325, in _construct_trt_network_def\r\n super().run()\r\n File \".../lib/python3.10/site-packages/torch/fx/interpreter.py\", line 145, in run\r\n self.env[node] = self.run_node(node)\r\n File \".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 529, in run_node\r\n trt_node: torch.fx.Node = super().run_node(n)\r\n File \".../lib/python3.10/site-packages/torch/fx/interpreter.py\", line 202, in run_node\r\n return getattr(self, n.op)(n.target, args, kwargs)\r\n File \".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 638, in call_function\r\n return converter(self.ctx, target, args, kwargs, self._cur_node_name)\r\n File \".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/aten_ops_converters.py\", line 242, in aten_ops_cat\r\n return impl.cat.cat(\r\n File \".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/impl/cat.py\", line 31, in cat\r\n each_input = get_trt_tensor(ctx, each_input, f\"{name}_tensor_{i}\")\r\n File \".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/converter_utils.py\", line 384, in get_trt_tensor\r\n return create_constant(ctx, input_val, name, dtype, min_rank)\r\n File \".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/converter_utils.py\", line 349, in create_constant\r\n constant.name = name\r\ntorch._dynamo.exc.BackendCompilerFailed: backend='torch_tensorrt_backend' raised:\r\nAttributeError: 'NoneType' object has no attribute 'name'\r\n```\r\n\r\nim currently able to cleanly generate an `ExportedProgram` via `torch.export`, and outputs from the trace match the original PyTorch model. in particular, its unclear to me why `!weights.values == !weights.count` would be an `API Usage Error`, and the discrepancy between torch.compile and how torch_tensorrt interprets / performs the op conversion (torch.compile on the ExportedProgram module works fine)\r\n\r\n## What you have already tried\r\n\r\ni've narrowed the issue down to a single module that does positional encoding. the output of this module is then concat'd with another tensor, which is the error above. without this module, everything works as expected, and i'm able to see about a 5x speedup. 
\r\n\r\nthe only unique thing about this module is that it has a buffer and some in-place operations; however, i've dumped and manually inspected the fx Graph and the trace looks correct (buffer lifted as a constant input). other things ive done are: re-writing the forward so that they are no in-place operations to make graph capture easier.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.4\r\n - CPU Architecture: aarch64\r\n - OS (e.g., Linux): Ubuntu\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): modified bazel build rules + install\r\n - Are you using local sources or building from archives: local build from source\r\n - Python version: 3.10\r\n - CUDA version: 12.4\r\n - GPU models and configuration: Ampere (Jetson Nano, JetPack 6.0)\r\n - Any other relevant information: i compiled torch_tensorrt on HEAD of main as of last Friday (8/23)\r\n\r\n## Additional context\r\n\r\ncc @narendasan not sure if you have any insight here. thanks!", "url": "https://github.com/pytorch/TensorRT/issues/3124", "state": "open", "labels": [ "question" ], "created_at": "2024-08-28T20:09:48Z", "updated_at": "2024-09-06T19:36:58Z", "user": "patrick-botco" }, { "repo": "pytorch/tutorials", "number": 3017, "title": "\ud83d\udca1 [REQUEST] - What is purpose of `out.backward(torch.randn(1, 10))` in neural_networks_tutorial", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\n\nIn [neural networks tutorial for beginners](https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html), we have the following:\r\n\r\nZero the gradient buffers of all parameters and backprops with random gradients:\r\n```\r\nnet.zero_grad()\r\nout.backward(torch.randn(1, 10))\r\n```\r\n\r\nWhat is the purpose of this? It is not part of standard ML workflows and can be confusing to beginners. (As evidence,I am helping some people learn basics of ML and I got questions about this line. This is how I found out about it!)\r\n\r\nIf there is no good reason for it, then I suggest:\r\n- dropping these few lines\r\n- changing wording of other parts of the page if needed. E.g. 'at this point we covered... 
calling backward'\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @subramen @albanD", "url": "https://github.com/pytorch/tutorials/issues/3017", "state": "open", "labels": [ "question", "intro", "core" ], "created_at": "2024-08-28T14:51:46Z", "updated_at": "2025-04-16T18:24:08Z", "user": "Lovkush-A" }, { "repo": "pytorch/pytorch", "number": 134668, "title": "Whether tensor parallelism supports the overlap of communication calculations for gradient computation, and how to implement it", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI want to know How to achieve the overlap of communication calculations when finding the gradient after row cutting/column cutting of the linear layer\uff0cthanks\r\nThe following is\r\nhttps://pytorch.org/docs/2.3/distributed.tensor.parallel.html\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o", "url": "https://github.com/pytorch/pytorch/issues/134668", "state": "open", "labels": [ "oncall: distributed", "triaged" ], "created_at": "2024-08-28T11:06:58Z", "updated_at": "2024-08-30T17:54:43Z", "user": "Xingzhi107" }, { "repo": "pytorch/ao", "number": 763, "title": "How to reduce autoquant compilation time", "body": "Autoquant has been popular among the diffusers crowd since its OOB performance has been the best but the main issue is compile times are quite long. There's a few strategies to mitigate this\r\n1. Tune faster: either with better heuristics or a faster tuning core loop\r\n2. Cache things: It's fine if tuning takes a long time if subsequent tunings take less time so we could have a cache. Right now some users are conflating the kernel autotune cach as an autoquant cache. Probably makes sense to hide the autotune cache \r\n3. Print progress more verbosely: Since people are waiting for a long time we can print a nice report to make things more appealing", "url": "https://github.com/pytorch/ao/issues/763", "state": "open", "labels": [], "created_at": "2024-08-27T20:49:09Z", "updated_at": "2024-08-28T17:36:03Z", "user": "msaroufim" }, { "repo": "pytorch/ao", "number": 750, "title": "Question RE AO MX formats", "body": "I noticed the [MX readme](https://github.com/pytorch/ao/tree/main/torchao/prototype/mx_formats) has this line: \"we match bitwise to other implementations of the OCP MX spec (code not in this repo), with a couple of edge cases left to resolve.\" Is there a list of edge cases where AO does not match reference implementations? Also, is https://github.com/microsoft/microxcaling the reference implementation AO is trying to match or something else? ", "url": "https://github.com/pytorch/ao/issues/750", "state": "closed", "labels": [ "question", "mx" ], "created_at": "2024-08-26T17:37:40Z", "updated_at": "2024-08-27T17:15:01Z", "user": "tsengalb99" }, { "repo": "pytorch/xla", "number": 7911, "title": "Documentation: Discoverability of http://pytorch.org/xla", "body": "## \ud83d\udcda Documentation: Discoverability of http://pytorch.org/xla\r\n\r\nThe docs are very hard to find _despite_ being hosted on [pytorch.org](http://pytorch.org/). If I visit [pytorch.org](http://pytorch.org/) I can't find any link that goes to [pytorch.org/xla](http://pytorch.org/xla). The closest I could find is somewhere deep in https://pytorch.org/pytorch-domains and even then it links to version 2.1! 
I think the discoverability can use some support possibly after we've polished up the landing page.", "url": "https://github.com/pytorch/xla/issues/7911", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-08-24T02:58:03Z", "updated_at": "2024-11-04T17:38:23Z", "comments": 7, "user": "tengyifei" }, { "repo": "pytorch/vision", "number": 8608, "title": "loss_box_reg increasing while training mask rcnn ", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI am trying to train maskRcnn model using detectron2 on my custom LVO deteset. My dataset is a single class dataset and some of the image have no annotation in it. The architecture need to learn negative examples as well for proper training as the test data contains both positive and negative lvo cases. I have segmentation annotation in coco format and have registered it using CocoRegistration.\r\nWhen I try to train the maskrcnn model the overall loss decreases but the loss_box_reg increases, and the prediction results bounding box have scores less then 0.1 for every cases (even positive cases). Why is this happening.\r\n\r\nHow to reproduce this error:\r\n\r\n```\r\ncfg = get_cfg()\r\n# cfg.merge_from_file(model_zoo.get_config_file(\"COCO-Detection/retinanet_R_101_FPN_3x.yaml\"))\r\ncfg.merge_from_file(model_zoo.get_config_file(\"COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml\"))\r\ncfg.DATASETS.TRAIN = (\"train\",)\r\ncfg.DATASETS.TEST = () # no metrics implemented for this dataset\r\ncfg.DATALOADER.NUM_WORKERS = 2\r\ncfg.INPUT.MAX_SIZE_TRAIN = 512 # every training image have size 512\r\ncfg.INPUT.MIN_SIZE_TRAIN = (512,)\r\ncfg.INPUT.MAX_SIZE_TEST = 512\r\ncfg.INPUT.MIN_SIZE_TEST = 512\r\ncfg.INPUT.MASK_FORMAT = \"bitmask\"\r\n# cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(\"COCO-Detection/retinanet_R_101_FPN_3x.yaml\") # initialize from model zoo\r\ncfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(\"COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml\")\r\ncfg.SOLVER.IMS_PER_BATCH = 2\r\ncfg.SOLVER.BASE_LR = 0.00025\r\ncfg.SOLVER.MAX_ITER = 2000\r\ncfg.SOLVER.CHECKPOINT_PERIOD = 200\r\ncfg.SOLVER.STEPS = [] # do not decay learning rate\r\ncfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512 # faster, and good enough for this toy dataset\r\ncfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only has one class (ballon)\r\ncfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS = False\r\ncfg.OUTPUT_DIR = out_dir\r\ntrainer = DefaultTrainer(cfg) \r\ntrainer.resume_or_load(resume=False)\r\ntrainer.train()\r\n```\r\nMy positive and negative dataset sample\r\n![image](https://github.com/user-attachments/assets/b59b688a-9709-44a2-beb5-53cd12916e41)\r\nAnnotation example:\r\n ``` \r\n {\r\n \"id\": 80,\r\n \"image_id\": 180,\r\n \"category_id\": 1,\r\n \"segmentation\": {\r\n \"counts\": [\r\n large list\r\n ],\r\n \"size\": [512, 512]\r\n },\r\n \"area\": 247.0,\r\n \"bbox\": [302.0, 227.0, 24.0, 13.0],\r\n \"iscrowd\": 0,\r\n \"attributes\": { \"occluded\": false\r\n }},\r\n```\r\n \r\nIssue:\r\nTotal loss:\r\n![image](https://github.com/user-attachments/assets/5095c1e6-c692-48a2-845d-5f4c54b77be9)\r\n\r\nLoss_box_reg:\r\n![image](https://github.com/user-attachments/assets/8ce4cfe0-820a-4646-9921-10d527ce3987)\r\n\r\nMy prediction scoreexample for positive cases:\r\nscores: tensor([0.0901, 0.0862, 0.0737, 0.0697, 0.0679, 0.0670, 0.0668, 0.0665, 0.0664, ........])\r\n\r\nHelp me solve this problem\r\n\r\n### Versions\r\n\r\nVersions:\r\nPyTorch version: 2.0.0+cu117\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.7\r\nROCM used to build 
PyTorch: N/A\r\n\r\nOS: Red Hat Enterprise Linux 9.4 (Plow) (x86_64)\r\nGCC version: (GCC) 11.3.0\r\nClang version: Could not collect\r\nCMake version: version 3.28.3\r\nLibc version: glibc-2.34\r\n\r\nPython version: 3.9.18 (main, May 16 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (64-bit runtime)\r\nPython platform: Linux-5.14.0-427.18.1.el9_4.x86_64-x86_64-with-glibc2.34\r\nIs CUDA available: False\r\nCUDA runtime version: 11.7.64\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 46 bits physical, 57 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 64\r\nOn-line CPU(s) list: 0-63\r\nVendor ID: GenuineIntel\r\nModel name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz\r\nCPU family: 6\r\nModel: 106\r\nThread(s) per core: 2\r\nCore(s) per socket: 16\r\nSocket(s): 2\r\nStepping: 6\r\nCPU(s) scaling MHz: 100%\r\nCPU max MHz: 3500.0000\r\nCPU min MHz: 800.0000\r\nBogoMIPS: 5800.00\r\nFlags: -------some giberish-----------\r\nVirtualization: VT-x\r\nL1d cache: 1.5 MiB (32 instances)\r\nL1i cache: 1 MiB (32 instances)\r\nL2 cache: 40 MiB (32 instances)\r\nL3 cache: 48 MiB (2 instances)\r\nNUMA node(s): 2\r\nNUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,1", "url": "https://github.com/pytorch/vision/issues/8608", "state": "closed", "labels": [], "created_at": "2024-08-23T15:15:27Z", "updated_at": "2024-08-27T10:13:48Z", "comments": 1, "user": "ArpanGyawali" }, { "repo": "pytorch/TensorRT", "number": 3115, "title": "\u2753 [Question] JetPack 6.0", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\ni'd like to use torch_tensorrt w/ JetPack 6.0, but from `setup.py`, it seems like latest supported version is JetPack 5.0 https://github.com/pytorch/TensorRT/blob/main/setup.py#L147-L164\r\n\r\n## What you have already tried\r\n1. added JetPack 6.0 to setup.py, setting `JETPACK_VERSION` to 6.0.\r\n2. downloaded bazelisk, manually added to PATH \r\n3. ran setup.py:\r\n```bash\r\npython setup.py bdist_wheel --jetpack-version 6.0 --use-cxx11-abi\r\n```\r\n4. tried creating a new WORKSPACE under `toolchains/jp_workspaces/WORKSPACE.jp60`, effectively copying and pasting `jp50` - but changing `libtorch` to be from Python 3.10. ran `bazel clean --expunge`, eventually ending with `ValueError: Can't find the directory of package torch_tensorrt: I looked in ./src/torch_tensorrt and ./torch_tensorrt`\r\n\r\npotentially missing something obvious here. thank you!\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.4\r\n - CPU Architecture: ARM64\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda env, pip install\r\n - Build command you used (if compiling from source): setup.py w/ Bazel (through Bazelisk)\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.10\r\n - CUDA version: 12.2\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n\r\n\r\ncc @narendasan @zewenli98 (seeing a lot of your commits around setup.py and jp50 :))", "url": "https://github.com/pytorch/TensorRT/issues/3115", "state": "closed", "labels": [ "question" ], "created_at": "2024-08-22T17:52:45Z", "updated_at": "2024-08-26T17:11:33Z", "user": "patrick-botco" }, { "repo": "pytorch/TensorRT", "number": 3114, "title": "\u2753 [Question] Revisit the argument types of normalization converters", "body": "## \u2753 Question\r\n\r\nhttps://github.com/pytorch/TensorRT/pull/3099#issuecomment-2303600863\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3114", "state": "open", "labels": [ "question" ], "created_at": "2024-08-22T16:17:01Z", "updated_at": "2024-08-22T18:04:10Z", "user": "peri044" }, { "repo": "pytorch/pytorch", "number": 134207, "title": "How to fallback the operators those are unsupported by XLA back to cpu backend?", "body": "I'm using the xla backend, and there are some operators that are not supported by the xla backend.\r\nHow can I use the backend fallback mechanism to fallback these unsupported operators to CPU backend?\r\n\r\nThanks!\n\ncc @bdhirsh", "url": "https://github.com/pytorch/pytorch/issues/134207", "state": "closed", "labels": [ "triaged", "module: xla" ], "created_at": "2024-08-22T06:46:21Z", "updated_at": "2024-09-05T06:52:20Z", "user": "wwtghx" }, { "repo": "pytorch/serve", "number": 3296, "title": "integrating the Torch Serve hosted model with a third party application", "body": "I have an application that takes an image converts that image into base64 to create a input request for API call.\r\n\r\nThe input schema structure created by my application looks something like this,\r\n{\r\n\"instances\":\r\n [\r\n {\r\n \"base64\": \"base64 string of image\",\r\n \"mode_type\": \"some value\"\r\n \"metadata\": \"some metadata like timestamp\"\r\n }\r\n ]\r\n}\r\n\r\nNow, I have to use this application to call a torch serve hosted application. From going through the Torch Serve documents I understood that the torch serve hosted API would accept an input in the below structure,\r\n{\r\n\"instances\":\r\n [\r\n {\r\n \"data\": [input_data]\r\n }\r\n ]\r\n}\r\nwhere the **input_data** is the data that is directly accepted by the model. For understanding purpose lets say it is Numpy array.\r\n\r\nHere is my question:\r\nIf I wanted to use my application to call a Torch Serve API, How easy or difficult it would be? Having in account that similar discrepancy is there in the output structure which might require some pre or post processing of the base64 into appropriate format. 
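The usual bridge for this kind of schema mismatch is a custom TorchServe handler: `preprocess` unpacks the caller's JSON and `postprocess` re-wraps the model output. Below is a minimal sketch, assuming the request body arrives as the JSON shown above and that the model expects a 224×224 RGB tensor; the handler name, resize, and output wrapping are all placeholders rather than a confirmed recipe.

```python
import base64
import io
import json

import torch
from PIL import Image
from torchvision import transforms
from ts.torch_handler.base_handler import BaseHandler


class Base64ImageHandler(BaseHandler):
    """Hypothetical handler adapting the application's schema to model tensors."""

    _to_tensor = transforms.Compose(
        [transforms.Resize((224, 224)), transforms.ToTensor()]
    )

    def preprocess(self, data):
        images = []
        for row in data:
            body = row.get("body") or row.get("data")
            if isinstance(body, (bytes, bytearray)):
                body = json.loads(body)
            for instance in body["instances"]:
                # "base64" is the field name from the application schema above.
                raw = base64.b64decode(instance["base64"])
                image = Image.open(io.BytesIO(raw)).convert("RGB")
                images.append(self._to_tensor(image))
        return torch.stack(images).to(self.device)

    def postprocess(self, inference_output):
        # Re-wrap results into whatever structure the calling application expects.
        return [{"predictions": inference_output.tolist()}]
```

The handler file is then passed to `torch-model-archiver --handler`, so the serving endpoint stays unchanged and only the request/response field names have to be agreed on once.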
\r\n\r\nHow can I integrate my application with Torch Serve API seamlessly?", "url": "https://github.com/pytorch/serve/issues/3296", "state": "open", "labels": [], "created_at": "2024-08-22T06:18:20Z", "updated_at": "2024-08-22T16:27:35Z", "comments": 1, "user": "tarunsk1998" }, { "repo": "pytorch/TensorRT", "number": 3109, "title": "\u2753 [Question] how to specify dynamic shape when using torch_tensorrt.save", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nI was following [the documentation](https://pytorch.org/TensorRT/user_guide/dynamic_shapes.html#dynamic-shapes) on compiling a model with dynamic input shape. When saving the compiled graph module (following [this](https://pytorch.org/TensorRT/user_guide/saving_models.html)), the new `torch_tensorrt.save(module, path, inputs)` API requires `inputs` to be all tensors. How do I pass dynamic shapes to `torch_tensorrt.save`? Error: https://github.com/pytorch/TensorRT/blob/77278fe395d6ffdd456fd7a8a94852cd27ee63a9/py/torch_tensorrt/_compile.py#L420\r\n\r\n```\r\nimport torch\r\nimport torch_tensorrt\r\n\r\nmodel = torch.hub.load('pytorch/vision:v0.10.0', 'resnet50', pretrained=True)\r\nmodel.eval().cuda()\r\ninputs = [torch_tensorrt.Input(min_shape=[1, 3, 224, 224],\r\n opt_shape=[4, 3, 224, 224],\r\n max_shape=[8, 3, 224, 224],\r\n dtype=torch.float32)]\r\ntrt_gm = torch_tensorrt.compile(model, ir=\"dynamo\", inputs=inputs)\r\ntorch_tensorrt.save(trt_gm, \"trt_gm.ep\", inputs=inputs)\r\n\r\n```\r\n```\r\nWARNING:torch_tensorrt.dynamo.conversion.aten_ops_converters:Unable to import quantization op. Please install modelopt library (https://github.com/NVIDIA/TensorRT-Model-Optimizer?tab=readme-ov-file#installation) to add support for compiling quantized models\r\nINFO:torch_tensorrt.dynamo._compiler:Compilation Settings: CompilationSettings(enabled_precisions={<dtype.f32: 7>}, debug=False, workspace_size=0, min_block_size=5, torch_executed_ops=set(), pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False, device=Device(type=DeviceType.GPU, gpu_id=0), require_full_compilation=False, disable_tf32=False, assume_dynamic_shape_support=False, sparse_weights=False, refit=False, engine_capability=<EngineCapability.STANDARD: 1>, num_avg_timing_iters=1, dla_sram_size=1048576, dla_local_dram_size=1073741824, dla_global_dram_size=536870912, dryrun=False, hardware_compatible=False, timing_cache_path='/tmp/timing_cache.bin')\r\n\r\nINFO:torch_tensorrt.dynamo._compiler:Partitioning the graph via the fast partitioner\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:[MemUsageChange] Init CUDA: CPU +1, GPU +0, now: CPU 449, GPU 1622 (MiB)\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:[MemUsageChange] Init builder kernel library: CPU +1622, GPU +288, now: CPU 2218, GPU 1910 (MiB)\r\nWARNING:torch_tensorrt.dynamo.conversion.converter_utils:Detected unparseable type in node formatting: <class 'torch.SymInt'>\r\nINFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:TRT INetwork construction elapsed time: 0:00:00.609398\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:Global timing cache in use. 
Profiling results in this builder pass will be stored.\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:Detected 1 inputs and 1 output network tensors.\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:Total Host Persistent Memory: 343968\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:Total Device Persistent Memory: 7168\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:Total Scratch Memory: 6424576\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:[BlockAssignment] Started assigning block shifts. This will take 86 steps to complete.\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:[BlockAssignment] Algorithm ShiftNTopDown took 0.644934ms to assign 4 blocks to 86 nodes requiring 65830912 bytes.\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:Total Activation Memory: 65830912\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:Total Weights Memory: 127383968\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:Engine generation completed in 0.553365 seconds.\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:[MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 16 MiB, GPU 129 MiB\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:[MemUsageStats] Peak memory usage during Engine building and serialization: CPU: 4064 MiB\r\nINFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:Build TRT engine elapsed time: 0:00:00.649827\r\nINFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:TRT Engine uses: 129675836 bytes of Memory\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:Serialized 26 bytes of code generator cache.\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:Serialized 292352 bytes of compilation cache.\r\nINFO:torch_tensorrt [TensorRT Conversion Context]:Serialized 3744 timing cache entries\r\nWARNING: [Torch-TensorRT] - Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled. For more on the implications of this as well as workarounds, see the linked documentation (https://pytorch.org/TensorRT/user_guide/runtime.html#multi-device-safe-mode)\r\nTraceback (most recent call last):\r\n File \"test.py\", line 11, in <module>\r\n ", "url": "https://github.com/pytorch/TensorRT/issues/3109", "state": "closed", "labels": [ "question" ], "created_at": "2024-08-21T18:35:28Z", "updated_at": "2024-09-26T20:38:44Z", "user": "Qi-Zha0" }, { "repo": "pytorch/ao", "number": 724, "title": "What is the difference between WeightNormSparsifier and torch.nn.utils.prune.l1_unstructured ?", "body": "", "url": "https://github.com/pytorch/ao/issues/724", "state": "open", "labels": [ "question" ], "created_at": "2024-08-21T18:14:19Z", "updated_at": "2024-08-23T15:03:35Z", "user": "mayank64ce" }, { "repo": "pytorch/xla", "number": 7897, "title": "Import \"torch_xla.core.xla_model\" could not be resolved", "body": "getting issues on torch_xla.core.xla_model. 
, while installing package also getting errors : \"ERROR: Could not find a version that satisfies the requirement torch-xla (from versions: none)\r\nERROR: No matching distribution found for torch-xla\"\r\nI have installed python version is : Python 3.10.0\r\n\r\nAny Solution ?\r\n", "url": "https://github.com/pytorch/xla/issues/7897", "state": "closed", "labels": [ "question" ], "created_at": "2024-08-21T05:25:35Z", "updated_at": "2025-04-01T12:26:48Z", "user": "hiralU" }, { "repo": "pytorch/xla", "number": 7890, "title": "In spmd training of multiple machines, xp.trace is problematic", "body": "## \u2753 Questions and Help\r\nI printed all the thunk that was executed and found that there were a lot of thunk that didn't appear in my tensorboard. And the order of the front and back is also wrong.\r\nI trace according to this example\uff1ahttps://github.com/pytorch/xla/blob/master/test/spmd/test_train_spmd_imagenet.py#L318-L333\r\nxla_version: latest\r\ndevice: 2 * 8 A100", "url": "https://github.com/pytorch/xla/issues/7890", "state": "open", "labels": [ "question" ], "created_at": "2024-08-20T12:48:39Z", "updated_at": "2025-04-01T12:28:34Z", "user": "mars1248" }, { "repo": "pytorch/serve", "number": 3290, "title": "model_yaml_config usage is not explained well enough", "body": "### \ud83d\udcda The doc issue\n\n### Expected : \r\nThe [documentation ](https://github.com/pytorch/serve/blob/master/docs/configuration.md#config-model)about `model_yaml_config` sounds as if we could use it as below in `config.properties` and access it later.\r\n\r\n- file name : `config.properties`\r\n- content :\r\n```\r\ninference_address=https://127.0.0.1:8443\r\nmanagement_address=https://127.0.0.1:8444\r\nmetrics_address=https://127.0.0.1:8445\r\nmodel_yaml_config={\\\r\n \"pippy\": {\\\r\n \"rpc_timeout\": <some value>\\\r\n }\\\r\n}\r\n```\r\n\r\nand I can't access the `model_yaml_config` property through `context.model_yaml_config` and actually it throws an error.\r\n\r\n---\r\n\r\n### Reality :\r\n\r\nHowever, the way we could use the property is as below.\r\n- command : `torch-model-archiver --model-name <something> --serialized-file <some path> ... --config-file <yaml file path>`\r\n\r\nand this logic is very confusing when compared with what is written in the documentation\r\n\n\n### Suggest a potential alternative/fix\n\nThe logic seems like when my handler, having inherited `BaseHandler`, doesn't acutally assign `self.model_yaml_config` in its `initialize` [method.](https://github.com/pytorch/serve/blob/ef196c0f1d5f14bb0e01f65b7b21d43c3c143814/ts/torch_handler/base_handler.py#L151) Actually, it is assigned when `Service` is instantiated with `.__init__` [method](https://github.com/pytorch/serve/blob/ef196c0f1d5f14bb0e01f65b7b21d43c3c143814/ts/service.py#L34)\r\n\r\nI suggest either of the two\r\n1. Modify the documentation to use `model_yaml_config` property with `torch-model-archiver --config-file <path>` argument\r\n2. 
Or modify the code to assign `model_yaml_config` through `config.properties` as it sounds in the current documentation.", "url": "https://github.com/pytorch/serve/issues/3290", "state": "open", "labels": [], "created_at": "2024-08-20T00:34:32Z", "updated_at": "2024-08-26T18:49:27Z", "comments": 1, "user": "Foundsheep" }, { "repo": "pytorch/torchchat", "number": 1041, "title": "Improve support for and documentation of custom models", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\ntorchchat supports adding models to the \"known_model\" list and has CLI support for local models not hosted in torchchat's, but this can be better documented. \n\n### Alternatives\n\n_No response_\n\n### Additional context\n\nSome PR's Related to this theme:\r\n* https://github.com/pytorch/torchchat/issues/1038 \r\n* https://github.com/pytorch/torchchat/issues/1040\n\n### RFC (Optional)\n\n_No response_", "url": "https://github.com/pytorch/torchchat/issues/1041", "state": "closed", "labels": [ "documentation", "enhancement", "Known Gaps", "triaged" ], "created_at": "2024-08-19T16:43:48Z", "updated_at": "2025-02-04T18:22:48Z", "comments": 1, "user": "Jack-Khuu" }, { "repo": "pytorch/torchtitan", "number": 528, "title": "How to train using bfloat16?", "body": "Hi! I have a quick question: how to train using bfloat16? I found the default setting using fp32. \r\n\r\nI changed ''data_parallel_degree\" to 4 (my number of GPUs) but still did not use bfloat16.\r\n\r\nThanks in advance!", "url": "https://github.com/pytorch/torchtitan/issues/528", "state": "closed", "labels": [], "created_at": "2024-08-19T07:38:12Z", "updated_at": "2024-08-20T13:45:47Z", "user": "zyushun" }, { "repo": "pytorch/ao", "number": 704, "title": "Question: How to use Float8InferenceLinear with FSDP1/2? ", "body": "Hey Team,\r\n\r\nI'm trying to use FSDP1/2 with Float8InferenceLinear but seems have some issues (with torch 2.3.1+cu118). Do you suggestion to bump to higher version of torch and have a try or maybe use the training setup without using the inference layer? I also tried using the Flont8linear layer without using the quantization function to convert to Float8InferenceLinear but seems face some issues when using FSDP1 that when computing the amax, some input x tensors are empty (x.numel()=0) and some are NaN.\r\n\r\nBest regards,\r\nQQ", "url": "https://github.com/pytorch/ao/issues/704", "state": "open", "labels": [ "float8", "inference" ], "created_at": "2024-08-19T07:33:07Z", "updated_at": "2024-08-26T02:40:18Z", "user": "qingquansong" }, { "repo": "pytorch/TensorRT", "number": 3098, "title": "\u2753 [Question] When using torch_tensorrt.compile to optimize Mask2Former's multi_scale_deformable_attn layer, an error occurs.", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nI was preparing to export a TRT model for Mask2Former using the command **optimized_model = torch_tensorrt.compile(model, inputs=imgs, enabled_precisions={torch.half})**, where model is a Mask2Former loaded through mmseg.\r\nHowever, I encountered an error at the line **value_l_ = value_list[0].flatten(2).transpose(1, 2).reshape(4 * 8, 32, 16, 16)**:\r\nThe error message was: \r\n`\"Failed running call_method reshape(*(FakeTensor(..., device='cuda:0', size=(1, 256, 256),\r\n grad_fn=<TransposeBackward0>), 32, 32, 16, 16), **{}):\r\nshape '[32, 32, 16, 16]' is invalid for input of size 65536\"`\r\n\r\nThe original code was **value_l_ = value_list[level].flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_)**. 
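For what it's worth, the reported mismatch (input of 65536 elements vs. 32·32·16·16 = 262144) is consistent with the tracer specializing the batch dimension to 1 while the reshape hard-codes bs = 4. A standalone sketch of a shape-agnostic variant, deriving the leading dimension from the tensor and letting `-1` absorb the per-head embedding size, is below; the sizes mirror the constants in the snippet, and this is a guess at a workaround rather than a confirmed fix.

```python
import torch

bs, num_heads, head_dim, H_, W_ = 1, 8, 32, 16, 16
value_l = torch.randn(bs, H_ * W_, num_heads, head_dim)  # stand-in for value_list[level]

# Derive the batch size from the tensor instead of hard-coding it, and let -1
# absorb the head dimension, so a traced batch of 1 cannot conflict with constants.
out = (
    value_l.flatten(2)
    .transpose(1, 2)
    .reshape(value_l.shape[0] * num_heads, -1, H_, W_)
)
print(out.shape)  # torch.Size([8, 32, 16, 16])
```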
Even after fixing all variables with constants, **During training, this can be reshaped normally**, but the above error occurs when using torch_tensorrt.compile.\r\n\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - pytorch: 2.3.0\r\n - torch_tensorrt: 2.3.0\r\n - OS: ubuntu20:\r\n - mmsegmentation: 1.2.1\r\n\r\n\r\n## Additional context\r\n\r\nThe complete code is as follows:\r\n\r\n```\r\n value_list = value.split([16*16,32*32,64*64], dim=1)\r\n value_l_ = value_list[0].flatten(2).transpose(1, 2).reshape(4 * 8, 32, 16, 16)\r\n sampling_grid_l_ = sampling_grids[:, :, :,0].transpose(1, 2).flatten(0, 1)\r\n sampling_value_l_ = F.grid_sample(\r\n value_l_,\r\n sampling_grid_l_,\r\n mode='bilinear',\r\n padding_mode='zeros',\r\n align_corners=False)\r\n sampling_value_list.append(sampling_value_l_)\r\n```\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3098", "state": "open", "labels": [ "question" ], "created_at": "2024-08-19T03:03:03Z", "updated_at": "2024-09-24T18:38:56Z", "user": "edition3234" }, { "repo": "pytorch/TensorRT", "number": 3095, "title": "\u2753 [Question] Why does the speed (fps) of torch-tensorrt perform so badly in `torch.multiprocessing`?", "body": "## \u2753 Question\r\nHello, dear developer:\r\nThank your for your amazing job!\r\nWhy does the speed (fps) of torch-tensorrt perform so badly in `torch.multiprocessing`?\r\nCurrently I use `torch.multiprocessing` to create and run 3 Process (in 1 GPU) of resnet18, resnet50 and resnet101 at the same time. But I find their speeds of inference are worse than single process.\r\nHere is my single process code:\r\n```\r\n# single process\r\nimport time\r\n\r\nimport torch\r\nimport tensorrt\r\nimport torch_tensorrt\r\nfrom torchvision.models import resnet18, resnet50, resnet101\r\nif __name__ == '__main__':\r\n # --------------------------------ResNet18---------------------------------------\r\n model0 = torch.jit.load(\"res18_trt_fp16.ts\")\r\n inputs = [torch.randn((10, 3, 224, 224)).half().cuda()]\r\n print(\"Warm up ...\")\r\n with torch.no_grad():\r\n for _ in range(10):\r\n features = model0(*inputs)\r\n torch.cuda.synchronize()\r\n t0 = time.time()\r\n with torch.no_grad():\r\n _ = model0(*inputs)\r\n torch.cuda.synchronize()\r\n t1 = time.time()\r\n print('res18: ', (t1 - t0) * 1000, 'ms')\r\n\r\n # --------------------------------ResNet50---------------------------------------\r\n model1 = torch.jit.load(\"res50_trt_fp16.ts\")\r\n inputs = [torch.randn((10, 3, 224, 224)).half().cuda()]\r\n print(\"Warm up ...\")\r\n with torch.no_grad():\r\n for _ in range(10):\r\n features = model1(*inputs)\r\n torch.cuda.synchronize()\r\n t0 = time.time()\r\n with torch.no_grad():\r\n _ = model1(*inputs)\r\n torch.cuda.synchronize()\r\n t1 = time.time()\r\n print('res50: ', (t1 - t0) * 1000, 'ms')\r\n\r\n # --------------------------------ResNet101--------------------------------------\r\n model2 = torch.jit.load(\"res101_trt_fp16.ts\")\r\n inputs = [torch.randn((10, 3, 224, 224)).half().cuda()]\r\n\r\n with torch.no_grad():\r\n for _ in range(10):\r\n features = model2(*inputs)\r\n\r\n torch.cuda.synchronize()\r\n t0 = time.time()\r\n with torch.no_grad():\r\n res = model2(*inputs)\r\n torch.cuda.synchronize()\r\n t1 = time.time()\r\n print('res101: ', (t1 - t0) * 1000, 'ms')\r\n```\r\nThe results 
are:\r\n```\r\nres18: 1.2104511260986328 ms\r\nres50: 2.7513504028320312 ms\r\nres101: 5.034923553466797 ms\r\n```\r\n\r\nAnd here is my multiprocessing code\r\n```\r\n# multiprocess\r\nimport pycuda.driver as cuda\r\nimport pycuda.autoinit\r\n\r\nimport os\r\nimport time\r\nimport numpy as np\r\n\r\nimport torch\r\nimport torch.multiprocessing as mp\r\nimport torch_tensorrt\r\n\r\ndef Worker1():\r\n print('Worker1 PID:', os.getpid())\r\n net = torch.jit.load(\"res18_trt_fp16.ts\")\r\n x = torch.randn(10, 3, 224, 224).half().cuda()\r\n\r\n for i in range(10):\r\n _ = net(x)\r\n\r\n with torch.no_grad():\r\n while True:\r\n # infer\r\n torch.cuda.synchronize()\r\n t0 = time.time()\r\n\r\n results = net(x)\r\n\r\n torch.cuda.synchronize()\r\n t1 = time.time()\r\n print('Res18', (t1 - t0) * 1000, 'ms')\r\n\r\ndef Worker2():\r\n print('Worker2 PID:', os.getpid())\r\n net = torch.jit.load(\"res50_trt_fp16.ts\")\r\n x = torch.randn(10, 3, 224, 224).half().cuda()\r\n\r\n for i in range(10):\r\n _ = net(x)\r\n\r\n with torch.no_grad():\r\n while True:\r\n # infer\r\n torch.cuda.synchronize()\r\n t0 = time.time()\r\n\r\n results = net(x)\r\n\r\n torch.cuda.synchronize()\r\n t1 = time.time()\r\n print('Res50', (t1 - t0) * 1000, 'ms')\r\n\r\n\r\ndef Worker3():\r\n print('Worker3 PID:', os.getpid())\r\n net = torch.jit.load(\"res101_trt_fp16.ts\")\r\n x = torch.randn(10, 3, 224, 224).half().cuda()\r\n\r\n for i in range(10):\r\n _ = net(x)\r\n\r\n with torch.no_grad():\r\n while True:\r\n # infer\r\n torch.cuda.synchronize()\r\n t0 = time.time()\r\n\r\n results = net(x)\r\n\r\n torch.cuda.synchronize()\r\n t1 = time.time()\r\n print('Res101', (t1 - t0) * 1000, 'ms')\r\n\r\n\r\n\r\nif __name__ == '__main__':\r\n mp.set_start_method('spawn', force=True)\r\n\r\n # create\r\n processes = [\r\n mp.Process(target=Worker1, args=()),\r\n mp.Process(target=Worker2, args=()),\r\n mp.Process(target=Worker3, args=()),\r\n ]\r\n\r\n # start\r\n for p in processes:\r\n p.start()\r\n\r\n # main loop\r\n while True:\r\n continue\r\n```\r\nBUT the results are (average):\r\n```\r\nRes18: 5.539894104003906 ms\r\nRes50: 7.973670959472656 ms\r\nRes101:13.53001594543457 ms\r\n```\r\nThe results of multiprocessing are so wired. They are much slower than single process, which confuses me a lot.\r\nIs there any way to fix them up or speed them up?\r\nThank you in advance!\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.3.0 stable\r\n - PyTorch-Tensorrt Version (e.g., 1.0): 2.3.0\r\n - Tensorrt Version (e.g., 1.0): 10.0.1\r\n - CPU Architecture: x64\r\n - OS (e.g., Linux): ubuntu 22.04\r\n - How y", "url": "https://github.com/pytorch/TensorRT/issues/3095", "state": "open", "labels": [ "question" ], "created_at": "2024-08-17T08:32:46Z", "updated_at": "2025-04-15T13:54:47Z", "user": "zhongqiu1245" }, { "repo": "pytorch/torchx", "number": 945, "title": "Using torchx as a SDK", "body": "## \u2753 Questions and Help\r\n\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nBefore submitting, please ensure you have gone through our\r\n[documentation](https://pytorch.org/torchx).\r\n\r\n\r\n### Question\r\nThe examples on the documentation refer to using torchx via the cli implementation. I was wondering if there was any way that torchx can be used in a sdk format. 
For instance:\r\n```\r\nclass MyCustomComponent\r\n\r\nclass MyScheduler\r\n\r\nrunner = torchx.runner()\r\nrunner.with_scheduler(MyScheduler())\r\nrunner.run_component(MyCustomComponent())\r\n```\r\nIf its possible, is there any documentation or a sample project that provides an example of how this can be used, in particular using a custom scheduler and component? \r\n\r\nThank You!", "url": "https://github.com/meta-pytorch/torchx/issues/945", "state": "open", "labels": [], "created_at": "2024-08-17T03:21:30Z", "updated_at": "2024-08-19T14:18:45Z", "comments": 1, "user": "juinquok" }, { "repo": "pytorch/torchchat", "number": 1038, "title": "How to deploy a new model by torchchat?", "body": "I want to use torchchat to load the trained model directly from the local. How to change the torchchat/config/data/models.json? Need to change download _ and _ convert in download.py?And, what other documents may need to be changed?", "url": "https://github.com/pytorch/torchchat/issues/1038", "state": "open", "labels": [ "bug" ], "created_at": "2024-08-16T09:33:29Z", "updated_at": "2024-08-19T18:24:37Z", "user": "liu8060" }, { "repo": "pytorch/TensorRT", "number": 3092, "title": "\u2753 [Question] Is there any way to deploy on a single machine with multi-gpus\uff1f", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\nAs the title, I have a machine with multiple GPUs and I would like to know if there is any way to evenly distribute the model across these GPUs. Is there any way to achieve this?\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3092", "state": "open", "labels": [ "question" ], "created_at": "2024-08-16T02:01:21Z", "updated_at": "2024-08-16T17:58:02Z", "user": "SZ-ing" }, { "repo": "pytorch/pytorch", "number": 133643, "title": "How to Manage CPU Memory Usage in PyTorch After Moving Model to CPU?", "body": "### \ud83d\udcda The doc issue\n\nHi everyone,\r\n\r\nI'm currently working on a deep learning project using PyTorch, and I've run into some issues with managing CPU memory after transferring a model to the GPU.\r\n\r\nIn specific, I'm loading a pre-trained model using PyTorch, then moving the model to the GPU. However, I've noticed that after moving the model to the GPU, the CPU memory usage doesn't decrease as much as I expected.\r\n\r\nI used 'memory_profiler' to analyze memory usage, and here's what I found:\r\n\r\nBefore moving to GPU: The model uses a significant amount of CPU memory during the loading and preparation stages.\r\n\r\nAfter moving to GPU: The memory usage on the CPU doesn't drop much. 
It seems like some data or buffers might still be retained in CPU memory.\r\n\r\nI've tried deleting references to the model on CPU using 'del' and forced garbage collection using 'gc.collect()' but this doesn't seem to affect the memory.\r\n\r\nSo is that because PyTorch inherently keep some CPU memory for caching or other purposes? Is it possible to fully release the CPU memory after moving a model to GPU in PyTorch?\r\n\r\nI would appreciate any insights or advice on how to better manage CPU memory in this context. Thanks in advance for your help!Hi everyone,\r\n\r\nI'm currently working on a deep learning project using PyTorch, and I've run into some issues with managing CPU memory after transferring a model to the GPU.\r\n\r\nIn specific, I'm loading a pre-trained model using PyTorch, then moving the model to the GPU. However, I've noticed that after moving the model to the GPU, the CPU memory usage doesn't decrease as much as I expected.\r\n\r\nI used **'memory_profiler'** to analyze memory usage, and here's what I found:\r\n\r\nBefore moving to GPU: The model uses a significant amount of CPU memory during the loading and preparation stages.\r\n\r\nAfter moving to GPU: The memory usage on the CPU doesn't drop much. It seems like some data or buffers might still be retained in CPU memory.\r\n\r\nI've tried deleting references to the model on CPU using **'del'** and forced garbage collection using **'gc.collect()'** but this doesn't seem to affect the memory.\r\n\r\nSo is that because PyTorch inherently keep some CPU memory for caching or other purposes? Is it possible to fully release the CPU memory after moving a model to GPU in PyTorch?\r\n\r\nI would appreciate any insights or advice on how to better manage CPU memory in this context. Thanks in advance for your help!\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/133643", "state": "closed", "labels": [], "created_at": "2024-08-15T23:23:15Z", "updated_at": "2024-08-16T20:43:11Z", "user": "prisnguyen" }, { "repo": "pytorch/xla", "number": 7858, "title": "[Bug] Notebook `Stable Diffusion with PyTorch/XLA 2.0` is outdated", "body": "## \ud83d\udc1b Bug\r\n\r\nOfficial Notebook `Stable Diffusion with PyTorch/XLA 2.0` is outdated\r\n\r\n## To Reproduce:\r\nRun [Stable Diffusion with PyTorch/XLA 2.0 Notebook](https://github.com/pytorch/xla/blob/master/contrib/kaggle/pytorch-xla-2-0-on-kaggle.ipynb) on Kaggle TPU VM v3-8\r\n\r\n## Environment\r\nKaggle TPU VM v3-8\r\n\r\n## Expected behavior\r\nGenerate and show image.\r\n\r\n## Error:\r\n```shel\r\nFutureWarning: `callback` is deprecated and will be removed in version 1.0.0. 
Passing `callback` as an input argument to `__call__` is deprecated, consider using `callback_on_step_end`\r\n deprecate(\r\n 2%|\u258f | 1/50 [00:00<00:32, 1.51it/s]\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nCell In[8], line 4\r\n 1 generator = torch.Generator().manual_seed(0)\r\n 2 # xm.mark_step compiles and executes the graph after each iteration.\r\n 3 # The first few steps will be much slower than the rest.\r\n----> 4 image = pipeline(prompt, callback=lambda *args: xm.mark_step(), generator=generator).images[0]\r\n 5 image\r\n\r\nFile /usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:1041, in StableDiffusionPipeline.__call__(self, prompt, height, width, num_inference_steps, timesteps, sigmas, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, **kwargs)\r\n 1039 if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):\r\n 1040 progress_bar.update()\r\n-> 1041 if callback is not None and i % callback_steps == 0:\r\n 1042 step_idx = i // getattr(self.scheduler, \"order\", 1)\r\n 1043 callback(step_idx, t, latents)\r\n\r\nTypeError: unsupported operand type(s) for %: 'int' and 'NoneType'\r\n```", "url": "https://github.com/pytorch/xla/issues/7858", "state": "open", "labels": [ "bug", "documentation", "xla:tpu" ], "created_at": "2024-08-15T11:21:01Z", "updated_at": "2025-05-02T23:15:34Z", "comments": 2, "user": "steveepreston" }, { "repo": "pytorch/xla", "number": 7857, "title": "Why do the communication in my spmd training have control-predecessors", "body": "## \u2753 Questions and Help\r\nIn my formal training task, there are some control-predecessors in the communication operator, but the single test I constructed cannot reproduce this situation. 
I would like to know under what circumstances these control-predecessors can be generated.\r\n```\r\nall-gather-start.12 = (f32[256]{0}, f32[4096]{0}) all-gather-start(param.639.0), channel_id=25, replica_groups={{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}}, dimensions={0}, use_global_device_ids=true, control-predecessors={all-gather-done.11}, metadata={op_type=\"aten__add\" op_name=\"train_loop.1/aten__add.123/aten__add\" source_file=\"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/linear.py\" source_line=118}, backend_config={\"operation_queue_id\":\"0\",\"wait_on_operation_queues\":[],\"collective_backend_config\":{\"is_sync\":false,\"no_parallel_custom_call\":false},\"force_earliest_schedule\":false}\r\nall-gather-done.12 = f32[4096]{0} all-gather-done(all-gather-start.12), metadata={op_type=\"aten__add\" op_name=\"train_loop.1/aten__add.123/aten__add\" source_file=\"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/linear.py\" source_line=118}\r\nall-gather-start.13 = (f32[256]{0}, f32[4096]{0}) all-gather-start(param.640.0), channel_id=26, replica_groups={{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}}, dimensions={0}, use_global_device_ids=true, control-predecessors={all-gather-done.12}, metadata={op_type=\"aten__mul\" op_name=\"train_loop.1/aten__mul.126/aten__mul\" source_file=\"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/normalization.py\" source_line=205}, backend_config={\"operation_queue_id\":\"0\",\"wait_on_operation_queues\":[],\"collective_backend_config\":{\"is_sync\":false,\"no_parallel_custom_call\":false},\"force_earliest_schedule\":false}\r\nall-gather-done.13 = f32[4096]{0} all-gather-done(all-gather-start.13), metadata={op_type=\"aten__mul\" op_name=\"train_loop.1/aten__mul.126/aten__mul\" source_file=\"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/normalization.py\" source_line=205}\r\n```", "url": "https://github.com/pytorch/xla/issues/7857", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2024-08-15T11:17:08Z", "updated_at": "2025-04-01T12:33:52Z", "user": "mars1248" }, { "repo": "pytorch/xla", "number": 7855, "title": "How to sync TPUs when using a pod with more than 1 VM in SPMD", "body": "## \u2753 Questions and Help\r\n\r\nGenerally we feel that since in SPMD most of the work is under the hood its hard to understand what is required from us when using it in order to sync between TPUs on a pod with multiple VMs.\r\n\r\nWe would like to know the stages of syncing in that case, and how is it different from the regular syncing required on TPUs (a list of stages by name will be nice).\r\nSpecifically, If all the VMs run the same command and they all work as they are run alone (global index 0, global count 1) who should log the loss? should we use torch.distributed.get_rank() == 0 to determine the \"master\" for logging? @JackCaoG ", "url": "https://github.com/pytorch/xla/issues/7855", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2024-08-14T18:51:04Z", "updated_at": "2025-04-01T12:35:29Z", "user": "dudulightricks" }, { "repo": "pytorch/xla", "number": 7854, "title": "Using mark_sharding vs. MpDeviceLoader with input_sharding=xs.ShardingSpec", "body": "## \u2753 Questions and Help\r\nIf we have a few tensors in a batch with different sizes and we use mark_sharding on each of them, we lose something comparing to input_sharding=xs.ShardingSpec in the MpDeviceLoader (which only works for a single size of tensor in the batch)? 
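For reference, a minimal sketch of the two paths being compared, assuming a one-axis data mesh and rank-4 batch tensors; the API names follow the torch_xla SPMD user guide, and the dataloader and tensor shapes are placeholders.

```python
import numpy as np
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
import torch_xla.distributed.spmd as xs
import torch_xla.runtime as xr

xr.use_spmd()
device = xm.xla_device()
num_devices = xr.global_runtime_device_count()
mesh = xs.Mesh(np.arange(num_devices), (num_devices,), ("data",))

# Path A: shard each tensor explicitly, so differently sized tensors can coexist.
batch = {
    "image": torch.randn(8, 3, 224, 224).to(device),
    "mask": torch.randn(8, 1, 64, 64).to(device),
}
for t in batch.values():
    xs.mark_sharding(t, mesh, ("data", None, None, None))

# Path B: one ShardingSpec applied by the loader to every (same-shaped) batch tensor.
sharded_loader = pl.MpDeviceLoader(
    train_loader,  # placeholder: an ordinary torch.utils.data.DataLoader
    device,
    input_sharding=xs.ShardingSpec(mesh, ("data", None, None, None)),
)
```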
@JackCaoG ", "url": "https://github.com/pytorch/xla/issues/7854", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2024-08-14T18:41:34Z", "updated_at": "2025-04-01T12:36:56Z", "user": "dudulightricks" }, { "repo": "pytorch/xla", "number": 7850, "title": "SPMD - how to use different dataloader on each VM of a TPU pod in SPMD", "body": "## \u2753 Questions and Help\r\nWhile in SPMD mode If we run the train command of a model on all the VMs together (single program multiple machines) each VM has its own dataloader using cpu cores. \r\nThen, when we use mark_sharding on the batch its practically copy the batch of the first VM (rank 0) to all the TPUs and ignore the batches of other VMs (which were loaded with different dataloaders).\r\nIn order to solve that (use all the dataloaders on the different VMs to load different data and use it all) we have added torch.distributed.all_gather_object on the batch object to get one huge batch before using mark_sharding.\r\nThe problem is that in this case we afraid that the huge batch is held in the memory of one VM before the sharding. The ideal solution for us would have been something like batch.mark_sharding(gather_all=True) in which instead of ignoring the different batches on all the VMs it will gather them all together logically and use mark_sharding on the result huge batch (which is practically splited over the TPUs). This way we will use all the loaded data without exploding the memory of the first VM. \r\nIs there anything like that command? How can we use the data loaded in all the dataloaders on the different VMs? In our case its important because the data is large and it takes time to load it. @JackCaoG ", "url": "https://github.com/pytorch/xla/issues/7850", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2024-08-14T17:50:09Z", "updated_at": "2025-04-01T12:41:07Z", "user": "dudulightricks" }, { "repo": "pytorch/vision", "number": 8588, "title": "size mismatch for rpn", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI created a Mask R-CNN model using a set of parameters that I saved in a JSON file. Once the model was trained, I saved the weights using `torch.save(model.state_dict(), \"MaskRCNN.pt\")`. 
Later, I recreated the same model and loaded the saved weights `model.load_state_dict(torch.load(\"MaskRCNN.pt\", map_location=Device))`.\r\n\r\nOn my laptop (MacBook Pro M2) using Torch 2.2.2, TorchVision 0.17.2 (most up to date for this environment), and CPU only, everything works just fine.\r\n\r\nHowever, on a cluster based on Centos with Torch 2.4, TorchVision 0.19 (most up to date for this environment), and Cuda 12.1.1, I get the following error when loading the weights:\r\n\r\n File \"/home/XXX//MaskRCNN.py\", line 84, in Load\r\n model.load_state_dict(torch.load(WeightsPath, map_location=Device))\r\n File \"/home/XXX/torch/nn/modules/module.py\", line 2215, in load_state_dict\r\n raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\n RuntimeError: Error(s) in loading state_dict for MaskRCNN:\r\n \tsize mismatch for rpn.head.cls_logits.weight: copying a param with shape torch.Size([6, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([14, 256, 1, 1]).\r\n \tsize mismatch for rpn.head.cls_logits.bias: copying a param with shape torch.Size([6]) from checkpoint, the shape in current model is torch.Size([14]).\r\n \tsize mismatch for rpn.head.bbox_pred.weight: copying a param with shape torch.Size([24, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([56, 256, 1, 1]).\r\n \tsize mismatch for rpn.head.bbox_pred.bias: copying a param with shape torch.Size([24]) from checkpoint, the shape in current model is torch.Size([56]).\r\n\r\nThe code is exactly the same on my laptop and on the cluster.\r\nI double checked, and I used exactly the same parameters to create ALL the models.\r\n\r\nHow can I fix this?\r\n\r\n\r\n### Versions\r\n\r\nCollecting environment information...\r\nPyTorch version: 2.4.0+cu121\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: CentOS Linux release 7.9.2009 (Core) (x86_64)\r\nGCC version: (GCC) 13.2.0\r\nClang version: Could not collect\r\nCMake version: version 2.8.12.2\r\nLibc version: glibc-2.17\r\n\r\nPython version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-3.10.0-957.10.1.el7.x86_64-x86_64-with-glibc2.17\r\nIs CUDA available: True\r\nCUDA runtime version: 12.1.105\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: \r\nGPU 0: Tesla V100-SXM2-32GB\r\nGPU 1: Tesla V100-SXM2-32GB\r\nGPU 2: Tesla V100-SXM2-32GB\r\nGPU 3: Tesla V100-SXM2-32GB\r\nGPU 4: Tesla V100-SXM2-32GB\r\nGPU 5: Tesla V100-SXM2-32GB\r\nGPU 6: Tesla V100-SXM2-32GB\r\nGPU 7: Tesla V100-SXM2-32GB\r\n\r\nNvidia driver version: 550.90.07\r\ncuDNN version: Probably one of the following:\r\n/usr/local/cuda-9.1/targets/x86_64-linux/lib/libcudnn.so.7.0.5\r\n/usr/local/cuda-9.2/targets/x86_64-linux/lib/libcudnn.so.7.2.1\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 80\r\nOn-line CPU(s) list: 0-79\r\nThread(s) per core: 2\r\nCore(s) per socket: 20\r\nSocket(s): 2\r\nNUMA node(s): 2\r\nVendor ID: GenuineIntel\r\nCPU family: 6\r\nModel: 85\r\nModel name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz\r\nStepping: 4\r\nCPU MHz: 1000.000\r\nCPU max MHz: 2401.0000\r\nCPU min MHz: 1000.0000\r\nBogoMIPS: 4800.00\r\nVirtualization: VT-x\r\nL1d cache: 32K\r\nL1i cache: 32K\r\nL2 cache: 1024K\r\nL3 cache: 28160K\r\nNUMA node0 CPU(s): 0-19,40-59\r\nNUMA node1 CPU(s): 
20-39,60-79\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke spec_ctrl intel_stibp flush_l1d\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.26.4\r\n[pip3] numpydoc==1.8.0\r\n[pip3] torch==2.4.0\r\n[pip3] torchsummary==1.5.1\r\n[pip3] torchvision==0.19.0\r\n[pip3] triton==3.0.0\r\n[conda] numpy 1.26.4 pypi_0 pypi\r\n[conda] numpyd", "url": "https://github.com/pytorch/vision/issues/8588", "state": "closed", "labels": [], "created_at": "2024-08-14T11:08:41Z", "updated_at": "2024-08-15T09:49:41Z", "comments": 4, "user": "FiReTiTi" }, { "repo": "pytorch/xla", "number": 7849, "title": "Is it possible free TPU memory without restarting in pytorch xla?", "body": "## \ud83d\udcda Documentation\r\nI have tried to move a TPU tensor to CPU or delete the tensor. However, the memory is not released.\r\n\r\nhttps://colab.research.google.com/drive/1pTTDu_eJssUwjsrjBDiiyo6tlOEZTjMf?usp=sharing\r\n<!-- A clear and concise description of what content is an issue. -->\r\n", "url": "https://github.com/pytorch/xla/issues/7849", "state": "closed", "labels": [], "created_at": "2024-08-14T10:48:37Z", "updated_at": "2024-08-26T01:25:00Z", "comments": 6, "user": "fengyang0317" }, { "repo": "pytorch/pytorch", "number": 133397, "title": "Don't know how to explain but here's the error", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nFile \"C:\\Users\\USER\\Downloads\\pytorch\\main.py\", line 3, in <module>\r\n import torch\r\n File \"C:\\Users\\USER\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\torch\\__init__.py\", line 148, in <module>\r\n raise err\r\nOSError: [WinError 126] The specified module could not be found. 
Error loading \"C:\\Users\\USER\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\torch\\lib\\fbgemm.dll\" or one of its dependencies.\r\n\r\n### Versions\r\n\r\nPyTorch version: N/A\r\nIs debug build: N/A\r\nCUDA used to build PyTorch: N/A\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Microsoft Windows 11 Home\r\nGCC version: Could not collect\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: N/A\r\n\r\nPython version: 3.12.5 (tags/v3.12.5:ff3bc82, Aug 6 2024, 20:45:27) [MSC v.1940 64 bit (AMD64)] (64-bit runtime)\r\nPython platform: Windows-11-10.0.22631-SP0\r\nIs CUDA available: N/A\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: N/A\r\n\r\nCPU:\r\nArchitecture=9\r\nCurrentClockSpeed=2419\r\nDeviceID=CPU0\r\nFamily=205\r\nL2CacheSize=5120\r\nL2CacheSpeed=\r\nManufacturer=GenuineIntel\r\nMaxClockSpeed=2419\r\nName=11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz\r\nProcessorType=3\r\nRevision=\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.26.4\r\n[pip3] onnx==1.16.2\r\n[pip3] onnxruntime==1.18.1\r\n[pip3] torch==2.4.0\r\n[pip3] torchaudio==2.4.0\r\n[pip3] torchvision==0.19.0\r\n[conda] Could not collect\n\ncc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex", "url": "https://github.com/pytorch/pytorch/issues/133397", "state": "closed", "labels": [ "module: windows" ], "created_at": "2024-08-14T02:53:02Z", "updated_at": "2024-08-15T00:59:59Z", "user": "Nohj9984" }, { "repo": "pytorch/xla", "number": 7846, "title": "Is pytorch xla spmd working as expected?", "body": "## \ud83d\udc1b Bug\r\nI tried to run [test_train_spmd_linear_model.py](https://github.com/pytorch/xla/blob/master/test/spmd/test_train_spmd_linear_model.py) with `sharding='batch'`. The input data sharing is {devices=[8,1]0,1,2,3,4,5,6,7}, which is expected. However, after a linear layer, the fc1 output sharding becomes 'replicated'. I am wonder whether all the following layers are running without sharding?\r\n\r\nPrint the sharding_spec during forward.\r\n```\r\n def forward(self, x):\r\n print('x', torch_xla._XLAC._get_xla_sharding_spec(x))\r\n fc1 = self.fc1(x)\r\n print('fc1', torch_xla._XLAC._get_xla_sharding_spec(fc1))\r\n y = self.relu(fc1)\r\n print('y', torch_xla._XLAC._get_xla_sharding_spec(y))\r\n z = self.fc2(y)\r\n print('z', torch_xla._XLAC._get_xla_sharding_spec(z))\r\n o = self.fc3(z)\r\n print('o', torch_xla._XLAC._get_xla_sharding_spec(o))\r\n return o\r\n```\r\n\r\nObtained outputs\r\n```\r\nx {devices=[8,1]0,1,2,3,4,5,6,7}\r\nfc1 \r\ny \r\nz \r\no \r\n```\r\n\r\n## To Reproduce\r\nhttps://colab.research.google.com/drive/1508nWHxCthxWBlIeKLF0sLZXcjtsO6Ly#scrollTo=nGTxOOgDDOU3\r\n\r\nSteps to reproduce the behavior:\r\n\r\nrun the colab above.\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well. Or better use the Colab template: https://github.com/pytorch/xla/blob/master/contrib/colab/issue-report.ipynb -->\r\n\r\n## Expected behavior\r\n\r\nThe fc1, y, z, o should have sharding.\r\n\r\n## Environment\r\n\r\n - Reproducible on XLA backend [CPU/TPU/CUDA]: TPU\r\n - torch_xla version: nightly\r\n\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/xla/issues/7846", "state": "closed", "labels": [], "created_at": "2024-08-13T14:43:50Z", "updated_at": "2024-09-01T12:58:48Z", "comments": 3, "user": "fengyang0317" }, { "repo": "pytorch/xla", "number": 7837, "title": "Make `tpu-info` more visible to the community", "body": "## \ud83d\udcda Documentation\r\n\r\nWe highlighted tpu-info in the [PyTorch/XLA 2.4 release](https://cloud.google.com/blog/products/ai-machine-learning/pytorch-xla-2-4-improves-pallas-and-adds-eager-mode?e=13802955). I understand we have a [CoLab demo page](https://colab.sandbox.google.com/drive/1aMYTONPE4f3BtZpRq1_jPcRcIiSKtoY9?usp=drive_open#scrollTo=ZqjPdg3XlTnG) to help users set up and use `tpu-info`. \r\n\r\nA quick search on the web, however, shows no pointer to the instructions on how to set up `tpu-info`. I suggest we publish a guide that brings this feature to the forefront. cc @duncantech \r\n\r\nSimilarly, PyTorchXLA docker images benefit from having this feature built-in. Can we add it to our docker nightly/release setup flow?\r\n\r\nDo we have plans to make `tpu-info` a standalong installation package?", "url": "https://github.com/pytorch/xla/issues/7837", "state": "closed", "labels": [ "usability" ], "created_at": "2024-08-12T19:17:30Z", "updated_at": "2024-08-17T06:39:58Z", "comments": 5, "user": "miladm" }, { "repo": "pytorch/vision", "number": 8585, "title": "Cant find nms function in code?", "body": "### \ud83d\udc1b Describe the bug\n\nI am looking for a method in torch, but for the love of god I can not not find the function definition!\r\nThe reason I need to find it is that I need to get rid of the torch dependency and I want to try to convert it into numpy.\r\n\r\nI am speaking about torchvision.ops.nms()\r\nThis method is located in torchvision/ops/boxes.py and returns torch.ops.torchvision.nms().\r\n\r\nThis method is generated code which can be found in torch/_ops.py where they initialize ops with ops: _Ops = _Ops().\r\n\r\nThats the point where I am lost, the class is located in the same file, but I cant figure out which library it calls to get the nms() method.\r\n\r\nPlease help me :frowning:\n\n### Versions\n\nLatest", "url": "https://github.com/pytorch/vision/issues/8585", "state": "closed", "labels": [], "created_at": "2024-08-12T12:17:23Z", "updated_at": "2024-08-12T12:26:58Z", "comments": 1, "user": "asusdisciple" }, { "repo": "pytorch/xla", "number": 7832, "title": "80B model how to shard restore in spmd training", "body": "## \u2753 Questions and Help\r\nIn pytorch we can use `fsdp meta init` shard restore my big model(like have 80B parameters),in torch_xla i only find shard save like use this.https://github.com/pytorch/xla/blob/master/torch_xla/experimental/distributed_checkpoint/manager.py#L257.\r\nIs there a way to recover the original pytorch model parameters in pieces during spmd training, and when saving the model, save it in the original pytorch format for inference", "url": "https://github.com/pytorch/xla/issues/7832", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2024-08-12T11:52:00Z", "updated_at": "2025-04-01T12:50:25Z", "user": "mars1248" }, { "repo": "pytorch/pytorch", "number": 133205, "title": "How to use libtorch in a c++11 project?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nc++14_warning.h:32:2: \u9519\u8bef\uff1a#error This file requires compiler and library support for the forthcoming ISO C++ 2014 standard. 
This support is currently experimental, and must be enabled with the -std=c++1y or -std=gnu++1y compiler options.\r\n #error This file requires compiler and library support for the forthcoming \\\r\n\r\n\r\n### Versions\r\nPyTorch version: 1.12.0a0+git664058f\r\nOS: CentOS release 6.9 (Final) (x86_64)\r\nGCC version: (GCC) 5.4.0\r\nClang version: 3.4.2 (tags/RELEASE_34/dot2-final)\r\nCMake version: version 3.21.3\r\nLibc version: glibc-2.10\r\n\u5728\u4f7f\u7528libtorch\u6784\u5efa\u81ea\u5df1\u7684\u5de5\u7a0b\u65f6\u62a5\u9519\uff0c\u6211\u7684\u5de5\u7a0b\u662fc++11(\u4e0d\u53ef\u5347\u7ea7) \uff0c\u6709\u6ca1\u6709\u529e\u6cd5\u4f7f\u7528libtorch\uff1f\n\ncc @svekars @brycebortree @jbschlosser @seemethere @malfet @osalpekar @atalman", "url": "https://github.com/pytorch/pytorch/issues/133205", "state": "closed", "labels": [ "module: docs", "module: cpp", "triaged" ], "created_at": "2024-08-12T08:45:32Z", "updated_at": "2024-09-24T02:03:36Z", "user": "zhb0920" }, { "repo": "pytorch/TensorRT", "number": 3075, "title": "\u2753 [Question] failed to run the `examples/dynamo/vgg16_fp8_ptq.y` example", "body": "## \u2753 Question\r\n\r\nI'm trying to run the `examples/dynamo/vgg16_fp8_ptq.y` example but got following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/wh/generative_action/SynHSI/vgg_quat.py\", line 232, in <module>\r\n exp_program = torch.export.export(model, (input_tensor,))\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/__init__.py\", line 174, in export\r\n return _export(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py\", line 1066, in wrapper\r\n raise e\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py\", line 1039, in wrapper\r\n ep = fn(*args, **kwargs)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/exported_program.py\", line 100, in wrapper\r\n return fn(*args, **kwargs)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py\", line 2034, in _export\r\n export_artifact = export_func( # type: ignore[operator]\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py\", line 1273, in _strict_export\r\n return _strict_export_lower_to_aten_ir(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py\", line 1412, in _strict_export_lower_to_aten_ir\r\n aten_export_artifact = lower_to_aten_callback(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py\", line 633, in _export_to_aten_ir\r\n gm, graph_signature = transform(aot_export_module)(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py\", line 1194, in aot_export_module\r\n fx_g, metadata, in_spec, out_spec = _aot_export_function(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py\", line 1426, in _aot_export_function\r\n fx_g, meta = create_aot_dispatcher_function(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py\", line 429, in create_aot_dispatcher_function\r\n return _create_aot_dispatcher_function(flat_fn, flat_args, aot_config)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py\", 
line 730, in _create_aot_dispatcher_function\r\n compiled_fn, fw_metadata = compiler_fn(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py\", line 105, in aot_dispatch_export\r\n graph, _, _ = aot_dispatch_base_graph(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py\", line 138, in aot_dispatch_base_graph\r\n fw_module = _create_graph(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py\", line 46, in _create_graph\r\n fx_g = make_fx(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py\", line 1805, in wrapped\r\n return make_fx_tracer.trace(f, *args)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py\", line 1751, in trace\r\n return self._trace_inner(f, *args)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py\", line 1737, in _trace_inner\r\n t = dispatch_trace(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_compile.py\", line 31, in inner\r\n return disable_fn(*args, **kwargs)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 631, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py\", line 899, in dispatch_trace\r\n graph = tracer.trace(root, concrete_args)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py\", line 1392, in trace\r\n res = super().trace(root, concrete_args)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 631, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py\", line 823, in trace\r\n (self.create_arg(fn(*args)),),\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py\", line 920, in wrapped\r\n out = f(*tensors)\r\n File \"<string>\", line 1, in <lambda>\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py\", line 403, in _functionalized_f_", "url": "https://github.com/pytorch/TensorRT/issues/3075", "state": "open", "labels": [ "question" ], "created_at": "2024-08-09T08:01:14Z", "updated_at": "2024-08-23T22:06:56Z", "user": "broken-dream" }, { "repo": "pytorch/xla", "number": 7823, "title": "[XLA:GPU compile Error] nvcc fatal : Unsupported gpu architecture 'compute_35'", "body": "detail:\r\nNVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4\r\n\r\nfor Kepler GPUs are removed from CUDA 12.x. how can i compile torch_xla for gpu in CUDA Version 12.X(GPU guide use CUDA12.X). really confused. 
thanks for reply.\r\n\r\n\r\n![img_v3_02dj_acefc8b2-0c7b-4504-8b7b-0af9b368b7bg](https://github.com/user-attachments/assets/ce7d0448-d53b-455e-ac01-4f0f8c077eed)\r\n\r\noriginal issue:https://github.com/pytorch/xla/issues/7783", "url": "https://github.com/pytorch/xla/issues/7823", "state": "closed", "labels": [ "bug", "xla:gpu", "build" ], "created_at": "2024-08-09T07:50:59Z", "updated_at": "2025-04-01T12:53:16Z", "comments": 3, "user": "FatJhon" }, { "repo": "pytorch/text", "number": 2270, "title": "undefined symbol", "body": "## undefined symbol\r\n\r\nPyTorch version 2.1.2\r\n\r\nI am looking for a version of torchtext that will work with PyTorch 2.1.2. I have tried every version from 0.16.0 to 0.18.0\r\nEach version of torchtext has some version of undefined symbol. \r\n\r\n```\r\npython -c \"import torchtext\"\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/app/software/scGPT/0.2.1-foss-2023a/lib/python3.11/site-packages/torchtext/__init__.py\", line 6, in <module>\r\n from torchtext import _extension # noqa: F401\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/software/scGPT/0.2.1-foss-2023a/lib/python3.11/site-packages/torchtext/_extension.py\", line 64, in <module>\r\n _init_extension()\r\n File \"/app/software/scGPT/0.2.1-foss-2023a/lib/python3.11/site-packages/torchtext/_extension.py\", line 58, in _init_extension\r\n _load_lib(\"libtorchtext\")\r\n File \"/app/software/scGPT/0.2.1-foss-2023a/lib/python3.11/site-packages/torchtext/_extension.py\", line 50, in _load_lib\r\n torch.ops.load_library(path)\r\n File \"/app/software/PyTorch/2.1.2-foss-2023a/lib/python3.11/site-packages/torch/_ops.py\", line 852, in load_library\r\n ctypes.CDLL(path)\r\n File \"/app/software/Python/3.11.3-GCCcore-12.3.0/lib/python3.11/ctypes/__init__.py\", line 376, in __init__\r\n self._handle = _dlopen(self._name, mode)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nOSError: /app/software/scGPT/0.2.1-foss-2023a/lib/python3.11/site-packages/torchtext/lib/libtorchtext.so: undefined symbol: _ZN5torch6detail10class_baseC2ERKSsS3_SsRKSt9type_infoS6_\r\n```", "url": "https://github.com/pytorch/text/issues/2270", "state": "open", "labels": [], "created_at": "2024-08-08T23:25:46Z", "updated_at": "2024-09-18T09:00:08Z", "comments": 1, "user": "fizwit" }, { "repo": "pytorch/vision", "number": 8570, "title": "RandomPhotometricDistort has undocumented channel shuffle feature", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nThe documentation for RandomPhotometricDistort neither exposes the channel shuffle behavior as a parameter or lists in the description that this is a possibility.\r\n\r\nhttps://pytorch.org/vision/stable/generated/torchvision.transforms.v2.RandomPhotometricDistort.html#torchvision.transforms.v2.RandomPhotometricDistort\r\n\r\nI was trying to use this as convince for randomly brightness and contrast operations, but I got unexpected breaking channel swaps as well.\r\n\r\nThe best course of action could be to expose a boolean true/false parameter on whether to do channel swaps or not.\r\n\r\n### Versions\r\n\r\n0.19 stable documentation", "url": "https://github.com/pytorch/vision/issues/8570", "state": "closed", "labels": [], "created_at": "2024-08-08T19:14:05Z", "updated_at": "2024-08-13T02:50:14Z", "comments": 1, "user": "chadrockey" }, { "repo": "pytorch/functorch", "number": 1146, "title": "Strange behaviour of autograd.functional.jacobian when vectorize=True and strategy=\u2018forward-mode\u2019", "body": "I calculate the Jacobian of a neural network with 
respect to its 14 input variables. The network has an output of 9015, meaning I have 126210 gradients. Because I have some complex calculations in my neural network I cannot use jacrev/jacfwd, see [ jacfwd and jacrev are fundamentally broken for complex inputs #94397 ](https://github.com/pytorch/pytorch/issues/94397).\r\n\r\nTherefore I am using autograd.functional.jacobian with default settings which works perfectly fine but calculating the Jacobian takes approx. 16 seconds. Since I have to calculate the Jacobian several times during an iteration that I run I have to speed up this process. I do all the calculations on my GPU. \r\n\r\nI set vectorize=True and strategy=\u2018forward-mode\u2019 it works (also with 0.05sec) but after 30 iterations it stops and says \u2018torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU\u2019.\r\n\r\nI am aware of the performance <-> memory tradeoff as described by @zou3519 in [ Cuda Memory Overflow in Jacobian Computation #1058 ](https://github.com/pytorch/functorch/issues/1058) but this seems a bit drastic as the error only occurs after some iterations. The first few work perfectly fine.\r\n\r\nHere is a minimal example of my code:\r\n\r\n`x = torch.rand((1, 601, 20)).to(device)\r\ninitial_guess = torch.rand((1, 14)).to(device)\r\ny = torch.rand((1, 601, 15)).to(device)\r\n \r\nmodel = ....to(device)\r\nmodel.load_state_dict(torch.load(model_location + 'model_weights.pth', map_location=device))\r\nmodel.eval()\r\n\r\ndef get_jacobian(neural_net, input1, input2):\r\n def partial_forward(diff_inp):\r\n return neural_net(input1, diff_inp)\r\n return autograd.functional.jacobian(partial_forward, input2, strategy='forward-mode', vectorize=True).to(device)\r\n\r\n\r\ndef method(neural_net, input1, input2, result, nb_it):\r\n\r\n for i in range(nb_it):\r\n\r\n jac_placeholder = torch.zeros(result.shape[0], result.shape[1], result.shape[2],\r\n input2.shape[1]).to(device)\r\n\r\n print(torch.cuda.memory_summary(device=None, abbreviated=False))\r\n\r\n jac = get_jacobian(neural_net, input1, input2)\r\n diff = neural_net(input1, input2) - result\r\n\r\n for j in range(result.shape[0]):\r\n jac_placeholder[j, :, :, :] = jac[j, :, :, j, :]\r\n\r\n true_jac = torch.permute(jac_placeholder, (0, 3, 1, 2))\r\n\r\n mul = torch.einsum('bptx,btx->bp', true_jac, diff).to(device)\r\n\r\n input2 = input2 - mul\r\n\r\n torch.cuda.empty_cache()\r\n\r\nmethod(model, x1, x2, y, 3000)`\r\n\r\n**Edit 1:**\r\n\r\nThe error occurs because of the line 'input2 = input2 - mul' but it is unclear to me why this happens.\r\n\r\n**Edit 2:**\r\n\r\nI was able to find the error. There was a with torch.no_grad() missing around \r\n`jac = get_jacobian(neural_net, input1, input2)\r\n diff = neural_net(input1, input2) - result`\r\nmeaning it was also calculating the networks weight gradients...\r\n\r\nThe RAM usage is now low but the CPU still runs on 100% altough I have everything on the GPU. 
This still baffles me.", "url": "https://github.com/pytorch/functorch/issues/1146", "state": "closed", "labels": [], "created_at": "2024-08-08T12:51:16Z", "updated_at": "2024-08-09T11:27:07Z", "comments": 0, "user": "dezenn" }, { "repo": "pytorch/TensorRT", "number": 3073, "title": "\u2753 Cannot figure out the following error: AttributeError: module 'torch_tensorrt' has no attribute 'ptq'.", "body": "## \u2753 Question\r\nI am encountering an AttributeError when trying to use the ptq module from Torch-TensorRT on google colab.\r\n I am attempting to run this line of code\r\ncalibrator = torch_tensorrt.ptq.DataLoaderCalibrator(...) \r\n\r\n## Environment\r\n - PyTorch Version (e.g., 1.0): 2.4.0+cu121\r\n - CUDA Version: 12.2\r\n - Python version : 3.10.12\r\n - torch_tensorrt version : 2.4.0\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3073", "state": "closed", "labels": [ "question" ], "created_at": "2024-08-08T11:37:08Z", "updated_at": "2024-08-09T06:04:57Z", "user": "ImaanIbrar" }, { "repo": "pytorch/tutorials", "number": 2994, "title": "[Reinforcement Learning] - help on cartpull tutorial", "body": "hello im completely new to machine learning and just trying to learn. im getting this warning an none of the figure are showing up (libEGL warning: DRI2: failed to authenticate) does anyone know what i could be missing or what might be the cause? im running this in unraid on a VM with a graphics card passed thru with its drivers installed. using Ubuntu 24.04 LTS fresh install, plz help thanks.\n\ncc @vmoens @nairbv", "url": "https://github.com/pytorch/tutorials/issues/2994", "state": "closed", "labels": [ "question" ], "created_at": "2024-08-08T03:33:14Z", "updated_at": "2024-08-09T03:47:52Z", "user": "Misticfury" }, { "repo": "pytorch/vision", "number": 8569, "title": "Allow ffmpeg-python backend for torchvision.io.write_video?", "body": "### \ud83d\ude80 The feature\n\nCreate another backend for torchvision.io.write_video which uses ffmpeg-python as a backend, but which otherwise has exactly the same interface/functionality.\n\n### Motivation, pitch\n\ntorchvision.io.write_video currently calls PyAV, which in turn is a wrapper for ffmpeg. [PyAV has an issue](https://github.com/PyAV-Org/PyAV/issues/371) which seems still unresolved where setting the CRF (constant rate factor) through the options has no effect. [This issue has been referenced as recently as March of this year](https://github.com/imageio/imageio/issues/1062). As far as I can tell, adjusting CRF is the canonical way to tune a video's level of compression. Adding support for ffmpeg-python as a backend would let users tune CRF, which would allow arbitrary levels of compression.\n\n### Alternatives\n\nIf there is some other set of options which can be passed to write_video to alter the level of compression, that would be an acceptable alternative (at least for my use-case). 
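For reference, the obvious attempt (just a sketch of what I mean, not something I am claiming works) would be to pass CRF through write_video's existing `options` dict, but per the PyAV issue linked above that value is reportedly ignored:\r\n\r\n```python\r\nimport torch\r\nfrom torchvision.io import write_video\r\n\r\n# Illustrative only: 10 frames of 64x64 RGB video in THWC uint8 layout.\r\nframes = torch.randint(0, 256, (10, 64, 64, 3), dtype=torch.uint8)\r\n\r\n# 'options' is forwarded to the PyAV encoder; per the linked PyAV issue,\r\n# the 'crf' value appears to have no effect on the produced file.\r\nwrite_video('output_crf10.mp4', frames, fps=30, video_codec='libx264', options={'crf': '10'})\r\n```\r\n\r\n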
In this case, it would be ideal to include this alternative set of options in the write_video documentation as an example.\n\n### Additional context\n\nI already kind of got it working in a notebook, but it's missing support for audio and such.\r\n\r\n```\r\n# Define output video parameters\r\noutput_filename = 'output_video.mp4'\r\nfps = 30\r\ncodec = 'libx264' \r\n\r\n# Create the input process from the NumPy array\r\nprocess1 = (\r\n ffmpeg\r\n .input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(video_array.shape[2], video_array.shape[1]))\r\n .output(output_filename, pix_fmt='yuv420p', r=fps, vcodec=codec, crf=10)\r\n .overwrite_output()\r\n .run_async(pipe_stdin=True)\r\n)\r\n\r\n# Write the NumPy array to the input pipe\r\nfor frame in video_array:\r\n process1.stdin.write(frame.tobytes())\r\n\r\n# Close the input pipe\r\nprocess1.stdin.close()\r\n\r\n# Wait for the ffmpeg process to finish\r\nprocess1.wait()\r\n```\r\ncrf=10 produces something good-looking, while crf=50 produces something very compressed-looking as expected.", "url": "https://github.com/pytorch/vision/issues/8569", "state": "closed", "labels": [], "created_at": "2024-08-08T01:14:07Z", "updated_at": "2024-10-11T11:53:49Z", "comments": 1, "user": "adaGrad1" }, { "repo": "pytorch/executorch", "number": 4579, "title": "how to realize the sliding window of kv cache?", "body": "hello, \r\nnow I want to realize the sliding window of kv cache, so dynamic allocation and reclamation of memory needs to be realized. could you please teach me how to realize the dynamic allocation and reclamation of memory in the transformer? \r\nThank you in advanced.", "url": "https://github.com/pytorch/executorch/issues/4579", "state": "closed", "labels": [], "created_at": "2024-08-07T07:05:42Z", "updated_at": "2024-08-15T05:04:51Z", "user": "l2002924700" }, { "repo": "pytorch/TensorRT", "number": 3060, "title": "\u2753 [Question] function `torch._ops.aten.aten::_to_copy` not currently supported with dynamic input shape", "body": "## \u2753 Question\r\n\r\nI'm trying to compile a model with dynamic input shape but told that the `function torch._ops.aten.aten::_to_copy` is not currently supported:\r\n```Traceback (most recent call last):\r\n File \"/home/wh/generative_action/SynHSI/test_module.py\", line 325, in <module>\r\n model = torch_tensorrt.compile(model, ir=\"dynamo\", inputs=trt_inputs)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/_compile.py\", line 249, in compile\r\n trt_graph_module = dynamo_compile(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py\", line 243, in compile\r\n trt_gm = compile_module(gm, inputs, settings)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py\", line 431, in compile_module\r\n trt_module = convert_module(\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py\", line 107, in convert_module\r\n interpreter_result = interpret_module_to_result(module, inputs, settings)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py\", line 88, in interpret_module_to_result\r\n interpreter_result = interpreter.run()\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 336, in run\r\n 
self._construct_trt_network_def()\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 317, in _construct_trt_network_def\r\n super().run()\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/interpreter.py\", line 147, in run\r\n self.env[node] = self.run_node(node)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 378, in run_node\r\n trt_node: torch.fx.Node = super().run_node(n)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/interpreter.py\", line 204, in run_node\r\n return getattr(self, n.op)(n.target, args, kwargs)\r\n File \"/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py\", line 480, in call_function\r\n raise UnsupportedOperatorException(\r\ntorch_tensorrt.dynamo.conversion._TRTInterpreter.UnsupportedOperatorException: Conversion of function torch._ops.aten.aten::_to_copy not currently supported!\r\n```\r\nthe code caused this error is as follow:\r\n`pi = self.positional_encoder.pos_encoding[pi.long()]`\r\nwhere the `self.positional_encoder` is an instance of a customized implementation of the transformer position encoder:\r\n```\r\nclass PositionalEncoding(nn.Module):\r\n def __init__(self, dim_model, dropout_p, max_len):\r\n super().__init__()\r\n # Modified version from: https://pytorch.org/tutorials/beginner/transformer_tutorial.html\r\n # max_len determines how far the position can have an effect on a token (window)\r\n\r\n # Info\r\n self.dropout = nn.Dropout(dropout_p)\r\n\r\n # Encoding - From formula\r\n pos_encoding = torch.zeros(max_len, dim_model)\r\n positions_list = torch.arange(0, max_len, dtype=torch.float).reshape(-1, 1) # 0, 1, 2, 3, 4, 5\r\n division_term = torch.exp(\r\n torch.arange(0, dim_model, 2).float() * (-math.log(10000.0)) / dim_model) # 1000^(2i/dim_model)\r\n\r\n # PE(pos, 2i) = sin(pos/1000^(2i/dim_model))\r\n pos_encoding[:, 0::2] = torch.sin(positions_list * division_term)\r\n\r\n # PE(pos, 2i + 1) = cos(pos/1000^(2i/dim_model))\r\n pos_encoding[:, 1::2] = torch.cos(positions_list * division_term)\r\n\r\n # Saving buffer (same as parameter without gradients needed)\r\n pos_encoding = pos_encoding.unsqueeze(0).transpose(0, 1)\r\n self.register_buffer(\"pos_encoding\", pos_encoding)\r\n\r\n def forward(self, token_embedding: torch.tensor) -> torch.tensor:\r\n # Residual connection + pos encoding\r\n return self.dropout(token_embedding + self.pos_encoding[:token_embedding.size(0), :])\r\n\r\n```\r\n\r\n## What you have already tried\r\nThe complete model is complicated so I have tried to implement a minimal reproducible example, but the compilation of a single `PositionalEncoding` model succeed. I also tried adding more context code but it still succeed. I'm unable to get a minimal reproducible example now.\r\n\r\nI found this error only occurs with dynamic input shape. 
Compiling model with fixed input shape works well.\r\n\r\nBesides, I noticed that [#2161](https://github.com/pytorch/TensorRT/pull/2161) had added the `_to_copy` converter, so I'm confused why it told me `_to_copy` is not supported, or maybe I misunderstand something?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch V", "url": "https://github.com/pytorch/TensorRT/issues/3060", "state": "open", "labels": [ "question" ], "created_at": "2024-08-05T12:20:32Z", "updated_at": "2024-12-12T18:33:18Z", "user": "broken-dream" }, { "repo": "pytorch/data", "number": 1309, "title": "what's the exact plan for torchdata now?", "body": "hi, as a user of torchdata, i'm very happy to see the resurrection of the project.\r\n\r\ni have a question about the development plan. from the README, i see:\r\n\r\n> torchdata repo to be an iterative enhancement of torch.utils.data.DataLoader\r\n\r\nthis is somewhat surprising. although the current Datapipes seem to have various issues underneath the shell, so far, Datapipes ARE torchdata. the current API reference:\r\n\r\n> API Reference:\r\n> \r\n> [Stateful DataLoader](https://pytorch.org/data/beta/torchdata.stateful_dataloader.html)\r\n> [Iterable-style DataPipes](https://pytorch.org/data/beta/torchdata.datapipes.iter.html)\r\n> [Map-style DataPipes](https://pytorch.org/data/beta/torchdata.datapipes.map.html)\r\n> [Utility Functions](https://pytorch.org/data/beta/torchdata.datapipes.utils.html)\r\n> [DataLoader2](https://pytorch.org/data/beta/dataloader2.html)\r\n> [ReadingService](https://pytorch.org/data/beta/reading_service.html)\r\n\r\nand this is it; i.e., until ver 0.7, torchdata == the datapipes and other necessary utilities (dataloader2 and reading service).\r\n\r\nand that's why it is surprising for me, that while the development of torchdata has re-started, it is being done in a way it discards everything it had.\r\n\r\nso, can i ask for a bit more details about what the new direction (enhancement of torch.utils.data.DataLoader)? or am i missing something here? \r\n\r\nthanks. 
", "url": "https://github.com/meta-pytorch/data/issues/1309", "state": "closed", "labels": [], "created_at": "2024-08-04T00:25:26Z", "updated_at": "2024-08-04T00:27:17Z", "comments": 1, "user": "keunwoochoi" }, { "repo": "pytorch/xla", "number": 7805, "title": "Kaggle Notebooks: TPU detected but wont use", "body": "## \u2753 Questions and Help\r\nHi All, \r\nI Have this code \r\n```\r\nimport optuna\r\nfrom torch.optim.lr_scheduler import ReduceLROnPlateau\r\n\r\n# Assuming dataset is already defined\r\ntrain_size = int(0.8 * len(dataset))\r\nval_size = len(dataset) - train_size\r\ntrain_dataset, val_dataset = random_split(dataset, [train_size, val_size])\r\n\r\ndef objective(trial):\r\n device = xm.xla_device()\r\n learning_rate = trial.suggest_float('learning_rate', 1e-5, 1e-2, log=True)\r\n dropout_prob = trial.suggest_float('dropout_prob', 0.2, 0.7)\r\n batch_size = trial.suggest_int('batch_size', 2, 32)\r\n optimizer_name = trial.suggest_categorical('optimizer', ['Adam', 'SGD'])\r\n loss_fn_name = trial.suggest_categorical('loss_fn', ['DiceLoss', 'FocalLoss', 'CombinedLoss', 'BCEWithLogitsLoss'])\r\n \r\n backbone = \"resnet101\"\r\n model_name = \"DeepLabV3Plus\"\r\n model = create_model(model_name, encoder_name=backbone, in_channels=3, classes=1)\r\n model.to(device)\r\n \r\n if optimizer_name == 'Adam':\r\n optimizer = optim.Adam(model.parameters(), lr=learning_rate, weight_decay=0.0001)\r\n elif optimizer_name == 'SGD':\r\n optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9, weight_decay=0.0001)\r\n\r\n if loss_fn_name == 'DiceLoss':\r\n loss_fn = DiceLoss()\r\n elif loss_fn_name == 'FocalLoss':\r\n loss_fn = FocalLoss()\r\n elif loss_fn_name == 'CombinedLoss':\r\n loss_fn = CombinedLoss()\r\n elif loss_fn_name == 'BCEWithLogitsLoss':\r\n pos_weight = torch.tensor([1.127], device=device)\r\n loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)\r\n\r\n for module in model.modules():\r\n if isinstance(module, nn.Conv2d):\r\n module.add_module('dropout', nn.Dropout2d(dropout_prob))\r\n\r\n scheduler = ReduceLROnPlateau(optimizer, mode='min', patience=3, factor=0.1)\r\n\r\n train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)\r\n val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False)\r\n\r\n num_epochs = 5\r\n best_loss = float('inf')\r\n \r\n for epoch in range(num_epochs):\r\n model.train()\r\n train_losses = []\r\n para_loader = pl.ParallelLoader(train_loader, [device])\r\n for inputs, targets in tqdm(para_loader.per_device_loader(device), desc=f\"Epoch {epoch+1}/{num_epochs} - Training\"):\r\n inputs, targets = inputs.to(device), targets.to(device)\r\n\r\n optimizer.zero_grad()\r\n outputs = model(inputs)\r\n loss = loss_fn(outputs, targets.float())\r\n loss.backward()\r\n xm.optimizer_step(optimizer)\r\n train_losses.append(loss.item())\r\n\r\n model.eval()\r\n val_losses = []\r\n para_loader = pl.ParallelLoader(val_loader, [device])\r\n with torch.no_grad():\r\n for inputs, targets in tqdm(para_loader.per_device_loader(device), desc=f\"Epoch {epoch+1}/{num_epochs} - Validation\"):\r\n inputs, targets = inputs.to(device), targets.to(device)\r\n outputs = model(inputs)\r\n loss = loss_fn(outputs, targets.float())\r\n val_losses.append(loss.item())\r\n\r\n val_loss = np.mean(val_losses)\r\n scheduler.step(val_loss)\r\n\r\n if val_loss < best_loss:\r\n best_loss = val_loss\r\n\r\n return best_loss\r\n\r\n# Save the study to a persistent storage\r\nstudy_name = \"my_study\"\r\nstorage_name = 
f\"sqlite:///example.db\"\r\nstudy = optuna.create_study(direction='minimize', study_name=study_name, storage=storage_name, load_if_exists=True)\r\nstudy.optimize(objective, n_trials=15)\r\n\r\n# Print the best hyperparameters\r\nprint('Best trial:')\r\ntrial = study.best_trial\r\nprint(f' Value: {trial.value}')\r\nprint(' Params: ')\r\nfor key, value in trial.params.items():\r\n print(f' {key}: {value}')\r\n```\r\nHowever the even though the TPU is detected as `Using device: xla:0` It does not show in dashboard, and the TPU deactivates after while due to not been used. \r\nWould anyone be able to help me with this matter please .\r\nThanks & Best Regards \r\nAMJS", "url": "https://github.com/pytorch/xla/issues/7805", "state": "closed", "labels": [ "question", "xla:tpu" ], "created_at": "2024-08-03T16:32:58Z", "updated_at": "2025-04-01T12:55:08Z", "user": "MichaelSchroter" }, { "repo": "pytorch/torchchat", "number": 1001, "title": "[Raspbian] streamlit GUI interface does not work / no documentation how to install", "body": "### \ud83d\udc1b Describe the bug\n\n\r\nfrom #985:\r\n> 2. If you're interested in debugging the browser, feel free to spin up another issue with the error message from this\r\n> > streamlit run torchchat.py -- browser llama3\r\n\r\nThanks, I will. I suspect it's pretty straightforward - there's no streamlit installed on my system. I assumed that your install script would install it, or tell me to install it if I needed that?!\r\n\r\n```\r\n$ streamlit\r\nbash: streamlit: command not found\r\n```\r\n\r\nI have no idea what to install for / how to install streamlit, and even less so whether it's available for this platform. It wasn't high on my list, and so I moved on when it didn't work. (Was curious to try the GUI just for kicks, in case this was installed by default with the OS.)\r\n\r\nHere's the Raspbian version I used:\r\n\r\n```\r\n$ uname -a\r\nLinux raspberrypi 6.6.31+rpt-rpi-v8 #1 SMP PREEMPT Debian 1:6.6.31-1+rpt1 (2024-05-29) aarch64 GNU/Linux\r\n```\r\n\r\n\r\nopening a separate issue as suggested in #985 \r\n\r\n\n\n### Versions\n\nwget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n--2024-08-02 20:22:47-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py\r\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...\r\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.\r\nHTTP request sent, awaiting response... 
200 OK\r\nLength: 23357 (23K) [text/plain]\r\nSaving to: 'collect_env.py.1'\r\n\r\ncollect_env.py.1 100%[===================>] 22.81K --.-KB/s in 0.005s \r\n\r\n2024-08-02 20:22:47 (4.57 MB/s) - 'collect_env.py.1' saved [23357/23357]\r\n\r\nCollecting environment information...\r\nPyTorch version: N/A\r\nIs debug build: N/A\r\nCUDA used to build PyTorch: N/A\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Debian GNU/Linux trixie/sid (aarch64)\r\nGCC version: (Debian 13.3.0-3) 13.3.0\r\nClang version: 16.0.6 (27+b1)\r\nCMake version: Could not collect\r\nLibc version: glibc-2.39\r\n\r\nPython version: 3.11.2 (main, May 2 2024, 11:59:08) [GCC 12.2.0] (64-bit runtime)\r\nPython platform: Linux-6.6.31+rpt-rpi-v8-aarch64-with-glibc2.39\r\nIs CUDA available: N/A\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: N/A\r\n\r\nCPU:\r\nArchitecture: aarch64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 4\r\nOn-line CPU(s) list: 0-3\r\nVendor ID: ARM\r\nModel name: Cortex-A76\r\nModel: 1\r\nThread(s) per core: 1\r\nCore(s) per cluster: 4\r\nSocket(s): -\r\nCluster(s): 1\r\nStepping: r4p1\r\nCPU(s) scaling MHz: 100%\r\nCPU max MHz: 2400.0000\r\nCPU min MHz: 1500.0000\r\nBogoMIPS: 108.00\r\nFlags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp\r\nL1d cache: 256 KiB (4 instances)\r\nL1i cache: 256 KiB (4 instances)\r\nL2 cache: 2 MiB (4 instances)\r\nL3 cache: 2 MiB (1 instance)\r\nVulnerability Gather data sampling: Not affected\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Reg file data sampling: Not affected\r\nVulnerability Retbleed: Not affected\r\nVulnerability Spec rstack overflow: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\r\nVulnerability Spectre v1: Mitigation; __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; CSV2, BHB\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy==1.10.1\r\n[pip3] mypy-extensions==1.0.0\r\n[pip3] numpy==1.26.4\r\n[pip3] types-flake8-2020==1.8\r\n[pip3] types-flake8-bugbear==23.9.16\r\n[pip3] types-flake8-builtins==2.2\r\n[pip3] types-flake8-docstrings==1.7\r\n[pip3] types-flake8-plugin-utils==1.3\r\n[pip3] types-flake8-rst-docstrings==0.3\r\n[pip3] types-flake8-simplify==0.21\r\n[pip3] types-flake8-typing-imports==1.15\r\n[pip3] types-mypy-extensions==1.0\r\n[con", "url": "https://github.com/pytorch/torchchat/issues/1001", "state": "closed", "labels": [ "bug", "Browser" ], "created_at": "2024-08-03T03:24:10Z", "updated_at": "2024-08-06T00:32:41Z", "user": "sunshinesfbay" }, { "repo": "pytorch/pytorch", "number": 132559, "title": "How to fix tensor.numpy() not supported for torch.export with strict=False", "body": "### \ud83d\udc1b Describe the bug\n\nThis is trying to do a BE task to unblock https://github.com/pytorch/pytorch/pull/130977. 
The problem is very similar to https://github.com/pytorch/pytorch/pull/120261, though that one uses torch.export with strict=True.\r\n\r\n# repro:\r\n```\r\nimport numpy as np\r\nimport torch\r\n\r\nclass MyNumpyModel(torch.nn.Module):\r\n def __init__(self):\r\n super(MyNumpyModel, self).__init__()\r\n\r\n def forward(self, input):\r\n return input.numpy()\r\n\r\nwith torch._subclasses.FakeTensorMode():\r\n model = MyNumpyModel()\r\n _ = torch.export.export(model, args=(torch.randn(1000),), strict=False)\r\n```\r\n\r\n# Error:\r\n```\r\nRuntimeError:.numpy() is not supported for tensor subclasses.\r\n```\r\n\r\n# Attempt:\r\nInside tracing, the tensor is `FunctionalTensor(_to_functional_tensor(FakeTensor(..., size=(1000,))))`, and applying `torch._numpy.ndarray` would turn it into `FunctionalTensor(_to_functional_torch.ndarray(FakeTensor(..., size=(1000,), dtype=float64)))`. \r\n\r\nHowever, I don't know how to make it into a permanent fix. \n\n### Versions\n\nPyTorch version: 2.5.0a0+git0b7d6b3\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.0\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: CentOS Stream 9 (x86_64)\r\nGCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)\r\nClang version: Could not collect\r\nCMake version: version 3.26.4\r\nLibc version: glibc-2.34\r\n\r\nPython version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34\r\nIs CUDA available: True\r\nCUDA runtime version: 12.0.140\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: \r\nGPU 0: NVIDIA PG509-210\r\nGPU 1: NVIDIA PG509-210\r\nGPU 2: NVIDIA PG509-210\r\nGPU 3: NVIDIA PG509-210\r\nGPU 4: NVIDIA PG509-210\r\nGPU 5: NVIDIA PG509-210\r\nGPU 6: NVIDIA PG509-210\r\nGPU 7: NVIDIA PG509-210\r\n\r\nNvidia driver version: 525.105.17\r\ncuDNN version: Probably one of the following:\r\n/usr/lib64/libcudnn.so.8.8.0\r\n/usr/lib64/libcudnn_adv_infer.so.8.8.0\r\n/usr/lib64/libcudnn_adv_train.so.8.8.0\r\n/usr/lib64/libcudnn_cnn_infer.so.8.8.0\r\n/usr/lib64/libcudnn_cnn_train.so.8.8.0\r\n/usr/lib64/libcudnn_ops_infer.so.8.8.0\r\n/usr/lib64/libcudnn_ops_train.so.8.8.0\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 46 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 192\r\nOn-line CPU(s) list: 0-191\r\nVendor ID: GenuineIntel\r\nModel name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz\r\nCPU family: 6\r\nModel: 85\r\nThread(s) per core: 2\r\nCore(s) per socket: 24\r\nSocket(s): 4\r\nStepping: 11\r\nFrequency boost: enabled\r\nCPU(s) scaling MHz: 100%\r\nCPU max MHz: 1801.0000\r\nCPU min MHz: 800.0000\r\nBogoMIPS: 3600.00\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl 
xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities\r\nVirtualization: VT-x\r\nL1d cache: 3 MiB (96 instances)\r\nL1i cache: 3 MiB (96 instances)\r\nL2 cache: 96 MiB (96 instances)\r\nL3 cache: 132 MiB (4 instances)\r\nNUMA node(s): 4\r\nNUMA node0 CPU(s): 0-23,96-119\r\nNUMA node1 CPU(s): 24-47,120-143\r\nNUMA node2 CPU(s): 48-71,144-167\r\nNUMA node3 CPU(s): 72-95,168-191\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization", "url": "https://github.com/pytorch/pytorch/issues/132559", "state": "open", "labels": [ "module: numpy", "tensor subclass", "module: functionalization", "export-triage-review", "oncall: export" ], "created_at": "2024-08-02T23:04:39Z", "updated_at": "2024-08-06T18:42:05Z", "user": "henrylhtsang" }, { "repo": "pytorch/xla", "number": 7803, "title": "[question] Seeking information on low-level TPU interaction and libtpu.so API", "body": "I'm looking to build an automatic differentiation library for TPUs without using high-level front-ends like TensorFlow/JAX/PyTorch-XLA, but I'm finding information about lower-level TPU usage is practically non-existent.\r\n\r\nSpecifically, I'm interested in:\r\n1. How to interact with TPUs at a lower level than what's typically exposed in TensorFlow\r\n2. Information about the libtpu.so library and its API\r\n3. Any resources or documentation on implementing custom TPU operations\r\n\r\nAre there any insights or suggestions on how to approach this, particularly regarding TPU support? Any ideas or help would be greatly appreciated.\r\n\r\nI understand that some of this information might be proprietary, but any guidance on what is possible or available would be very helpful.", "url": "https://github.com/pytorch/xla/issues/7803", "state": "closed", "labels": [ "question", "xla:tpu" ], "created_at": "2024-08-02T10:16:01Z", "updated_at": "2025-04-01T12:56:19Z", "user": "notlober" }, { "repo": "pytorch/executorch", "number": 4510, "title": "How to link custom ops?", "body": "Hi!\r\n\r\nI'm trying to integrate some of quantized MatMul C++ kernels into Executorch and I'm having a bad time: the documentation is very vague about what exactly I need to include/link for ATen to pick up my ops.\r\n\r\nI would greatly appreciate any help in trying to make it work.\r\n\r\n### Overview:\r\n\r\nSource code for the dynamic library containing the ops consists of 3 files: `lut_kernel.h`, `lut_kernel.cpp`, `lut_kernel_pytorch.cpp`. 
The files contain roughly this code:\r\n\r\n```c++\r\n// lut_kernel.h\r\n#pragma once\r\n\r\n#include <executorch/runtime/kernel/kernel_includes.h>\r\n\r\nnamespace torch {\r\nnamespace executor {\r\n\r\nnamespace native {\r\n\r\nTensor& code2x8_lut_matmat_out(\r\n RuntimeContext& ctx,\r\n const Tensor& input,\r\n const Tensor& codes,\r\n const Tensor& codebooks,\r\n const Tensor& scales,\r\n const optional<Tensor>& bias,\r\n Tensor& out\r\n);\r\n} // namespace native\r\n} // namespace executor\r\n} // namespace torch\r\n```\r\n\r\n```c++\r\n// lut_kernel.cpp\r\n#include \"lut_kernel.h\"\r\n\r\n#include <executorch/extension/kernel_util/make_boxed_from_unboxed_functor.h>\r\n\r\nnamespace torch {\r\n namespace executor {\r\n namespace native {\r\n Tensor& code2x8_lut_matmat_out(\r\n RuntimeContext& ctx,\r\n const Tensor& input,\r\n const Tensor& codes,\r\n const Tensor& codebooks,\r\n const Tensor& scales,\r\n const optional<Tensor>& bias,\r\n Tensor& out\r\n ) {\r\n // CALCULATIONS\r\n return out;\r\n }\r\n } // namespace native\r\n } // namespace executor\r\n} // namespace torch\r\n\r\nEXECUTORCH_LIBRARY(aqlm, \"code2x8_lut_matmat.out\", torch::executor::native::code2x8_lut_matmat_out);\r\n```\r\n\r\n```c++\r\n// lut_kernel_pytorch.cpp\r\n#include \"lut_kernel.h\"\r\n\r\n#include <executorch/extension/aten_util/make_aten_functor_from_et_functor.h>\r\n#include <executorch/extension/kernel_util/make_boxed_from_unboxed_functor.h>\r\n\r\n#include <torch/library.h>\r\n\r\nnamespace torch {\r\n namespace executor {\r\n namespace native {\r\n Tensor& code2x8_lut_matmat_out_no_context(\r\n ...\r\n Tensor& output\r\n ) {\r\n void* memory_pool = malloc(10000000 * sizeof(uint8_t));\r\n MemoryAllocator allocator(10000000, (uint8_t*)memory_pool);\r\n\r\n exec_aten::RuntimeContext context{nullptr, &allocator};\r\n return torch::executor::native::code2x8_lut_matmat_out(\r\n context,\r\n ...,\r\n output\r\n );\r\n }\r\n\r\n at::Tensor code2x8_lut_matmat(\r\n ...\r\n ) {\r\n auto sizes = input.sizes().vec();\r\n sizes[sizes.size() - 1] = codes.size(1) * codebooks.size(2);\r\n auto out = at::empty(sizes,\r\n at::TensorOptions()\r\n .dtype(input.dtype())\r\n .device(input.device())\r\n );\r\n\r\n WRAP_TO_ATEN(code2x8_lut_matmat_out_no_context, 5)(\r\n ...,\r\n out\r\n );\r\n return out;\r\n }\r\n } // namespace native\r\n } // namespace executor\r\n} // namespace torch\r\n\r\nTORCH_LIBRARY(aqlm, m) {\r\n m.def(\r\n \"code2x8_lut_matmat(Tensor input, Tensor codes, \"\r\n \"Tensor codebooks, Tensor scales, Tensor? bias=None) -> Tensor\"\r\n );\r\n m.def(\r\n \"code2x8_lut_matmat.out(Tensor input, Tensor codes, \"\r\n \"Tensor codebooks, Tensor scales, Tensor? bias=None, *, Tensor(c!) out) -> Tensor(c!)\"\r\n );\r\n}\r\n\r\nTORCH_LIBRARY_IMPL(aqlm, CompositeExplicitAutograd, m) {\r\n m.impl(\r\n \"code2x8_lut_matmat\", torch::executor::native::code2x8_lut_matmat\r\n );\r\n m.impl(\r\n \"code2x8_lut_matmat.out\",\r\n WRAP_TO_ATEN(torch::executor::native::code2x8_lut_matmat_out_no_context, 5)\r\n );\r\n}\r\n```\r\n\r\n, which closely follows the executorch custom sdpa code.\r\n\r\nI build it as two standalone dynamic libs: one `lut_kernel.cpp` with dependency only on `executorch` and `lut_kernel_pytorch.cpp` with additional `torch` dependency. I load the latter lib into pytorch as `torch.ops.load_library(f\"../libaqlm_bindings.dylib\")`.\r\n\r\n### The problem: \r\n\r\nI wrote a small `nn.Module` that basically just calls the op. In pytorch it works well. 
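For completeness, the module is roughly the following sketch (the class and attribute names are paraphrased; the shapes mirror the exported program shown next):\r\n\r\n```python\r\nimport torch\r\n\r\ntorch.ops.load_library('../libaqlm_bindings.dylib')\r\n\r\nclass LutMatmat(torch.nn.Module):\r\n    def __init__(self):\r\n        super().__init__()\r\n        # Placeholder weights; the real ones come from a checkpoint.\r\n        self.codes = torch.nn.Parameter(torch.zeros(3072, 128, 2, dtype=torch.int8), requires_grad=False)\r\n        self.codebooks = torch.nn.Parameter(torch.zeros(2, 256, 1, 8))\r\n        self.scales = torch.nn.Parameter(torch.zeros(3072, 1, 1, 1))\r\n        self.bias = torch.nn.Parameter(torch.zeros(3072))\r\n\r\n    def forward(self, input):\r\n        return torch.ops.aqlm.code2x8_lut_matmat(input, self.codes, self.codebooks, self.scales, self.bias)\r\n```\r\n\r\n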
`aten_dialect` for it looks like this:\r\n```\r\nExportedProgram:\r\n class GraphModule(torch.nn.Module):\r\n def forward(self, p_codes: \"i8[3072, 128, 2]\", p_codebooks: \"f32[2, 256, 1, 8]\", p_scales: \"f32[3072, 1, 1, 1]\", p_bias: \"f32[3072]\", input: \"f32[s0, s1, 1024]\"):\r\n input_1 = input\r\n \r\n # File: [/Users/blacksamorez/reps/AQLM/inference_lib/src/aqlm/inference.py:74](https://file+.vscode-resource.vscode-cdn.net/Users/blacksamorez/reps/AQLM/inference_lib/src/aqlm/inference.py:74) in forward, code: return torch.ops.aqlm.code2x8_lut_matmat(\r\n code2x8_lut_matmat: \"f32[s0, s1, 1024]\" = torch.ops.aqlm.code2x8_lut_matmat.default(input_1, p_codes, p_codebooks, p_scales, p_bias); input_1 = p_codes", "url": "https://github.com/pytorch/executorch/issues/4510", "state": "closed", "labels": [], "created_at": "2024-08-01T21:16:01Z", "updated_at": "2024-08-21T21:09:03Z", "user": "BlackSamorez" }, { "repo": "pytorch/torchchat", "number": 989, "title": "Weird model behaviour on Server/Browser: Looks like it's not using the template", "body": "Hi,\r\n\r\nI'm trying out the torchchat right now, started the streamlit application with llama3 model\r\n![image](https://github.com/user-attachments/assets/3ee31c11-29ed-423a-ac29-c155bf38ebcf)\r\n\r\nI just texted Hi !!\r\n- Why is this text generation behaviour unusal , Is it the problem with model being converted to torchchat format ?\r\n\r\n![image](https://github.com/user-attachments/assets/4d38eb53-a4f4-4a58-bae0-09a7169219e9)\r\n", "url": "https://github.com/pytorch/torchchat/issues/989", "state": "open", "labels": [ "bug", "actionable", "Browser" ], "created_at": "2024-08-01T05:52:19Z", "updated_at": "2024-08-02T08:05:45Z", "comments": 2, "user": "akhilreddy0703" }, { "repo": "pytorch/torchchat", "number": 988, "title": "Could we request support for a smallish (~4-5B param) modern vision LLM? LLava-1.6 or Nanollava?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHaving good basic pytorch support for inferencing LLMs is key to continued success of pytorch. Vision LLM models tend to have uneven support on mainstream inferencing engines like Llama.cpp due to the need to reimplement CLIP/SIGLIP etc. Pytorch could natively support performant vision LLMs with quantization on ARM devices, which would make a big difference in usability. \n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### RFC (Optional)\n\n_No response_", "url": "https://github.com/pytorch/torchchat/issues/988", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-08-01T03:59:17Z", "updated_at": "2024-08-01T05:50:16Z", "comments": 1, "user": "kinchahoy" }, { "repo": "pytorch/TensorRT", "number": 3049, "title": "Is jetpack 6.0 for jetson agx orin supported?", "body": "I tried installing torch_tensorrt using jetpack 5.0 WORKSPACE script but it did not work for my system which is currently using jetpack 6.0 on the jetson agx orin", "url": "https://github.com/pytorch/TensorRT/issues/3049", "state": "open", "labels": [ "question" ], "created_at": "2024-07-31T03:06:24Z", "updated_at": "2024-09-12T21:11:40Z", "user": "dhruvmsheth" }, { "repo": "pytorch/xla", "number": 7774, "title": "ddp documentation issues", "body": "## \ud83d\udcda Documentation\r\n\r\nOur [documentations](https://pytorch.org/xla/release/2.3/index.html#how-to-use-distributeddataparallel) suggests users must use the following parameters while setting up DDP. This information is outdated. 
Please remove any such documentations.\r\n\r\n```\r\nos.environ['MASTER_ADDR'] = 'localhost'\r\nos.environ['MASTER_PORT'] = '12355'\r\n```\r\n\r\nreplace with\r\n```\r\nos.environ['PJRT_DEVICE'] = 'TPU'\r\n```", "url": "https://github.com/pytorch/xla/issues/7774", "state": "closed", "labels": [ "usability", "documentation" ], "created_at": "2024-07-30T18:53:45Z", "updated_at": "2024-10-30T16:46:30Z", "comments": 1, "user": "miladm" }, { "repo": "pytorch/torchchat", "number": 969, "title": "Running `torchchat export` with just the model name does not error out", "body": "### \ud83d\udc1b Describe the bug\n\nRunning `python torchchat.py export stories15M` does not error out, nor generates any export files, though it should have?\r\n```shell\r\n% python torchchat.py export stories15M; echo $?\r\nlm_eval is not installed, GPTQ may not be usable\r\nUsing device=mps\r\nWarning! Device MPS not supported for export. Exporting for device CPU.\r\nLoading model...\r\nTime to load model: 0.02 seconds\r\n-----------------------------------------------------------\r\n0\r\n```\n\n### Versions\n\nNo idea, where is the torchchat version defined?", "url": "https://github.com/pytorch/torchchat/issues/969", "state": "closed", "labels": [ "bug", "actionable" ], "created_at": "2024-07-30T13:56:14Z", "updated_at": "2024-11-26T19:43:00Z", "comments": 2, "user": "malfet" }, { "repo": "pytorch/executorch", "number": 4461, "title": "How to dispatch SDPA to XNNPACK?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI\u2019m currently working on dispatching the SDPA operations to XNNPACK. To accomplish this, I\u2019ve added `torch.nn.functional.scaled_dot_product_attention` to the `SUPPORTED_DYN_QUANT_LINEAR_MODULES` in the `backends/xnnpack/partition/configs.py` file, as shown in the code block below.\r\n\r\n```python\r\n# Modules which support dynamic quantization\r\n# These already support dynamic shape.\r\nSUPPORTED_DYN_QUANT_LINEAR_MODULES = [\r\n torch.nn.Linear,\r\n torch.nn.functional.linear,\r\n torch.nn.functional.scaled_dot_product_attention,\r\n]\r\n```\r\n\r\nI attempted to run the llama example using the following command:\r\n```python\r\npython -m examples.models.llama2.export_llama --checkpoint ./stories110M/stories110M.pt -p ./stories110M/params.json -X -kv -qmode 8da4w --group_size 128 -d fp32 -o ptes -n stories110M_test_xnnpack\r\n```\r\nUnfortunately, an error occurred. 
Please find the full backtrace attached below.\r\n```shell\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"/workspace/executorch/examples/models/llama2/export_llama.py\", line 31, in <module>\r\n main() # pragma: no cover\r\n File \"/workspace/executorch/examples/models/llama2/export_llama.py\", line 27, in main\r\n export_llama(modelname, args)\r\n File \"/workspace/executorch/examples/models/llama2/export_llama_lib.py\", line 332, in export_llama\r\n builder = _export_llama(modelname, args)\r\n File \"/workspace/executorch/examples/models/llama2/export_llama_lib.py\", line 511, in _export_llama\r\n backend = builder_exported_to_edge.to_backend(partitioners)\r\n File \"/workspace/executorch/examples/models/llama2/builder.py\", line 249, in to_backend\r\n self.edge_manager = self.edge_manager.to_backend(partitioner)\r\n File \"/workspace/executorch/exir/program/_program.py\", line 1165, in to_backend\r\n new_edge_programs[name] = to_backend(program, partitioner)\r\n File \"/usr/lib/python3.10/functools.py\", line 889, in wrapper\r\n return dispatch(args[0].__class__)(*args, **kw)\r\n File \"/workspace/executorch/exir/backend/backend_api.py\", line 384, in _\r\n tagged_graph_module = _partition_and_lower(\r\n File \"/workspace/executorch/exir/backend/backend_api.py\", line 299, in _partition_and_lower\r\n partitioned_module = _partition_and_lower_one_graph_module(\r\n File \"/workspace/executorch/exir/backend/backend_api.py\", line 230, in _partition_and_lower_one_graph_module\r\n lowered_submodule = to_backend(\r\n File \"/usr/lib/python3.10/functools.py\", line 889, in wrapper\r\n return dispatch(args[0].__class__)(*args, **kw)\r\n File \"/workspace/executorch/exir/backend/backend_api.py\", line 114, in _\r\n preprocess_result: PreprocessResult = cls.preprocess(\r\n File \"/workspace/executorch/backends/xnnpack/xnnpack_preprocess.py\", line 159, in preprocess\r\n raise RuntimeError(\r\nRuntimeError: For scalar_tensor, call_function:scalar_tensor.default is not supported in XNNPACK Delegate\r\n```\r\n\r\nI believe the SDPA can be integrated with XNNPACK, but I'm unsure of the correct approach. 
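One direction I have considered, sketched below purely as the math rather than as a claim about the intended ExecuTorch mechanism, is to rewrite SDPA into the matmul/softmax primitives that the XNNPACK partitioner already handles, so the fused op never has to reach the partitioner at all:\r\n\r\n```python\r\nimport math\r\nimport torch\r\n\r\ndef sdpa_decomposed(q, k, v, attn_mask=None):\r\n    # Same math as F.scaled_dot_product_attention with no dropout and an\r\n    # additive float mask, expressed with plain matmul/softmax ops.\r\n    scale = 1.0 / math.sqrt(q.size(-1))\r\n    scores = torch.matmul(q, k.transpose(-2, -1)) * scale\r\n    if attn_mask is not None:\r\n        scores = scores + attn_mask\r\n    return torch.matmul(torch.softmax(scores, dim=-1), v)\r\n```\r\n\r\n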
Could you please offer guidance on how to do this?\r\n\r\n### Versions\r\n\r\nCollecting environment information...\r\nPyTorch version: 2.4.0a0+git9afe4ec\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.4 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.30.0\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)\r\nPython platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 46 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 36\r\nOn-line CPU(s) list: 0-35\r\nVendor ID: GenuineIntel\r\nModel name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz\r\nCPU family: 6\r\nModel: 85\r\nThread(s) per core: 2\r\nCore(s) per socket: 18\r\nSocket(s): 1\r\nStepping: 7\r\nCPU max MHz: 4500.0000\r\nCPU min MHz: 1200.0000\r\nBogoMIPS: 6000.00\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall ", "url": "https://github.com/pytorch/executorch/issues/4461", "state": "closed", "labels": [], "created_at": "2024-07-30T06:32:29Z", "updated_at": "2024-08-02T01:44:09Z", "user": "DzAvril" }, { "repo": "pytorch/xla", "number": 7766, "title": "Does PyTorch/XLA nightly provide GPU support?", "body": "## \u2753 Questions and Help\r\n\r\nIn README.md, there is nightly support on TPU\r\n```\r\npip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu\r\npip install 'torch_xla[tpu] @ https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp310-cp310-linux_x86_64.whl' -f https://storage.googleapis.com/libtpu-releases/index.html\r\n```\r\n\r\nBut there is no instructions of XLA nightly support on GPU plugin. Is there a way that I can download PyTorch/XLA compatible version with torch-nightly?", "url": "https://github.com/pytorch/xla/issues/7766", "state": "closed", "labels": [ "xla:gpu", "documentation" ], "created_at": "2024-07-29T22:29:51Z", "updated_at": "2024-12-19T22:18:22Z", "comments": 5, "user": "titaiwangms" }, { "repo": "pytorch/ao", "number": 550, "title": "[Question] How to effectively use the `intmm.py` and `intmm_triton.py`", "body": "Hello AO Team! Thanks for this amazing package. I am extremely interested in using the `Integer MatMul Kernels` on `A100` GPUs.\r\n\r\nI wrote a simple matmul operation to see the effectiveness of the same. 
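For the autotuned path I also tried flipping the two `TORCHAO_AUTOTUNER_*` environment variables discussed further down before importing anything from torchao (sketch below; the values are illustrative and I am not sure this is the intended usage); the benchmark itself follows.\r\n\r\n```python\r\nimport os\r\n\r\n# Set before any torchao import; the data path points at the downloaded\r\n# A100 pickle. Enabling this is the configuration that becomes very slow.\r\nos.environ['TORCHAO_AUTOTUNER_ENABLE'] = '1'\r\nos.environ['TORCHAO_AUTOTUNER_DATA_PATH'] = 'data_a100.pkl'\r\n```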
\r\n\r\n```python\r\nimport os\r\nimport torch\r\nfrom torchao.kernel.intmm import int_matmul\r\nfrom tqdm import tqdm\r\n\r\n# print(f\"Is Auto Tuner enabled: {bool(os.getenv('TORCHAO_AUTOTUNER_ENABLE', 0))}\")\r\n# print(f\"A100 path: {os.getenv('TORCHAO_AUTOTUNER_DATA_PATH', None)}\")\r\n\r\n\r\ndevice = \"cuda:0\"\r\n\r\na = torch.rand(2048, 2048).to(torch.int8).to(device)\r\nb = torch.rand(2048, 4096).to(torch.int8).to(device)\r\n\r\nprint(f\"a: {a.shape}, a.dtype: {a.dtype}, a.device: {a.device}\")\r\nprint(f\"b: {b.shape}, b.dtype: {b.dtype}, b.device: {b.device}\")\r\n\r\nfor _ in tqdm(range(100000)):\r\n c = int_matmul(a, b)\r\nprint(f\"c: {c.shape}, c.dtype: {c.dtype}, c.device: {c.device}\")\r\n\r\nprint(\"Using Float32 to do it\")\r\na = a.to(torch.float32)\r\nb = b.to(torch.float32)\r\n\r\nprint(f\"a: {a.shape}, a.dtype: {a.dtype}, a.device: {a.device}\")\r\nprint(f\"b: {b.shape}, b.dtype: {b.dtype}, b.device: {b.device}\")\r\n\r\nfor _ in tqdm(range(100000)):\r\n c = torch.matmul(a, b).to(torch.int32)\r\nprint(f\"c: {c.shape}, c.dtype: {c.dtype}, c.device: {c.device}\")\r\n```\r\nThe Int Matmul is almost 1.5x compared to `torch.matmul` which is really great!\r\n![Screenshot 2024-07-28 at 4 09 46\u202fPM (1)](https://github.com/user-attachments/assets/3548f2f2-4df4-457b-b017-168e010c1e2a)\r\n\r\nMy question is, Am I using it right? At least looking through the source code it looks like I am not going via [intmm_triton.py](https://github.com/pytorch/ao/blob/main/torchao/kernel/intmm.py#L102) as I have not enabled the `TORCHAO_AUTOTUNER_ENABLE`. But when I enable it, it seems to take a long time to process. I even tried setting the `TORCHAO_AUTOTUNER_DATA_PATH` manually to the downloaded `data_a100.pkl` as I have an `A100` GPU. I am kinda confused here on how should I use this triton kernel. Any help is appreciated. Also I want to use the [int_scaled_matmul](https://github.com/pytorch/ao/blob/main/torchao/kernel/intmm.py#L107) and it looks like running it without `TORCHAO_AUTOTUNER_ENABLE` completely eliminates the memory benefits I get from fusing the scales. \r\n\r\n", "url": "https://github.com/pytorch/ao/issues/550", "state": "open", "labels": [], "created_at": "2024-07-29T16:25:03Z", "updated_at": "2024-07-30T19:59:03Z", "user": "balaabhijit" }, { "repo": "pytorch/audio", "number": 3816, "title": "Division by zero in loudness calculation", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nThe following line in the functional method `loudness` results in `nan` value when the entire waveform is below the hardcoded loudness threshold value `gamma_abs = -70`.\r\nhttps://github.com/pytorch/audio/blob/69b2a0adc2ec03ab99990d7e8be3d4510438c148/src/torchaudio/functional/functional.py#L1627-L1631\r\n\r\nAn example case is while trying to find loudness of an ambient sound signal.\r\n\r\nThe threshold can probably be made configurable with mention in documentation. However, I as the method returns a **LUFS** value, I am unsure if a configurable threshold should be allowed. 
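For illustration, a guard along the lines of this sketch would avoid the `nan` when nothing passes the absolute gate (the variable names are made up, not the ones used in functional.py):\r\n\r\n```python\r\nimport torch\r\n\r\ndef gated_loudness(block_loudness: torch.Tensor, gamma_abs: float = -70.0) -> torch.Tensor:\r\n    # block_loudness holds per-block loudness values; gamma_abs is the absolute gate.\r\n    above_gate = block_loudness > gamma_abs\r\n    if not torch.any(above_gate):\r\n        # Every block sits below the gate: return -inf (or gamma_abs) instead of\r\n        # computing the gated mean over zero blocks, which is the division by zero above.\r\n        return torch.tensor(float('-inf'))\r\n    return block_loudness[above_gate].mean()\r\n```\r\n\r\nWhether returning `-inf` (silence) or clamping to the gate is the right behaviour is exactly the part I am unsure about. 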
I am not very familiar with the algorithm yet, any suggestions/corrections to what I've said is most welcome.\r\n\r\n### Versions\r\n\r\nLatest code in `main` branch.", "url": "https://github.com/pytorch/audio/issues/3816", "state": "open", "labels": [], "created_at": "2024-07-24T05:55:53Z", "updated_at": "2024-07-29T06:32:17Z", "comments": 0, "user": "DanTremonti" }, { "repo": "pytorch/audio", "number": 3815, "title": "Division by zero in loudness calculation", "body": "The following line in the functional method `loudness` results in `nan` value when the entire waveform is below the hardcoded loudness threshold value `gamma_abs = -70`.\r\nhttps://github.com/pytorch/audio/blob/69b2a0adc2ec03ab99990d7e8be3d4510438c148/src/torchaudio/functional/functional.py#L1627-L1631\r\n\r\nAn example case is while trying to find loudness of an ambient sound signal.\r\n\r\nThe threshold can probably be made configurable with mention in documentation.", "url": "https://github.com/pytorch/audio/issues/3815", "state": "closed", "labels": [], "created_at": "2024-07-24T05:52:03Z", "updated_at": "2024-07-24T05:53:28Z", "comments": 0, "user": "dhanvanth-pk-13760" }, { "repo": "pytorch/torchtitan", "number": 479, "title": "regarding torch.compile support", "body": "in coming soon, there is an item called `torch.compile support`. I'm wondering if we simply call torch.compile once to wrap the entire model, will that be enough? What's the reason we want to do something more fine-grained and customized? \r\n", "url": "https://github.com/pytorch/torchtitan/issues/479", "state": "closed", "labels": [ "question" ], "created_at": "2024-07-24T01:10:20Z", "updated_at": "2024-07-26T23:50:24Z", "user": "jason718" }, { "repo": "pytorch/torchtitan", "number": 478, "title": "what's the est timeline for releasing Context Parallel and 3D Pipeline ", "body": "Many interesting topics are mentioned in coming soon section, I'm wondering do we have a estimated/targeted releasing date? Thanks again for the great work.", "url": "https://github.com/pytorch/torchtitan/issues/478", "state": "closed", "labels": [ "question" ], "created_at": "2024-07-24T01:09:02Z", "updated_at": "2024-07-26T23:51:06Z", "user": "jason718" }, { "repo": "pytorch/examples", "number": 1278, "title": "Larger image size for DCGAN code with Celeba dataset", "body": "I want to test DCGAN example with a larger image size. 
The [default](https://github.com/pytorch/tutorials/blob/main/beginner_source/dcgan_faces_tutorial.py#L188) image size is 64x64 and in this [topic](https://github.com/pytorch/examples/issues/70), there are some proposals to modify the code to support larger images sizes.\r\n\r\nHowever, that topic is for the code in 2017 and when I change the size to 128x128, I get a different error now:\r\n```\r\nStarting Training Loop...\r\nTraceback (most recent call last):\r\n File \"/home/mahmood/DCG/main.py\", line 599, in <module>\r\n errD_real = criterion(output, label)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/mahmood/pytorch/torch/nn/modules/module.py\", line 1716, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/mahmood/pytorch/torch/nn/modules/module.py\", line 1727, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/mahmood/pytorch/torch/nn/modules/loss.py\", line 697, in forward\r\n return F.binary_cross_entropy(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/mahmood/pytorch/torch/nn/functional.py\", line 3545, in binary_cross_entropy\r\n raise ValueError(\r\nValueError: Using a target size (torch.Size([128])) that is different to the input size (torch.Size([3200])) is deprecated. Please ensure they have the same size.\r\n\r\n```\r\n\r\nI don't know where does the 3200 come from. Any idea on how to fix that?\r\n ", "url": "https://github.com/pytorch/examples/issues/1278", "state": "closed", "labels": [], "created_at": "2024-07-23T11:40:57Z", "updated_at": "2024-07-24T08:10:03Z", "comments": 0, "user": "mahmoodn" }, { "repo": "pytorch/pytorch", "number": 131313, "title": "How to create a custom op which can be compile by dynamo inductor?", "body": "### \ud83d\udcda The doc issue\n\nhttps://pytorch.org/tutorials/advanced/cpp_extension.html\n\n### Suggest a potential alternative/fix\n\nA descriptive explanation and a simple example are required.\n\ncc @svekars @brycebortree @ezyang @anijain2305 @chauhang @penguinwu", "url": "https://github.com/pytorch/pytorch/issues/131313", "state": "closed", "labels": [ "module: docs", "triaged", "module: custom-operators", "oncall: pt2" ], "created_at": "2024-07-22T08:09:10Z", "updated_at": "2024-07-23T14:17:40Z", "user": "MoFHeka" }, { "repo": "pytorch/examples", "number": 1277, "title": "word_language_model, is it a Transformer, Encoder-only or Decoder only?", "body": "## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content in any of the README.md files is an issues -->\r\n\r\nThe document says word_language_model uses RNN/Transformer but I am having trouble understanding exactly what it is.\r\n\r\nLooking at the input target sequences, seems like it is a generative model where the expected output is shifted by 1(i.e the model is trained to generate words base on a prefix)\r\nhttps://github.com/pytorch/examples/blob/main/word_language_model/main.py#L140\r\n\r\nHowever, I see the output of decoder is re-wired as the input to encoder here:\r\nhttps://github.com/pytorch/examples/blob/main/word_language_model/model.py#L143\r\n\r\nAs a reference, since the document says that word_language_model implement both a RNN and a transformer model, I looked pytorch's implementation of transformer here:\r\nhttps://github.com/pytorch/pytorch/blob/main/torch/nn/modules/transformer.py#L273-L279\r\npytorch's implementation aligns with what the paper proposed where the input to decoder is src(input sequence) and input 
to. decoder is tgt(shifted target sequence)\r\n\r\nSo obviously word_language_model is not a vanilla transformer-like model for generating text because of the rewiring.\r\nSince it uses the vanilla transformer model and the built in cross attention in decoder is not removed, it is not a decoder-only model either.\r\nAnd since it is trained to generate text, I dont think it can be understood as a decoder-only model.\r\n\r\n\r\nCan someone help me understand why the output of encoder is re-wired to decoder as input to decoder instead of through cross attention and if the doc needs to be updated to reflect what the model is doing or the code needs to be simplified to use a decoder-only model?", "url": "https://github.com/pytorch/examples/issues/1277", "state": "closed", "labels": [], "created_at": "2024-07-20T05:14:09Z", "updated_at": "2024-07-20T05:40:53Z", "comments": 1, "user": "efg001" }, { "repo": "pytorch/TensorRT", "number": 3024, "title": "\u2753 [Question] How to deal with this error: AssertionError: cuda_ext_fp8 could not be imported. E4M3 quantization requires CUDA and cuda_ext_fp8", "body": "## \u2753 Question\r\n\r\nWhen I run the TensorRT/examples/dynamo/vgg16_fp8_ptq.py \r\nAssertionError: cuda_ext_fp8 could not be imported. E4M3 quantization requires CUDA and cuda_ext_fp8\r\n\r\n## What you have already tried\r\n\r\nI transfer the cuda version:11.8/12.1/12.2\uff0cit doesn't work\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version : 2.3.1\r\n - CPU Architecture: Core\r\n - OS : wsl ubuntu\r\n - How you installed PyTorch : pip\r\n - Python version: 3.10\r\n - CUDA version: 12.1\r\n - I build conda env by these comands: \r\n conda create -n tensorrt python==3.10\r\n pip install torch torchvision torch-tensorrt tensorrt\r\n pip install nvidia-modelopt\r\n\r\n\r\n## Full error reporting\r\n\r\nTraceback (most recent call last):\r\n File \"/home/kun/code/YOLOv6/tools/quantization/vgg16_fp8.py\", line 106, in <module>\r\n trt_model = torchtrt.dynamo.compile(\r\n File \"/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py\", line 227, in compile\r\n trt_gm = compile_module(gm, inputs, settings)\r\n File \"/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py\", line 394, in compile_module\r\n submodule_outputs = submodule(\r\n File \"/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/fx/graph_module.py\", line 737, in call_wrapped\r\n return self._wrapped_call(self, *args, **kwargs)\r\n File \"/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/fx/graph_module.py\", line 317, in __call__\r\n raise e\r\n File \"/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/fx/graph_module.py\", line 304, in __call__\r\n return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]\r\n File \"/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1532, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1541, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"<eval_with_key>.35\", line 6, in forward\r\n File \"/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/_ops.py\", line 594, in __call__\r\n return self_._op(*args, **kwargs)\r\n File 
\"/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/modelopt/torch/quantization/tensor_quant.py\", line 49, in scaled_e4m3_impl\r\n assert (\r\nAssertionError: cuda_ext_fp8 could not be imported. E4M3 quantization requires CUDA and cuda_ext_fp8.\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3024", "state": "closed", "labels": [ "question" ], "created_at": "2024-07-20T01:44:13Z", "updated_at": "2024-08-07T17:06:50Z", "user": "zk1009" }, { "repo": "pytorch/tutorials", "number": 2978, "title": "\ud83d\udca1 [REQUEST] - Tutorial on deep survival analysis using PyTorch & TorchSurv", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\r\n\r\n[`TorchSurv`](https://github.com/Novartis/torchsurv) is a Python package that serves as a companion tool to perform deep survival modeling within the `PyTorch` environment. Unlike existing libraries that impose specific parametric forms on users, `TorchSurv` enables the use of custom `PyTorch`-based deep survival models. With its lightweight design, minimal input requirements, full `PyTorch` backend, and freedom from restrictive survival model parameterizations, `TorchSurv` facilitates efficient survival model implementation, particularly beneficial for high-dimensional input data scenarios.\r\n\r\nIn this tutorial, we want to introduce how to easily use our package, from `loss functions` (Weibull and Cox model), `evaluation metrics` (concordance-index, AUC, Brier score) and `statistical tools` (Kaplan-Meier, estimator). This will enable `Pytorch` users to **develop true survival model by changing few lines of code** while using their favorite deep learning framework!\r\n\r\n### Existing tutorials on this topic\r\n\r\nThe tutorial will be adapted from our existing documentations:\r\n* [introduction to TorchSurv](https://opensource.nibr.com/torchsurv/notebooks/introduction.html)\r\n* [survival example with MNIST](https://opensource.nibr.com/torchsurv/notebooks/momentum.html)\r\n\r\n### Additional context\r\n\r\n**category**: `survival analysis`\r\n\r\nThis work was made as part of the collaboration research between the `FDA` and `Novartis`\r\n\r\nFurther read:\r\n* Our preprint manuscript can be found [here](https://arxiv.org/abs/2404.10761). \r\n* Features comparison between best `R` and `Python` packages can be found in [this section](https://opensource.nibr.com/torchsurv/index.html#related-packages)\r\n* Performance benchmarks and evaluations can be found [here](https://opensource.nibr.com/torchsurv/benchmarks.html)", "url": "https://github.com/pytorch/tutorials/issues/2978", "state": "closed", "labels": [], "created_at": "2024-07-19T17:53:34Z", "updated_at": "2024-10-30T18:09:44Z", "comments": 3, "user": "tcoroller" }, { "repo": "pytorch/xla", "number": 7714, "title": "How to test on a subset of TPUs in a TPU Pod", "body": "## \u2753 Questions and Help\r\n\r\nWe have some quota for TPU pods (TPU v3-8N, N>1) but not for single-node machines (TPU v3-8). As everyone knows, single-node machines are really useful for debugging. However, under the default settings, simply launching the XLA code on a single node within a pod won't work -- it will wait for other nodes to join.\r\n\r\nFrom JAX\u2019s documentation, I vaguely remember there\u2019s an environment variable that allows you to run code on a subset of TPUs from a TPU pod. Do we have this feature in PyTorch XLA? 
If so, could you provide a pointer to this?", "url": "https://github.com/pytorch/xla/issues/7714", "state": "closed", "labels": [], "created_at": "2024-07-19T16:29:43Z", "updated_at": "2024-07-31T09:29:39Z", "user": "Jiayi-Pan" }, { "repo": "pytorch/TensorRT", "number": 3018, "title": "\u2753 [Question] How do you save a unet model compiled Torch-TensorRT (Stable Diffusion XL)", "body": "## \u2753 Question\r\n\r\nHow do you save a unet model compiled Torch-TensorRT from Stable Diffusion XL?\r\n\r\n## What you have already tried\r\n\r\nI've tried following the compilation instructions from the tutorial ([link](https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/torch_compile_stable_diffusion.html)). It wasn't very useful for my use case because I would like to save the compilation on disk and load it down the line when inference is needed. \r\n\r\nSo I've tried following the instructions which let you save your compilation using the dynamo backend ([link](https://pytorch.org/TensorRT/user_guide/saving_models.html#dynamo-ir)). This script represents a summary of what I'm doing:\r\n\r\n```\r\nimport torch\r\nimport torch_tensorrt\r\nfrom diffusers import StableDiffusionXLPipeline\r\n\r\npipe = StableDiffusionXLPipeline.from_pretrained(\r\n \"stabilityai/stable-diffusion-xl-base-1.0\",\r\n torch_dtype=torch.float16,\r\n use_safetensors=True,\r\n).to(\"cuda\")\r\n\r\ninputs = [torch.randn((2, 4, 128, 128)).cuda()] # After some digging, these are the input sizes needed to generate 1024x1024 images\r\n\r\ntrt_gm = torch_tensorrt.compile(pipe.unet, ir=\"dynamo\", inputs=inputs)\r\n```\r\n\r\nBut this yields the following error: `TypeError: UNet2DConditionModel.forward() missing 2 required positional arguments: 'timestep' and 'encoder_hidden_states'`\r\n\r\nSo, I've tried to provide these arguments as well, found after some playing around with the code from diffusers:\r\n\r\n```\r\nkwargs = {\r\n \"timestep\": torch.tensor(951.0).cuda(),\r\n \"encoder_hidden_states\": torch.randn(\r\n (2, 77, 2048), dtype=torch.float16\r\n ).cuda(),\r\n}\r\n\r\ntrt_gm = torch_tensorrt.compile(pipe.unet, ir=\"dynamo\", inputs=inputs, **kwargs)\r\n```\r\n\r\nAnd I get the same error. Probably, the kwargs don't get passed down into the calling functions. After altering the code from torch export (which probably wasn't necessary), I got an error of the type: `torch._dynamo.exc.InternalTorchDynamoError: argument of type 'NoneType' is not iterable`\r\n\r\nAny ideas how to properly compile a unet model from stable diffusion XL? Many thanks in advance.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.3.1+cu121\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Ubuntu 22.04.3 LTS\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): `pip install torch --index-url https://download.pytorch.org/whl/cu121`\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives: \r\n - Python version: Python 3.10.12\r\n - CUDA version: 12.4\r\n - GPU models and configuration: NVIDIA GeForce RTX 4090\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/3018", "state": "open", "labels": [ "question" ], "created_at": "2024-07-18T18:15:06Z", "updated_at": "2024-09-03T06:52:33Z", "user": "dru10" }, { "repo": "pytorch/vision", "number": 8536, "title": "ColorJitter results with OverflowError", "body": "### \ud83d\udc1b Describe the bug\n\nUsing `ColorJitter` augmentations in torchvision 0.18.1 results in an `OverflowError`. This was not observed in older `torchvision` versions (tested with 0.15.0).\r\n\r\nHow to reproduce:\r\n```python\r\n# read an image\r\nfrom PIL import Image\r\nimport requests\r\nfrom io import BytesIO\r\n# I picked this image, but it actually happens with others as well. just try one that you have.\r\npil_img = Image.open(BytesIO(requests.get('https://www.weizmann.ac.il/math/bagon/sites/math.bagon/files/styles/pi_photo/public/ShaiBagon_8.png').content))\r\n\r\nfrom torchvision import transforms\r\ncj = transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.2, hue=0.1)\r\nfor _ in range(10):\r\n cj(pil_img) # it does not happen every time, but out of 10 it will most likely happen)\r\n```\r\nThis code will through:\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 2, in <module>\r\n File \"[...]/python3.10/site-packages/torch/nn/modules/module.py\", line 1532, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"[...]/python3.10/site-packages/torch/nn/modules/module.py\", line 1541, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"[...]/lib/python3.10/site-packages/torchvision/transforms/transforms.py\", line 1280, in forward\r\n img = F.adjust_hue(img, hue_factor)\r\n File \"[...]/lib/python3.10/site-packages/torchvision/transforms/functional.py\", line 959, in adjust_hue\r\n return F_pil.adjust_hue(img, hue_factor)\r\n File \"[...]/lib/python3.10/site-packages/torchvision/transforms/_functional_pil.py\", line 114, in adjust_hue\r\n np_h += np.uint8(hue_factor * 255)\r\nOverflowError: Python integer -24 out of bounds for uint8\r\n```\r\n\n\n### Versions\n\n```\r\nPyTorch version: 2.3.1+cu121\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Red Hat Enterprise Linux 9.1 (Plow) (x86_64)\r\nGCC version: (GCC) 11.3.1 20220421 (Red Hat 11.3.1-2)\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.34\r\n\r\nPython version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-5.14.0-162.6.1.el9_1.x86_64-x86_64-with-glibc2.34\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe\r\nNvidia driver version: 535.161.07\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 46 bits physical, 57 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 52\r\nOn-line CPU(s) list: 0-51\r\nVendor ID: GenuineIntel\r\nModel name: Intel(R) Xeon(R) Gold 5320 CPU @ 2.20GHz\r\nCPU family: 6\r\nModel: 106\r\nThread(s) per core: 1\r\nCore(s) per socket: 26\r\nSocket(s): 2\r\nStepping: 6\r\nCPU max MHz: 3400.0000\r\nCPU min MHz: 800.0000\r\nBogoMIPS: 4400.00\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm 
constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities\r\nVirtualization: VT-x\r\nL1d cache: 2.4 MiB (52 instances)\r\nL1i cache: 1.6 MiB (52 instances)\r\nL2 cache: 65 MiB (52 instances)\r\nL3 cache: 78 MiB (2 instances)\r\nNUMA node(s): 2\r\nNUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50\r\nNUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\n", "url": "https://github.com/pytorch/vision/issues/8536", "state": "closed", "labels": [], "created_at": "2024-07-18T14:00:33Z", "updated_at": "2024-07-28T07:06:21Z", "comments": 7, "user": "shaibagon" }, { "repo": "pytorch/serve", "number": 3253, "title": "GPU memory not released after inference", "body": "I built the .mar file by using torch-model-archiver, and wrote a custom handler that processes batched inputs, to be more specific\r\nI'm doing the following steps:\r\n\r\nsending one single request with N images as a list of base64 str\r\nconverting these images into tensors in my handler's preprocess\r\ncreate a batch from the above tensors and pass it to the model for inference\r\nreturn the inference response\r\n\r\nand through testing, I found if I send 4 images it will occupy around 14G memories of GPU, and then after sending 4 images, the next request if I only send 1 image, the GPU memory is not released and kept at 14G\r\n\r\nIs this normal, and is there any way I can release some GPU memories after no inference request like after a while?\r\n", "url": "https://github.com/pytorch/serve/issues/3253", "state": "closed", "labels": [], "created_at": "2024-07-17T09:10:59Z", "updated_at": "2024-07-19T14:39:02Z", "comments": 1, "user": "Di-Gu" }, { "repo": "pytorch/executorch", "number": 4276, "title": "How to export a pretrained model?", "body": "Is there a way to export a pretrained model to executorch? This example https://pytorch.org/executorch/stable/getting-started-setup.html#export-a-program only shows how to export a new model instance. I tried doing it like this\r\n\r\n```\r\n# 1. torch.export: Defines the program with the ATen operator set.\r\nmodel.eval()\r\naten_dialect = torch.export.export( model, ( torch.ones( 2 ) ) )\r\n\r\n# 2. to_edge: Make optimizations for Edge devices\r\nedge_program = executorch.exir.to_edge( aten_dialect )\r\n\r\n# 3. to_executorch: Convert the graph to an ExecuTorch program\r\nexecutorch_program = edge_program.to_executorch()\r\n\r\n# 4. 
Save the compiled .pte program\r\nwith open( \"net.pte\", \"wb\" ) as file:\r\n file.write(executorch_program.buffer)\r\n```\r\n\r\nbut I get `Expecting 'args' to be a tuple of example positional inputs, got <class 'torch.Tensor'>`.\r\n\r\nMy model:\r\n```\r\nclass Net( nn.Module ):\r\n def __init__( self ):\r\n super().__init__()\r\n self.inputFeatures = 2\r\n self.fc1 = nn.Linear( self.inputFeatures, 1 )\r\n\r\n def forward( self, x ):\r\n fc1 = F.sigmoid( self.fc1( x ) )\r\n return fc1\r\n```\r\n", "url": "https://github.com/pytorch/executorch/issues/4276", "state": "closed", "labels": [], "created_at": "2024-07-16T14:55:42Z", "updated_at": "2024-07-22T21:55:34Z", "user": "Bresenham" }, { "repo": "pytorch/torchtitan", "number": 462, "title": "[FP8 options] Float8Linear vs TransformerEngine", "body": "Hi team, first of all thanks for this great repo for showcasing how to leverage the latest techniques in torch ecosystem, it's been super useful and insightful :) I have a naive question about FP8 options and would like to know more about how you view it. \r\n\r\nThere's the https://github.com/NVIDIA/TransformerEngine by nvidia for fp8 training on hopper and it's started to be integrated into downstream frameworks like HF, lightning etc. However I'm also seeing https://github.com/pytorch-labs/float8_experimental evolving quickly and the fact that it's more lightweight & potentially more composable w/ remaining torch techniques is also important to us. I'm wondering if you have some insight about the pros and cons of each of them, how would Float8Linear's performance compare to TE, and if you would recommend going with TE or Float8Linear for LLM pretraining/finetuning use cases. Thanks a lot! \r\n", "url": "https://github.com/pytorch/torchtitan/issues/462", "state": "open", "labels": [ "question" ], "created_at": "2024-07-16T03:54:29Z", "updated_at": "2025-06-02T16:54:11Z", "user": "yundai424" }, { "repo": "pytorch/torchchat", "number": 903, "title": "Github code search doesnt work with folders called `build`", "body": "### \ud83d\udc1b Describe the bug\n\nI was trying to look for the `model.py` definition\r\nhttps://github.com/pytorch/torchchat/tree/main/build but it wasn't showing up\r\n<img width=\"816\" alt=\"Screenshot 2024-07-15 at 6 54 39\u202fPM\" src=\"https://github.com/user-attachments/assets/11021312-9e40-4ec6-adad-0a52a24f06e0\">\r\n\r\ngenerate.py which is not in builder works fine\r\n<img width=\"805\" alt=\"Screenshot 2024-07-15 at 6 54 54\u202fPM\" src=\"https://github.com/user-attachments/assets/8f6eaf5e-7e76-4a3d-b1cc-37254c6e3515\">\r\n\r\nCan we rename the folder to anything else, build to me signifies either release infra scripts or artifacts that are created after installing a package not model building utilities\r\n\n\n### Versions\n\nNightlies", "url": "https://github.com/pytorch/torchchat/issues/903", "state": "open", "labels": [ "actionable" ], "created_at": "2024-07-16T01:55:45Z", "updated_at": "2024-07-30T15:11:19Z", "comments": 1, "user": "msaroufim" }, { "repo": "pytorch/serve", "number": 3247, "title": "TorchServe docker image with vllm, trt-llm dependencies", "body": "### \ud83d\ude80 The feature\n\nTo have a no code solution with vllm, trt-llm, TorchServe needs a docker image with these dependencies.\r\nIncluding this with TorchServe's GPU image will bloat the image for all users of TorchServe\r\n\r\nWe can instead have another image for GenAI.\n\n### Motivation, pitch\n\nNo code solution for GenAI\n\n### Alternatives\n\n_No response_\n\n### Additional 
context\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/3247", "state": "open", "labels": [], "created_at": "2024-07-16T01:16:34Z", "updated_at": "2024-07-16T01:16:34Z", "comments": 0, "user": "agunapal" }, { "repo": "pytorch/xla", "number": 7689, "title": "CUDA and GPU-Flavoured Docker/Container Image Missing CUDA Support", "body": "## \u2753 Questions and Help\r\n\r\nHi,\r\n\r\nAccording to the docs [here]( https://github.com/pytorch/xla?tab=readme-ov-file#docker ), the image `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.3.0_3.10_cuda_12.1` should have Cuda 12.1 support for use on a local GPU. I have also tried pulling `xla:nightly_3.8_cuda_12.1`.\r\n\r\nWhen I start the container (`podman run --shm-size=16g --net=host --gpus all us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.3.0_3.10_cuda_12.1`), it appears there is no CUDA support compiled in:\r\n\r\n```terminal\r\n# nvidia-smi\r\nbash: nvidia-smi: command not found\r\n# python\r\n>>> import torch, torch_xla\r\n>>> torch.cuda.get_device_name(0)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py\", line 414, in get_device_name\r\n return get_device_properties(device).name\r\n File \"/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py\", line 444, in get_device_properties\r\n _lazy_init() # will define _get_device_properties\r\n File \"/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py\", line 284, in _lazy_init\r\n raise AssertionError(\"Torch not compiled with CUDA enabled\")\r\nAssertionError: Torch not compiled with CUDA enabled\r\n>>> print(torch.__version__)\r\n2.3.0 # No CUDA suffix here\r\n>>> print(torch_xla.__version__)\r\n2.3.0 # Or here\r\n```\r\n\r\nAm I missing something here, or has something gone up with these CI builds?\r\n\r\nThanks\r\n\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/7689", "state": "closed", "labels": [ "question", "xla:gpu" ], "created_at": "2024-07-15T22:56:55Z", "updated_at": "2025-04-03T13:56:12Z", "user": "stellarpower" }, { "repo": "pytorch/xla", "number": 7682, "title": "Is there any way to directly execute the cached computational graph", "body": "## \u2753 Questions and Help\r\nMy application code is complex, but it's not computationally expensive, and the graph is consistent, so I tried to cache it with XLA_PERSISTENT_CACHE_PATH, but it took a long time to execute the logic (without performing any computation).Is there any way to execute the cached graph? I also tried dynamo, but encountered many errors, such as incompatibility with autocast and so on", "url": "https://github.com/pytorch/xla/issues/7682", "state": "closed", "labels": [ "question", "dynamo" ], "created_at": "2024-07-15T11:19:23Z", "updated_at": "2025-04-01T13:11:38Z", "user": "mars1248" }, { "repo": "pytorch/xla", "number": 7667, "title": "Equivalent of get_worker_info to split an IterableDataset", "body": "## \u2753 Questions and Help\r\n\r\nI have an `IterableDataset` of unknown size. I would like to use something like `torch.utils.data.get_worker_info` to split it across the spawned `xmp` processes, but AFAIK there is no equivalent in `xla_multiprocessing`. Is there a workaround? I tried randomly subsampling on each process but this hangs for me for some reason. 
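For reference, this is roughly the per-process sharding I was attempting, modelled on what `get_worker_info` enables for CPU workers — I am not sure these are the recommended APIs, so please treat it as a sketch:

```python
import torch_xla.core.xla_model as xm
from torch.utils.data import IterableDataset

class ShardedIterable(IterableDataset):
    """Round-robin an iterable of unknown length across the spawned xmp processes."""

    def __init__(self, source):
        self.source = source

    def __iter__(self):
        rank = xm.get_ordinal()      # index of this spawned process
        world = xm.xrt_world_size()  # total number of spawned processes
        for i, sample in enumerate(self.source):
            if i % world == rank:
                yield sample
```

Since each process can end up with a different number of samples, I suspect the hang I saw comes from uneven iteration counts across replicas, but I have not confirmed that.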
", "url": "https://github.com/pytorch/xla/issues/7667", "state": "closed", "labels": [], "created_at": "2024-07-10T18:46:08Z", "updated_at": "2024-08-06T01:17:46Z", "comments": 20, "user": "davidaknowles" }, { "repo": "pytorch/pytorch", "number": 130238, "title": "how to simplify torch.fx like using onnxsim?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nlack of the corresponding tools to simplify the exported FX model and count the flops, memory, etc.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/130238", "state": "open", "labels": [ "triaged" ], "created_at": "2024-07-08T08:30:28Z", "updated_at": "2024-08-16T13:40:42Z", "user": "MaltoseFlower" }, { "repo": "pytorch/data", "number": 1283, "title": "best practice for `snapshot_every_n_steps`", "body": "Hello,\r\n\r\nThank you for your awesome implementation of StatefulDataloader.\r\n\r\nI have a question about `snapshot_every_n_steps`. It seems there is not much detailed explanation about this argument.\r\n\r\n* Will frequent snapshots (i.e., `snapshot_every_n_steps=1`) cause a data loading burden?\r\n* What is the best practice for setting this value? Is it related to checkpointing frequency?\r\n\r\ncc @andrewkho ", "url": "https://github.com/meta-pytorch/data/issues/1283", "state": "open", "labels": [ "documentation" ], "created_at": "2024-07-07T03:56:03Z", "updated_at": "2024-11-17T19:41:33Z", "comments": 5, "user": "ShoufaChen" }, { "repo": "pytorch/vision", "number": 8515, "title": "How to write your own v2 transforms example does not work", "body": "### \ud83d\udc1b Describe the bug\n\nI copy pasted the custom transform from your [tutorial page](https://pytorch.org/vision/stable/auto_examples/transforms/plot_custom_transforms.html#:~:text=How%20to%20write%20your%20own%20v2%20transforms%20Note,from%20torchvision%20import%20tv_tensors%20from%20torchvision.transforms%20import%20v2) and inserted it into the transform pipeline in your reference/detection/presets.py script. When trying to run, I get the following error.\r\n\r\n\r\nFile \"site-packages/torchvision/transforms/v2/_container.py\", line 51, in forward\r\n outputs = transform(*inputs)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"site-packages/torch/nn/modules/module.py\", line 1532, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"site-packages/torch/nn/modules/module.py\", line 1538, in _call_impl\r\n if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"site-packages/torch/nn/modules/module.py\", line 1709, in __getattr__\r\n raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\r\nAttributeError: 'MyCustomTransform' object has no attribute '_backward_hooks'\n\n### Versions\n\nCollecting environment information...\r\nPyTorch version: 2.3.1+cu121\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.6 LTS (x86_64)\r\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.26.3\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.12.4 | packaged by Anaconda, Inc. 
| (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.31\r\nIs CUDA available: True\r\nCUDA runtime version: 11.8.89\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration:\r\nGPU 0: NVIDIA A100-PCIE-40GB\r\nGPU 1: NVIDIA A100-PCIE-40GB\r\n\r\nNvidia driver version: 550.90.07\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.5.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.5.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.5.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.5.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.5.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.5.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.5.0\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nAddress sizes: 43 bits physical, 48 bits virtual\r\nCPU(s): 128\r\nOn-line CPU(s) list: 0-127\r\nThread(s) per core: 2\r\nCore(s) per socket: 64\r\nSocket(s): 1\r\nNUMA node(s): 1\r\nVendor ID: AuthenticAMD\r\nCPU family: 23\r\nModel: 49\r\nModel name: AMD EPYC 7702P 64-Core Processor\r\nStepping: 0\r\nFrequency boost: enabled\r\nCPU MHz: 1540.122\r\nCPU max MHz: 2183,5930\r\nCPU min MHz: 1500,0000\r\nBogoMIPS: 3992.22\r\nVirtualization: AMD-V\r\nL1d cache: 2 MiB\r\nL1i cache: 2 MiB\r\nL2 cache: 32 MiB\r\nL3 cache: 256 MiB\r\nNUMA node0 CPU(s): 0-127\r\nVulnerability Gather data sampling: Not affected\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection\r\nVulnerability Spec rstack overflow: Mitigation; safe RET\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\r\nVulnerability Srbds: Not affected\r\nVulner", "url": "https://github.com/pytorch/vision/issues/8515", "state": "open", "labels": [], "created_at": "2024-07-06T23:04:22Z", "updated_at": "2024-07-10T21:59:25Z", "user": "TonyCongqianWang" }, { "repo": "pytorch/xla", "number": 7635, "title": "Inconsistency between xla/examples/train_resnet_base.py and docs", "body": "## \ud83d\udcda Documentation\r\n\r\nThis isn't necessarily an issue with the documentation, but an inconsistency between the documentation and the simplest [Pytorch XLA example](https://github.com/pytorch/xla/blob/master/examples/train_resnet_base.py). 
The [docs](https://pytorch.org/xla/release/2.3/index.html) say that the one key change to a standard training loop (for single device use) is adding `xm.mark_step()`, but `train_resnet_base.py` doesn't have (and just has `xm.wait_device_ops()` after all all epochs are complete). \r\n\r\nMy understanding is that `xm.mark_step()` isn't necessary if we're not directly accessing any state on the TPU, which is why `train_resnet_base.py` doesn't use it and works around it via `xm.add_step_closure`. I assume the latter is actually preferred, but either way it would be helpful for folks getting started if there wasn't a confusing inconsistency like this for the simplest setting. \r\n\r\n@JackCaoG I think this is your wheelhouse? Thanks for any clarification. \r\n", "url": "https://github.com/pytorch/xla/issues/7635", "state": "closed", "labels": [ "question" ], "created_at": "2024-07-06T19:52:50Z", "updated_at": "2025-04-03T14:51:15Z", "user": "davidaknowles" }, { "repo": "pytorch/pytorch", "number": 130137, "title": "How to get stream operators in custom backend compiler ?", "body": "### \ud83d\udc1b Describe the bug\n\nHi, when I use a custom backend, I find that the fx graph that custom compiler gets does not have the stream related operations.\r\n\r\nThen I found that the fx graph dropped those stream operations after aot_module_simplified.\r\nSo, I want to know how can we get a fx graph that contains stream-related operations, when using custom compiler and aot_module_simplified? \r\n\r\n\r\ncc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu @ezyang @anijain2305 @zou3519 @ptrblck @msaroufim @yf225\r\n\r\n\r\nHere is my test script. \r\nWhen I use aot_toy_backend backend, no stream related ops in gx graph.\r\nWhat can we do to fix this\uff1f Can you give me some guidance or advice on this issue.\r\n\r\n\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\nclass Layer(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n def forward(self, x):\r\n stream2 = torch.cuda.Stream()\r\n with torch.cuda.stream(stream2):\r\n z = x + 1\r\n y = x - 1\r\n return y + z\r\n\r\nmm = Layer()\r\nx=torch.randn([4]).cuda()\r\n\r\nfrom torch._functorch.aot_autograd import aot_module_simplified\r\ndef toy_backend(gm, sample_inputs):\r\n return gm\r\ndef aot_toy_backend(gm, sample_inputs):\r\n return aot_module_simplified(gm, sample_inputs, fw_compiler=toy_backend)\r\n\r\nmmc = torch.compile(mm, backend=aot_toy_backend)\r\nyc= mmc(x)\r\n```\r\n\n\n### Versions\n\npytorch 2.3.0", "url": "https://github.com/pytorch/pytorch/issues/130137", "state": "open", "labels": [ "oncall: distributed", "triaged", "oncall: pt2" ], "created_at": "2024-07-05T03:41:24Z", "updated_at": "2024-11-27T05:20:33Z", "user": "wbigat" }, { "repo": "pytorch/xla", "number": 7634, "title": "Failed to install xla gpu", "body": "## \u2753 Questions and Help\r\npip install torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl\r\nBut got the error:\r\nERROR: torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl is not a supported wheel on this platform.\r\n\r\nHow can i install torch_xla on GPU ?", "url": "https://github.com/pytorch/xla/issues/7634", "state": "closed", "labels": [ "xla:gpu" ], "created_at": "2024-07-05T02:37:12Z", "updated_at": "2024-08-05T21:40:28Z", "comments": 1, "user": "Beakboomboom" }, { "repo": "pytorch/xla", "number": 7633, "title": "Multiprocess inference warning: ignoring nprocs", "body": "## \u2753 Questions and Help\r\nWhen I made multiprocess inference of 
huggingface transformers frame, I used xmp.spawn(perform_inference, args=(args,), nprocs=4), and I wanted to run 4 scripts once. However, it reported a warning that WARNING:root:Unsupported nprocs (4), ignoring... I wonder if it is a bug or it has any mistake in my infer script.\r\n\r\nMy infer script is as following:\r\n\r\n device = xm.xla_device()\r\n print(f\"tpu name: {device}\")\r\n\r\n sentences = [\"Sample-1\", \"Sample-2\"] * args.batch_size\r\n print(f\"sentences length: {len(sentences)}\")\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)\r\n model = AutoModel.from_pretrained(args.model_name_or_path).to(device)\r\n model.eval()\r\n\r\n for i in range(20):\r\n if i == 19:\r\n print(f\"log port: {port}\")\r\n xp.trace_detached(f'localhost:{port}', './profiles/', duration_ms=2000)\r\n with xp.StepTrace('bge_test'):\r\n with xp.Trace('build_graph'):\r\n encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt').to(device)\r\n with torch.no_grad():\r\n start = time.perf_counter()\r\n model_output = model(**encoded_input)\r\n end = time.perf_counter()\r\n sentence_embeddings = model_output[0][:, 0]\r\n print(\"inference time:\", (end - start))\r\n\r\n sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)\r\n print(\"Sentence embeddings: \", sentence_embeddings)\r\n\r\nif __name__ == \"__main__\":\r\n torch.set_default_dtype(torch.float32)\r\n args = get_args()\r\n\r\n xmp.spawn(perform_inference, args=(args,), nprocs=4)\r\n\r\n# detail log\r\nWARNING:root:Unsupported nprocs (4), ignoring...\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nI0000 00:00:1720080892.528224 2908632 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so\r\nI0000 00:00:1720080892.528293 2908632 pjrt_api.cc:79] PJRT_Api is set for device type tpu\r\nI0000 00:00:1720080892.528300 2908632 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. The framework PJRT API version is 0.46.\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nI0000 00:00:1720080892.544289 2908627 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so\r\nI0000 00:00:1720080892.544426 2908627 pjrt_api.cc:79] PJRT_Api is set for device type tpu\r\nI0000 00:00:1720080892.544434 2908627 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. The framework PJRT API version is 0.46.\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nI0000 00:00:1720080892.728254 2908631 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so\r\nI0000 00:00:1720080892.728326 2908631 pjrt_api.cc:79] PJRT_Api is set for device type tpu\r\nI0000 00:00:1720080892.728332 2908631 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. The framework PJRT API version is 0.46.\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nI0000 00:00:1720080892.916441 2908634 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so\r\nI0000 00:00:1720080892.916616 2908634 pjrt_api.cc:79] PJRT_Api is set for device type tpu\r\nI0000 00:00:1720080892.916625 2908634 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. 
The framework PJRT API version is 0.46.\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nI0000 00:00:1720080893.409535 2908636 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so\r\nI0000 00:00:1720080893.409646 2908636 pjrt_api.cc:79] PJRT_Api is set for device type tpu\r\nI0000 00:00:1720080893.409654 2908636 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. The framework PJRT API version is 0.46.\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nI0000 00:00:1720080893.658751 2908630 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so\r\nI0000 00:00:1720080893.658883 2908630 pjrt_api.cc:79] PJRT_Api is set for device type tpu\r\nI0000 00:00:1720080893.658891 2908630 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. The framework PJRT API version is 0.46.\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nI0000 00:00:1720080893.659256 2908635 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nI0000 00:00:1720080893.659285 2908633 pjrt_ap", "url": "https://github.com/pytorch/xla/issues/7633", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2024-07-04T08:28:47Z", "updated_at": "2025-04-03T14:52:10Z", "user": "SileonQuinn" }, { "repo": "pytorch/xla", "number": 7622, "title": "How to avoid compilation in a section of code?", "body": "## \u2753 Questions and Help\r\nWe are using Pytorch XLA w/ TPU to train a multi-modal language models.\r\n\r\nWe can make most of the code, such as image encoding and the forward pass in the LLM backbone, in a static shape, which XLA handles well. However, making the part that fuses image and text embeddings into the input embedding static is extremely challenging. \r\n\r\nCurrently, we use `mark_step` to isolate that section from the rest of the code, allowing it to recompile each time. **Although this part is very computationally light, the recompilation is extremely slow and often consumes the majority of training time**.\r\n\r\nWe find documentation on this issue very hard to find, and we are exploring better solutions, such as running that part on the CPU, in eager mode, or not saving that part of the graph to avoid OOM errors during long training runs. **We wonder if you have any suggestions/pointers on how to workaround this inefficiency?**\r\n\r\nFollowing is a pesudo code to illustrate our problem\r\n\r\n```python\r\nfor ... # loading data\r\n # these tensors are with static shape, xla works great on them\r\n image_embeddings = image_encoder(raw_image_tensor)\r\n text_embeddings = get_text_embedding(text_token_idxs)\r\n \r\n xm.mark_step()\r\n # this part is very light in compute, but dynamic. 
We currently just recompile this graph every single time :(\r\n input_embeddings = fuse_embedding(raw_image_tensor, text_token_idxs, sequence_info_dict)\r\n xm.mark_step()\r\n \r\n # these tensors are with static shape, xla works great on them\r\n output_logits = llm(input_embeddings)\r\n # loss compute / backward / optimizer step omited\r\n```", "url": "https://github.com/pytorch/xla/issues/7622", "state": "closed", "labels": [ "question" ], "created_at": "2024-07-03T00:15:12Z", "updated_at": "2025-04-03T14:54:28Z", "user": "Jiayi-Pan" }, { "repo": "pytorch/xla", "number": 7614, "title": "Dynamo persistent cache real-time look-up", "body": "## \ud83d\ude80 Feature\r\nAs described in https://github.com/pytorch/pytorch/issues/125958, we are integrating with vLLM on TPUs. We see that in the warm up phase of the vLLM, it needs to pre-compile ~30 different input shape combinations. PyTorch/XLA does not support dynamic shapes today so torch.compile will keep compiling the model code which slows down the development speed (waiting for 10 minutes before warm up is finished). PyTorch/XLA already cache the XLA compilation but torch.compile itself is pretty expensive.\r\n\r\nThis feature request pitches to achieve the similar effect of dynamic shapes by persistent caching and real time look up of the compiled program.\r\n\r\n## Details\r\nTo do this, in high-level, we need to do the following:\r\n- Turn on the dynamo dynamic shape mode, dynamo will start passing the inputs with dynamic shapes to PyTorch/XLA\r\n- PyTorch/XLA can then try to figure out if this shape is compiled in XLA\r\n- If it is, we can map the different input shape to different compiled binaries\r\n\r\n## Open questions\r\n- Does persistent FxGraph caching work with PyTorch/XLA? Details at https://github.com/pytorch/pytorch/issues/125958#issuecomment-2204040977. \r\n- How can we properly map the different input shape to different compiled binaries?\r\n\r\ncc @JackCaoG @WoosukKwon\r\n", "url": "https://github.com/pytorch/xla/issues/7614", "state": "closed", "labels": [], "created_at": "2024-07-02T21:01:36Z", "updated_at": "2024-07-23T01:18:34Z", "comments": 2, "user": "wonjoo-wj" }, { "repo": "pytorch/vision", "number": 8510, "title": "Obscure error messages using VideoReader when PyAV version too old/not installed", "body": "### \ud83d\udc1b Describe the bug\n\nWhen a sufficiently recent version of PyAV is not installed, the script `vision/torchvision/io/video_reader.py` initialises the variable `av` to an `ImportError` object that contains a description of the issue, either at line 38:\r\n```python\r\nav = ImportError(\r\n \"\"\"\\\r\nPyAV is not installed, and is necessary for the video operations in torchvision.\r\nSee https://github.com/mikeboers/PyAV#installation for instructions on how to\r\ninstall PyAV on your system.\r\n\"\"\"\r\n )\r\n```\r\nor on line 28 (code omitted for brevity, but is similar to the above). This is potentially very useful information that would make it easy to see why an application isn't working. Unfortunately, this error is never actually raised.\r\n\r\nInstead, when a VideoReader object is created, the `av` variable is simply assumed to contain the PyAV module object. 
This is first used on line 159: \r\n```python\r\n self.container = av.open(src, metadata_errors=\"ignore\")\r\n```\r\nAs an `ImportError` object does not have a method called `open`, this results in a rather mystifying error condition being raised: `AttributeError: 'ImportError' object has no attribute 'open'`.\r\n\r\nI suspect there should be a test immediately prior to line 159 which checks if `av` is an ImportError object and raises it if it is.\n\n### Versions\n\nThis bug is not related to specific versions, but can be seen by examination of the current version of the source code.", "url": "https://github.com/pytorch/vision/issues/8510", "state": "open", "labels": [], "created_at": "2024-07-02T18:51:34Z", "updated_at": "2024-07-04T10:43:51Z", "comments": 1, "user": "occipita" }, { "repo": "pytorch/pytorch", "number": 129949, "title": "How to get stream operators in custom backend compiler ?", "body": "### \ud83d\udc1b Describe the bug\n\nHi, when I use a custom backend, I find that the fx graph that custom compiler gets does not have the stream related operations.\r\n\r\nThen I found that the fx graph dropped those stream operations after aot_module_simplified.\r\nSo, I want to know how can we get a fx graph that contains stream-related operations, when using aot_module_simplified and custom compiler?\r\n\r\n\r\ncc @ezyang @anijain2305 @chauhang @penguinwu @zou3519 @ptrblck @msaroufim \r\n\r\n\r\nHere is my test script. \r\n\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\nclass Layer(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n def forward(self, x):\r\n stream2 = torch.cuda.Stream()\r\n with torch.cuda.stream(stream2):\r\n z = x + 1\r\n y = x - 1\r\n return y + z\r\n\r\nmm = Layer()\r\nx=torch.randn([4]).cuda()\r\n\r\nfrom torch._functorch.aot_autograd import aot_module_simplified\r\ndef toy_backend(gm, sample_inputs):\r\n return gm\r\ndef aot_toy_backend(gm, sample_inputs):\r\n return aot_module_simplified(gm, sample_inputs, fw_compiler=toy_backend)\r\n\r\nmmc = torch.compile(mm, backend=aot_toy_backend)\r\nyc= mmc(x)\r\n```\r\n\r\n When I use aot_toy_backend backend, no stream related ops in gx graph.\r\n\r\n\n\n### Versions\n\npytorch 2.3.0", "url": "https://github.com/pytorch/pytorch/issues/129949", "state": "closed", "labels": [ "oncall: pt2" ], "created_at": "2024-07-02T09:05:54Z", "updated_at": "2024-07-05T06:31:36Z", "user": "wbigat2" }, { "repo": "pytorch/xla", "number": 7607, "title": "How to use spmd to support hybrid shard data parallelism\uff1f", "body": "## \u2753 Questions and Help\r\nFsdp can be well expressed by spmd, but hsdp seems to be unable to be expressed. 
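To make the question concrete, the following is the kind of two-axis mesh I was hoping could express it — the axis names and partition spec are only my guess at how hsdp might be written, not something I have found documented:

```python
import numpy as np
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs

xr.use_spmd()
num_devices = xr.global_runtime_device_count()
replica = 2                        # e.g. 2-way replication across device groups
fsdp = num_devices // replica      # shard parameters inside each group
mesh = xs.Mesh(np.arange(num_devices), (replica, fsdp), ('replica', 'fsdp'))

# Shard a 2D weight along the 'fsdp' axis only; the 'replica' axis keeps full
# copies, which is the hybrid-sharded behaviour I am after.
weight = torch.randn(4096, 4096).to(xm.xla_device())
xs.mark_sharding(weight, mesh, ('fsdp', None))
```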
Is there any way to express hsdp in spmd?", "url": "https://github.com/pytorch/xla/issues/7607", "state": "closed", "labels": [ "question" ], "created_at": "2024-07-02T08:05:47Z", "updated_at": "2025-04-03T14:54:52Z", "user": "mars1248" }, { "repo": "pytorch/pytorch", "number": 129877, "title": "Eager and PT2 inconsistent on whether or not scalar tensor is allowed as input where int is expected", "body": "### \ud83d\udc1b Describe the bug\n\nInternal xref: https://fb.workplace.com/groups/1075192433118967/posts/1454391288532411/\r\n\r\nThe error looks like this:\r\n\r\n```\r\nTorchRuntimeError: Failed running call_function fbgemm.jagged_1d_to_dense(*(), **{'values': FakeTensor(..., device='cuda:7', size=(260039,), dtype=torch.int64), 'offsets': FakeTensor(..., device='cuda:7', size=(513,), dtype=torch.int64), 'max_sequence_length': FakeTensor(..., device='cuda:7', size=(), dtype=torch.int64), 'padding_value': 0}):\r\nfbgemm::jagged_1d_to_dense() Expected a value of type 'int' for argument 'max_sequence_length' but instead found type 'FakeTensor'.\r\n```\r\n\r\nYou can work around it by replacing `max_len = torch.max(lengths)` with `max_len = torch.max(lengths).item()` but it would be better if PT2 implicitly inserted the item call\r\n\r\n@zou3519 I am not sure if this is a custom op problem or a Dynamo problem\r\n\r\nA minimal repro should be relatively simple to create.\n\n### Versions\n\nmain\n\ncc @anijain2305 @chauhang @penguinwu @zou3519 @bdhirsh", "url": "https://github.com/pytorch/pytorch/issues/129877", "state": "closed", "labels": [ "triaged", "module: custom-operators", "oncall: pt2", "module: pt2-dispatcher" ], "created_at": "2024-07-01T14:11:55Z", "updated_at": "2025-07-30T17:43:13Z", "user": "ezyang" }, { "repo": "pytorch/data", "number": 1280, "title": "Importing `torchdata.stateful_dataloader` hides `torch` RandomSampler and BatchSampler", "body": "### \ud83d\udc1b Describe the bug\n\n### Description\r\n\r\nIn `torchdata.stateful_dataloader.sampler.py`, several Sampler classes in `torch.utils.data` are overwritten:\r\n1. https://github.com/pytorch/data/blob/main/torchdata/stateful_dataloader/sampler.py#L61-L62\r\n2. https://github.com/pytorch/data/blob/main/torchdata/stateful_dataloader/sampler.py#L134-L135\r\n\r\nThe implication here is that if code were to import `StatefulDataLoader` after importing torch, then there may be inconsistent definitions of `BatchSampler` and `RandomSampler` at runtime. See the gist below for a toy example, where a StatefulDataLoader has a handle to a `torch.utils.data.sampler.BatchSampler` rather than a `torchdata.stateful_dataloader.sampler.BatchSampler`.\r\n\r\nThis may possibly be the root cause of https://github.com/huggingface/accelerate/issues/2894\r\n\r\n### How to reproduce\r\n\r\nSee gist: https://gist.github.com/byi8220/3091215e38d8f1caba01bc015aed32aa\n\n### Versions\n\nPyTorch version: 2.5.0.dev20240628\r\nIs debug build: False\r\nCUDA used to build PyTorch: Could not collect\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 24.04 LTS (x86_64)\r\nGCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.39\r\n\r\nPython version: 3.12.4 | packaged by Anaconda, Inc. 
| (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-6.8.0-36-generic-x86_64-with-glibc2.39\r\nIs CUDA available: False\r\nCUDA runtime version: 12.5.40\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Ti\r\nNvidia driver version: 555.42.02\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 43 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 12\r\nOn-line CPU(s) list: 0-11\r\nVendor ID: AuthenticAMD\r\nModel name: AMD Ryzen 5 3600 6-Core Processor\r\nCPU family: 23\r\nModel: 113\r\nThread(s) per core: 2\r\nCore(s) per socket: 6\r\nSocket(s): 1\r\nStepping: 0\r\nFrequency boost: enabled\r\nCPU(s) scaling MHz: 83%\r\nCPU max MHz: 4208.2031\r\nCPU min MHz: 2200.0000\r\nBogoMIPS: 7200.35\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es\r\nVirtualization: AMD-V\r\nL1d cache: 192 KiB (6 instances)\r\nL1i cache: 192 KiB (6 instances)\r\nL2 cache: 3 MiB (6 instances)\r\nL3 cache: 32 MiB (2 instances)\r\nNUMA node(s): 1\r\nNUMA node0 CPU(s): 0-11\r\nVulnerability Gather data sampling: Not affected\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Reg file data sampling: Not affected\r\nVulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection\r\nVulnerability Spec rstack overflow: Mitigation; Safe RET\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\n\r\nVersions of releva", "url": "https://github.com/meta-pytorch/data/issues/1280", "state": "closed", "labels": [], "created_at": "2024-06-28T23:28:50Z", "updated_at": "2024-07-03T18:23:06Z", "comments": 8, "user": "byi8220" }, { "repo": "pytorch/torchtitan", "number": 434, "title": "Question about custom cuda operators for tensor parallelism", "body": "We are currently trying to apply torchtitan to MoE models. MoE models require using grouped_gemm https://github.com/fanshiqing/grouped_gemm. GroupedGemm ops basically follow the same rule as in ColumnLinear and RowLinear. 
Is there any way to make custom ops dtensor compatible? Great thanks for help!", "url": "https://github.com/pytorch/torchtitan/issues/434", "state": "open", "labels": [ "question" ], "created_at": "2024-06-28T12:29:43Z", "updated_at": "2024-11-22T00:04:50Z", "user": "vermouth1992" }, { "repo": "pytorch/torchtitan", "number": 431, "title": "Question about Pipeline parallelism", "body": "Just wonder does the current PipelineStage API supports variable length input shapes like in Megatron? https://github.com/NVIDIA/Megatron-LM/blob/e33c8f78a35765d5aa37475a144da60e8a2349d1/megatron/core/model_parallel_config.py#L212 This is particular useful for packed inputs where all the paddings are removed.", "url": "https://github.com/pytorch/torchtitan/issues/431", "state": "open", "labels": [ "enhancement", "question", "post training" ], "created_at": "2024-06-27T15:31:52Z", "updated_at": "2025-10-02T02:32:07Z", "user": "vermouth1992" }, { "repo": "pytorch/serve", "number": 3206, "title": "Docker swarm with TorchServe workflow", "body": "I want to scale the workflows through \"Docker Swarm\". (I hope it is possible, if not please tell me how one can achieve this? I know it is not supported yet through TorchServe directly, that is why I'm using docker to scale the workflow.)\r\nI have few questions related to using TorchServe as a docker service in swarm mode while I encountered few issues.\r\n\r\n**Problem Statement:** \r\n\r\n- We are using TorchServe workflow as we have multiple models required to complete the use case.\r\n- To make sure that there isn't any difference I've set the number of workers to 2 on each node, so that memory consumption doesn't go above 16GB, and each node has same number of workers and memory.\r\n- While creating a docker service, the manager node seems to work fine with the below TorchServe config and completes the task in desired time, but when the manager assigns the task to any of the worker node it takes ~3X more time.\r\n- Problem we are facing is while a TorchServe worker is executing on the worker node, looks like it is executing with intervals. i.e., it doesn\u2019t show continuous GPU utilization/processing and stops printing logs as well along with delay in response and meanwhile that if another request comes it will stop executing the current request and starts executing new one.\r\n- I did see something in logs (unfortunately, I'm unable to provide the logs here) like, when node `m5` is being executed and new request came then the current request directly stops (at least in the logs it looked like that, but no error was thrown) and new one starts. 
Correct me if I'm wrong but old request should be executing in the background, right?\r\n- Now, the question is, Does TorchServe support routing the request through docker swarm?\r\n- If so, then what would be the correct configuration to achieve similar results on the all the nodes apart from manager in swarm?\r\n\r\n\r\n**My Docker Swarm Config:** \r\n* 3 nodes, 1 manager 2 workers\r\n* Manager has 4 X v100 sxm-2, 32GB each, Worker has 4 X v100 sxm-2, 16GB each\r\n\r\n**My project config:** \r\n(Please ignore the timeout, as I've put it this way because my inference request takes around 10 mins, as it takes over 100 images to process in a batch)\r\n\r\n* There are 5 models\r\n* **model-config.yaml**\r\n```yaml\r\nmaxBatchDelay: 10000000\r\nresponseTimeout: 10000000\r\n```\r\n* **workflow.yaml**\r\n```yaml\r\nmodels:\r\n min-workers: 1\r\n max-workers: 2\r\n max-batch-delay: 10000000\r\n retry-attempts: 1\r\n timeout-ms: 3000000\r\n\r\n m1:\r\n url: mode-1.mar\r\n\r\n m2:\r\n url: model-2.mar\r\n\r\n m3:\r\n url: model-3.mar\r\n\r\n m4:\r\n url: model-4.mar\r\n\r\n m5:\r\n url: model-5.mar\r\n \r\ndag:\r\n pre_processing: [m1]\r\n m1: [m2]\r\n m2: [m3]\r\n m3: [m4]\r\n m4: [m5]\r\n m5: [post_processing]\r\n```\r\n* **config.properties**\r\n```properties\r\ninference_address=http://0.0.0.0:8080\r\nmanagement_address=http://0.0.0.0:8081\r\nmetrics_address=http://0.0.0.0:8082\r\n\r\n# management\r\ndefault_response_timeout=10000000\r\ndefault_workers_per_model=2\r\n\r\nload_models=\r\nmodel_store=model_store\r\nworkflow_store=wf_store\r\n\r\nenable_envvars_config=true\r\njob_queue_size=3\r\n```\r\n\r\n**Python Packages:**\r\n\r\n```text\r\ntorch==1.13.1+cu117\r\ntorchvision==0.14.1+cu117\r\ntorchaudio==0.13.1+cu117\r\ntorchserve==0.10.0\r\ntorch-model-archiver==0.10.0\r\ntorch-workflow-archiver==0.2.12\r\nnvgpu==0.10.0\r\ncaptum==0.7.0\r\n```\r\n", "url": "https://github.com/pytorch/serve/issues/3206", "state": "closed", "labels": [ "triaged", "workflowx" ], "created_at": "2024-06-26T16:20:40Z", "updated_at": "2024-07-25T14:54:32Z", "comments": 6, "user": "KD1994" }, { "repo": "pytorch/pytorch", "number": 129542, "title": "How to Convert pytorch qat model to tensorrt", "body": "\r\nI find that the converted qat model in pytorch can't use GPU Kernel, But I don't find the function or ways to convert to tensorrt. How to Convert pytorch qat model to tensorrt?\r\n", "url": "https://github.com/pytorch/pytorch/issues/129542", "state": "closed", "labels": [], "created_at": "2024-06-26T02:46:20Z", "updated_at": "2024-06-26T15:42:38Z", "user": "AnnaTrainingG" }, { "repo": "pytorch/xla", "number": 7466, "title": "Register python implementation for the aten ops", "body": "## \u2753 Questions and Help\r\nCurrently `F.interpolate(mode='tilinear)'` will be dispatched to `aten::upsample_trilinear3d` which we don't have c++ lowering. 
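For reference, a minimal snippet that hits this path (an illustration only; the shapes are arbitrary and an XLA device is assumed to be available):

```python
import torch
import torch.nn.functional as F
import torch_xla.core.xla_model as xm

device = xm.xla_device()
# 5D input (N, C, D, H, W) so that mode='trilinear' dispatches to aten::upsample_trilinear3d
x = torch.randn(1, 3, 8, 16, 16, device=device)
y = F.interpolate(x, scale_factor=2, mode='trilinear', align_corners=False)
xm.mark_step()  # forces execution, which is where the missing lowering shows up
```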
There is a python decomp for this op in https://github.com/pytorch/pytorch/blob/ad76da6c16c5dc465e8aac8d913532251db7b400/torch/_decomp/decompositions.py#L3591-L3602 so I am wondering if there is way for PyTorch/XLA to register this python implementation directly.\r\n\r\nSimilar request for `scaled_dot_product_attention`, we have the Pallas based implementation in https://github.com/pytorch/xla/blob/master/torch_xla/experimental/custom_kernel.py#L162 for TPU but I don't know how to register this for PyTorch/XLA.\r\n\r\ncc @ezyang @bdhirsh @alband", "url": "https://github.com/pytorch/xla/issues/7466", "state": "closed", "labels": [ "question", "lowering" ], "created_at": "2024-06-25T21:00:50Z", "updated_at": "2025-04-07T12:46:14Z", "user": "JackCaoG" }, { "repo": "pytorch/TensorRT", "number": 2955, "title": "\u2753 [Question] How do you compile a chunk operator with TensorRT?", "body": "## \u2753 Question\r\n\r\nHow do you compile a chunk operator with TensorRT? I have been trying a basic example in a Jupyter Notebook but get an unbroadcastable dimension error. The below code executes in PyTorch inference and torchscript, but cannot be compiled with TensorRT.\r\n\r\n## What you have already tried\r\n\r\n\r\n\r\n```import torch\r\nimport torch.nn as nn\r\nimport torch_tensorrt\r\ndevice = \"cuda\"\r\n\r\nclass TestModel(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n def forward(self, x, y):\r\n y1, _ = y.chunk(2, dim=0) #y1.shape --> (1, 3)\r\n return x + y1 #(2, 3) + (1, 3)\r\n \r\nmodel = TestModel()\r\nmodel.eval()\r\n\r\nx = torch.randn((2, 3), device=device)\r\ny = torch.randn((2, 3), device=device)\r\n\r\nmodel(x, y)\r\n\r\ntraced_model = torch.jit.trace(model, (x, y))\r\n\r\ntrt_model = torch_tensorrt.compile(traced_model, \r\n inputs=[torch_tensorrt.Input(shape=x.shape, dtype=torch.float32),\r\n torch_tensorrt.Input(shape=y.shape, dtype=torch.float32)]\r\n )\r\n```\r\n\r\nError messages:\r\n\r\n```ERROR: [Torch-TensorRT TorchScript Conversion Context] - ITensor::getDimensions: Error Code 4: Shape Error (broadcast dimensions must be conformable)\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - ITensor::getDimensions: Error Code 4: Shape Error (broadcast dimensions must be conformable)\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - IBuilder::buildSerializedNetwork: Error Code 4: Internal Error (%9 : Tensor = aten::add(%x, %y1, %3) # [...): IElementWiseLayer must have inputs with same dimensions or follow broadcast rules. Input dimensions were [2,3] and [1,0].)\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.3.0\r\n - CPU Architecture:\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.10.14\r\n - CUDA version: 12.1\r\n - GPU models and configuration: A100\r\n - Any other relevant information:\r\n\r\nThank you for the help!\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2955", "state": "open", "labels": [ "question" ], "created_at": "2024-06-25T20:37:51Z", "updated_at": "2024-06-25T21:45:45Z", "user": "joshuageddes" }, { "repo": "pytorch/ao", "number": 436, "title": "what if below condition? 
about OCP Microscaling", "body": "assume we have a fp32 tensor like [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 127.99999], and set k to 32(default, convert to fp8 e5m2 mx block.\r\n\r\nbtw asfloat(0x42FFFFFF) = 127.9999f\r\n\r\nfrom current code, the max absolute value is 127.9999, the unbiased exponent is 6, minus emax.fp8_e5m2(which is 15), so the shared scale is 6 - 15 + 127 is 118. \r\n\r\n127.9999/2^118 = 0x1.FFFFF7p+15, assume we use RN rounding mode, after rounding this value will convert to 0x2.0p+15(fp8 e5m2) which is large than the max normal representation of fp8 e5m2, so clamp to the max normal<OCP Microscaling Formats (MX) Specification Version 1.0. chapter 6.3>.\r\n\r\ncurrent code do as above, the log shows below:\r\n`tensor([ 1.0000, 2.0000, 3.0000, 4.0000, 5.0000, 6.0000, 7.0000,\r\n 8.0000, 9.0000, 10.0000, 11.0000, 12.0000, 13.0000, 14.0000,\r\n 15.0000, 16.0000, 17.0000, 18.0000, 19.0000, 20.0000, 21.0000,\r\n 22.0000, 23.0000, 24.0000, 25.0000, 26.0000, 27.0000, 28.0000,\r\n 29.0000, 30.0000, 31.0000, **127.9999**], device='cuda:0')\r\nMXTensor: elem_dtype: torch.float8_e5m2, s_e8m0: tensor([118], device='cuda:0', dtype=torch.uint8), d: tensor([ 512., 1024., 1536., 2048., 2560., 3072., 3584., 4096., 4096.,\r\n 5120., 6144., 6144., 6144., 7168., 8192., 8192., 8192., 8192.,\r\n 10240., 10240., 10240., 12288., 12288., 12288., 12288., 12288., 14336.,\r\n 14336., 14336., 16384., 16384., **57344.**], device='cuda:0',\r\n dtype=torch.float8_e5m2), d_hp: tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 8., 10., 12., 12.,\r\n 12., 14., 16., 16., 16., 16., 20., 20., 20., 24., 24., 24.,\r\n 24., 24., 28., 28., 28., 32., 32., **112**.], device='cuda:0')`\r\n\r\nfrom above log, we see shared exp is 118, fp32 127.9999 convert to fp8 e5m2 57344. 112=2^(118-127)*57344 seems far less than 127.9999.\r\nbut if we add shred exp by 1 if max(abs)/scale are large than the max normal representation of fp8 e5m2 after rounding, which means shared_exp to 119, then 129.999 convert to fp8 e5m2 is 0x1.00p+15 = 32768, and 128 = 2^(119-127)*32768, seems more accurate than 112.\r\n\r\nbut this seems not compliance with mx1.0 spec, what we choose and why? anyone who can help me?\r\n\r\n", "url": "https://github.com/pytorch/ao/issues/436", "state": "closed", "labels": [ "question", "mx" ], "created_at": "2024-06-25T08:37:17Z", "updated_at": "2024-07-05T16:31:23Z", "user": "avater210" }, { "repo": "pytorch/serve", "number": 3204, "title": "WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.", "body": "Hi, I've been running models with Torchserve 0.11.0 on Sagemaker and noticed following warning:\r\n`WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.` when starting the Torchserve. \r\n\r\nI read that this method was removed in Java8 (https://stackoverflow.com/questions/23808803/sun-reflect-reflection-getcallerclass-alternative?noredirect=1&lq=1). How does lack of support for this method affect performance? What is required to get rid of this warning when running torchserve? \r\n\r\n", "url": "https://github.com/pytorch/serve/issues/3204", "state": "open", "labels": [ "java" ], "created_at": "2024-06-24T13:58:02Z", "updated_at": "2024-06-26T21:20:17Z", "comments": 1, "user": "aalbersk" }, { "repo": "pytorch/ao", "number": 430, "title": "Understanding 8da4w", "body": "Hi there,\r\n\r\nI'm new to quantization. 
From my understanding, \"8da4w\" means that the weights are pre-quantized to 4 bits, and the activations are quantized to 8 bits at runtime. Following this, the GEMM (General Matrix Multiply) operation between weights and activations is computed in the `int8` data type. Do I have this correct?\r\n\r\nHowever, I'm confused by the code for `Int8DynActInt4WeightQuantizer`. The `forward` method of `Int8DynActInt4WeightLinear` calls a method named `per_token_dynamic_quant`, which can be found [here](https://github.com/pytorch/ao/blob/fd9f95d614fa03f09d85d73a2c2740cc647d7b9b/torchao/quantization/utils.py#L436-L458). In this method, the input is first quantized to `int8` and then immediately converted back to its original data type without further processing. I don't understand the purpose of this function. Furthermore, I have launched a program using `Int8DynActInt4WeightQuantizer ` and observed the data types of `x` and `w_dq` in the method `linear_forward_8da4w`, which can be found [here](https://github.com/pytorch/ao/blob/fd9f95d614fa03f09d85d73a2c2740cc647d7b9b/torchao/quantization/GPTQ.py#L800), they both are `float32`. This seems to contradict my understanding of the computations involved in '8da4w'.\r\n\r\nI realize that I'm likely missing some fundamental aspects of dynamic quantization. Could anyone kindly clarify this process for me?\r\n\r\nThank you!", "url": "https://github.com/pytorch/ao/issues/430", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-24T08:43:44Z", "updated_at": "2024-07-23T17:32:41Z", "user": "DzAvril" }, { "repo": "pytorch/vision", "number": 8503, "title": "Can we add datatype support for examples under references", "body": "### \ud83d\ude80 The feature\n\ncurrently the examples under references only support default datatype (float32), can we support a argument like --data-type to allow user to specify the datatype for the model?\n\n### Motivation, pitch\n\nMany users like us always need to run different dataytpye for the model. like float16 and bfloat16. If this argument can be added, it will save many efforts.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/8503", "state": "open", "labels": [], "created_at": "2024-06-24T03:29:04Z", "updated_at": "2024-07-12T15:09:10Z", "comments": 2, "user": "wincent8" }, { "repo": "pytorch/xla", "number": 7326, "title": "dear teachers, i can connect the internet, but i can not download it the torch_xla", "body": "pip install torch_xla[tpu]~=2.3.0 -f https://storage.googleapis.com/libtpu-releases/index.html\r\n\r\nERROR: Could not find a version that satisfies the requirement torch_xla~=2.3.0 (from versions: none)\r\nERROR: No matching distribution found for torch_xla~=2.3.0\r\n", "url": "https://github.com/pytorch/xla/issues/7326", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-21T07:42:57Z", "updated_at": "2025-04-07T12:58:54Z", "user": "zhangwaer" }, { "repo": "pytorch/torchtitan", "number": 412, "title": "ImportError in LLaMA Training Script", "body": "When attempting to run the training script for LLaMA with the following command:\r\n`CONFIG_FILE=\"./train_configs/llama3_8b.toml\" ./run_llama_train.sh`\r\nan ImportError is encountered. 
The specific error message is:\r\n`ImportError: cannot import name 'Partial' from 'torch.distributed._tensor' (/apps/torchtitan/torchtitan/lib/python3.10/site-packages/torch/distributed/_tensor/__init__.py)`\r\n\r\nThe training script should start without any import errors and utilize the specified configuration file to train the model across 8 GPUs.\r\n\r\nThe script fails to run due to an ImportError indicating that Partial cannot be imported from torch.distributed._tensor. The error traceback is as follows:\r\n`Traceback (most recent call last):\r\n File \"/apps/torchtitan/train.py\", line 34, in <module>\r\n from torchtitan.models import model_name_to_cls, model_name_to_tokenizer, models_config\r\n File \"/apps/torchtitan/torchtitan/models/__init__.py\", line 7, in <module>\r\n from torchtitan.models.llama import llama2_configs, llama3_configs, Transformer\r\n File \"/apps/torchtitan/torchtitan/models/llama/__init__.py\", line 10, in <module>\r\n from torchtitan.models.llama.model import ModelArgs, Transformer\r\n File \"/apps/torchtitan/torchtitan/models/llama/model.py\", line 17, in <module>\r\n from torchtitan.models.norms import create_norm\r\n File \"/apps/torchtitan/torchtitan/models/norms.py\", line 17, in <module>\r\n from torch.distributed._tensor import Partial, Replicate, Shard\r\nImportError: cannot import name 'Partial' from 'torch.distributed._tensor' (/apps/torchtitan/torchtitan/lib/python3.10/site-packages/torch/distributed/_tensor/__init__.py)\r\n`", "url": "https://github.com/pytorch/torchtitan/issues/412", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-19T17:45:48Z", "updated_at": "2024-07-12T16:06:10Z", "user": "viai957" }, { "repo": "pytorch/TensorRT", "number": 2940, "title": "\u2753 [Question] Is there any plan to support bfloat16 compile", "body": "## What you have already tried\r\n\r\nThe nvidia tensorrt has already support the `bf16` precision after tensorrt>=9.2:\r\n\r\n- https://github.com/NVIDIA/TensorRT/issues/1883\r\n- https://github.com/AmusementClub/vs-mlrt/issues/64\r\n\r\nHowever, the latest torch_tensorrt (`torch_tensorrt==2.3.0 w/ tensorrt==10.0.1`) has not support this.\r\n\r\nIs there any plan to support bfloat16 in future verisons? 
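In the meantime I am falling back to float16 for the compiled part (a sketch of the workaround only, reusing the same `model`, `bs`, `seq`, `dim` as in the snippet below; float16 is of course not numerically equivalent to bf16):

```python
# Workaround sketch: compile with fp16, which torch_tensorrt already accepts,
# and keep bf16 only for the parts of the pipeline that stay in eager PyTorch.
model_fp16 = model.half().eval()
trt_fp16 = torch_tensorrt.compile(
    module=torch.jit.script(model_fp16),
    inputs=[torch_tensorrt.Input(shape=(bs, seq, dim), dtype=torch.half)],
    enabled_precisions={torch.half, torch.float32},
)
```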
The bf16 is very popular in the LLM inference.\r\n\r\n```python\r\ntrt_model = torch_tensorrt.compile(\r\n module=torch.jit.script(model),\r\n inputs=[torch_tensorrt.Input(shape=(bs, seq, dim), dtype=torch.bfloat16)],\r\n enabled_precisions={torch.int8, torch.bfloat16, torch.float32},\r\n calibrator=calibrator,\r\n device={\r\n \"device_type\": torch_tensorrt.DeviceType.GPU,\r\n \"gpu_id\": 0,\r\n \"dla_core\": 0,\r\n \"allow_gpu_fallback\": True,\r\n \"disable_tf32\": True,\r\n },\r\n)\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/data01/home/zhanglei.me/workspace/tensorrt_example/example_int8.py\", line 38, in <module>\r\n trt_model = torch_tensorrt.compile(\r\n File \"/usr/local/lib/python3.10/dist-packages/torch_tensorrt/_compile.py\", line 208, in compile\r\n compiled_ts_module: torch.jit.ScriptModule = torchscript_compile(\r\n File \"/usr/local/lib/python3.10/dist-packages/torch_tensorrt/ts/_compiler.py\", line 151, in compile\r\n compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))\r\n File \"/usr/local/lib/python3.10/dist-packages/torch_tensorrt/ts/_compile_spec.py\", line 208, in _parse_compile_spec\r\n dtype=i.dtype.to(_C.dtype),\r\n File \"/usr/local/lib/python3.10/dist-packages/torch_tensorrt/_enums.py\", line 305, in to\r\n raise TypeError(\r\nTypeError: Provided an unsupported data type as an input data type (support: bool, int32, long, half, float), got: dtype.bf16\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.3.1\r\n - CPU Architecture: x86\r\n - OS (e.g., Linux): linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip3 install torch_tensorrt==2.3.0 tensorrt==10.0.1\r\n - Build command you used (if compiling from source): no\r\n - Are you using local sources or building from archives: no\r\n - Python version: 3.10\r\n - CUDA version: 12.2\r\n - GPU models and configuration: Nvidia A100\r\n - Any other relevant information:", "url": "https://github.com/pytorch/TensorRT/issues/2940", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-19T06:05:30Z", "updated_at": "2024-06-25T04:39:59Z", "user": "leeeizhang" }, { "repo": "pytorch/serve", "number": 3195, "title": "How to send a torch array via request", "body": "I want to send a torch (cuda) array via python request to the inference API. 
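What I am trying right now looks roughly like this (the model name and port are just my local setup, and the tensor is serialized with `torch.save` for lack of a better idea):

```python
import io
import requests
import torch

tensor = torch.randn(1, 3, 224, 224, device="cuda")

# Serialize the tensor to bytes; it has to leave the GPU anyway to travel over HTTP.
buffer = io.BytesIO()
torch.save(tensor.cpu(), buffer)

response = requests.post(
    "http://localhost:8080/predictions/my_model",  # hypothetical model name
    data=buffer.getvalue(),
    headers={"Content-Type": "application/octet-stream"},
)
print(response.status_code)
```

The custom handler would then need to `torch.load` the raw bytes back into a tensor.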
Is that possible?", "url": "https://github.com/pytorch/serve/issues/3195", "state": "closed", "labels": [], "created_at": "2024-06-18T21:05:58Z", "updated_at": "2024-06-19T19:21:59Z", "user": "lschaupp" }, { "repo": "pytorch/tutorials", "number": 2939, "title": "[BUG] - is torch.compile necessary to use user defined triton kernel ", "body": "### Add Link\n\nhttps://pytorch.org/tutorials/recipes/torch_compile_user_defined_triton_kernel_tutorial.html\n\n### Describe the bug\n\ni think we can call triton kernel with torch.compile\r\nwhat we get when call triton kernel through torch.compile?\n\n### Describe your environment\n\nnone\n\ncc @williamwen42 @msaroufim", "url": "https://github.com/pytorch/tutorials/issues/2939", "state": "closed", "labels": [ "bug", "question", "torch.compile" ], "created_at": "2024-06-18T16:12:15Z", "updated_at": "2024-06-18T16:41:31Z", "user": "felixdae" }, { "repo": "pytorch/vision", "number": 8497, "title": "Improve empty import time of torchvision", "body": "### \ud83d\ude80 The feature\n\nWhen importing torchvision, a number of libraries are imported by default for more niche functionality of the library. To improve import time, I would favor delaying those imports to when they are needed\n\n### Motivation, pitch\n\nIn my case, it is the av library in particular that contributes to the import time: \r\n<img width=\"2087\" alt=\"image\" src=\"https://github.com/pytorch/vision/assets/2241296/2af05ab0-f97c-44bd-b7f2-fd5111f747d7\">\r\n\r\n(this assumes that torch, dynamo and onnx are already imported). \r\n\r\nThe import of `av` can easily be avoided as it is not needed by default. \n\n### Alternatives\n\n_No response_\n\n### Additional context\n\nI checked the code and I found this code here: \r\n\r\n```\r\ntry:\r\n import av\r\n\r\n av.logging.set_level(av.logging.ERROR)\r\n if not hasattr(av.video.frame.VideoFrame, \"pict_type\"):\r\n av = ImportError(\r\n \"\"\"\\\r\nYour version of PyAV is too old for the necessary video operations in torchvision.\r\nIf you are on Python 3.5, you will have to build from source (the conda-forge\r\npackages are not up-to-date). See\r\nhttps://github.com/mikeboers/PyAV#installation for instructions on how to\r\ninstall PyAV on your system.\r\n\"\"\"\r\n )\r\nexcept ImportError:\r\n av = ImportError(\r\n \"\"\"\\\r\nPyAV is not installed, and is necessary for the video operations in torchvision.\r\nSee https://github.com/mikeboers/PyAV#installation for instructions on how to\r\ninstall PyAV on your system.\r\n\"\"\"\r\n )\r\n```\r\n\r\nThe `pict_type` got added somewhere in the 0.5 range (released around 2020), 6.0 followed shortly. So I would suggest to change this test to not import av but the use `importlib` to check the version which would make this go away. This applies both to `torchvision/io/video_reader.py` as well as `torchvision/io/video.py`. I also wonder whether the logging call is still required given so much has changed since this code was written. ", "url": "https://github.com/pytorch/vision/issues/8497", "state": "open", "labels": [], "created_at": "2024-06-18T09:24:43Z", "updated_at": "2024-07-29T12:02:13Z", "comments": 3, "user": "bschindler" }, { "repo": "pytorch/torchtitan", "number": 409, "title": "DataLoader state is empty for different ranks ? ", "body": "Thanks for your amazing work ! \r\n\r\nWe have been testing the llama3_8b model on slimpajama dataset. The training seem to be fine based on loss curves. 
\r\n\r\nHowever, upon resuming the model from a previous checkpoint, we see the following warnings:\r\n\r\n```\r\n16: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 16, expected key dp_rank_16.\r\n28: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 28, expected key dp_rank_28.\r\n 5: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 5, expected key dp_rank_5.\r\n20: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 20, expected key dp_rank_20.\r\n27: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 27, expected key dp_rank_27.\r\n 2: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 2, expected key dp_rank_2.\r\n19: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 19, expected key dp_rank_19.\r\n30: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 30, expected key dp_rank_30.\r\n23: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 23, expected key dp_rank_23.\r\n21: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 21, expected key dp_rank_21.\r\n17: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 17, expected key dp_rank_17.\r\n18: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 18, expected key dp_rank_18.\r\n 1: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 1, expected key dp_rank_1.\r\n26: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 26, expected key dp_rank_26.\r\n31: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 31, expected key dp_rank_31.\r\n12: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 12, expected key dp_rank_12.\r\n10: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 10, expected key dp_rank_10.\r\n11: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 11, expected key dp_rank_11.\r\n14: 2024-06-17 01:22:16,616 - root - WARNING - DataLoader state is empty for dp rank 14, expected key dp_rank_14.\r\n15: 2024-06-17 01:22:16,616 - root - WARNING - DataLoader state is empty for dp rank 15, expected key dp_rank_15.\r\n13: 2024-06-17 01:22:16,616 - root - WARNING - DataLoader state is empty for dp rank 13, expected key dp_rank_13.\r\n29: 2024-06-17 01:22:16,616 - root - WARNING - DataLoader state is empty for dp rank 29, expected key dp_rank_29.\r\n 7: 2024-06-17 01:22:16,617 - root - WARNING - DataLoader state is empty for dp rank 7, expected key dp_rank_7.\r\n 8: 2024-06-17 01:22:16,617 - root - WARNING - DataLoader state is empty for dp rank 8, expected key dp_rank_8.\r\n 4: 2024-06-17 01:22:16,617 - root - WARNING - DataLoader state is empty for dp rank 4, expected key dp_rank_4.\r\n 3: 2024-06-17 01:22:16,618 - root - WARNING - DataLoader state is empty for dp rank 3, expected key dp_rank_3.\r\n 9: 2024-06-17 01:22:16,618 - root - WARNING - DataLoader state is empty for dp rank 9, expected key dp_rank_9.\r\n 6: 2024-06-17 01:22:16,619 - root - WARNING - DataLoader state is empty for dp rank 6, expected key dp_rank_6.\r\n22: 2024-06-17 01:22:16,619 - root - WARNING - DataLoader state is empty for dp rank 22, expected key dp_rank_22.\r\n24: 2024-06-17 01:22:16,619 - root - WARNING - DataLoader state 
is empty for dp rank 24, expected key dp_rank_24.\r\n25: 2024-06-17 01:22:16,619 - root - WARNING - DataLoader state is empty for dp rank 25, expected key dp_rank_25.\r\n```\r\n\r\nWhat can be the reason for DataLoader state being empty when loading the model ? \r\n\r\nAlso noting that checkpoints are loaded properly. ", "url": "https://github.com/pytorch/torchtitan/issues/409", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-17T17:46:42Z", "updated_at": "2024-11-22T00:00:55Z", "user": "ahatamiz" }, { "repo": "pytorch/pytorch", "number": 128698, "title": "ONNX docs missing info about how to remove custom domains", "body": "### \ud83d\udcda The doc issue\n\nIn the docs about exporting to onnx [here](https://pytorch.org/tutorials/beginner/onnx/export_simple_model_to_onnx_tutorial.html?highlight=torch%20onnx%20dynamo_export) there is not a mention of how to remove the functions. The use of aten operators defined as functions creates a problem when converting to tensorrt. When visualizing with netron the functions are composed of simpler official ai.onnx operators which have support for tensorrt but not the custom exported aten operators. There should be a way to save the models without using functions and custom operators and just export the raw operators even if that means more repetitions, but it would make models exportable to tensorrt.\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @svekars @brycebortree", "url": "https://github.com/pytorch/pytorch/issues/128698", "state": "closed", "labels": [ "module: onnx", "module: docs", "triaged" ], "created_at": "2024-06-14T13:01:35Z", "updated_at": "2025-09-07T22:35:57Z", "user": "Jerry-Master" }, { "repo": "pytorch/torchtitan", "number": 399, "title": "How to use nsys?", "body": "Is there a recommended way to use nsys / nsight? 
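Concretely, what I have in mind is scoping the capture from inside the training loop, something like this (a sketch only, assuming the process is launched under `nsys` with the CUDA-profiler-API capture range):

```python
import torch

def training_steps(model, optimizer, loader, start=10, stop=12):
    for step, batch in enumerate(loader):
        if step == start:
            torch.cuda.profiler.start()  # picked up by nsys when using --capture-range=cudaProfilerApi
        with torch.autograd.profiler.emit_nvtx(enabled=start <= step <= stop):
            loss = model(batch).sum()
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        if step == stop:
            torch.cuda.profiler.stop()
```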
I know there's a profiling hook for using the Pytorch profiler, but I'm wondering how to use nsys instead.\r\n\r\nCan I use these APIs:\r\n```\r\nwith torch.autograd.profiler.emit_nvtx():\r\n profiler.start()\r\n y = x.view(1, -1)\r\n z = x.to(memory_format=torch.channels_last)\r\n zz = z.reshape(1, -1)\r\n profiler.stop()\r\n``` \r\n\r\nFurthermore, I'm not sure which of the below I'm supposed to use:\r\n```\r\n import torch.cuda.profiler as profiler\r\n with torch.autograd.profiler.emit_nvtx():\r\n```", "url": "https://github.com/pytorch/torchtitan/issues/399", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-06-13T18:14:52Z", "updated_at": "2024-11-22T00:00:02Z", "user": "vedantroy" }, { "repo": "pytorch/xla", "number": 7255, "title": "[RFC] torch_xla2 dynamo integration", "body": "# Dynamo backend for torchxla2\r\n\r\n## Goal\r\n\r\nHave a dynamo backend backend by torch_xla2.\r\n\r\nThe users should be able to do the following:\r\n\r\n```python\r\nm = model ...\r\nm_compiled = torch.compile(m, backend='torch_xla2_compile') # backend name TBD\r\nresult = m_compiled(*inputs)\r\n```\r\n\r\nThe above should run on TPU will low overhead.\r\n\r\n## Challenge\r\n\r\nUsually the challenge of a dynamo backend is the compiler that\r\ntransforms a fx graph with torch (or Aten) ops to the compiled executable.\r\nHowever, in our case, that piece is solved.\r\n\r\nFor every `call_function` node; we lookup the corresponding implementation of\r\nsaid ATen op in a dictionary for it's corresponding implementation in Jax,\r\nand we just call it.\r\n\r\nThis is illustrated here: https://github.com/pytorch/xla/blob/master/experimental/torch_xla2/torch_xla2/export.py#L23\r\n\r\nNow, the challenge is for dynamo to be able to 1. produce the graph; and 2. n\r\nnot incur any data copies in this process.\r\n\r\n\r\nConsider this following pseudocode:\r\n\r\n```python\r\nclass XLATensor2:\r\n _data: jax.Array \r\n def __torch_dispatch__(...):\r\n # do stuff with _data, get new data\r\n return XLATensor2(new_data)\r\n\r\ndef dynamo_backend(fx, sample):\r\n compiled = compile fx into graph that manipulate jax.Array.\r\n def returned_callable(inputs):\r\n datas = [i._data for i in inputs]\r\n res = compiled(*datas)\r\n return TensorSubclass(res)\r\n return returned_callable\r\n\r\nmodel = torch.compile(model, backend = dynamo_backend)\r\ninputs = a list of TensorSubclass or a list of torch.Tensor?\r\nmodel(*inputs)\r\n```\r\n\r\nWhat would be the type of inputs?\r\nIf inputs are of type `TensorSubclass`, then dynamo\r\nwill attempt to trace through the `__torch_dispatch__` method,\r\nand throws error because it doesn't know what is `_data` and the\r\noperations on it.\r\n\r\nIf `inputs` is of type `torch.Tensor`, then it works: dynamo \r\ncalls the backend, the backend can produce correct result.\r\nBut, `inputs` need to be converted to `TensorSubclass` first inside of\r\nthe backend; which usually means a data copy. This happens everytime \r\nthe compiled backend is executed, therefore not desirable.\r\n\r\n## The Desired behavior\r\n\r\nWhen *tracing* dynamo treats TensorSubclass as if it is a regular tensor\r\nwithout dispatch override; and when executing the compiled callable,\r\nTensorSubclass is passed in as-is. We know that dynamo can do this with \r\nsome tensor subclass, namely `FakeTensor`.\r\n\r\n\r\nLet's list out the possible ways we could accomplish this behavior.\r\n\r\n\r\n# Option 1. 
Have the jax.Array object hold in C++\r\n\r\nRoughly we would have a `Tensor` subclass in C++, this is very\r\nsimilar to the `LazyTensor` subclass that is the current `XLATensor`.\r\nThis tensor can hold it's own states in C++. In our case, that would \r\nbe a `PyObject*` that happens to point to either `jnp.ndarray` or \r\njax's `Traced<ShapedArray>` during jax.jit. We might further result the\r\n`XLA` dispatch key to route the operators to the jax implementation, \r\nemulating what `__torch_dispatch__` does.\r\n\r\nThis way, eager mode will continue to work, and dynamo would work\r\nbecause the Python class is still `torch.Tensor` (not a subclass), and\r\nthere are no Python logic in dispatching so dynamo cannot trace through.\r\n\r\n## Pros:\r\n* Very clear that this will work. \r\n\r\n## Cons:\r\nNow need to deal with C++ builds. In particular, `torch` becomes a source\r\ndependency instead of a pip dependency; meaning, again we need to start\r\nbuilding torch first then build torch_xla2. This might be mitigated if\r\nthat subclass can be upstreamed.\r\n\r\n\r\n# Option 2. Modify dynamo to do the desired behavior\r\n\r\nWe have one instance where a `torch.Tensor` dispatch subclass\r\njust works with dynamo, without dynamo make a fuss when it traces\r\n`__torch_dispatch__`. This is `FakeTensor`. (https://github.com/pytorch/pytorch/pull/100017/files)\r\n\r\nThe idea is to make dynamo trace as-if the inputs are `FakeTensor` and\r\nnot `XLATensor`. and only after the creation of fx graph and backend, dynamo\r\ncalls the compiled callable with `XLATensor`.\r\n\r\nPros:\r\n* Likely pure python changes. \r\n\r\nCons:\r\n* We also need to design a mechanism to represent tensor subclasses that\r\n is desirable for dynamo to trace through, and those is not.\r\n* Likely significant amount of work.\r\n\r\n\r\n# Option 3. Register All the ops as custom_ops\r\n\r\nSo currently dynamo traces `__torch_dispatch__`, and we don't like that\r\nbecause it will find the operations on Jax arrays, and doesn't understand those.\r\n\r\nWhat if we make dynamo **able** to understand what is inside?\r\nThe [Black box python functions](https://docs.google.com/document/d/1ZuCVyMfibExwvtzhd9cfMWk5zXT3Dhy1b3kuvAIkBoU/edit#heading=h.56tggsazyrkh) doc \r\npoints the possibility of registering things that we don't want dynamo\r\nto go into as a custom op. So we could, theoretically do the following:\r\n\r\n1. Register the jax impl of an Aten op as a custom op.\r\n i.e. register `jaten.add` for `aten.add`.\r\n2. For meta kernels, just call the meta kernel of `aten.add`.\r\n3. In `_", "url": "https://github.com/pytorch/xla/issues/7255", "state": "open", "labels": [ "dynamo", "RFC", "torchxla2" ], "created_at": "2024-06-12T17:31:23Z", "updated_at": "2025-11-12T19:14:04Z", "comments": 7, "user": "qihqi" }, { "repo": "pytorch/xla", "number": 7253, "title": "[RFC] PyTorch/XLA eager mode as default", "body": "# Context\r\n\r\n\r\n## Objective\r\n\r\nIn this RFC I will talk about the roadmap to enable eager mode as the default computation mode for PyTorch/XLA users and how to enable graph compilation in this mode.\r\n\r\n\r\n## Background\r\n\r\nPyTorch/XLA has been using tracing mode as the default mode since the project started. All of the torch operation users issued will be accumulated in the background and sent to the XLA for compilation and execution upon a `mark_step` call.\r\n\r\nThe upside of this approach is that users don\u2019t need to change their model code too much. 
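For illustration, the tracing-mode pattern today looks roughly like this (a minimal, self-contained sketch):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(16, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(3):
    data = torch.randn(8, 16, device=device)
    target = torch.randint(0, 2, (8,), device=device)
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(data), target)
    loss.backward()
    optimizer.step()
    xm.mark_step()  # cut the lazily traced graph: compile and execute everything above
```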
As long as the user adds a `mark_step` at the right place everything should just work. However from the user feedback in the last couple years this approach creates too much confusion and frustration for the user. Both PyTorch and JAX took the approach of using eager mode as default and asking users to specify the function that they want to compile. PyTorch/XLA should take the same approach.\r\n\r\n\r\n# Design\r\n\r\n\r\n## Eager mode\r\n\r\nThere is no real eager mode in TPU. However we can fake the eager mode by compiling and executing each torch operation. Such mode already exist as a debug only mode today, it was contributed by @aws-rhsoln 2 year ago in https://github.com/pytorch/xla/pull/3306. The work here is to do a better API level wrapping and make sure this mode work with other features(debug output, SPMD, multiprocess etc). This approach was way too slow a couple years ago due to XRT not being able to execute small executions very efficiently but with PJRT the performance is much better. \r\n\r\nThe whole eager mode still builds on top of the existing Lazy tensor framework, but becomes invisible to the user. A couple things we need to do to accommodate the eager mode are\r\n\r\n1. Increase the compilation cache from 1024 to 2048 since each torch op will also reside in the compilation cache. We also need to recompile every torch op for different input shapes.\r\n2. Increase the max execution we can queue in the PJRT level since now we will execute a lot more small computations.\r\n\r\n\r\n## Compile\r\n\r\nFor the compile part we currently have 2 options, lazy tensor and torch dynamo(torch.compile).\r\n\r\nFor lazy tensor based compile I will add a new API_\r\n\r\n\r\n```\r\ntorch_xla.experimental.compile(fn) -> compiled_fn\r\n```\r\n\r\n\r\nWhich under the hood just enables the tracing mode upon running the function and executes the traced graph before returning. Here is the [implementation](https://github.com/pytorch/xla/pull/7246/files#diff-1e2407471d3328b83dabbeb29cdf3ef468a201d3d4aecac8f4cd46f76751b8c1). For `torch.compile` we can just use the existing API.\r\n\r\n\r\n# Example UX\r\n\r\n\r\n```python\r\nimport torch_xla\r\ntorch_xla.experimental.eager_mode(True)\r\n\r\nClass TrainDecoderOnlyBase():\r\n def __init__():\r\n train_loader = MyLoader()\r\n self.model = DecoderOnlyModel(self.config).to(torch_xla.device())\r\n # if run with dynamo, use\r\n # self.step_fn = torch.compile(self.step_fn, backend=\"openxla\")\r\n self.step_fn = torch_xla.experimental.compile(self.step_fn)\r\n\r\n def step_fn(self, data, target):\r\n self.optimizer.zero_grad()\r\n logits = self.model(data)\r\n loss = self.loss_fn(\r\n logits.view(-1, self.config.vocab_size), target.view(-1))\r\n loss.backward()\r\n self.run_optimizer()\r\n return loss\r\n\r\n def start_training(self):\r\n for step, (data, target) in enumerate(loader):\r\n loss = self.step_fn(data, target)\r\n\r\nif __name__ == '__main__':\r\n base = TrainDecoderOnlyBase()\r\n base.start_training()\r\n```\r\n\r\nNote that two changes user need to make is to enable the eager mode by `torch_xla.experimental.eager_mode(True)` and then compile the step function with `torch_xla.experimental.compile` or `torch.compile`.\r\n\r\nUsers can also choose to run the whole model in eager mode.\r\n\r\n\r\n# Why\r\n\r\nIMO using tracing mode as the default has a couple very significant drawback\r\n\r\n\r\n\r\n1. Users are often confused about when the framework is tracing and when the framework is executing.\r\n2. 
Users don\u2019t know where to add the `mark_step`.\r\n3. Random python code(data preprocessing for example) often generates some small pending execution that gets leaked into the main graph(step function) and causes recompilation. The recompilation of the whole graph is usually very expensive.\r\n4. It is hard to debug when/why recompilation happens.\r\n\r\nBoth JAX and PyTorch took the approach of asking users to explicitly mark the region/function for compilation. This methodology seems well received for users that want compilation mode. I think this proposal will make a much better usability story by\r\n\r\n\r\n1. Allow users to use eager mode to do the initial model development and use compile mode to scale up. This also significantly lowers the bar for a normal pytorch user to onboard PyTorch/XLA.\r\n2. Reduce the number of recompilation generated by non-core model codes, since those will get executed eagerly.\r\n3. Make graph recompilation easier to debug since only the `compiled_fn` should generate graphs.\r\n\r\n\r\n# Benchmark\r\n\r\nI am ", "url": "https://github.com/pytorch/xla/issues/7253", "state": "open", "labels": [ "usability", "RFC", "eager" ], "created_at": "2024-06-12T03:40:12Z", "updated_at": "2025-11-09T19:39:21Z", "comments": 5, "user": "JackCaoG" }, { "repo": "pytorch/executorch", "number": 3939, "title": "How can I use the generated pte file to process my own data and predict the results?", "body": "auto train_loader = torch::data::make_data_loader(\r\n SWaTegLoader(\"/dataset/train.csv\", 100, 10, \"train\"),\r\n batch_size=256,\r\n torch::data::DataLoaderOptions().workers(0).shuffle(true)\r\n);\r\n\r\nIs this correct? Then how do we process the data with the model?\r\n\r\n\r\n for (auto& batch : *train_loader) {\r\n auto input = batch.data.to(device), labels = batch.target.to(device);\r\n auto output = method->execute(input)\r\n\r\nIs it correct to write code in libtorch way?\r\n", "url": "https://github.com/pytorch/executorch/issues/3939", "state": "closed", "labels": [ "need-user-input" ], "created_at": "2024-06-11T22:22:13Z", "updated_at": "2025-02-05T17:44:36Z", "user": "tayloryoung-o" }, { "repo": "pytorch/pytorch", "number": 128414, "title": "How to enable XNNPACK instead of NNPACK/MKLDNN in Windows?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI'm trying to compile PyTorch for Windows on ARM64 device. I've got one workable version, but NNPACK/MKLDNN doesn't work in ARM64 windows. 
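For context, this is how I am checking what my build actually enables, plus the build switches I believe are the relevant ones (please correct me if these are the wrong flags):

```python
import torch

print(torch.__config__.show())                        # full build configuration string
print("XNNPACK enabled:", torch.backends.xnnpack.enabled)
print("MKL-DNN available:", torch.backends.mkldnn.is_available())

# When configuring the build I toggle what I assume are the relevant environment variables:
#   USE_XNNPACK=1  USE_NNPACK=0  USE_MKLDNN=0
```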
May I know how to enable XNNPACK as the default 'PACK' to improve the performance?\r\nThanks in advance!\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @malfet @snadampal", "url": "https://github.com/pytorch/pytorch/issues/128414", "state": "open", "labels": [ "module: windows", "triaged", "module: xnnpack", "module: arm" ], "created_at": "2024-06-11T12:53:01Z", "updated_at": "2024-09-04T10:33:25Z", "user": "zhanweiw" }, { "repo": "pytorch/data", "number": 1271, "title": "Returning tensor instead of dict for state_dict causes failure", "body": "### \ud83d\udc1b Describe the bug\n\n```\r\nclass TensorStateDataset(torch.utils.data.IterableDataset, Stateful, Iterator):\r\n def __init__(self, length):\r\n self.length = length\r\n self.i = 0\r\n\r\n def __iter__(self):\r\n return self\r\n \r\n def __next__(self):\r\n if self.i >= self.length:\r\n raise StopIteration\r\n self.i += 1\r\n return self.i\r\n\r\n def state_dict(self):\r\n return torch.rand(2, 2)\r\n\r\n def load_state_dict(self, state_dict):\r\n\t\tpass\r\n\r\n\r\nclass TestSimple(TestCase):\r\n def test(self):\r\n dataset = TensorStateDataset(100)\r\n dl = StatefulDataLoader(\r\n dataset=dataset,\r\n num_workers=1,\r\n )\r\n it = iter(dl)\r\n for _ in range(30):\r\n next(it)\r\n self.assertTrue(False)\r\n```\r\n\r\nRunning this, I hit an error as follows:\r\n\r\n```\r\n\r\nself = <torch._utils.ExceptionWrapper object at 0x7f921c5fde10>\r\n\r\n def reraise(self):\r\n r\"\"\"Reraises the wrapped exception in the current thread\"\"\"\r\n # Format a message such as: \"Caught ValueError in DataLoader worker\r\n # process 2. Original Traceback:\", followed by the traceback.\r\n msg = f\"Caught {self.exc_type.__name__} {self.where}.\\nOriginal {self.exc_msg}\"\r\n if self.exc_type == KeyError:\r\n # KeyError calls repr() on its argument (usually a dict key). This\r\n # makes stack traces unreadable. 
It will not be changed in Python\r\n # (https://bugs.python.org/issue2651), so we work around it.\r\n msg = KeyErrorMessage(msg)\r\n elif getattr(self.exc_type, \"message\", None):\r\n # Some exceptions have first argument as non-str but explicitly\r\n # have message field\r\n raise self.exc_type(message=msg)\r\n try:\r\n exception = self.exc_type(msg)\r\n except TypeError:\r\n # If the exception takes multiple arguments, don't try to\r\n # instantiate since we don't know how to\r\n raise RuntimeError(msg) from None\r\n> raise exception\r\nE RuntimeError: Caught RuntimeError in DataLoader worker process 0.\r\nE Original Traceback (most recent call last):\r\nE File \"/home/gokulg/torchdata/data/torchdata/stateful_dataloader/worker.py\", line 233, in _worker_loop\r\nE delta_state_dict = incremental_worker_state.generate_delta(state_dict)\r\nE File \"/home/gokulg/torchdata/data/torchdata/stateful_dataloader/incremental_state.py\", line 142, in generate_delta\r\nE if iter_state := fetcher_state.get(_DATASET_ITER_STATE, None):\r\nE RuntimeError: Boolean value of Tensor with more than one value is ambiguous\r\nE\r\nE\r\nE To execute this test, run the following from the base repo dir:\r\nE python test/stateful_dataloader/test_state_dict.py -k test2\r\nE\r\nE This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0\r\n\r\n../../.conda/envs/basetorch/lib/python3.10/site-packages/torch/_utils.py:722: RuntimeError\r\n```\r\n\r\nIf an integer is returned or say dict (with value as tensor) is returned, there is no error. \n\n### Versions\n\nLatest git commit 82918dd", "url": "https://github.com/meta-pytorch/data/issues/1271", "state": "closed", "labels": [ "bug", "stateful_dataloader" ], "created_at": "2024-06-10T23:49:43Z", "updated_at": "2024-06-13T19:16:27Z", "comments": 2, "user": "gokulavasan" }, { "repo": "pytorch/tutorials", "number": 2926, "title": "\ud83d\udca1 [REQUEST] - New recipe tutorial on calculating layer output dimensions", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\n\nThis tutorial will help users understand how to transition from convolutional and pooling layers to linear layers in their models.\r\n\r\nLearning objectives:\r\n- How to manually calculate the output dimensions after applying a convolution or pooling layer\r\n- How to print the shape of internal tensors for inspecting dimensionality changes in a model\r\n- How to use the ``torchinfo`` package to show output dimensions for all layers in a model\r\n\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\nI created this draft (https://github.com/pytorch/tutorials/pull/2923) as a part of the PyTorch Docathon H1 2024 effort. 
I did not realize new tutorials weren't being accepted as part of the sprint and was asked to fill out an issue and convert the PR to a draft.", "url": "https://github.com/pytorch/tutorials/issues/2926", "state": "closed", "labels": [], "created_at": "2024-06-10T23:01:44Z", "updated_at": "2025-04-16T20:08:34Z", "comments": 2, "user": "loganthomas" }, { "repo": "pytorch/tutorials", "number": 2925, "title": "\ud83d\udca1 [REQUEST] - New recipe tutorial on implementing a Keras progress bar", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\r\n\r\nThis tutorial will help users to understand better how to implement a Keras progress bar in PyTorch.\r\n- How to implement with a traditional train/test loop\r\n- How to implement with a train loop with validation data\r\n\r\n### Existing tutorials on this topic\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\nI created this draft (https://github.com/pytorch/tutorials/pull/2921) as a part of the PyTorch Docathon H1 2024 effort. I did not realize new tutorials weren't being accepted as part of the sprint and was asked to fill out an issue and convert the PR to a draft.", "url": "https://github.com/pytorch/tutorials/issues/2925", "state": "closed", "labels": [], "created_at": "2024-06-10T22:59:38Z", "updated_at": "2025-04-16T20:08:41Z", "comments": 0, "user": "loganthomas" }, { "repo": "pytorch/tutorials", "number": 2924, "title": "\ud83d\udca1 [REQUEST] - New recipe tutorial on accessing model parameters", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\r\n\r\nThis tutorial will help begginers understand how to access and make sense of model parameters, collect trainable parameters, and use `torchinfo.summary()`. \r\n\r\nLearning objectives:\r\n- How to inspect a model's parameters using ``.parameters()`` and ``.named_parameters()``\r\n- How to collect the trainable parameters of a model\r\n- How to use the ``torchinfo`` package (formerly ``torch-summary``) to print a model summary\r\n\r\n### Existing tutorials on this topic\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\nI created this draft (https://github.com/pytorch/tutorials/pull/2914) as a part of the PyTorch Docathon H1 2024 effort. 
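To give a flavour of what the recipe would cover, a rough sketch of the first two learning objectives:

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))

# Inspect parameters by name, shape, and trainability
for name, param in model.named_parameters():
    print(name, tuple(param.shape), param.requires_grad)

# Collect only the trainable parameters and count them
trainable = [p for p in model.parameters() if p.requires_grad]
print("trainable parameter count:", sum(p.numel() for p in trainable))
```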
I did not realize new tutorials weren't being accepted as part of the sprint and was asked to fill out an issue and convert the PR to a draft.", "url": "https://github.com/pytorch/tutorials/issues/2924", "state": "open", "labels": [], "created_at": "2024-06-10T22:56:58Z", "updated_at": "2024-06-10T23:01:48Z", "comments": 0, "user": "loganthomas" }, { "repo": "pytorch/xla", "number": 7232, "title": "How to convert hlo.pb to hlo text?", "body": "## \u2753 Questions and Help\r\n\r\n### How to convert hlo.pb to hlo_text in torch xla eco system?\r\n\r\nIn JAX we can do the following:\r\n```python\r\nfrom jax.lib.xla_bridge import xla_client\r\n\r\nfname = \"model.hlo.pb\"\r\nwith open(fname, mode=\"rb\") as f:\r\n comp = xla_client.XlaComputation(f.read())\r\n\r\nprint(comp.as_hlo_text())\r\n```\r\n\r\nResult:\r\n\r\n```c\r\nHloModule Test, entry_computation_layout={(f32[5]{0})->f32[5]{0}}\r\n\r\n%test_add_one_func.0 (x.1: f32[]) -> f32[] {\r\n %x.1 = f32[] parameter(0)\r\n %y.2 = f32[] constant(1)\r\n ROOT %add.0 = f32[] add(f32[] %x.1, f32[] %y.2)\r\n}\r\n\r\nENTRY %main (x: f32[5]) -> f32[5] {\r\n %x = f32[5]{0} parameter(0)\r\n ROOT %bar = f32[5]{0} map(f32[5]{0} %x), dimensions={0}, to_apply=%test_add_one_func.0\r\n}\r\n```", "url": "https://github.com/pytorch/xla/issues/7232", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-10T20:50:31Z", "updated_at": "2025-06-05T01:49:49Z", "user": "apivovarov" }, { "repo": "pytorch/xla", "number": 7203, "title": "[RFC] PR Cherrypicking Process After a Release Branch Cut", "body": "## \ud83d\ude80 Feature\r\n\r\nIn this RFC, we propose the policy aiming to guide the decision-making process for determining whether Pull Requests (PRs) should be cherry-picked onto a release branch after the release branch has been cut. The goal is to maintain the stability and predictability of releases while addressing critical issues and incorporating essential improvements.\r\n\r\n## Motivation\r\n\r\nCherry-picking pull requests (PRs) onto a release branch can introduce additional overhead and goes against established best practices. While cherry-picks are sometimes unavoidable, we can mitigate their necessity through well-defined policies. This proposal outlines a framework for making informed decisions about when and how to cherry-pick changes.\r\n\r\n## Proposed Policies:\r\n\r\nThe following outlines the specific scenarios under which cherry-picking pull requests (PRs) onto a release branch will be considered acceptable after the official release branch cut.\r\n\r\n- The PR is for __severe/P0__ bug fixing purposes\r\n- The PR is for improving __unforeseen__ code stability or security issues\r\n- The PR has __significant__ impact on usability improvements\r\n- The PR is related to a planned release feature __urgent fix__\r\n- The PR only updates documentation, not changing any code\r\n- The PR is for improving release infrastructure\r\n", "url": "https://github.com/pytorch/xla/issues/7203", "state": "open", "labels": [ "RFC" ], "created_at": "2024-06-05T22:19:07Z", "updated_at": "2025-09-11T23:04:41Z", "comments": 2, "user": "lsy323" }, { "repo": "pytorch/xla", "number": 7196, "title": "Distributed spmd training with multiple compilations", "body": "## \u2753 Questions and Help\r\nWhen starting gpu spmd training with `torchrun`, why does it need to be compiled once per machine? Although the resulting graph is the same. 
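What I have tried so far is pointing every host at a shared persistent compilation cache (a sketch; I am assuming `initialize_cache` is the intended API for this, please correct me if not):

```python
import torch_xla.runtime as xr

# Shared path (e.g. an NFS mount) so that, in theory, only the first host pays the compilation cost
xr.initialize_cache("/shared/xla_compile_cache", readonly=False)
```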
Is there any way to avoid it", "url": "https://github.com/pytorch/xla/issues/7196", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-05T08:46:55Z", "updated_at": "2025-04-07T13:32:17Z", "user": "mars1248" }, { "repo": "pytorch/torchchat", "number": 857, "title": "[Feature Request]: Continuous batching", "body": "Does torchchat plan to support asynchronous requests and continuous batching?\r\n\r\n\r\nTo get higher tokens/second by making efficient use of compute, continuous batching is a common strategy that is used.\r\n\r\nWe could specify the `batch_size` `n` as a parameter and `torchchat` behind the scene would send `n` number of prompts with varying lengths asynchronously \r\n\r\n```\r\npython3 torchchat.py generate llama3 --prompt \"write me a story about a boy and his bear\" --batch_size 8\r\n```\r\n", "url": "https://github.com/pytorch/torchchat/issues/857", "state": "closed", "labels": [], "created_at": "2024-06-05T02:22:36Z", "updated_at": "2024-06-14T09:21:53Z", "comments": 1, "user": "agunapal" }, { "repo": "pytorch/xla", "number": 7191, "title": "How do I know which pytorch parameter corresponds to which parameter in hlo ir", "body": "## \u2753 Questions and Help\r\n\r\nI am dumping the optimized HLO IR and designing a new backend. There are some parameters and the corresponding shapes of them in the IR file. But I don't know which parameter is which module in the defined PyTorch model. Is there a way to get the mapping details of the model's input(weights and inputs) and the parameter in the HLO IR?\r\n\r\nThanks!", "url": "https://github.com/pytorch/xla/issues/7191", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-04T18:32:56Z", "updated_at": "2025-04-07T13:33:10Z", "user": "yao-jz" }, { "repo": "pytorch/xla", "number": 7189, "title": "Add example for training small LLM", "body": "## \ud83d\udcda Documentation\r\n\r\nCreate an example on how to train a small LLM. \r\n\r\nAdd it to the examples directory here: \r\nhttps://github.com/pytorch/xla/tree/master/examples\r\n", "url": "https://github.com/pytorch/xla/issues/7189", "state": "open", "labels": [ "docathon-h1-2024", "advanced" ], "created_at": "2024-06-04T16:42:54Z", "updated_at": "2024-06-19T01:14:21Z", "comments": 4, "user": "alchemicduncan" }, { "repo": "pytorch/xla", "number": 7185, "title": "Try running inference on an ARM CPU", "body": "## \ud83d\udcda Documentation\r\n\r\nInstall the CPU PJRT plugin from the instructions here: \r\nhttps://github.com/pytorch/xla/blob/master/plugins/cpu/README.md \r\n\r\nNext try getting a model to run on a ARM CPU, if it works, create a tutorial on how to get it running.\r\n", "url": "https://github.com/pytorch/xla/issues/7185", "state": "open", "labels": [ "docathon-h1-2024", "advanced" ], "created_at": "2024-06-04T16:40:13Z", "updated_at": "2024-06-17T17:59:07Z", "comments": 4, "user": "alchemicduncan" }, { "repo": "pytorch/xla", "number": 7183, "title": "Create a distributed and single device example", "body": "## \ud83d\udcda Documentation\r\n\r\nSelect a model of your own to train. Then create an example of both running it on a single device, and running it on a distributed device of your choice. 
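One possible skeleton for the distributed half, using multiprocessing (a sketch only; the model and data are placeholders):

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    device = xm.xla_device()
    model = torch.nn.Linear(32, 10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(5):
        data = torch.randn(16, 32, device=device)        # placeholder batch
        target = torch.randint(0, 10, (16,), device=device)
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(data), target)
        loss.backward()
        xm.optimizer_step(optimizer)  # all-reduces gradients across devices before stepping
        xm.mark_step()

if __name__ == '__main__':
    xmp.spawn(_mp_fn, args=())
```

The single-device variant is the same loop without `xmp.spawn` and with a plain `optimizer.step()`.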
\r\n\r\nAdd both training examples that you came up with to the examples directory: https://github.com/pytorch/xla/tree/master/examples", "url": "https://github.com/pytorch/xla/issues/7183", "state": "open", "labels": [ "docathon-h1-2024", "advanced" ], "created_at": "2024-06-04T16:38:24Z", "updated_at": "2025-06-08T02:04:27Z", "comments": 1, "user": "alchemicduncan" }, { "repo": "pytorch/xla", "number": 7182, "title": "Try running Resnet example on GPU", "body": "## \ud83d\udcda Documentation\r\n\r\nTry running the Resnet training example on a GPU: https://github.com/pytorch/xla/blob/master/examples/train_resnet_base.py \r\n\r\nIf it works add a section about how to do it to the GPU instructions here: https://github.com/pytorch/xla/blob/master/docs/gpu.md\r\n", "url": "https://github.com/pytorch/xla/issues/7182", "state": "closed", "labels": [ "docathon-h1-2024", "medium" ], "created_at": "2024-06-04T16:37:36Z", "updated_at": "2024-06-11T18:37:09Z", "comments": 1, "user": "alchemicduncan" }, { "repo": "pytorch/xla", "number": 7180, "title": "Adding a new arg to a PyTorch op", "body": "## \u2753 Questions and Help\r\n\r\nI'm trying to add a new (optional) argument to the `cumsum` operator in PyTorch - a boolean arg `full` which prepends a 0 to the beginning of the returned tensor. I'd appreciate some help to figure out how to get XLA to build with this change, and what the update process should look like (considering that the XLA and pytorch repos will be out of sync during the development).\r\n\r\nPR/issue on the PyTorch side:\r\nhttps://github.com/pytorch/pytorch/pull/127675\r\nhttps://github.com/pytorch/pytorch/issues/76191\r\n\r\nThe XLA builds are failing on my PR:\r\nhttps://github.com/pytorch/pytorch/actions/runs/9360674517/job/25766868220\r\n\r\n```\r\n2024-06-04T03:56:39.3106543Z torch_xla/csrc/aten_xla_type.cpp:1147:12: error: no declaration matches 'at::Tensor torch_xla::XLANativeFunctions::cumsum(const at::Tensor&, int64_t, std::optional<c10::ScalarType>)'\r\n2024-06-04T03:56:39.3108366Z 1147 | at::Tensor XLANativeFunctions::cumsum(const at::Tensor& self, int64_t dim,\r\n2024-06-04T03:56:39.3109178Z | ^~~~~~~~~~~~~~~~~~\r\n2024-06-04T03:56:39.3109813Z In file included from torch_xla/csrc/aten_xla_type.cpp:22:\r\n2024-06-04T03:56:39.3111932Z bazel-out/k8-opt/bin/torch_xla/csrc/XLANativeFunctions.h:166:19: note: candidate is: 'static at::Tensor torch_xla::XLANativeFunctions::cumsum(const at::Tensor&, int64_t, std::optional<c10::ScalarType>, bool)'\r\n2024-06-04T03:56:39.3114227Z 166 | static at::Tensor cumsum(const at::Tensor & self, int64_t dim, ::std::optional<at::ScalarType> dtype, bool full);\r\n2024-06-04T03:56:39.3115256Z | ^~~~~~\r\n2024-06-04T03:56:39.3115866Z In file included from torch_xla/csrc/aten_xla_type.cpp:22:\r\n2024-06-04T03:56:39.3117419Z bazel-out/k8-opt/bin/torch_xla/csrc/XLANativeFunctions.h:14:8: note: 'struct torch_xla::XLANativeFunctions' defined here\r\n2024-06-04T03:56:39.3118602Z 14 | struct XLANativeFunctions {\r\n2024-06-04T03:56:39.3119113Z | ^~~~~~~~~~~~~~~~~~\r\n```\r\n\r\nI've tried patching the build on the XLA side: \r\nhttps://github.com/pytorch/xla/compare/master...davidberard98:xla:update-cumsum-args?expand=1\r\n\r\nThis works when combined with my changes on the PyTorch side, but not when combined with the main branch of PyTorch today. 
i.e.:\r\n* trunk pytorch + trunk xla -> builds\r\n* pytorch w/ my patches + xla w/ my patches -> builds\r\n* trunk pytorch + xla w/ my patches -> does not build \r\n\r\nIt seems like the issue is that the definition in `torch_xla/csrc/aten_xla_type.cpp` needs to match the signature in XLANativeFunctions.h (presumably code-genned from native_functions.yaml or similar?)", "url": "https://github.com/pytorch/xla/issues/7180", "state": "closed", "labels": [], "created_at": "2024-06-04T16:35:37Z", "updated_at": "2024-06-10T16:47:49Z", "comments": 0, "user": "davidberard98" }, { "repo": "pytorch/ao", "number": 320, "title": "Saving autoquant quantization plan", "body": "First of all, thank you for the great library! It makes quantization really easy.\r\n\r\nIs it possible to run autoquant once and later applying the same quantization plan again? Or would I need to manually look at logs right now to see what autoquant came up with so I can apply the same quantization later?\r\n\r\n// I see there's `AUTOQUANT_CACHE` that gets used to save the timings, maybe just saving/loading that will do?\r\n// Seems like ^ works!", "url": "https://github.com/pytorch/ao/issues/320", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-04T11:10:41Z", "updated_at": "2024-06-07T10:45:07Z", "user": "RobinKa" }, { "repo": "pytorch/xla", "number": 7177, "title": "Why not register low precision autocast for scaled dot product attention?", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nMultiHeadAttention can not run with auto mixed precision mode.\r\n\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```bash\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch_xla\r\nimport torch_xla.core.xla_model as xm\r\n\r\nxla_device = xm.xla_device()\r\nembed_dim = 1024\r\nnum_heads = 64\r\nmultihead_attn = nn.MultiheadAttention(embed_dim, num_heads)\r\ninput = torch.ones([4,32,1024], dtype=torch.float32).to(xla_device)\r\nattn_mask = torch.ones([32,32], dtype=torch.float32).to(xla_device)\r\nmultihead_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True).to(xla_device)\r\nwith torch.amp.autocast(\"xla\", dtype=torch.float16):\r\n attn_output = multihead_attn(input, input, input, attn_mask=attn_mask, need_weights=False)\r\nxm.mark_step()\r\nprint(attn_output[0].dtype)\r\nprint(attn_output)\r\n```\r\n\r\nRuntimeError: Expected attn_mask dtype to be bool or to match query dtype, but got attn_mask.dtype: float and query.dtype: c10::Half instead.\r\n\r\n## Expected behavior\r\n\r\nMultiHeadAttention module can run successfully and get correct result tensor type.\r\n\r\n## Environment\r\n\r\n - Reproducible on XLA backend [CPU/TPU/CUDA]: CPU\r\n\r\n\r\n## Additional context\r\n\r\nThough I reproduce the bug by CPU, but I believe it will occur with any kind of pjrt device except cuda. I can reproduce it on intel gpu also. To solve this bug, we only need to register low precision autocast for scaled dot product attention and has verified it. I want to ask why we don't register this and does there exist any problem?\r\n", "url": "https://github.com/pytorch/xla/issues/7177", "state": "closed", "labels": [], "created_at": "2024-06-04T06:17:53Z", "updated_at": "2024-06-17T02:58:42Z", "comments": 2, "user": "ghost" }, { "repo": "pytorch/serve", "number": 3172, "title": "Two-way authentication/Mutual SSL in gRPC", "body": "### \ud83d\ude80 The feature\n\nTorchserve currently supports SSL for gRPC but one way authentication. 
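For reference, the plain Python `grpc` API already supports requiring a client certificate, which is what makes the handshake mutual; the sketch below is a generic example under that assumption (file names and the port are placeholders), not TorchServe's actual configuration surface.

```python
from concurrent import futures
import grpc

# Placeholder PEM files: server identity plus the CA used to verify clients.
server_key = open("server.key", "rb").read()
server_cert = open("server.crt", "rb").read()
client_ca = open("client_ca.crt", "rb").read()

creds = grpc.ssl_server_credentials(
    [(server_key, server_cert)],
    root_certificates=client_ca,
    require_client_auth=True,   # rejecting clients without a valid cert = two-way auth
)
server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
server.add_secure_port("[::]:7070", creds)
server.start()
```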
Can we make it two way ?\n\n### Motivation, pitch\n\nMore security\n\n### Alternatives\n\nreverse proxy like nginx is an option i think\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/3172", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-06-03T14:58:07Z", "updated_at": "2024-06-03T17:37:53Z", "comments": 0, "user": "MohamedAliRashad" }, { "repo": "pytorch/tutorials", "number": 2894, "title": "~PyTorch Docathon H1 2024!~ ", "body": "### **PyTorch Docathon H1 2024!**\r\n\r\nHooray! It's this time of the year again and we are excited for you to participate in the PyTorch docathon. We have the following repositories participating:\r\n\r\n- [pytorch/pytorch](https://github.com/pytorch/pytorch) \r\n- [pytorch/tutorials](https://github.com/pytorch/tutorials) \r\n- [pytorch/xla](https://github.com/pytorch/xla)\r\n- [pytorch-labs/torchfix](https://github.com/pytorch-labs/torchfix)\r\n\r\nThe docathon starts on June 4 10 AM PST. Please do not work on tasks until then. We will continue accepting new submissions until 5 PM PST on June 16th.\r\n\r\n#### **Date and location**\r\n\r\n**WHEN:** The docathon starts on June 4 at 10 AM PST. Please do not work on tasks until then. We will continue accepting new submissions until 5 PM PST on June 16th.\r\n**WHERE:** Virtual\r\n**WHAT:** Issues with the docathon-h1-2024 label - will be posted on June 4th.\r\n\r\nWatch our intro video to learn more details about the event.\r\n\r\n### **Can everyone participate?**\r\nWe encourage everyone to consider participating in the docathon but there are a few things we expect from the participants:\r\n\r\n- You must have a GitHub account and know how to use Git and GitHub, how to submit or rebase your PR on the latest main branch, how to fork or clone the repo, how to view errors in the CI and troubleshoot. We reserve the right to reject incorrectly submitted PRs.\r\n- You must be familiar with Python, the basics of Machine Learning, and have at least a basic knowledge of PyTorch. Familiarity with Sphinx, sphinx-gallery, and reStructuredText is a plus.\r\n\r\nBefore you start contributing make sure to read [Linux Foundation Code of Conduct](https://events.linuxfoundation.org/about/code-of-conduct/) as well as the [GitHub Code of Conduct](https://docs.github.com/en/site-policy/github-terms/github-community-code-of-conduct).\r\n\r\n### **What contributions are we looking for?**\r\n\r\nAll issues for this docathon are tagged with the _docathon-h1-2024_ label. Please note that contributions that address other issues won't be counted. We are primarily looking for the following contributions: \r\n\r\n- Docstring fixes\r\n- Documentation bug fixes\r\n- Tutorial fixes and testing\r\n\r\n\r\n**NOTE:** Due to the large number of RSVPs, the tasks are provided on a first come first serve basis \u2014 please don't hoard the tasks!\r\n\r\n### **Difficulty Levels**\r\nThe issues have three levels of difficulty: _easy, medium_, and _advanced_. If this is your first time contributing to PyTorch, we recommend that you start with an issue that is tagged as easy.\r\n\r\n### **How to contribute to tutorials?**\r\n\r\n1. Read [PyTorch Contributor Document](https://github.com/pytorch/tutorials/blob/main/CONTRIBUTING.md?rgh-link-date=2023-05-26T19%3A09%3A32Z) for general guidelines on how the submission process works and overall style and voice.\r\n2. Pick an issue that is labeled as _docathon-h1-2024_.\r\n3. In the issue, add a comment with the text /assigntome. 
If the issue is already assigned, please find another issue to work on. We ask that you assign one issue at a time - we want to give everyone a fair chance to participate. When you are done with one issue and get it approved, you can assign another one to yourself and start working on it.\r\n4. If you are submitting a new tutorial, use [this template](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py?rgh-link-date=2023-05-26T19%3A09%3A32Z).\r\n5. Fork or clone the PyTorch repository to your computer. For simple fixes, like incorrect URLs, you could use the GitHub UI as well.\r\n6. Create a branch and work on the fix.\r\n7. Test your fix by running the single tutorial locally. Don't run the whole build as it takes hours and requires a GPU. You can run one tutorial as a script `python3 <tutorial-name.py> or GALLERY_PATTERN=\"neural_style_transfer_tutorial.py\" make html`\r\n8. After you fix all the issues, you are ready to submit your PR.\r\n\r\n### **Submit Your PR**\r\n\r\n1. Submit your PR referencing the issue you've picked. For example:\r\n![image](https://github.com/sekyondaMeta/testsRepo/assets/127536312/26bfac4a-c694-48d7-a45e-914c2474bdb8)\r\n3. If you have not yet, sign the Contributor License Agreement (CLA) - prompted as a check in the PR. We can't accept any PRs without a signed CLA.\r\n4. Watch for any CI errors and fix as needed - all checks must pass successfully.\r\n5. When the build is finished, you will see a preview link to preview your changes. \r\n6. The reviewers might provide feedback that we expect you to address.\r\n7. When all feedback is addressed and your PR is approved - one of the reviewers will merge your PR.\r\n\r\n\r\n### **Can I partner with someone to work on an issue?**\r\n\r\nUnless you are working on a completely new tutorial from scratch, most of the issues should be possible to address on your own. If you decide to partner with someone, you can find someone to work with on our Slack channel by posting a free-form request to collaborate. 
One individual from the group can submit a PR referring ", "url": "https://github.com/pytorch/tutorials/issues/2894", "state": "closed", "labels": [ "docathon-h1-2024" ], "created_at": "2024-05-31T16:25:09Z", "updated_at": "2024-07-15T18:38:28Z", "comments": 0, "user": "sekyondaMeta" }, { "repo": "pytorch/examples", "number": 1264, "title": "reference of weight initialization for llama2 model", "body": "first of all, thank you for supporting native TP for torch.\r\ni just have been reading your TP tutorial code and found [the initialization detail](https://github.com/pytorch/examples/blob/main/distributed/tensor_parallelism/llama2_model.py#L316-L319) is different from the pytorch default parameterization (kaming init).\r\nis there any reference for depth init ??", "url": "https://github.com/pytorch/examples/issues/1264", "state": "closed", "labels": [], "created_at": "2024-05-31T03:18:46Z", "updated_at": "2024-05-31T04:18:26Z", "comments": 1, "user": "SeunghyunSEO" }, { "repo": "pytorch/examples", "number": 1263, "title": "`local_rank` or `rank` for multi-node FSDP", "body": "I am wondering for multi-node FSDP, does `local_rank` and `rank` have any obvious difference here?\r\nI think I understand that `local_rank` is the rank within a node.\r\n\r\nI see in a few places it looks like `local_rank` is specifically used\r\n\r\nFor example\r\n\r\nhttps://github.com/pytorch/examples/blob/main/distributed/FSDP/T5_training.py#L111\r\n`torch.cuda.set_device(local_rank)`\r\n\r\nand \r\nhttps://github.com/pytorch/examples/blob/main/distributed/FSDP/utils/train_utils.py#L48\r\n`batch[key] = batch[key].to(local_rank)`\r\n\r\nIs there any problem if using `rank` instead?", "url": "https://github.com/pytorch/examples/issues/1263", "state": "open", "labels": [], "created_at": "2024-05-30T19:47:21Z", "updated_at": "2024-05-30T19:47:21Z", "comments": 0, "user": "Emerald01" }, { "repo": "pytorch/xla", "number": 7139, "title": "Setting FrontEnd attributes for CC ops replica groups in the HLO", "body": "## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\nThe metadata of the CC operation needs to have an extra field/key, indicating whether the replica groups are represented directly with all the ids or encoded in some other manner, expanded into actual ids downstream into the stack. These will be lowered as front end attributes of the op so that the compiler/runtime understands if a direct or indirect representation is used.\r\n\r\n## Motivation\r\n\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\nThe replica groups become very long at scale. To represent then concisely, a condensed form of representation is necessary. The basic idea can be thought of as an Iota like operation. With an attribute indicating whether its direct or indirect, the compiler/runtime can infer if the groups represent the actual replica ids. If not, these will be expanded based on the representation coded.\r\n\r\n## Pitch\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\nThe framework will exercise the option of turning on or off the condensed form of replica group representation. 
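To make the direct vs. indirect distinction concrete, the snippet below is a purely hypothetical illustration (not an existing torch_xla API) of expanding an iota-like condensed encoding into the explicit replica-id groups that the compiler/runtime would otherwise receive verbatim.

```python
def expand_replica_groups(num_replicas: int, group_size: int):
    """Expand a condensed (iota-like) encoding into explicit replica-id groups."""
    assert num_replicas % group_size == 0
    return [list(range(start, start + group_size))
            for start in range(0, num_replicas, group_size)]

# Direct form for 8 replicas in groups of 4: [[0, 1, 2, 3], [4, 5, 6, 7]]
print(expand_replica_groups(8, 4))
```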
When the condensed/indirect form is used to represent the replica groups, we would need to have the frontend_attributes={replica_grps=\"indirect\"} set for the CC ops indicating the format of the replica groups to be consumed by compiler/runtime. \r\n\r\n## Alternatives\r\n\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context or screenshots about the feature request here. -->\r\n", "url": "https://github.com/pytorch/xla/issues/7139", "state": "closed", "labels": [ "enhancement", "distributed" ], "created_at": "2024-05-29T12:47:47Z", "updated_at": "2025-04-07T13:55:20Z", "comments": 2, "user": "amithrm" }, { "repo": "pytorch/vision", "number": 8450, "title": "Let `v2.functional.gaussian_blur` backprop through `sigma` parameter", "body": "the v1 version of `gaussian_blur` allows to backprop through sigma\r\n\r\n(example taken from https://github.com/pytorch/vision/issues/8401)\r\n```\r\nimport torch\r\nfrom torchvision.transforms.functional import gaussian_blur\r\n\r\ndevice = \"cuda\"\r\ndevice = \"cpu\"\r\nk = 15\r\ns = torch.tensor(0.3 * ((5 - 1) * 0.5 - 1) + 0.8, requires_grad=True, device=device)\r\n\r\nblurred = gaussian_blur(torch.randn(1, 3, 256, 256, device=device), k, [s])\r\nblurred.mean().backward()\r\nprint(s.grad)\r\n```\r\n\r\non CPU and on GPU (after https://github.com/pytorch/vision/pull/8426).\r\n\r\nHowever, the v2 version fails with\r\n\r\n```\r\nRuntimeError: element 0 of tensors does not require grad and does not have a grad_fn\r\n```\r\n\r\n\r\nThe support in v1 is sort of undocumented and probably just works out of luck (sigma is typically expected to be a list of floats rather than a tensor). So while it works, it's not 100% clear to me whether this is a feature we absolutely want. 
I guess we can implement it if it doesn't make the code much more complex or slower.", "url": "https://github.com/pytorch/vision/issues/8450", "state": "closed", "labels": [], "created_at": "2024-05-29T12:45:21Z", "updated_at": "2024-07-29T15:45:14Z", "comments": 3, "user": "NicolasHug" }, { "repo": "pytorch/pytorch", "number": 127320, "title": "[While_loop] How to use layer like `torch.nn.BatchNorm2d` with while_loop?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nHi, I'm trying to support `while_loop` with `DispatchKey.XLA`;\r\n\r\nwhen I try linear and MNIST with torch, code would be dispatched to `DispatchKey.CompositeExplicitAutograd` to use pure python while, and finish;\r\n\r\nmy local example code for MNIST:\r\n```python\r\nimport torch\r\nfrom torch._higher_order_ops.while_loop import while_loop\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nimport torch.optim as optim\r\n\r\ndef test_while_loop_tpu_MNIST_inside_loop(self):\r\n\r\n torch.set_grad_enabled(False)\r\n\r\n n_epochs = 3\r\n batch_size_train = 8\r\n batch_size_test = 10\r\n learning_rate = 0.01\r\n momentum = 0.5\r\n log_interval = 10\r\n random_seed = 1\r\n torch.backends.cudnn.enabled = False\r\n torch.manual_seed(random_seed)\r\n\r\n class MNIST(torch.nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5, stride=1, padding=2)\r\n self.bn1 = torch.nn.BatchNorm2d(10)\r\n self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)\r\n self.bn2 = torch.nn.BatchNorm2d(20)\r\n self.fc1 = torch.nn.Linear(500, 50)\r\n self.fc2 = torch.nn.Linear(50, 10)\r\n\r\n def forward(self, iteri, x, y):\r\n def cond_fn(iteri, x, y):\r\n return iteri > 0\r\n\r\n def body_fn(iteri, x, y):\r\n y = F.relu(F.max_pool2d(self.conv1(x), 2))\r\n y = self.bn1(y) # torch.while_loop's body_fn might be modifying the input!\r\n y = F.relu(F.max_pool2d(self.conv2(y), 2))\r\n y = self.bn2(y)\r\n y = torch.flatten(y, 1)\r\n y = F.relu(self.fc1(y))\r\n y = self.fc2(y)\r\n\r\n return iteri - 1, x.clone(), F.log_softmax(y, dim=1)\r\n\r\n return while_loop(cond_fn, body_fn, (iteri, x, y))\r\n\r\n def forward_compare(self, iteri, x, y):\r\n y = F.relu(F.max_pool2d(self.conv1(x), 2))\r\n y = self.bn1(y) # torch.while_loop's body_fn might be modifying the input!\r\n y = F.relu(F.max_pool2d(self.conv2(y), 2))\r\n y = self.bn2(y)\r\n y = torch.flatten(y, 1)\r\n y = F.relu(self.fc1(y))\r\n y = self.fc2(y)\r\n return iteri - 1, x.clone(), F.log_softmax(y, dim=1)\r\n\r\n mnist = MNIST()\r\n bs=16\r\n l_in_0 = torch.randn(bs, 1, 28, 28, dtype=torch.float32)\r\n l_out = torch.randn(bs, 10, dtype=torch.float32)\r\n iteri = torch.tensor(3, dtype=torch.int64)\r\n _, _, res = mnist(iteri, l_in_0, l_out)\r\n\r\n # === expected result for one iteration to be compared since body_fn defined use the same input in each iteration ===\r\n _, _, expected_res = mnist.forward_compare(iteri, l_in_0, l_out)\r\n self.assertTrue(torch.all(torch.eq(res, expected_res)))\r\n```\r\n\r\n---\r\n\r\nfor code with `DispatchKey.XLA` and `torch.nn.BatchNorm2d`, it would stoped/failed at `[_has_potential_branch_input_mutation](https://github.com/pytorch/pytorch/blob/d6e3e89804c4063827ea21ffcd3d865e5fe365d9/torch/_higher_order_ops/while_loop.py#L250C16-L250C52)` check with ERROR:\r\n```\r\ntorch._higher_order_ops.utils.UnsupportedAliasMutationException: torch.while_loop's body_fn might be modifying the input!\r\n```\r\n\r\ndo we have example for model with layer like `torch.nn.BatchNorm2d` which 
`_has_potential_branch_input_mutation` is true, and without using pure while loop?\r\n\r\nmy local code with `DispatchKey.XLA` and `torch.nn.BatchNorm2d`:\r\n```\r\nimport torch\r\nimport torch_xla\r\nimport torch_xla.experimental.fori_loop\r\nfrom torch_xla.experimental.fori_loop import fori_loop\r\nfrom torch._higher_order_ops.while_loop import while_loop\r\nimport torch_xla.core.xla_model as xm\r\nimport torch_xla.core.xla_builder as xb\r\nimport torch_xla.utils.utils as xu\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nimport torch.optim as optim\r\n\r\ndef test_while_loop_tpu_MNIST_inside_loop_without_BN(self):\r\n xm.mark_step()\r\n device = xm.xla_device()\r\n torch.set_grad_enabled(False)\r\n\r\n n_epochs = 3\r\n batch_size_train = 8\r\n batch_size_test = 10\r\n learning_rate = 0.01\r\n momentum = 0.5\r\n log_interval = 10\r\n random_seed = 1\r\n torch.backends.cudnn.enabled = False\r\n torch.manual_seed(random_seed)\r\n\r\n class MNIST(torch.nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5, stride=1, padding=2)\r\n self.bn1 = torch.nn.BatchNorm2d(10)\r\n self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)\r\n self.bn2 = torch.nn.BatchNorm2d(20)\r\n self.fc1 = torch.nn.Linear(500, 50)\r\n self.fc2 = torch.nn.Linear(50, 10)\r\n\r\n def forward(self, iteri, x, y):\r\n def cond_fn(iteri, x, y):\r\n return iteri > 0\r\n\r\n def body_fn(iteri, x, y):\r\n # y = self.bn1(F.relu(F.max_pool2d(self.conv1(x), 2)))\r\n # y = self.bn2(F.relu(F.max_pool2d(self.conv2(y), 2)))\r\n\r\n y = F.relu(F.max_pool2d(self.conv1(x), 2))\r\n y = self.bn1(y) # torch.while_loop's body_fn might be modifying the input!\r\n y = F.relu(F.max_pool2d(self.conv2(y", "url": "https://github.com/pytorch/pytorch/issues/127320", "state": "closed", "labels": [ "triaged", "module: xla", "oncall: pt2", "module: higher order operators", "module: pt2-dispatcher" ], "created_at": "2024-05-28T18:37:15Z", "updated_at": "2024-05-29T22:42:57Z", "user": "ManfeiBai" }, { "repo": "pytorch/pytorch", "number": 127075, "title": "What is the processing principle when the complex64 input tensor contains nan or inf for addition?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\n>>> import torch\r\n>>> a = torch.tensor(complex(3, float('nan')))\r\n>>> torch.add(a,a)\r\ntensor(nan+nanj)\r\n\r\nThe rule for adding complex numbers is to add the real and imaginary parts separately.\r\nIn the above example, why is the real part nan instead of 4?\r\nHow to deal with nan/inf in the output when complex tensor addition contains nan or inf? 
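One plausible, unconfirmed explanation for the result above: `torch.add(self, other)` is defined as `self + alpha * other` with `alpha = 1`, and if the kernel applies the full complex-multiply formula for `alpha * other`, the NaN imaginary part leaks into the real component (`1*3 - 0*nan = nan`), whereas plain componentwise addition would keep the real part finite. A small illustration of the arithmetic only (this is an assumption about the kernel, not a reading of the ATen source):

```python
a = complex(3.0, float("nan"))
alpha = complex(1.0, 0.0)

# Componentwise addition: real parts stay independent of the NaN imaginary part.
componentwise = complex(a.real + a.real, a.imag + a.imag)            # (6+nanj)

# Scaling by alpha with the full complex-multiply formula first: re = 1*3 - 0*nan = nan.
scaled = complex(alpha.real * a.real - alpha.imag * a.imag,
                 alpha.real * a.imag + alpha.imag * a.real)
formula_based = complex(a.real + scaled.real, a.imag + scaled.imag)  # (nan+nanj)

print(componentwise, formula_based)
```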
Which codes in which directory should I refer to?\r\nThank you!\r\n\r\n### Versions\r\n\r\n'2.0.0+cpu'\r\nThe results of the cpu / cuda versions of torch2.3 are the same\r\n\r\n\r\ncc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @amjames", "url": "https://github.com/pytorch/pytorch/issues/127075", "state": "open", "labels": [ "triaged", "module: complex" ], "created_at": "2024-05-24T09:55:35Z", "updated_at": "2024-05-27T03:59:52Z", "user": "liying-1997" }, { "repo": "pytorch/torchchat", "number": 847, "title": "Figure out how to leverage kernels in torchao", "body": "For quantized linear a lot of the kernels will be living in torchao: https://github.com/pytorch/ao/tree/main/torchao/csrc\r\n\r\nWe need to figure out how to use these kernels in torchchat/executorch.\r\n", "url": "https://github.com/pytorch/torchchat/issues/847", "state": "closed", "labels": [], "created_at": "2024-05-23T19:04:48Z", "updated_at": "2024-07-21T21:53:58Z", "user": "larryliu0820" }, { "repo": "pytorch/xla", "number": 7103, "title": "Why does my 3-layer linear graph need to output two Transposes?", "body": "## \u2753 Questions and Help\r\ntorchxla is the latest version\r\nthis is my code\uff1a\r\n```\r\nimport torch\r\nimport torch_xla\r\nimport torch_xla.runtime as xr\r\nimport torch_xla.core.xla_model as xm\r\nimport torch_xla.experimental.xla_sharding as xs\r\nfrom torch_xla.experimental.xla_sharding import Mesh\r\nfrom torch_xla.amp import autocast, GradScaler\r\nimport numpy as np\r\nimport torch.optim as optim\r\nimport torch_xla.debug.profiler as xp\r\nimport time\r\nimport os\r\n# Setup profiler env var\r\nos.environ['XLA_HLO_DEBUG'] = '1'\r\n\r\nt1 = torch.randn(1600, 12800, device='cpu')\r\n\r\nxt1 = t1.to(xm.xla_device())\r\nclass MyModel(nn.Module):\r\n def __init__(self):\r\n self.linear1 = torch.nn.Linear(12800, 9600)\r\n self.linear2 = torch.nn.Linear(9600, 1280)\r\n self.linear3 = torch.nn.Linear(1280, 128)\r\n def forward(self, xt1):\r\n output = self.linear1(xt1)\r\n output1 = self.linear2(output)\r\n output2 = self.linear3(output1)\r\n return output2\r\nmy_model = MyModel().to(xm.xla_device())\r\nans = my_model(xt1)\r\nxm.mark_step()\r\n```\r\nIn the hlo graph that was dumped, you can see that there are two transpose tensors in the output field\uff1a\r\n```\r\nHloModule SyncTensorsGraph.30, entry_computation_layout={(f32[9600]{0}, f32[9600,12800]{1,0}, f32[1600,12800]{1,0}, f32[1280,9600]{1,0}, f32[1280]{0}, /*index=5*/f32[128,1280]{1,0}, f32[128]{0})->(f32[1600,9600]{1,0}, f32[9600,1280]{1,0}, f32[1600,1280]{1,0}, f32[1280,128]{1,0}, f32[1600,128]{1,0})}, replica_count=8\r\n\r\nENTRY SyncTensorsGraph.30 {\r\n p2.4 = f32[1600,12800]{1,0} parameter(2), metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n p1.2 = f32[9600,12800]{1,0} parameter(1), metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n transpose.3 = f32[12800,9600]{0,1} transpose(p1.2), dimensions={1,0}, metadata={op_type=\"aten__permute\" op_name=\"aten__permute\"}\r\n dot.5 = f32[1600,9600]{1,0} dot(p2.4, transpose.3), lhs_contracting_dims={1}, rhs_contracting_dims={0}, metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n p0.1 = f32[9600]{0} parameter(0), metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n reshape.6 = f32[1,9600]{1,0} reshape(p0.1), metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n broadcast.7 = f32[1,9600]{1,0} broadcast(reshape.6), dimensions={0,1}, metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n 
reshape.8 = f32[9600]{0} reshape(broadcast.7), metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n broadcast.9 = f32[1600,9600]{1,0} broadcast(reshape.8), dimensions={1}, metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n add.10 = f32[1600,9600]{1,0} add(dot.5, broadcast.9), metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n p3.11 = f32[1280,9600]{1,0} parameter(3), metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n transpose.12 = f32[9600,1280]{0,1} transpose(p3.11), dimensions={1,0}, metadata={op_type=\"aten__permute\" op_name=\"aten__permute\"}\r\n dot.14 = f32[1600,1280]{1,0} dot(add.10, transpose.12), lhs_contracting_dims={1}, rhs_contracting_dims={0}, metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n p4.13 = f32[1280]{0} parameter(4), metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n reshape.15 = f32[1,1280]{1,0} reshape(p4.13), metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n broadcast.16 = f32[1,1280]{1,0} broadcast(reshape.15), dimensions={0,1}, metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n reshape.17 = f32[1280]{0} reshape(broadcast.16), metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n broadcast.18 = f32[1600,1280]{1,0} broadcast(reshape.17), dimensions={1}, metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n add.19 = f32[1600,1280]{1,0} add(dot.14, broadcast.18), metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n p5.20 = f32[128,1280]{1,0} parameter(5), metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n transpose.21 = f32[1280,128]{0,1} transpose(p5.20), dimensions={1,0}, metadata={op_type=\"aten__permute\" op_name=\"aten__permute\"}\r\n dot.23 = f32[1600,128]{1,0} dot(add.19, transpose.21), lhs_contracting_dims={1}, rhs_contracting_dims={0}, metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n p6.22 = f32[128]{0} parameter(6), metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n reshape.24 = f32[1,128]{1,0} reshape(p6.22), metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n broadcast.25 = f32[1,128]{1,0} broadcast(reshape.24), dimensions={0,1}, metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n reshape.26 = f32[128]{0} reshape(broadcast.25), metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n broadcast.27 = f32[1600,128]{1,0} broadcast(reshape.26), dimensions={1}, metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n add.28 = f32[1600,128]{1,0} add(dot.23, broadcast.27), metadata={op_type=\"aten__addmm\" op_name=\"aten__addmm\"}\r\n ROOT tuple.29 = (f32[1600,9600]{1,0}, f32[9600,1280]{0,1}, f32[1600,1280]{1,0}, f32[1280,128]{0,1}, f32[", "url": "https://github.com/pytorch/xla/issues/7103", "state": "closed", "labels": [ "question" ], "created_at": "2024-05-23T08:54:02Z", "updated_at": "2025-04-07T13:59:14Z", "user": "mars1248" }, { "repo": "pytorch/xla", "number": 7102, "title": "Problem with mesh shape in HybridMesh on TPU", "body": "## \u2753 Questions and Help\r\nI recived error when try create sqmd mesh on kaggle notebook when flow [Huggingface optimum-tpu](https://github.com/huggingface/optimum-tpu/blob/695ee84d657d9ed2761fcf481685afad0e849a90/examples/language-modeling/run_clm.py#L484)\r\n\r\n```\r\nimport os\r\nimport numpy as np\r\n\r\nimport torch_xla\r\nimport torch_xla.core.xla_model as xm\r\nimport torch_xla.distributed.xla_multiprocessing as xmp\r\nfrom torch_xla.distributed.fsdp import checkpoint_module\r\nfrom 
torch_xla.distributed.fsdp.utils import apply_xla_patch_to_nn_linear\r\nimport torch_xla.distributed.parallel_loader as pl\r\nimport torch_xla.core.xla_env_vars as xenv\r\nimport torch_xla.debug.metrics as met\r\nimport torch_xla.distributed.spmd.xla_sharding as xs\r\nfrom torch_xla.distributed.spmd.xla_sharding import Mesh, HybridMesh\r\nfrom torch_xla.distributed.spmd.xla_sharded_tensor import XLAShardedTensor\r\nimport torch_xla.runtime as xr\r\nxr.use_spmd()\r\n\r\nos.environ['USE_TORCH'] = 'True'\r\nos.environ[\"PJRT_DEVICE\"] = \"TPU\"\r\nos.environ['TPU_NUM_DEVICES'] = '8'\r\nos.environ[xenv.TPU_VISIBLE_CHIPS] = '0,1,2,3'\r\nos.environ[xenv.TPU_PROCESS_BOUNDS] = '1,1,1'\r\nnum_devices = xr.global_runtime_device_count() # 8\r\nmodel_axis = 1\r\nassert xr.device_type() == 'TPU', \"Only TPU is supported\"\r\n# dcn_axis = model_args.spmd_dcn_parallelism # 1\r\ndcn_axis = 1\r\ndata_axis = num_devices // model_axis // dcn_axis\r\n# mesh data setup\r\nici_mesh_shape = (1, data_axis, model_axis)\r\ndcn_mesh_shape = (dcn_axis, 1, 1)\r\naxis_names=('dcn', 'data', 'model')\r\nprint('ici', ici_mesh_shape)\r\nprint('dcn', dcn_mesh_shape)\r\n# Note that we do not pass the spmd_mesh to the model because it is not JSON-serializable.\r\nspmd_mesh = HybridMesh(ici_mesh_shape=ici_mesh_shape, dcn_mesh_shape=dcn_mesh_shape, axis_names=axis_names)\r\n```\r\n\r\nfull error:\r\n\r\n```\r\nici (1, 8, 1)\r\ndcn (1, 1, 1)\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\nCell In[28], line 41\r\n 39 print('dcn', dcn_mesh_shape)\r\n 40 # Note that we do not pass the spmd_mesh to the model because it is not JSON-serializable.\r\n---> 41 spmd_mesh = HybridMesh(ici_mesh_shape=ici_mesh_shape, dcn_mesh_shape=dcn_mesh_shape, axis_names=axis_names)\r\n\r\nFile /usr/local/lib/python3.10/site-packages/torch_xla/distributed/spmd/xla_sharding.py:188, in HybridMesh.__init__(self, ici_mesh_shape, dcn_mesh_shape, axis_names)\r\n 185 mesh = self._create_hybrid_device_mesh(self.ici_mesh_shape,\r\n 186 self.dcn_mesh_shape)\r\n 187 else:\r\n--> 188 mesh = self._create_device_mesh(self.ici_mesh_shape)\r\n 189 device_ids = mesh.flatten()\r\n 190 super().__init__(device_ids, mesh_shape, axis_names)\r\n\r\nFile /usr/local/lib/python3.10/site-packages/torch_xla/distributed/spmd/xla_sharding.py:323, in HybridMesh._create_device_mesh(self, mesh_shape, devices)\r\n 319 raise ValueError(\r\n 320 f'Number of devices {len(devices)} must equal the product '\r\n 321 f'of mesh_shape {mesh_shape}')\r\n 322 physical_mesh = self._get_physical_tpu_mesh(devices)\r\n--> 323 device_mesh, assignment = self._create_device_mesh_for_nd_torus(\r\n 324 physical_mesh, mesh_shape)\r\n 325 return device_mesh\r\n\r\nFile /usr/local/lib/python3.10/site-packages/torch_xla/distributed/spmd/xla_sharding.py:286, in HybridMesh._create_device_mesh_for_nd_torus(self, physical_mesh, mesh_shape)\r\n 282 else:\r\n 283 # If the num_axes for loop did not break, i.e. none of the candidates work\r\n 284 # goto here with this while-else construct.\r\n 285 if logical_axis_size > 1:\r\n--> 286 raise NotImplementedError(\r\n 287 'Failed to find assignment for logical_axis_index'\r\n 288 f' {logical_axis_index} of size {logical_axis_size} with remaining'\r\n 289 f' assignable mesh {assignable_physical_mesh}. The size of each'\r\n 290 ' axis in your logical mesh must be equal to the product of'\r\n 291 ' some subset of the physical mesh axis sizes. 
E.g logical mesh (4,'\r\n 292 ' 16) is compatible with physical mesh 4x4x4 since 4=4 and 16=4x4.'\r\n 293 )\r\n 294 # Flatten the assignment\r\n 295 transpose: List[int] = []\r\n\r\nNotImplementedError: Failed to find assignment for logical_axis_index 1 of size 8 with remaining assignable mesh [2, 2, 0]. The size of each axis in your logical mesh must be equal to the product of some subset of the physical mesh axis sizes. E.g logical mesh (4, 16) is compatible with physical mesh 4x4x4 since 4=4 and 16=4x4.\r\n```\r\n\r\nTPUv3-8 of kaggle have 8 cores(2x4) so I don't know why i get error. What problem? Thanks for your help!", "url": "https://github.com/pytorch/xla/issues/7102", "state": "closed", "labels": [ "question", "distributed", "xla:tpu" ], "created_at": "2024-05-23T06:39:44Z", "updated_at": "2025-04-17T13:33:19Z", "user": "hiwamk" }, { "repo": "pytorch/vision", "number": 8437, "title": "Add mobilenetv4 support and pretrained models?", "body": "### \ud83d\ude80 The feature\n\nGoogle has published the mobilenetv4 model. When will pytorch support it and open the pre-trained model?\n\n### Motivation, pitch\n\nI very much hope to use the latest lightweight backbone\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/8437", "state": "closed", "labels": [], "created_at": "2024-05-22T06:16:00Z", "updated_at": "2024-06-14T02:01:20Z", "comments": 5, "user": "LiYufengzz" }, { "repo": "pytorch/audio", "number": 3797, "title": "RTSP with StreamReader", "body": "Does torchaudio supports RTSP streams? I've been using with RTMP but when running RTSP streams is always crashes, mainly reporting that \"threads\" argument passed to FFMPEG is not supported.\r\n\r\nUsing FFMPEG 6.0\r\n\r\n![image](https://github.com/pytorch/audio/assets/16081608/ebcb5642-9ae3-4997-b85c-dfc155ba8673)\r\n", "url": "https://github.com/pytorch/audio/issues/3797", "state": "closed", "labels": [], "created_at": "2024-05-21T14:55:21Z", "updated_at": "2024-05-21T15:59:40Z", "comments": 0, "user": "pedromoraesh" }, { "repo": "pytorch/torchchat", "number": 837, "title": "Cannot build mobile android app in unit test - due to licensing question in build process?", "body": "https://github.com/pytorch/torchchat/actions/runs/9161687849/job/25187114502?pr=831\r\n\r\nJanuary 16, 2019\r\n---------------------------------------\r\nAccept? (y/N): Skipping following packages as the license is not accepted:\r\nGoogle APIs Intel x86_64 Atom System Image\r\nThe following packages can not be installed since their licenses or those of the packages they depend on were not accepted:\r\n system-images;android-34;google_apis;x86_64\r\n[=======================================] 100% Computing updates... \r\n\r\n+ avdmanager list avd\r\n+ grep -q torchchat\r\n+ avdmanager create avd --name torchchat --package 'system-images;android-34;google_apis;x86_64'\r\nLoading local repository... \r\n[========= ] 25% Loading local repository... \r\n[========= ] 25% Fetch remote repository... \r\n[=======================================] 100% Fetch remote repository... \r\nError: Package path is not valid. 
Valid system image paths are:\r\nnull", "url": "https://github.com/pytorch/torchchat/issues/837", "state": "closed", "labels": [], "created_at": "2024-05-20T17:01:29Z", "updated_at": "2024-08-20T18:26:20Z", "comments": 0, "user": "mikekgfb" }, { "repo": "pytorch/audio", "number": 3796, "title": "How to use my finetuned version of wave2vec2 for forced alignment as shown in example/", "body": "### \ud83d\udc1b Describe the bug\n\nExample script i am following, it used default pretrained model, where as. i want to use my own finetuned model.\r\n\r\nhttps://pytorch.org/audio/main/generated/torchaudio.pipelines.Wav2Vec2FABundle.html#torchaudio.pipelines.Wav2Vec2FABundle\n\n### Versions\n\n[pip3] mypy-extensions==1.0.0\r\n[pip3] numpy==1.24.4\r\n[pip3] onnx==1.15.0\r\n[pip3] onnxruntime==1.16.3\r\n[pip3] torch==2.2.2\r\n[pip3] torchaudio==2.2.2\r\n[pip3] torchvision==0.15.2\r\n[conda] numpy 1.24.4 pypi_0 pypi\r\n[conda] torch 2.2.2 pypi_0 pypi\r\n[conda] torchaudio 2.2.2 pypi_0 pypi\r\n[conda] torchvision 0.15.2 pypi_0 pypi\r\n", "url": "https://github.com/pytorch/audio/issues/3796", "state": "open", "labels": [], "created_at": "2024-05-19T19:13:25Z", "updated_at": "2024-05-19T19:13:25Z", "user": "omerarshad" }, { "repo": "pytorch/xla", "number": 7070, "title": "Cannot Import _XLAC", "body": "## \u2753 Questions and Help\r\nWhen I want to import torch_xla,the error occurs\r\n```shell\r\n>>> import torch_xla\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/code/pytorch/torch-xla/torch_xla/__init__.py\", line 114, in <module>\r\n import _XLAC\r\nImportError: /code/pytorch/torch-xla/_XLAC.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZNK5torch8autograd4Node4nameEv\r\n``` \r\n\r\nAnd I have followed the guide to make sure my torch version is the same as torch_xla\r\n[https://github.com/Lightning-AI/pytorch-lightning/discussions/8320](url)\r\n```shell\r\n>>> pip list | grep torch\r\n[2]+ Stopped python\r\n(torch_xla) root@0c9ffd606fd3:/code/pytorch/torch-xla# pip list | grep torch\r\nrotary-embedding-torch 0.6.0\r\ntorch 2.1.0+cu121 /root/miniconda3/envs/torch_xla/lib/python3.10/site-packages\r\ntorch-xla 2.1.0 /code/pytorch/torch-xla\r\ntorchaudio 2.1.0+cu121\r\ntorchview 0.2.6\r\ntorchvision 0.16.0+cu121\r\ntorchviz 0.0.2\r\n``` \r\nWhat should I do? 
TXS help", "url": "https://github.com/pytorch/xla/issues/7070", "state": "open", "labels": [ "question" ], "created_at": "2024-05-16T07:24:08Z", "updated_at": "2025-04-17T13:38:56Z", "user": "DarkenStar" }, { "repo": "pytorch/executorch", "number": 3620, "title": "how to calculate the vocab_size of new model", "body": "hi, \r\nwhen I tried to introduce the \"Blue LLM\" model and evaluate its ppl, there is a mistake as follow:\r\nTraceback (most recent call last):\r\n File \"/home/ufoe/anaconda3/envs/linchao/bin/lm_eval\", line 8, in <module>\r\n sys.exit(cli_evaluate())\r\n File \"/home/ufoe/linchao/lm-evaluation-harness/lm_eval/__main__.py\", line 341, in cli_evaluate\r\n results = evaluator.simple_evaluate(\r\n File \"/home/ufoe/linchao/lm-evaluation-harness/lm_eval/utils.py\", line 288, in _wrapper\r\n return fn(*args, **kwargs)\r\n File \"/home/ufoe/linchao/lm-evaluation-harness/lm_eval/evaluator.py\", line 180, in simple_evaluate\r\n lm = lm_eval.api.registry.get_model(model).create_from_arg_string(\r\n File \"/home/ufoe/linchao/lm-evaluation-harness/lm_eval/api/model.py\", line 134, in create_from_arg_string\r\n return cls(**args, **args2)\r\n File \"/home/ufoe/linchao/lm-evaluation-harness/lm_eval/models/huggingface.py\", line 203, in __init__\r\n self._create_model(\r\n File \"/home/ufoe/linchao/lm-evaluation-harness/lm_eval/models/huggingface.py\", line 544, in _create_model\r\n self._model = self.AUTO_MODEL_CLASS.from_pretrained(\r\n File \"/home/ufoe/anaconda3/envs/linchao/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py\", line 556, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"/home/ufoe/anaconda3/envs/linchao/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 3502, in from_pretrained\r\n ) = cls._load_pretrained_model(\r\n File \"/home/ufoe/anaconda3/envs/linchao/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 3926, in _load_pretrained_model\r\n new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(\r\n File \"/home/ufoe/anaconda3/envs/linchao/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 805, in _load_state_dict_into_meta_model\r\n set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)\r\n File \"/home/ufoe/anaconda3/envs/linchao/lib/python3.10/site-packages/accelerate/utils/modeling.py\", line 358, in set_module_tensor_to_device\r\n raise ValueError(\r\nValueError: Trying to set a tensor of shape torch.Size([100008, 4096]) in \"weight\" (which has shape torch.Size([100096, 4096])), this look incorrect.\r\n\r\nhow to calculate the vocab_size?\r\nthank you", "url": "https://github.com/pytorch/executorch/issues/3620", "state": "closed", "labels": [], "created_at": "2024-05-15T12:20:13Z", "updated_at": "2024-05-16T05:12:15Z", "user": "l2002924700" }, { "repo": "pytorch/extension-cpp", "number": 93, "title": "[feature request] Instruction on how to setup compile-env for Windows ", "body": "Hi\r\n\r\nI have been working with extensions successfully on Linux (shipping as `whl`)\r\nAn end-user has asked me to provide a windows version of an extension, and I have to admit that it was not as simple as the documentation suggested [here](https://pytorch.org/tutorials/advanced/cpp_extension.html).\r\n\r\nCan you please provide a minimal explanation or example on how to setup the compile env for this repo?\r\nI don't mind if it is based on `setuptools` or `cmake`, as long as it does not include a non-free tool like VS-pro 
[here](https://github.com/mszhanyi/VSIXTorch)\r\n\r\n--------------------------------\r\n\r\nHere are some general frame of work that will help:\r\n- OS: >=Win10\r\n- PyTorch version: >=1.6.0\r\n- How you installed PyTorch (conda, pip, source): both conda and pip\r\n- Python version: >=1.8\r\n- CUDA version: >=10.2\r\n", "url": "https://github.com/pytorch/extension-cpp/issues/93", "state": "open", "labels": [], "created_at": "2024-05-15T06:10:08Z", "updated_at": "2024-05-15T06:10:08Z", "user": "litaws" }, { "repo": "pytorch/xla", "number": 7057, "title": "Experiencing slow recompilation when manually building XLA", "body": "## \u2753 Questions and Help\r\n\r\nHi, I am interested in contributing to XLA community but I encounter a small challenge. After manually building `torch` and `torch_xla` on a CPU-based(CPU: **Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz**) Docker env, I noticed that the `python setup.py develop` process will take about **1 minutes** each time. So could you suggest any Dockerfile configurations or other changes that might speed up the recompilation process? Thanks for your help!", "url": "https://github.com/pytorch/xla/issues/7057", "state": "open", "labels": [ "question" ], "created_at": "2024-05-14T03:28:42Z", "updated_at": "2025-04-17T13:41:57Z", "user": "wenboqian" }, { "repo": "pytorch/xla", "number": 7056, "title": "Export nn.Module.forward with kwargs to StableHLO", "body": "## \u2753 Questions and Help\r\nI see in [_exported_program_to_stablehlo_bundle()](https://github.com/pytorch/xla/blob/6f0b61e5d782913a0fc7743812f2a8e522189111/torch_xla/stablehlo.py#L318) that exporting with kwargs isn't support _**yet**_.\r\n\r\nDo you expect to support this in the near future?\r\n\r\nIf not, is there another way to lower a torch.nn.Module's `forward` method with kwargs to StableHLO?", "url": "https://github.com/pytorch/xla/issues/7056", "state": "closed", "labels": [ "question", "stablehlo" ], "created_at": "2024-05-13T21:21:42Z", "updated_at": "2025-04-17T13:42:55Z", "user": "johnmatter" }, { "repo": "pytorch/torchchat", "number": 784, "title": "Can't use TorchChat with Python-3.9", "body": "Because of https://github.com/pytorch/torchchat/blob/a276b5fdd12d0dd843fd81543ceffb57065354e3/cli.py#L318-L319\r\n\r\nThat was added by https://github.com/pytorch/torchchat/pull/746 with a very descriptive title \"CLI check\"\r\n\r\nIf this is indeed a product requirement, can we specify it somewhere in README.MD (and perhaps have some discussion about it?)", "url": "https://github.com/pytorch/torchchat/issues/784", "state": "closed", "labels": [ "launch blocker" ], "created_at": "2024-05-13T18:50:16Z", "updated_at": "2024-05-13T19:01:22Z", "comments": 2, "user": "malfet" }, { "repo": "pytorch/TensorRT", "number": 2830, "title": "\u2753 [Question] How to specific aten operators must be run by LibTorch in C++?", "body": "## \u2753 Question\r\n\r\nWhen I compile the SwinTransformer model using Torch-TensorRT, an error appears:\r\n```\r\nterminate called after throwing an instance of 'c10::Error'\r\n what(): 0 INTERNAL ASSERT FAILED at \"../torch/csrc/jit/ir/alias_analysis.cpp\":615, please report a bug to PyTorch. We don't have an op for aten::floor_divide but it isn't a special case. Argument types: int, int, \r\n\r\nCandidates:\r\n aten::floor_divide(Tensor self, Tensor other) -> Tensor\r\n aten::floor_divide.Scalar(Tensor self, Scalar other) -> Tensor\r\n aten::floor_divide.out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> Tensor(a!)\r\n aten::floor_divide.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> Tensor(a!)\r\n```\r\n\r\nI checked out this [link](https://github.com/facebookresearch/segment-anything/issues/446), This error is because torch-trt dont support % op.\r\n\r\nFine, I can select to run floor_divide using LibTorch.\r\n```C++\r\ntorchtrt::ts::CompileSpec compile_settings({ input });\r\ncompile_settings.enabled_precisions.insert(build_type);\r\ncompile_settings.workspace_size = _1_GB;\r\ncompile_settings.truncate_long_and_double = true;\r\ncompile_settings.num_avg_timing_iters = 1;\r\ncompile_settings.torch_executed_ops.push_back(\"aten::floor_divide\"); // here\r\ntorchtrt::ts::compile(model, compile_settings)\r\n```\r\n\r\nIt's strange that the setting does not take effect. This error still persists.\r\n\r\nWhat can I do about this mistake? \r\n\r\nFurthermore, How to specific aten operators must be run by LibTorch in C++?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):2.2.1\r\n - CPU Architecture:x86\r\n - OS (e.g., Linux):ubuntu22.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:12.2\r\n - GPU models and configuration:\r\n - Any other relevant information:", "url": "https://github.com/pytorch/TensorRT/issues/2830", "state": "open", "labels": [ "question" ], "created_at": "2024-05-13T10:10:09Z", "updated_at": "2024-05-27T01:40:49Z", "user": "demuxin" }, { "repo": "pytorch/xla", "number": 7049, "title": "Spmd whether expert parallelism is supported\uff1f", "body": "torchxla spmd whether expert parallelism is supported\uff1f\r\nIf it is a moe model, how should it be computed in xla\uff1f\r\n## \u2753 Questions and Help\r\n", "url": "https://github.com/pytorch/xla/issues/7049", "state": "open", "labels": [ "question", "distributed" ], "created_at": "2024-05-13T03:23:20Z", "updated_at": "2025-09-03T20:34:04Z", "user": "mars1248" }, { "repo": "pytorch/torchchat", "number": 776, "title": "[tune/chat integration] component sharing", "body": "We seem to be doing the same rote stuff like manage checkpoints, download them, manager permissions, convert checkpoints and what have you...\r\n\r\nMaybe this might be a good opportunity to reduce our joint workload by pooling some of these functions. It would likely also improve user experience thanks to consistency and because we can invest the save person-months elsewhere.\r\n\r\nThis is still early, and I'm not suggesting doing this at this very moment (or we'll never launch!), but it's something I wanted to raise both for efficiency and consistency.", "url": "https://github.com/pytorch/torchchat/issues/776", "state": "closed", "labels": [], "created_at": "2024-05-13T02:44:08Z", "updated_at": "2024-07-21T21:50:46Z", "comments": 0, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 775, "title": "[INTEGRATION] torchtune integration for e2e workflow with torchchat ", "body": "Hey, I\u2019m working myself thru our documentation and try to make it run in CI. 
That aligns pretty well with the user experience we have in mind where users can just cut & paste commands\u2026\r\n\r\nAlso, we have so many dependences that unless we test at least the instructions for the users nothing works\u2026\r\n\r\nI have a couple of questions:\r\n1 - so you install torchtune and then you assume that you CWD is where? If we assume we\u2019re in torchchat which our users will have been conditioned to be (at least in the first release), are they going to find torchtune? Is that abona fide package\u201d\r\n\r\n2 - you access the config assuming it\u2019s in llama3/8B_lora_single_device \u2014 we don\u2019t have that file\u2026. should we? Can we put it somewhere like ~torchchat/tune/config/llama3 ? Any other things I should be knowing?\r\n\r\n3 - what are you fine tuning on?\r\n\r\n4 - our users may already have downloaded checkpoints? Can they use those? Or are you loading special versions?\r\n\r\n5 - we run tests on-pr for every PR that\u2019s submitted\u2026 which doesn\u2019t work with llama3-8B because of time and cost. Is there anything that would prevent us from running stories15M (or some other very small model), not because it will have great output quality, but it will force resolution of names, finding of all the imports, and produce intelligible (if not great output). Is there anything that would prevent that?\r\n\r\n6 - what other assumptions does your build have @ https://github.com/pytorch/torchchat/blob/main/docs/torchtune.md. Is it up to date?\r\n\r\n7 - can I substitute CPU or MPS, or\u2026. whatever my favorite device is? How much pain should I expect? has anybody done this on a MacBook for example? \r\n\r\n8 - do we need any corpora or other such for finetuning?\r\n\r\n9 - anything else I forgot to ask, but I should have?\r\n\r\nSo, the updates instructions are here => https://github.com/pytorch/torchchat/pull/774\r\n\r\nI pull the instructions out of the markdown source by marking it up, and then have a script run\u2026. \r\n```\r\n python3 scripts/updown.py --file docs/torchtune.md --replace 'llama3:stories15M,-l 3:-l 2,meta-llama/Meta-Llama-3-8B-Instruct:stories15M' --suppress huggingface-cli,HF_TOKEN > ./run-torchtune.sh\r\n```\r\n\r\nThe pattern replacers for on-pr need to be adapted for this example (another reason why I would actually love to use the ownloaded checkpoints\u2026 I have it down for thiose\u2026 but you may have intermediate results and all that should not go in the downloaded files\u2026.\r\n\r\nAlthough we could just do \r\n```\r\ncp -r `python3 torchchat.py where llama3`/* ~/wherever-tune-needs-it\r\n```\r\n\r\nand it would work\r\n\r\nFailures appear pretty benign, just a HF token issue. (And llama3->stories15M substitution not working.\r\n\r\nAre there references to the model name and path in the config that would need to be adjusted?\r\n\r\nThis is the script generated from the markdown instructions\u2026. https://www.internalfb.com/intern/paste/P1360945144/\r\nDo you see any issues with it? This is not a human using it but `bash -x ./tune-script.sh` so it can\u2019t be sorta right and user will figure it out \u2014 it needs to be 100% up to snuff\r\n\r\nThis the error at the moment? Seems benign, like updating download process?\r\n\r\n(base) mikekg@mikekg-mbp torchchat % bash -x ./run-torchtune.sh|& pastry\r\nP1360947478: https://www.internalfb.com/intern/paste/P1360947478/\r\n\r\nHere's what happens in detail in CI. 
https://github.com/pytorch/torchchat/actions/runs/9056119551/job/24878207016?pr=774\r\n(I know, the build bars are TMI lolol)\r\n\r\nHere\u2019s the error message in detail:\r\n```\r\n Ignoring files matching the following patterns: *.safetensors\r\n usage: tune download <repo-id> [OPTIONS]\r\n tune download: error: It looks like you are trying to access a gated repository. Please ensure you have access to the repository and have provided the proper Hugging Face API token using the option `--hf-token` or by running `huggingface-cli login`.You can find your token by visiting https://huggingface.co/settings/tokens\r\n```\r\n\r\nThanks for working with us to build a rock-solid end-to-end story from rune to chat. Looking forward to figuring this out and build an amazing experience for our joint users!", "url": "https://github.com/pytorch/torchchat/issues/775", "state": "closed", "labels": [], "created_at": "2024-05-13T02:35:21Z", "updated_at": "2024-07-21T21:46:30Z", "comments": 1, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 773, "title": "[DOCS] GGUF instructions in docs/ADVANCED-USERS.md", "body": "\r\nthe instructions for GGUF in https://github.com/pytorch/torchchat/blob/main/docs/ADVANCED-USERS.md state:\r\n\r\n> To use the quantize tool, install the GGML tools at ${GGUF} . Then, you can, for example, convert a quantized model to f16 format:\r\n\r\nHow do I do that? Can we put this in the doc, including with a definition of the GGUF environment variable, so when we extract the commands and try to run them we have all the pieces?\r\n\r\nxref: https://github.com/pytorch/torchchat/pull/772", "url": "https://github.com/pytorch/torchchat/issues/773", "state": "closed", "labels": [], "created_at": "2024-05-13T01:26:16Z", "updated_at": "2024-05-20T12:56:45Z", "comments": 1, "user": "mikekgfb" }, { "repo": "pytorch/examples", "number": 1257, "title": "multi-node Tensor Parallel", "body": "Hello, could you add an new example of the tensor parallel + fsdp but using a multi-node setup?\r\nIs it possible to do multi-node tensor parallelization with pytorch 2.3? I am trying to use 2 nodes with 4 GPUs each.\r\n05/12/2024 04:32:52 PM Device Mesh created: device_mesh=DeviceMesh([[0, 1, 2, 3], [4, 5, 6, 7]], mesh_dim_names=('dp', 'tp'))\r\n\r\nWhen I try the actual example on multiple nodes I get the following errors. 
\r\n\r\nThank you.\r\n```\r\n\r\nas07r1b31:3011779:3012101 [0] init.cc:871 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 1b000\r\nas07r1b31:3011783:3012102 [0] init.cc:871 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device 1b000\r\nas07r1b31:3011782:3012104 [3] init.cc:871 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device ad000\r\nas07r1b31:3011786:3012107 [3] init.cc:871 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device ad000\r\nas07r1b31:3011780:3012106 [1] init.cc:871 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 2c000\r\nas07r1b31:3011784:3012108 [1] init.cc:871 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device 2c000\r\nas07r1b31:3011781:3012110 [2] init.cc:871 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 9d000\r\nas07r1b31:3011785:3012111 [2] init.cc:871 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device 9d000\r\n\r\n[rank0]: Traceback (most recent call last):\r\n[rank0]: File \"/gpfs/mn4/AE_tp/tests.py\", line 91, in <module>\r\n[rank0]: _, output = sharded_model(inp)\r\n[rank0]: ^^^^^^^^^^^^^^^^^^\r\n[rank0]: File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1532, in _wrapped_call_impl\r\n[rank0]: return self._call_impl(*args, **kwargs)\r\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n[rank0]: File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1541, in _call_impl\r\n[rank0]: return forward_call(*args, **kwargs)\r\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n[rank0]: File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 843, in forward\r\n[rank0]: args, kwargs = _pre_forward(\r\n[rank0]: ^^^^^^^^^^^^^\r\n[rank0]: File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/_runtime_utils.py\", line 380, in _pre_forward\r\n[rank0]: unshard_fn(state, handle)\r\n[rank0]: File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/_runtime_utils.py\", line 415, in _pre_forward_unshard\r\n[rank0]: _unshard(state, handle, state._unshard_stream, state._pre_unshard_stream)\r\n[rank0]: File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/_runtime_utils.py\", line 299, in _unshard\r\n[rank0]: handle.unshard()\r\n[rank0]: File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/_flat_param.py\", line 1308, in unshard\r\n[rank0]: padded_unsharded_flat_param = self._all_gather_flat_param(unsharded_flat_param)\r\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n[rank0]: File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/_flat_param.py\", line 1399, in _all_gather_flat_param\r\n[rank0]: dist.all_gather_into_tensor(\r\n[rank0]: File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/c10d_logger.py\", line 75, in wrapper\r\n[rank0]: return func(*args, **kwargs)\r\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^\r\n[rank0]: File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py\", line 2948, in all_gather_into_tensor\r\n[rank0]: work = group._allgather_base(output_tensor, input_tensor, opts)\r\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n[rank0]: torch.distributed.DistBackendError: NCCL error in: 
/opt/conda/conda-bld/pytorch_1712608847532/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1970, invalid usage (run with NCCL_DEBUG=WARN for details), NCCL version 2.20.5\r\n[rank0]: ncclInvalidUsage: This usually reflects invalid usage of NCCL library.\r\n[rank0]: Last error:\r\n[rank0]: Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 1b000\r\n[same on other ranks]\r\n\r\nTraceback (most recent call last):\r\n File \"/home/mn4/AE_tp/mdae2.3/bin/torchrun\", line 33, in <module>\r\n sys.exit(load_entry_point('torch==2.3.0', 'console_scripts', 'torchrun')())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 347, in wrapper\r\n return f(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/run.py\", line 879, in main\r\n run(args)\r\n File \"/home/mn4/AE_", "url": "https://github.com/pytorch/examples/issues/1257", "state": "open", "labels": [], "created_at": "2024-05-12T15:19:26Z", "updated_at": "2024-11-05T09:15:28Z", "comments": 1, "user": "PieterZanders" }, { "repo": "pytorch/torchchat", "number": 757, "title": "[LAUNCH DOCS] Add instructions what needs to be installed, and how to README ", "body": "At present, running the instructions in the README will fail for the xcode project. See [#755](https://github.com/pytorch/torchchat/pull/755)\r\n\r\nAt a minimum we should specify what should be installed and what the minimum xcode version (and any other requirements) are?\r\n\r\n\r\nAlso, I would expect this to fail even then, because like this might be GUI based with no fully scriptable set of instructions (plus it's not clear we'd want the script instructions when most devs are more likely going to like to start around with the GUI builder?). So, how can/should we test iOS app build in open source? \r\n\r\nAs a corollary, how do we automate testing of README for correctness? (and maybe the answer is \"it's too involved\", and that's OK if that turns out to be the right answer)\r\n\r\ncc: @byjlw @shoumikhin", "url": "https://github.com/pytorch/torchchat/issues/757", "state": "closed", "labels": [], "created_at": "2024-05-12T04:50:32Z", "updated_at": "2024-07-27T01:53:39Z", "user": "mikekgfb" }, { "repo": "pytorch/executorch", "number": 3585, "title": "How can I use ExecuTorch to deploy a model to a MicroController,such as Infineon TC3xxx ?", "body": "\"ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices including wearables, **embedded devices** and **microcontrollers**\"\r\n\r\nHello,above expression presents in [ExecuTorch doc:](https://pytorch.org/executorch/stable/intro-overview.html)\r\n\r\nI want to know:\r\n\r\nwhat types of MicroController(mainly bare metals) got supported already or will get supported?\r\n\r\nIf wanting to deploy to Infineon TC3xxx microcontroller,is it possible?If yes,any suggestion about how to do it?", "url": "https://github.com/pytorch/executorch/issues/3585", "state": "closed", "labels": [ "module: backend" ], "created_at": "2024-05-11T07:13:57Z", "updated_at": "2025-02-05T17:22:54Z", "user": "AlexLuya" }, { "repo": "pytorch/torchchat", "number": 740, "title": "[FEATURE REQUEST] Could not find... 
Probably missing HF token/login, but if so we might indicate?", "body": "\r\n(base) mikekg@mikekg-mbp torchchat % python3 torchchat.py generate llama3 --device cpu --compile\r\nDownloading meta-llama/Meta-Llama-3-8B-Instruct from HuggingFace...\r\nConverting meta-llama/Meta-Llama-3-8B-Instruct to torchchat format...\r\nknown configs: ['13B', '70B', 'CodeLlama-7b-Python-hf', '34B', 'stories42M', '30B', 'stories110M', '7B', 'stories15M', 'Mistral-7B', 'Meta-Llama-3-8B']\r\nModel config {'block_size': 2048, 'vocab_size': 128256, 'n_layers': 32, 'n_heads': 32, 'dim': 4096, 'hidden_dim': 14336, 'n_local_heads': 8, 'head_dim': 128, 'rope_base': 500000.0, 'norm_eps': 1e-05, 'multiple_of': 1024, 'ffn_dim_multiplier': 1.3, 'use_tiktoken': True, 'max_seq_length': 8192}\r\nTraceback (most recent call last):\r\n File \"/Users/mikekg/m14/torchchat/torchchat.py\", line 143, in <module>\r\n check_args(args, \"generate\")\r\n File \"/Users/mikekg/m14/torchchat/cli.py\", line 39, in check_args\r\n download_and_convert(args.model, args.model_directory, args.hf_token)\r\n File \"/Users/mikekg/m14/torchchat/download.py\", line 91, in download_and_convert\r\n _download_hf_snapshot(model_config, temp_dir, hf_token)\r\n File \"/Users/mikekg/m14/torchchat/download.py\", line 55, in _download_hf_snapshot\r\n convert_hf_checkpoint(\r\n File \"/Users/mikekg/miniconda3/lib/python3.12/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/m14/torchchat/build/convert_hf_checkpoint.py\", line 60, in convert_hf_checkpoint\r\n raise RuntimeError(\r\nRuntimeError: Could not find /Users/mikekg/.torchchat/model-cache/downloads/meta-llama/Meta-Llama-3-8B-Instruct/pytorch_model.bin.index.json or /Users/mikekg/.torchchat/model-cache/downloads/meta-llama/Meta-Llama-3-8B-Instruct/original/consolidated.00.pth plus /Users/mikekg/.torchchat/model-cache/downloads/meta-llama/Meta-Llama-3-8B-Instruct/original/tokenizer.model", "url": "https://github.com/pytorch/torchchat/issues/740", "state": "closed", "labels": [], "created_at": "2024-05-10T22:18:51Z", "updated_at": "2024-07-30T17:22:27Z", "comments": 1, "user": "mikekgfb" }, { "repo": "pytorch/pytorch", "number": 125902, "title": "How to export onnx with fixed shape output ?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\n```\r\nimport torch\r\n\r\nclass TRT_SCA(torch.autograd.Function): \r\n @staticmethod\r\n def forward(ctx,\r\n query,\r\n key,\r\n value,\r\n reference_points,\r\n spatial_shapes,\r\n reference_points_cam,\r\n bev_mask,\r\n level_start_index):\r\n out = torch.randn(1, 1600, 256, dtype=torch.float32) \r\n return out # I just want to assign the out shape is [1, 1600, 256] \r\n\r\n @staticmethod\r\n def symbolic(g, \r\n query,\r\n key,\r\n value,\r\n reference_points,\r\n spatial_shapes,\r\n reference_points_cam,\r\n bev_mask,\r\n level_start_index):\r\n return g.op(\"TRT::SCATT\",\r\n query,\r\n key,\r\n value,\r\n reference_points,\r\n spatial_shapes,\r\n reference_points_cam,\r\n bev_mask,\r\n level_start_index)\r\n \r\ntrt_sca = TRT_SCA.apply\r\n\r\nclass SpatialCrossAttention(torch.nn.Module):\r\n def __init__(self):\r\n super(SpatialCrossAttention, self).__init__()\r\n \r\n def forward(self,\r\n query,\r\n key,\r\n value,\r\n reference_points=None,\r\n spatial_shapes=None,\r\n reference_points_cam=None,\r\n bev_mask=None,\r\n level_start_index=None): \r\n return trt_sca(\r\n query,\r\n key,\r\n value,\r\n reference_points,\r\n spatial_shapes,\r\n 
reference_points_cam,\r\n bev_mask,\r\n level_start_index) \r\n\r\nquery= torch.randn(1, 1600, 256, dtype=torch.float32) \r\nkey= torch.randn(6, 5315, 1, 256, dtype=torch.float32) \r\nvalue= torch.randn(6, 5315, 1, 256, dtype=torch.float32) \r\n\r\nreference_points = torch.randn(1, 4, 1600, 3, dtype=torch.float32) \r\nspatial_shapes= torch.tensor( [[ 40, 100],\r\n [ 20, 50],\r\n [ 10, 25],\r\n [ 5, 13]], dtype=torch.int64)\r\n\r\nreference_points_cam=torch.randn(6, 1, 1600, 4, 2, dtype=torch.float32) \r\nbev_mask=torch.where(torch.randn(6, 1, 1600, 4) > 0.2, 1, 0)\r\nlevel_start_index= torch.tensor([ 0, 4000, 5000, 5250], dtype=torch.int64)\r\n\r\nnn_model = SpatialCrossAttention() \r\n\r\nprint(\"------------------------------------\")\r\n\r\noutput_file = 'sca.onnx' \r\ntorch.onnx.export(\r\n nn_model,\r\n (query,\r\n key,\r\n value,\r\n reference_points,\r\n spatial_shapes,\r\n reference_points_cam,\r\n bev_mask,\r\n level_start_index),\r\n output_file,\r\n export_params=True,\r\n keep_initializers_as_inputs=True,\r\n do_constant_folding=True,\r\n enable_onnx_checker=True, \r\n verbose=True,\r\n opset_version=11,\r\n)\r\nprint(\"export done\")\r\n```\r\n\r\n\r\n### Versions\r\n\r\nonnx 1.15.0 \r\nonnx-graphsurgeon 0.3.21 \r\nonnx-simplifier 0.4.36 \r\nonnxruntime 1.17.1 \r\ntorch 1.10.0+cu113 \r\ntorchaudio 0.10.0+cu113 \r\ntorchvision 0.11.0+cu113 \r\n\r\n### Result \r\n\r\n![image](https://github.com/pytorch/pytorch/assets/38753233/4bda011f-b520-4576-9907-9dd1b73573cf)\r\n", "url": "https://github.com/pytorch/pytorch/issues/125902", "state": "open", "labels": [ "module: onnx", "triaged" ], "created_at": "2024-05-10T05:58:23Z", "updated_at": "2024-05-17T04:35:24Z", "user": "lix19937" }, { "repo": "pytorch/text", "number": 2264, "title": "t5_demo can't retrieve CNNDM from drive.google; how to use local copy?", "body": "## \ud83d\udc1b Bug\r\n\r\n**Describe the bug** A clear and concise description of what the bug is.\r\n\r\nFollowing the [t5_demo](https://pytorch.org/text/stable/tutorials/t5_demo.html), but when it tries to access the CNN data at ` https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ` \r\n\r\n**To Reproduce** Steps to reproduce the behavior:\r\n\r\n1. Get notebook at [t5_demo](https://pytorch.org/text/stable/tutorials/t5_demo.html),\r\n2. Try to run it. It gets as far as `batch = next(iter(cnndm_dataloader))` (https://pytorch.org/text/stable/tutorials/t5_demo.html#generate-summaries) where `cnndm_datapipe = CNNDM(split=\"test\")` (https://pytorch.org/text/stable/tutorials/t5_demo.html#datasets)\r\n\r\n3. Get error like:\r\n\r\n> RuntimeError: Google drive link\r\n> \r\n> https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ&confirm=t\r\n> internal error: headers don't contain content-disposition. This is\r\n> usually caused by using a sharing/viewing link instead of a download\r\n> link. Click 'Download' on the Google Drive page, which should\r\n> redirect you to a download page, and use the link of that page.\r\n> \r\n> This exception is thrown by __iter__ of\r\n> GDriveReaderDataPipe(skip_on_error=False,\r\n> source_datapipe=OnDiskCacheHolderIterDataPipe, timeout=None)\r\n\r\n**Expected behavior** \r\n\r\nLooking at others with similar error messages makes it seem like there is some timeout issue retrieving from drive.google? 
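(My working theory — sketched below, and completely untested — is that if the archives are already on disk under the dataset's `root` directory, the Google Drive reader might be skipped entirely; the exact layout `CNNDM` expects under `root` is a guess on my part.)

```python
from torchtext.datasets import CNNDM

# Untested idea: pre-seed a directory I control with cnn_stories.tgz and
# dailymail_stories.tgz, then point the dataset at it, hoping the on-disk
# cache check short-circuits the Google Drive download.
local_root = "/path/to/local/cnndm-cache"   # placeholder path
cnndm_datapipe = CNNDM(root=local_root, split="test")
```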
So I went and got the `cnn_stories.tgz` and `dailymail_stories.tgz` and unpacked them:\r\n\r\n> .\r\n> \u251c\u2500\u2500 CNNDM\r\n> \u2502\u00a0\u00a0 \u251c\u2500\u2500 cnn\r\n> \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 stories\r\n> \u2502\u00a0\u00a0 \u2514\u2500\u2500 dailymail\r\n> \u2502\u00a0\u00a0 \u2514\u2500\u2500 stories\r\n\r\n**How can I modify the calls retrieve from my local cache?**\r\n\r\n\r\n**Environment**\r\n\r\n> % python collect_env.py \r\n> Collecting environment information...\r\n> PyTorch version: 2.1.0.post100\r\n> Is debug build: False\r\n> CUDA used to build PyTorch: None\r\n> ROCM used to build PyTorch: N/A\r\n> \r\n> OS: macOS 14.4.1 (arm64)\r\n> GCC version: Could not collect\r\n> Clang version: 15.0.0 (clang-1500.1.0.2.5)\r\n> CMake version: Could not collect\r\n> Libc version: N/A\r\n> \r\n> Python version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ] (64-bit runtime)\r\n> Python platform: macOS-14.4.1-arm64-arm-64bit\r\n> Is CUDA available: False\r\n> CUDA runtime version: No CUDA\r\n> CUDA_MODULE_LOADING set to: N/A\r\n> GPU models and configuration: No CUDA\r\n> Nvidia driver version: No CUDA\r\n> cuDNN version: No CUDA\r\n> HIP runtime version: N/A\r\n> MIOpen runtime version: N/A\r\n> Is XNNPACK available: True\r\n> \r\n> CPU:\r\n> Apple M1 Pro\r\n> \r\n> Versions of relevant libraries:\r\n> [pip3] mypy-extensions==1.0.0\r\n> [pip3] numpy==1.26.3\r\n> [pip3] torch==2.1.0.post100\r\n> [pip3] torchaudio==2.1.2\r\n> [pip3] torchdata==0.7.1\r\n> [pip3] torchtext==0.16.1\r\n> [pip3] torchvision==0.16.2\r\n> [conda] captum 0.7.0 0 pytorch\r\n> [conda] numpy 1.26.2 pypi_0 pypi\r\n> [conda] numpy-base 1.26.3 py311hfbfe69c_0 \r\n> [conda] pytorch 2.1.0 gpu_mps_py311hf322ab5_100 \r\n> [conda] torch 2.1.2 pypi_0 pypi\r\n> [conda] torchaudio 2.1.2 pypi_0 pypi\r\n> [conda] torchdata 0.7.1 pypi_0 pypi\r\n> [conda] torchtext 0.16.1 pypi_0 pypi\r\n> [conda] torchvision 0.16.2 pypi_0 pypi\r\n> \r\n> \r\n\r\n**Additional context** Add any other context about the problem here.\r\n", "url": "https://github.com/pytorch/text/issues/2264", "state": "open", "labels": [], "created_at": "2024-05-10T03:55:13Z", "updated_at": "2024-05-10T03:55:13Z", "user": "rbelew" }, { "repo": "pytorch/tutorials", "number": 2861, "title": "Performance Tuning Guide is very out of date", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\r\n\r\nThe first thing you see when you Google PyTorch performance is this. The recipe is well written but it's very much out of data today\r\nhttps://pytorch.org/tutorials/recipes/recipes/tuning_guide.html\r\n\r\nSome concrete things we should fix\r\n1. For fusions we should talk about torch.compile instead of jit.script\r\n2. We should mention overhead reduction with cudagraphs\r\n3. We should talk about the *-fast series as places people can learn more\r\n4. For CPU specific optimization the most important one is launcher core pinning so we should either make that a default or explain the point more\r\n5. Instead of the CPU section we can instead go more into the inductor CPU backend\r\n6. AMP section is fine but maybe expand to quantization\r\n7. DDP section needs to be moved somewhere else with some FSDP performance guide\r\n8. GPU sync section is good\r\n9. 
Mention tensor cores and how to enable them and why they're not enabled by default\r\n\r\ncc @sekyondaMeta @svekars @kit1980 @drisspg who first made me aware of this with an internal note that was important enough to make public\r\n\r\n### Existing tutorials on this topic\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_", "url": "https://github.com/pytorch/tutorials/issues/2861", "state": "closed", "labels": [ "medium", "docathon-h1-2024" ], "created_at": "2024-05-09T16:57:35Z", "updated_at": "2024-06-12T16:11:31Z", "comments": 9, "user": "msaroufim" }, { "repo": "pytorch/xla", "number": 7042, "title": "model.to(xla_device) increases the number of named_parameters", "body": "## \ud83d\udc1b Bug\r\nCopy model to xla device affects the number of model's parameters.\r\n![image](https://github.com/pytorch/xla/assets/5349065/c1d69927-fcb4-4db2-bc94-193c99ede65a)\r\n\r\n## To Reproduce\r\n```bash\r\npython xla/benchmarks/experiment_runner.py --suite-name torchbench --accelerator cuda --dynamo openxla --dynamo None --test train --repeat 30 --iterations-per-run 5 --print-subprocess --no-resume --model-config='{\"model_name\": \"hf_Bart\"}' --experiment-config='{\"accelerator\": \"cuda\", \"xla\": \"PJRT\", \"xla_flags\": null, \"dynamo\": \"openxla\", \"test\": \"train\"}'\r\n```\r\nSteps to reproduce the behavior:\r\n\r\n1. Run the above command\r\n2. insert pdb hook at `xla/benchmarks/benchmark_model.py`\r\n```python\r\n110 def prepare_for_experiment(self, dynamo_compilation_opts):\r\n111 self.device = self.benchmark_experiment.get_device()\r\n112 self.dtype = self.conversion_dtype()\r\n113 \r\n114 if self.dtype is not None:\r\n115 self.module = self.module.to(self.dtype)\r\n116 self.example_inputs = cast_to_dtype(self.example_inputs, self.dtype)\r\n117 \r\n118 import pdb\r\n119 pdb.set_trace()\r\n120 self.module = self.module.to(self.device)\r\n121 self.example_inputs = move_to_device(self.example_inputs, self.device)\r\n122 \r\n123 if self.benchmark_experiment.test == \"eval\":\r\n124 self._prepare_for_eval()\r\n125 elif self.benchmark_experiment.test == \"train\":\r\n126 self._prepare_for_train()\r\n127 else:\r\n128 raise NotImplementedError\r\n129 \r\n130 if self.benchmark_experiment.dynamo:\r\n131 compilation_opts = dynamo_compilation_opts.copy()\r\n132 compilation_opts['backend'] = self.benchmark_experiment.dynamo\r\n133 \r\n134 logger.info(f\"Running torch.compile with opts {compilation_opts}\")\r\n135 self.model_iter_fn = torch.compile(self.model_iter_fn, **compilation_opts)\r\n```\r\n3. print the number of named_parameter of model before the copy to xla device and after the copy like the picture above shows.\r\n```bash\r\n(Pdb) new_model = copy.deepcopy(self.module).to(\"cpu\").to(self.device) \u2502105 self.optimizer = self.optimizer_class(self.module.parameters(), lr=0.01)\r\n(Pdb) len([param for param, value in new_model.named_parameters()]) \u2502106 \r\n262 \u2502107 def conversion_dtype(self):\r\n(Pdb) len([param for param, value in self.module.named_parameters()]) \u2502108 return None\r\n259 \u2502109 \r\n(Pdb) len([param for param, value in self.module.named_buffers()]) \u2502110 def prepare_for_experiment(self, dynamo_compilation_opts):\r\n1 \u2502111 self.device = self.benchmark_experiment.get_device()\r\n(Pdb) len([param for param, value in new_model.named_buffers()]) \u2502112 self.dtype = self.conversion_dtype()\r\n1 \r\n```\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well. 
Or better use the Colab template: https://github.com/pytorch/xla/blob/master/contrib/colab/issue-report.ipynb -->\r\n\r\n## Expected behavior\r\n\r\n`len([param for param, value in new_model.named_parameters()])` is expected to return 259 \r\n## Environment\r\n\r\n - Reproducible on XLA backend [CPU/TPU/CUDA]: CUDA\r\n - torch_xla version:\r\n2.3.0-rc12\r\n", "url": "https://github.com/pytorch/xla/issues/7042", "state": "closed", "labels": [ "question" ], "created_at": "2024-05-09T13:53:03Z", "updated_at": "2025-04-17T13:51:16Z", "user": "shenh10" }, { "repo": "pytorch/xla", "number": 7040, "title": "[torchbench] The official benchmark for performance and accuracy check", "body": "## \u2753 Questions and Help\r\nHi I found two available codebases for testing torchbench with pytorch/xla:\r\n1. The one provided by pytorch official: https://github.com/pytorch/pytorch/tree/main/benchmarks/dynamo\r\n2. Another one provided by pytorch/xla team: https://github.com/pytorch/xla/tree/master/benchmarks\r\n\r\nHowever for the first codebase, it seems the support for dynamo + openxla backend would not trigger xla compilation actually. Is it no longer maintained?\r\n\r\nAnd for the second one, I found it is able to test the performance, but has no way to validate the accuracy comparing to eager mode, while the first benchmark tool is able to do that. Any support for this?\r\n\r\n\r\nLooking forward to your feedback.", "url": "https://github.com/pytorch/xla/issues/7040", "state": "closed", "labels": [ "question", "benchmarking" ], "created_at": "2024-05-09T08:33:21Z", "updated_at": "2025-04-17T13:53:39Z", "user": "shenh10" }, { "repo": "pytorch/tutorials", "number": 2860, "title": "requires_grad=True for an input datapoint?", "body": "https://github.com/pytorch/tutorials/blob/f4ebb4d007792f5bc302affa7b360a9710e4a88b/advanced_source/super_resolution_with_onnxruntime.py#L144\r\n\r\nIt is obscure to me why there is the need to set the flag requires_grad to True for datapoint \"x\", which has no parameters to be learnt.\r\nIs it something required to export the model in onnx?\r\n\r\nThanks.\n\ncc @titaiwangms @xadupre @justinchuby @BowenBao", "url": "https://github.com/pytorch/tutorials/issues/2860", "state": "closed", "labels": [ "question", "onnx" ], "created_at": "2024-05-08T15:25:54Z", "updated_at": "2025-04-16T21:22:11Z", "user": "ggbioing" }, { "repo": "pytorch/TensorRT", "number": 2822, "title": "\u2753 [Question] Model inference is much slower after updating to TensorRT 9.3", "body": "## \u2753 Question\r\n\r\nI have a VIT model for object detection. The model inference speed in the tensort 8.5 environment is 190ms per frame. 
However when I updated to TensorRT 9.3, Inference slowed down to 250ms per frame.\r\n\r\nI acquired the C++ dynamic library by compiling the latest Torch-TensorRT source code.\r\n\r\nWhat might be causing this issue?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - Libtorch Version (e.g., 1.0): 2.2.1\r\n - CPU Architecture: \r\n - OS (e.g., Linux): ubuntu22.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives: Yes\r\n - Python version:\r\n - CUDA version: 12.2\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2822", "state": "open", "labels": [ "question" ], "created_at": "2024-05-08T03:20:18Z", "updated_at": "2025-09-03T20:08:33Z", "user": "demuxin" }, { "repo": "pytorch/expecttest", "number": 18, "title": "How to use it in pytest based testing?", "body": "The readme seems to be written for testcase only.", "url": "https://github.com/pytorch/expecttest/issues/18", "state": "closed", "labels": [], "created_at": "2024-05-07T22:27:37Z", "updated_at": "2024-05-07T23:09:38Z", "user": "youkaichao" }, { "repo": "pytorch/xla", "number": 7033, "title": "constant folding for AvgPool2d", "body": "## \u2753 Questions and Help\r\n\r\nexporting simple `AvgPool2d` using `torch_xla 2.3` results in two different `stablehlo.reduce_window` ops, the second one only takes args as constants. Is there a way to fold it into a constant in `exported_program_to_stablehlo`? @lsy323 @qihqi \r\ne.g. `%4` in the following example.\r\n\r\n```python\r\nimport torch\r\nimport torch.nn as nn\r\nfrom torch_xla.stablehlo import exported_program_to_stablehlo\r\n\r\nm = nn.AvgPool2d(kernel_size=2)\r\ninp_args = (torch.randn(1, 4, 4),)\r\nem = torch.export.export(m, inp_args)\r\nstablehlo_program = exported_program_to_stablehlo(em)\r\nprint(stablehlo_program.get_stablehlo_text())\r\n```\r\n\r\n```cpp\r\nmodule @IrToHlo.26 attributes {mhlo.cross_program_prefetches = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {\r\n func.func @main(%arg0: tensor<1x4x4xf32>) -> tensor<1x2x2xf32> {\r\n %0 = stablehlo.constant dense<1.000000e+00> : tensor<4x4xf32>\r\n %1 = stablehlo.constant dense<0.000000e+00> : tensor<f32>\r\n %2 = stablehlo.reshape %arg0 : (tensor<1x4x4xf32>) -> tensor<1x1x4x4xf32>\r\n %3 = \"stablehlo.reduce_window\"(%2, %1) ({\r\n ^bb0(%arg1: tensor<f32>, %arg2: tensor<f32>):\r\n %8 = stablehlo.add %arg1, %arg2 : tensor<f32>\r\n stablehlo.return %8 : tensor<f32>\r\n }) {base_dilations = array<i64: 1, 1, 1, 1>, padding = dense<0> : tensor<4x2xi64>, window_dilations = array<i64: 1, 1, 1, 1>, window_dimensions = array<i64: 1, 1, 2, 2>, window_strides = array<i64: 1, 1, 2, 2>} : (tensor<1x1x4x4xf32>, tensor<f32>) -> tensor<1x1x2x2xf32>\r\n %4 = \"stablehlo.reduce_window\"(%0, %1) ({\r\n ^bb0(%arg1: tensor<f32>, %arg2: tensor<f32>):\r\n %8 = stablehlo.add %arg1, %arg2 : tensor<f32>\r\n stablehlo.return %8 : tensor<f32>\r\n }) {base_dilations = array<i64: 1, 1>, padding = dense<0> : tensor<2x2xi64>, window_dilations = array<i64: 1, 1>, window_dimensions = array<i64: 2, 2>, window_strides = array<i64: 2, 2>} : (tensor<4x4xf32>, tensor<f32>) -> tensor<2x2xf32>\r\n %5 = stablehlo.reshape %4 : (tensor<2x2xf32>) -> tensor<1x1x2x2xf32>\r\n %6 = stablehlo.divide %3, %5 : tensor<1x1x2x2xf32>\r\n %7 = stablehlo.reshape %6 : 
(tensor<1x1x2x2xf32>) -> tensor<1x2x2xf32>\r\n return %7 : tensor<1x2x2xf32>\r\n }\r\n}\r\n```", "url": "https://github.com/pytorch/xla/issues/7033", "state": "closed", "labels": [ "stablehlo" ], "created_at": "2024-05-07T07:34:11Z", "updated_at": "2024-09-23T21:45:42Z", "comments": 10, "user": "thong3le" }, { "repo": "pytorch/torchchat", "number": 708, "title": "--num-samples xxx does not work for getting multiple prompt responses", "body": "Previously, users could use --num-samples to get reliable benchmarking. With recent updates, --num-samples no longer appears to work. \r\n\r\nhttps://github.com/pytorch/pytorch/pull/125611 shows nice performance gains on gpt-fast, and @helloguo would like to validate on torchchat to ensure this also accelerates our code. Is there another way he can run multiple prompts to avoid cold start effects?\r\n\r\n\r\n```\r\n(py311) mikekg@mikekg-mbp torchchat % python3 torchchat.py generate stories15M --device fast --num-samples 20\r\nUsing device=cpu Apple M1 Max\r\nLoading model...\r\nTime to load model: 0.09 seconds\r\nHello, my name is Pete the mouse. He was a very curious mouse, and he loved to explore. One day, he saw a big, white sign. He had never seen it before, and he was curious to get a closer look.\r\nHe decided to take a look, and he squealed with joy when he reached for the sign. On the sign, there was a big, white, friendly door. He was so excited, he quickly ran over to it and opened the door.\r\nOn the other side of the door, he found a room filled with toys, cars and people. He cheered with joy, and he could not wait to explore.\r\nBut then, something unexpected happened - the door suddenly closed, and Pete was so scared. He tried to push the door open, but it just wouldn't budge. He looked around and spotted a small, white house.\r\nPete pushed the door open, and there he was - a friendly\r\nMax Sequence Length Reached. Ending Conversation.\r\n==========\r\n```", "url": "https://github.com/pytorch/torchchat/issues/708", "state": "closed", "labels": [], "created_at": "2024-05-06T23:45:52Z", "updated_at": "2024-05-12T21:23:06Z", "comments": 1, "user": "mikekgfb" }, { "repo": "pytorch/torchtitan", "number": 312, "title": "Question on Model Init", "body": "I noticed that there are two parts of the implementation that are related to model initialization. \r\n\r\n### Instantiating the model with a meta tensor\r\nhttps://github.com/pytorch/torchtitan/blob/f72a2a0da0bdfc394faaab9b3c0f35d0b6f5be50/train.py#L177-L181\r\n\r\n### Doing explicit model initialization \r\nhttps://github.com/pytorch/torchtitan/blob/f72a2a0da0bdfc394faaab9b3c0f35d0b6f5be50/train.py#L209-L210\r\n\r\nThe issue is that if we do any weight initialization when instantiating the module, it will be ineffective because of the `meta tensor`. \r\nAs a result, we have to do ***all*** initialization explicitly in `model.init_weights()`. \r\n\r\nMy question is: why do we want to instantiate the model with a `meta tensor`? 
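To check my understanding, here is a minimal toy sketch (my own example, not the torchtitan code) of what I think the meta-device pattern implies:

```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        # Under the meta device this writes no real values: meta tensors carry
        # only shape/dtype and no storage, so init done in __init__ is lost.
        nn.init.ones_(self.linear.weight)

    def init_weights(self):
        # Everything must be (re)initialized here, after materialization.
        nn.init.ones_(self.linear.weight)
        nn.init.zeros_(self.linear.bias)

with torch.device("meta"):
    model = Toy()              # records shapes only, allocates nothing

model.to_empty(device="cpu")   # materialize real (uninitialized) storage
model.init_weights()           # only now does explicit init take effect
```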
\r\nIf effencicy is not an issue, can we simply remove the `with torch.device(\"meta\"):`", "url": "https://github.com/pytorch/torchtitan/issues/312", "state": "open", "labels": [ "question" ], "created_at": "2024-05-06T17:35:15Z", "updated_at": "2024-05-13T13:30:51Z", "user": "XinDongol" }, { "repo": "pytorch/torchchat", "number": 692, "title": "[LAUNCH BLOCKER] TorchChat results seems less connected than they could have been", "body": "For example generating text from the same prompt using llama.cpp and TorchChat produces following results:\r\n```\r\nHello, my name is **Marcus**, and I am a 33-year-old software developer from California. I have been using the internet for the past 20 years, and I have seen it evolve into a powerful tool for communication, entertainment, and information. However, I have also seen the darker side of the internet, including cyberbullying, harassment, and the spread of misinformation.\r\n\r\nAs a software developer, I have a unique perspective on the internet and its potential impact on society. I believe that the internet can be a force for good, but it must be used responsibly and ethically. This is why I am passionate about promoting digital citizenship and raising awareness about the importance of online safety and security.\r\n\r\nIn my free time, I enjoy writing, hiking, and playing music. I am also a volunteer firefighter, and I have seen firsthand the impact of the\r\n\r\n```\r\nvs \r\n```\r\nHello, my name is _______________ and I'm here to talk about my experience with ______________ (addiction, trauma, mental health issue, etc.).\r\nI understand that you are here to help me and I appreciate your willingness to listen. It takes a lot of courage to share my story, but I hope that by doing so, it will help me heal and move forward.\r\nCan you tell me more about the support groups you offer? How do they work? What kind of people attend them? Are they confidential?\r\nI'm still not sure if this is the right place for me, but I'm willing to give it a try. Can you tell me more about your program and how it can help me?\r\nI've tried other programs before, but they didn't work for me. What makes your program different?\r\nI'm worried that if I share my story, people will judge me or think less of me. Can you guarantee confidentiality?\r\nThank you for being here for me and supporting me on this journey. I really appreciate it. [end of text]\r\n```\r\n\r\nIt's very subjective, but 2nd text (about person who wants to find more information about metal health/addiction programs, feels more believable/coherent then story about 33 SWE who is also a volunteer firefighter. What it looks like is that by 3rd paragraph TorchChat lost context about two previous ones, which sounds like a context size of stories15M, but not of Llama-2", "url": "https://github.com/pytorch/torchchat/issues/692", "state": "closed", "labels": [ "launch blocker" ], "created_at": "2024-05-06T16:31:38Z", "updated_at": "2024-07-21T22:00:21Z", "comments": 9, "user": "malfet" }, { "repo": "pytorch/TensorRT", "number": 2813, "title": "\u2753 [Question] How to solve this warning: Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled.", "body": "## \u2753 Question\r\n\r\nI used Torch-TensorRT to compile the torchscript model in C++. When compiling or loading torchtrt model, it displays many warnings.\r\n\r\n```\r\nWARNING: [Torch-TensorRT] - Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled. 
For more on the implications of this as well as workarounds, see the linked documentation (https://pytorch.org/TensorRT/user_guide/runtime.html#multi-device-safe-mode)\r\nWARNING: [Torch-TensorRT] - Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled. For more on the implications of this as well as workarounds, see the linked documentation (https://pytorch.org/TensorRT/user_guide/runtime.html#multi-device-safe-mode)\r\nWARNING: [Torch-TensorRT] - Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled. For more on the implications of this as well as workarounds, see the linked documentation (https://pytorch.org/TensorRT/user_guide/runtime.html#multi-device-safe-mode)\r\n```\r\n\r\n## What you have already tried\r\n\r\nI found this [link](https://pytorch.org/TensorRT/user_guide/runtime.html#multi-device-safe-mode) is useful, but it only provides Python API.\r\n\r\nI checked the source code, but I still haven't figured out how to set up MULTI_DEVICE_SAFE_MODE in C++.\r\n\r\nWhat can I do to address this warning?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture: x86\r\n - OS (e.g., Linux): ubuntu18\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): libtorch\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version: 12.2\r\n - GPU models and configuration: 1080Ti\r\n - Any other relevant information:\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2813", "state": "closed", "labels": [ "question" ], "created_at": "2024-05-06T09:39:02Z", "updated_at": "2024-05-21T17:02:12Z", "user": "demuxin" }, { "repo": "pytorch/torchchat", "number": 685, "title": "[PRE-LAUNCH] Test for quantization.md does not work... is attempt to install et when it has already been installed to blame?", "body": "https://github.com/pytorch/torchchat/actions/runs/8961642013/job/24609465486?pr=684\r\n\r\nAs part of the setup for this test, we build and install et. But, et is already installed. Should this pass?\r\nAnd if not, should it? 
Are we condemning everybody who re-runs install_et to fail?\r\n```\r\n -- Detecting CXX compile features - done\r\n -- Downloading FXdiv to /Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/FXdiv-source (define FXDIV_SOURCE_DIR to avoid it)\r\n -- Configuring done (0.1s)\r\n -- Generating done (0.0s)\r\n -- Build files have been written to: /Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/FXdiv-download\r\n [ 11%] Creating directories for 'fxdiv'\r\n [ 22%] Performing download step (git clone) for 'fxdiv'\r\n Cloning into 'FXdiv-source'...\r\n Already on 'master'\r\n Your branch is up to date with 'origin/master'.\r\n [ 33%] Performing update step for 'fxdiv'\r\n [ 44%] No patch step for 'fxdiv'\r\n [ 55%] No configure step for 'fxdiv'\r\n [ 66%] No build step for 'fxdiv'\r\n [ 77%] No install step for 'fxdiv'\r\n [ 88%] No test step for 'fxdiv'\r\n [100%] Completed 'fxdiv'\r\n [100%] Built target fxdiv\r\n -- Performing Test CMAKE_HAVE_LIBC_PTHREAD\r\n -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success\r\n -- Found Threads: TRUE\r\n -- Using python executable '/Library/Frameworks/Python.framework/Versions/3.10/bin/python'\r\n -- Resolved buck2 as /Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/buck2-bin/buck2-99e407b49dc432eda0cbddd67ea78346.\r\n -- Killing buck2 daemon\r\n -- executorch: Generating source lists\r\n -- executorch: Generating source file list /Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/executorch_srcs.cmake\r\n\r\n Error while generating /Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/executorch_srcs.cmake. 
Exit code: 1\r\n Output:\r\n \r\n Error:\r\n Traceback (most recent call last):\r\n File \"/Users/runner/work/torchchat/torchchat/et-build/src/executorch/build/buck_util.py\", line 26, in run\r\n cp: subprocess.CompletedProcess = subprocess.run(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py\", line 526, in run\r\n raise CalledProcessError(retcode, process.args,\r\n subprocess.CalledProcessError: Command '['/Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/buck2-bin/buck2-99e407b49dc432eda0cbddd67ea78346', 'cquery', \"inputs(deps('//runtime/executor:program'))\"]' returned non-zero exit status 2.\r\n \r\n The above exception was the direct cause of the following exception:\r\n \r\n Traceback (most recent call last):\r\n File \"/Users/runner/work/torchchat/torchchat/et-build/src/executorch/build/extract_sources.py\", line 218, in <module>\r\n main()\r\n File \"/Users/runner/work/torchchat/torchchat/et-build/src/executorch/build/extract_sources.py\", line 203, in main\r\n target_to_srcs[name] = sorted(target.get_sources(graph, runner))\r\n File \"/Users/runner/work/torchchat/torchchat/et-build/src/executorch/build/extract_sources.py\", line 116, in get_sources\r\n sources: set[str] = set(runner.run([\"cquery\", query]))\r\n File \"/Users/runner/work/torchchat/torchchat/et-build/src/executorch/build/buck_util.py\", line 31, in run\r\n raise RuntimeError(ex.stderr.decode(\"utf-8\")) from ex\r\n RuntimeError: Command failed:\r\n Error validating working directory\r\n \r\n Caused by:\r\n 0: Failed to stat `/Users/runner/work/torchchat/torchchat/et-build/src/executorch/buck-out/v2`\r\n 1: ENOENT: No such file or directory\r\n \r\n \r\n CMake Error at build/Utils.cmake:191 (message):\r\n executorch: source list generation failed\r\n Call Stack (most recent call first):\r\n CMakeLists.txt:311 (extract_sources)\r\n ```", "url": "https://github.com/pytorch/torchchat/issues/685", "state": "closed", "labels": [ "bug" ], "created_at": "2024-05-05T23:01:07Z", "updated_at": "2024-05-12T20:40:53Z", "comments": 1, "user": "mikekgfb" }, { "repo": "pytorch/vision", "number": 8409, "title": "Mask r-cnn training runs infinitely without output or error ", "body": "### \ud83d\udc1b Describe the bug\n\nHere\u2019s a brief overview of my process:\r\n\r\n1.I generated a dataset using PyTorch by applying the SAM mask from bounding boxes to my images.\r\n2.After creating the dataset, I split it into training and testing sets.\r\n3.I loaded both sets using torch.utils.data.DataLoader.\r\n4.I\u2019m using a pre-trained model with 11 classes.\r\n\r\nHowever, I\u2019m encountering an issue during training. The process seems to take an unusually long time, and I\u2019m not seeing any progress or error messages to troubleshoot from.\r\n\r\n![image](https://github.com/pytorch/vision/assets/142050727/f659377c-37d5-417b-8a2c-910029f3be6c)\r\n\r\nWhat might be going wrong or how to improve my training process?\n\n### Versions\n\n\r\n--2024-05-05 11:05:17-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py\r\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\r\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\r\nHTTP request sent, awaiting response... 
200 OK\r\nLength: 22068 (22K) [text/plain]\r\nSaving to: \u2018collect_env.py\u2019\r\n\r\ncollect_env.py 100%[===================>] 21.55K --.-KB/s in 0.002s \r\n\r\n2024-05-05 11:05:18 (12.6 MB/s) - \u2018collect_env.py\u2019 saved [22068/22068]\r\n\r\nCollecting environment information...\r\nPyTorch version: 2.2.1+cu121\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.3 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: 14.0.0-1ubuntu1.1\r\nCMake version: version 3.27.9\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)\r\nPython platform: Linux-6.1.58+-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: 12.2.140\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: Tesla T4\r\nNvidia driver version: 535.104.05\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 46 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 8\r\nOn-line CPU(s) list: 0-7\r\nVendor ID: GenuineIntel\r\nModel name: Intel(R) Xeon(R) CPU @ 2.30GHz\r\nCPU family: 6\r\nModel: 63\r\nThread(s) per core: 2\r\nCore(s) per socket: 4\r\nSocket(s): 1\r\nStepping: 0\r\nBogoMIPS: 4599.99\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities\r\nHypervisor vendor: KVM\r\nVirtualization type: full\r\nL1d cache: 128 KiB (4 instances)\r\nL1i cache: 128 KiB (4 instances)\r\nL2 cache: 1 MiB (4 instances)\r\nL3 cache: 45 MiB (1 instance)\r\nNUMA node(s): 1\r\nNUMA node0 CPU(s): 0-7\r\nVulnerability Gather data sampling: Not affected\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Mitigation; PTE Inversion\r\nVulnerability Mds: Vulnerable; SMT Host state unknown\r\nVulnerability Meltdown: Vulnerable\r\nVulnerability Mmio stale data: Vulnerable\r\nVulnerability Retbleed: Vulnerable\r\nVulnerability Spec rstack overflow: Not affected\r\nVulnerability Spec store bypass: Vulnerable\r\nVulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers\r\nVulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.25.2\r\n[pip3] torch==2.2.1+cu121\r\n[pip3] torchaudio==2.2.1+cu121\r\n[pip3] torchdata==0.7", "url": "https://github.com/pytorch/vision/issues/8409", "state": "closed", "labels": [], "created_at": 
"2024-05-05T11:09:04Z", "updated_at": "2024-05-07T10:48:07Z", "comments": 1, "user": "MontassarTn" }, { "repo": "pytorch/examples", "number": 1253, "title": "Drawbacks of making the C++ API look like Python", "body": "Thank you for creating a C++ version of Pytorch. However, I wonder if you could create an example that looks like C++ and not like Python?\r\n\r\nThe [DCGAN sample project](https://github.com/pytorch/examples/blob/main/cpp/dcgan/dcgan.cpp) makes extensive use of ```auto``` so that it can show how it can be made to look and feel like Python by avoiding standard C++ things like unique_ptr<>, shared_ptr<> etc.\r\n\r\nHowever, I am a C++ programmer, not a Python programmer. I am very happy working with standard C++ things like classes with methods and smart pointers. The noble attempt to make \"feel like Python\" with ```auto``` variables isn't helpful for me. For example, it assumes that I will be able to put my entire program into a single method. That's an unfortunate restriction, as I want to build, store and pass objects between a number of different methods.\r\n\r\nI have tried unwrapping the ```auto``` using some decltype() statements, but the Pytorch C++ templating makes this quite laborious. Perhaps that is an unavoidable result of the way that the underlying library is built? If so, could you create an C++ example that shows how to unwrap the various templates in one case, splitting the operations across several methods of a class for me?\r\n\r\nWould that be straightforward to do? It would be a great help for me to get an idea of how your templating structure works and I can then build up from that.\r\n\r\nI've only just started working with the library (that's why I'm looking at the example), so maybe I've missed something in the tutorial? I apologize if that's the case and ask if you would point me at the example that I should be looking at?\r\n\r\nMany thanks,\r\n\r\nDan\r\n", "url": "https://github.com/pytorch/examples/issues/1253", "state": "closed", "labels": [], "created_at": "2024-05-04T15:39:22Z", "updated_at": "2024-05-11T09:39:36Z", "comments": 10, "user": "dannypike" }, { "repo": "pytorch/torchchat", "number": 676, "title": "[PRE-LAUNCH] On some MacOS/xcode version install fails with an error", "body": "\r\nThis happens in our cloud runners. Does not affect most users, but only those that have certain versions of the Apple linker installed. Do we need to cover this in common problems?\r\n\r\nFixing this may not be a launch blocker, but being intentional about it probably is.", "url": "https://github.com/pytorch/torchchat/issues/676", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-05-04T15:31:19Z", "updated_at": "2024-05-12T20:43:17Z", "comments": 4, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 674, "title": "[LAUNCH BLOCKER] Build of ET - Commands from README fail", "body": "#670 adds building on MacOS for the entire flow but fails very much towards the end of macOS ci.\r\nHowever the status is reported as green/correct execution. 
Why, and how do we make it red when it fails?\r\n\r\nBuilding ET fails according to readme logs, witj an error we have seen before from the linker:\r\nhttps://github.com/pytorch/torchchat/actions/runs/8949063846/job/24582907497?pr=670\r\n\r\n```\r\n [ 64%] Building C object backends/xnnpack/third-party/XNNPACK/CMakeFiles/microkernels-all.dir/src/x32-zip/x32-zip-xm-neon.c.o\r\n 0 0x10107f648 __assert_rtn + 72\r\n 1 0x100fa7c5c ld::Fixup::applyFixup(ld::Atom const*, ld::LayoutLinkedImage const&, unsigned char*) const + 8268\r\n 2 0x10103a7d8 ___ZN2ld16LayoutExecutable27writeContentWithoutLinkEditENSt3__14spanIhLm18446744073709551615EEEy_block_invoke + 332\r\n 3 0x195836428 _dispatch_client_callout2 + 20\r\n 4 0x19584a850 _dispatch_apply_invoke3 + 336\r\n 5 0x1958363e8 _dispatch_client_callout + 20\r\n 6 0x195837c68 _dispatch_once_callout + 32\r\n 7 0x19584aeec _dispatch_apply_invoke_and_wait + 372\r\n 8 0x195849e9c _dispatch_apply_with_attr_f + 1212\r\n 9 0x19584a08c dispatch_apply + 96\r\n 10 0x10103a9e4 void mapReduce<ld::Atom const*, mach_o::Error>(std::__1::span<ld::Atom const*, 18446744073709551615ul>, unsigned long, void (unsigned long, mach_o::Error&, std::__1::span<ld::Atom const*, 18446744073709551615ul>) block_pointer, void (std::__1::span<mach_o::Error, 18446744073709551615ul>) block_pointer) + 336\r\n 11 0x10103a594 ld::LayoutExecutable::writeContentWithoutLinkEdit(std::__1::span<unsigned char, 18446744073709551615ul>, unsigned long long) + 1180\r\n 12 0x101040020 ld::LayoutExecutable::writeToFile(char const*) + 15248\r\n 13 0x100ff22e8 main + 9424\r\n ld: Assertion failed: (extras.otherInstrOffset != 0 && \"Kind::arm64_adrp_ldr missing extra info\"), function applyFixup, file Fixup.cpp, line 793.\r\n clang: error: linker command failed with exit code 1 (use -v to see invocation)\r\n make[2]: *** [executor_runner] Error 1\r\n make[1]: *** [CMakeFiles/executor_runner.dir/all] Error 2\r\n make[1]: *** Waiting for unfinished jobs....\r\n[...]\r\n [100%] Building C object backends/xnnpack/third-party/XNNPACK/CMakeFiles/microkernels-all.dir/src/tables/vlog.c.o\r\n [100%] Built target microkernels-all\r\n make: *** [all] Error 2\r\n error: command '/Users/runner/work/_temp/miniconda/bin/cmake' failed with exit code 2\r\n error: subprocess-exited-with-error\r\n \r\n \u00d7 Building wheel for executorch (pyproject.toml) did not run successfully.\r\n \u2502 exit code: 1\r\n \u2570\u2500> See above for output.\r\n ```", "url": "https://github.com/pytorch/torchchat/issues/674", "state": "closed", "labels": [], "created_at": "2024-05-04T10:30:39Z", "updated_at": "2024-05-05T20:27:32Z", "comments": 2, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 663, "title": "[PRE-LAUNCH] Why is necessary to disable int8pack_mm with compilation? Is it not working or slow ?", "body": "\r\nCurious why we're disabling the int4pack_mm for CPU compilation - are we thinking generated code is more performant? (Then we should document that someplace...) Or is it not working to call this operator from AOTI? \r\n\r\nWhy not? I thought there was an automatic fallback. 
@desertfire ", "url": "https://github.com/pytorch/torchchat/issues/663", "state": "closed", "labels": [], "created_at": "2024-05-04T03:34:20Z", "updated_at": "2024-05-17T13:08:15Z", "comments": 1, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 660, "title": "[LABEL TBD] torchchat redownloads model when rebased?", "body": "A few days ago, I played with torchchat as follows (in the context of https://github.com/pytorch/torchchat/issues/621):\r\n\r\n`python3 torchchat.py download llama3`\r\n`python3 torchchat.py generate llama3`\r\n\r\n\r\n\r\nToday, I rebased and continued where I left of. In particular, i called the following command: \r\n\r\n`python3 torchchat.py generate llama3 --quantize config/data/desktop.json --prompt \"Hello, my name is\"`\r\n\r\nBut interestingly, it redownloads the 16GB llama3 model even though the model already exists in `.model-artifacts` folder from a few days ago.\r\n\r\nIs this a bug or a feature? Please label appropriately.\r\n\r\nInternal Task: [T187938966](https://www.internalfb.com/intern/tasks/?t=187938966)", "url": "https://github.com/pytorch/torchchat/issues/660", "state": "closed", "labels": [], "created_at": "2024-05-03T22:01:22Z", "updated_at": "2024-05-06T15:13:30Z", "comments": 2, "user": "mergennachin" }, { "repo": "pytorch/xla", "number": 7014, "title": "Export debug information to StableHLO", "body": "## \u2753 Questions and Help\r\n\r\nHi team, the debugging information is lost during `exported_program_to_stablehlo`, is there a way to export this information?\r\n\r\nFor example, `torch.export` generates file and line number for each op,\r\n```python\r\nimport torch\r\nimport torch.nn as nn\r\nfrom torch_xla.stablehlo import exported_program_to_stablehlo\r\n\r\nclass Test(nn.Module):\r\n def forward(self, a, b):\r\n a += 1\r\n b += 2\r\n return a + b\r\n\r\nep = torch.export.export(Test(), (torch.randn(1, 5), torch.randn(1, 5)))\r\nprint(ep)\r\n# ExportedProgram:\r\n# class GraphModule(torch.nn.Module):\r\n# def forward(self, arg0_1: \"f32[1, 5]\", arg1_1: \"f32[1, 5]\"):\r\n# # File: /home/thonle/ai/data/stablehlo/add/add.py:7 in forward, code: a += 1\r\n# add: \"f32[1, 5]\" = torch.ops.aten.add.Tensor(arg0_1, 1); arg0_1 = None\r\n \r\n# # File: /home/thonle/ai/data/stablehlo/add/add.py:8 in forward, code: b += 2\r\n# add_1: \"f32[1, 5]\" = torch.ops.aten.add.Tensor(arg1_1, 2); arg1_1 = None\r\n \r\n# # File: /home/thonle/ai/data/stablehlo/add/add.py:9 in forward, code: return a + b\r\n# add_2: \"f32[1, 5]\" = torch.ops.aten.add.Tensor(add, add_1)\r\n# return (add, add_1, add_2)\r\n``` \r\n\r\nhowever, when we export to stablehlo, we couldn't find this information in `StableHLOModelBundle`.\r\n```python\r\nom = exported_program_to_stablehlo(ep)\r\nprint(om._bundle)\r\n\r\n# StableHLOModelBundle(state_dict={}, additional_constants=[array(2., dtype=float32)], stablehlo_funcs=[StableHLOFunc(meta=StableHLOFunctionMeta(name='forward', stablehlo_version='0.0.0', input_signature=[VariableSignature(shape=[1, 5], dtype='float32', dynamic_dims=[]), VariableSignature(shape=[], dtype='float32', dynamic_dims=[]), VariableSignature(shape=[1, 5], dtype='float32', dynamic_dims=[])], output_signature=[VariableSignature(shape=[1, 5], dtype='float32', dynamic_dims=[]), VariableSignature(shape=[1, 5], dtype='float32', dynamic_dims=[]), VariableSignature(shape=[1, 5], dtype='float32', dynamic_dims=[])], input_locations=[InputLocation(type_=<VariableType.INPUT_ARG: 'input_arg'>, position=0, name=''), InputLocation(type_=<VariableType.CONSTANT: 
'constant'>, position=0, name=''), InputLocation(type_=<VariableType.INPUT_ARG: 'input_arg'>, position=1, name='')], unused_inputs=[], input_pytree_spec='[1, {\"type\": \"builtins.tuple\", \"context\": \"null\", \"children_spec\": [{\"type\": \"builtins.tuple\", \"context\": \"null\", \"children_spec\": [{\"type\": null, \"context\": null, \"children_spec\": []}, {\"type\": null, \"context\": null, \"children_spec\": []}]}, {\"type\": \"builtins.dict\", \"context\": \"[]\", \"children_spec\": []}]}]', output_pytree_spec='[1, {\"type\": null, \"context\": null, \"children_spec\": []}]'), bytecode=b\"ML\\xefR\\rStableHLO_v0.19.1\\x00\\x01\\x1d\\x05\\x01\\x05\\r\\x01\\x03\\x0b\\x03\\x0b\\x0f\\x13\\x17\\x1b\\x1f\\x03S1\\x0f\\x01%\\x07\\x0f#\\x0b\\x0b\\x0b\\x0b\\x0b\\x0f\\x0b\\x0f\\x0b\\x0f\\x0b\\x0f\\x0b\\x0f\\x0b\\x03\\r\\x0b\\x0b\\x0b\\x0b\\x1f\\x0f\\x01\\x03\\x0b\\x03\\r\\x17\\x07\\x0f'\\x13\\x07\\x02\\xb5\\x1f\\x11\\x01\\x00\\x03\\x07\\x07\\t\\x0b\\x03\\r\\x03\\x05\\x11\\x01\\x01\\x05\\x13\\x05\\x15\\x05\\x17\\x1d\\x13\\x01\\x05\\x19\\x1d\\x17\\x01\\x05\\x1b\\x1d\\x1b\\x01\\x05\\x1d\\x1d\\x1f\\x01\\x05\\x1f\\x1d#\\x01\\x05!\\x03\\x01#\\t\\x1d#\\x1d%\\x1f\\x03\\t\\x00\\x00\\x80?\\x1f\\x0b\\x01\\x01\\t)\\x05\\x05\\x15\\x05\\t)\\x01\\x05\\x11\\x07\\x03\\x07\\x03\\x07\\x03\\x03\\x03)\\x03\\x01\\r\\x1d\\x04\\x91\\x05\\x01Q\\x01\\x05\\x01\\x07\\x04\\x7f\\x03\\x01\\x05\\x05P\\x01\\x03\\x07\\x04k\\x03\\x11\\x1b\\x07\\x05\\r\\x05\\x00\\x07B\\x11\\x05\\x03\\x03\\x03\\x06\\x15\\x03\\x03\\x05\\x01\\x07\\tF\\x19\\x07\\x03\\x03\\x03\\x03\\x03\\x06\\x1d\\x03\\x03\\x05\\x05\\x0b\\x03\\x06!\\x03\\x03\\x05\\t\\r\\x0b\\x04\\x01\\x07\\t\\r\\x0f\\x06\\x03\\x01\\x05\\x01\\x00\\xb6\\x03'\\x03\\x0b\\x0f\\x0f\\x1b\\r\\x19\\x17A!=\\x15)\\x19\\x11\\x0f\\x0f\\x0b\\x11builtin\\x00vhlo\\x00module\\x00add_v1\\x00func_v1\\x00constant_v1\\x00broadcast_in_dim_v1\\x00return_v1\\x00mhlo.cross_program_prefetches\\x00mhlo.is_dynamic\\x00mhlo.use_auto_spmd_partitioning\\x00IrToHlo.18\\x00broadcast.5\\x00add.6\\x00broadcast.11\\x00add.12\\x00add.16\\x00main\\x00\\x00\\x08\\x1d\\t\\x05\\x1f\\x01\\x0b%'%)+\\x03-\\x03/\", text='module @IrToHlo.18 attributes {mhlo.cross_program_prefetches = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {\\n func.func @main(%arg0: tensor<1x5xf32>, %arg1: tensor<f32>, %arg2: tensor<1x5xf32>) -> (tensor<1x5xf32>, tensor<1x5xf32>, tensor<1x5xf32>) {\\n %0 = stablehlo.constant dense<1.000000e+00> : tensor<1x5xf32>\\n %1 = stablehlo.add %arg0, %0 : tensor<1x5xf32>\\n %2 = stablehlo.broadcast_in_dim %arg1, dims = [] : (tensor<f32>) -> tensor<1x5xf32>\\n %3 = stablehlo.add %arg2, %2 : tensor<1x5xf32>\\n %4 = stablehlo.add %1, %3 : tensor<1x5xf32>\\n return %1, %3, %4 : tensor<1x5xf32>, tensor<1x5xf32>, tensor<1x5xf32>\\n }\\n}\\n')])\r\n```", "url": "https://github.com/pytorch/xla/issues/7014", "state": "closed", "labels": [ "stablehlo" ], "created_at": "2024-05-01T21:27:11Z", "updated_at": "2024-05-14T16:45:17Z", "comments": 11, "user": "thong3le" }, { "repo": "pytorch/TensorRT", "number": 2798, "title": "Convert torchscript model to tensorrt", "body": "Can I convert the torchscript model to tensorrt format through torch_tensorrt? 
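(For context, what I have pieced together so far is roughly the unverified sketch below; the file names, input shape, precision, and the `ir="ts"` choice are all my own guesses rather than anything taken from the docs.)

```python
import torch
import torch_tensorrt

# Load an existing TorchScript module (placeholder file name).
ts_module = torch.jit.load("model.ts").eval().cuda()

# Ask Torch-TensorRT to lower it via the TorchScript frontend.
trt_module = torch_tensorrt.compile(
    ts_module,
    ir="ts",                                            # TorchScript path, if this version accepts it
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],    # example fixed input shape
    enabled_precisions={torch.float16},
)

# The result should still be a TorchScript module, now embedding TRT engines.
torch.jit.save(trt_module, "model_trt.ts")
```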
Is there any corresponding script that you can give me for reference?", "url": "https://github.com/pytorch/TensorRT/issues/2798", "state": "open", "labels": [ "question" ], "created_at": "2024-04-30T08:11:09Z", "updated_at": "2024-04-30T20:59:03Z", "user": "pengxin233" }, { "repo": "pytorch/torchchat", "number": 579, "title": "[User Experience] User does not know what is expected by prompts", "body": "@ali-khosh user report:\r\n\r\nI\u2019m being asked \u201cDo you want to enter a system prompt? Enter y for yes and anything else for no.\u201d not sure what this means. When I hit yes, it asks \u201cwhat is your system prompt?\u201d still don\u2019t know what that means. I entered \u201chello my name is\u201d and it\u2019s now asking me for \u201cUser:\u201d no clue what that is. I entered some text. And it\u2019s thinking, without doing anything, or telling me I should wait. I gave up after ~10 minutes, killed the process, and tried again this time answering no to that question. It again asked me for \u201cUser:\u201d, I typed \u201cali\u201d and have been waiting for some time with no response from my laptop. ", "url": "https://github.com/pytorch/torchchat/issues/579", "state": "open", "labels": [], "created_at": "2024-04-30T06:39:23Z", "updated_at": "2024-04-30T06:39:50Z", "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 575, "title": "unimplemented operators - workarounds and long term perspective", "body": "Today users have to set PYTORCH_ENABLE_MPS_FALLBACK=1 when they call torchchat if they want to use _weight_int4pack_mm. Can we set that automatically, from inside the program. This is a crude workaround, maybe we can get an implementation of _weight_int4pack_mm for MPS? (This would also be goodness for mobile.)\r\n", "url": "https://github.com/pytorch/torchchat/issues/575", "state": "open", "labels": [], "created_at": "2024-04-30T05:58:13Z", "updated_at": "2024-07-30T20:44:26Z", "comments": 0, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 565, "title": "[LAUNCH BLOCKER] Llama3 8B Instruct model hangs on chat", "body": "(.venv) (base) mikekg@mikekg-mbp torchchat % # Llama 3 8B Instruct\r\npython3 torchchat.py chat llama3\r\nzsh: command not found: #\r\nUsing device=cpu Apple M1 Max\r\nLoading model...\r\nTime to load model: 10.23 seconds\r\nEntering Chat Mode. Will continue chatting back and forth with the language model until the models max context length of 8192 tokens is hit or until the user says /bye\r\nDo you want to enter a system prompt? Enter y for yes and anything else for no. \r\ny\r\nWhat is your system prompt? \r\nYou are a techer and you treat every interaction as a teachable moment, providing lots of unrequested extra info\r\nUser: what are the 7 continents\r\n\r\n", "url": "https://github.com/pytorch/torchchat/issues/565", "state": "closed", "labels": [], "created_at": "2024-04-29T22:15:12Z", "updated_at": "2024-04-29T22:42:26Z", "comments": 2, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 561, "title": "[FEATURE REQUEST] raise connection error fails download / we don't offer. plan b, or a way to resume", "body": "so, does this have a common error instruction? Should we tell people to download another model if they can\u2019t get Meta approval, or there\u2019s an error like in my case?\r\n\r\nAlso, this engineer having been on the slwo end of a pipe before.... are there any instructions how to resume a failed download that's say, frustratingly 95% complete? 
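(The closest thing I found myself is the untested sketch below — simply re-running `snapshot_download` against the same cache and hoping it picks up the partial files; whether the `resume_download` flag is needed, or even still exists, presumably depends on the installed `huggingface_hub` version.)

```python
from huggingface_hub import snapshot_download

# Untested: re-run the download into the same cache; partially downloaded
# files should (hopefully) be resumed rather than restarted from zero.
snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",  # the model I was fetching
    token="hf_...",                                  # placeholder HF token
    resume_download=True,                            # may be deprecated/implicit on newer versions
)
```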
Or am I done, and I need to load the whole thing again?\r\n(If there's no way to restart, ok. Also, if I'm on a slow pipe I would like to retry more often and get a byte at a time per retry, if that's what I need)\r\n\r\n```\r\nFile \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/tqdm/std.py\", line 1181, in _iter_\r\n for obj in iterable:\r\n File \"/Users/mikekg/miniconda3/lib/python3.12/concurrent/futures/_base.py\", line 619, in result_iterator\r\n yield _result_or_cancel(fs.pop())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/miniconda3/lib/python3.12/concurrent/futures/_base.py\", line 317, in _result_or_cancel\r\n return fut.result(timeout)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/miniconda3/lib/python3.12/concurrent/futures/_base.py\", line 456, in result\r\n return self.__get_result()\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/miniconda3/lib/python3.12/concurrent/futures/_base.py\", line 401, in __get_result\r\n raise self._exception\r\n File \"/Users/mikekg/miniconda3/lib/python3.12/concurrent/futures/thread.py\", line 58, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/_snapshot_download.py\", line 290, in _inner_hf_hub_download\r\n return hf_hub_download(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py\", line 119, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py\", line 1492, in hf_hub_download\r\n http_get(\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py\", line 552, in http_get\r\n return http_get(\r\n ^^^^^^^^^\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py\", line 552, in http_get\r\n return http_get(\r\n ^^^^^^^^^\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py\", line 552, in http_get\r\n return http_get(\r\n ^^^^^^^^^\r\n [Previous line repeated 1 more time]\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py\", line 456, in http_get\r\n r = _request_wrapper(\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py\", line 392, in _request_wrapper\r\n response = get_session().request(method=method, url=url, **params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/requests/sessions.py\", line 589, in request\r\n resp = self.send(prep, **send_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/requests/sessions.py\", line 703, in send\r\n r = adapter.send(request, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/utils/_http.py\", line 68, in send\r\n return super().send(request, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/requests/adapters.py\", line 519, in send\r\n raise ConnectionError(e, request=request)\r\nrequests.exceptions.ConnectionError: 
(MaxRetryError('HTTPSConnectionPool(host=\\'[cdn-lfs-us-1.huggingface.co](http://cdn-lfs-us-1.huggingface.co/)\\', port=443): Max retries exceeded with url: /repos/55/ac/55acddbb5c2ac2041b89a858eeba82e6130c6160294d75fe51bfa8bd7a4e4518/be52262c9289304f3e8240e0749bf257bc04264405a86cd4de38efb9068724ee?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27consolidated.00.pth%3B+filename%3D%22consolidated.00.pth%22%3B&Expires=1714684610&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcxNDY4NDYxMH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzU1L2FjLzU1YWNkZGJiNWMyYWMyMDQxYjg5YTg1OGVlYmE4MmU2MTMwYzYxNjAyOTRkNzVmZTUxYmZhOGJkN2E0ZTQ1MTgvYmU1MjI2MmM5Mjg5MzA0ZjNlODI0MGUwNzQ5YmYyNTdiYzA0MjY0NDA1YTg2Y2Q0ZGUzOGVmYjkwNjg3MjRlZT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=IroiN6zXZ5iOHhJDLMhkzINjI11juBcZpCX0B6Q4iBrlcWwJ2oXA6~hKRp0uqo34u3AHE1LPI7sxss3HV8ICqNUtKJ9~5u0bWjoqSh7eqn1xqJ77Drg5BmnCKYSB2sF-5QBC2tMM~PKfaE7AeieeFD73Pz3JQomD7EnFe5veAxHKQxGT8WD2bMMy4lx5r5", "url": "https://github.com/pytorch/torchchat/issues/561", "state": "closed", "labels": [], "created_at": "2024-04-29T21:36:59Z", "updated_at": "2024-05-12T20:45:02Z", "comments": 1, "user": "mikekgfb" }, { "repo": "pytorch/data", "number": 1247, "title": "[StatefulDataLoader] macOS tests are too slow", "body": "### \ud83d\udc1b Describe the bug\r\n\r\ntest_state_dict is very slow on macOS (and slows down CI), likely because of macOS default multiprocessing_context being spawn instead of fork. The StatefulDataLoader tests on macOS take ~1.5 hours, vs 10 minutes on Linux and Windows. \r\n\r\nExample of test-runtimes on my local mac:\r\n\r\n<img width=\"870\" alt=\"image\" src=\"https://github.com/pytorch/data/assets/5349063/8f881702-e812-4e2c-b61e-efac8596054b\">\r\n\r\nWe should a) update CI to log test times, b) for macOS, drop some of the tests. Each test_mp* test runs 6x, and if we have coverage from Linux + Win then we probably don't need all of them for mac\r\n\r\n### Versions\r\n\r\nNightly", "url": "https://github.com/meta-pytorch/data/issues/1247", "state": "closed", "labels": [ "stateful_dataloader" ], "created_at": "2024-04-29T18:10:35Z", "updated_at": "2024-04-30T19:11:57Z", "comments": 0, "user": "andrewkho" }, { "repo": "pytorch/torchchat", "number": 549, "title": "[CI] add dtype tests for runner-aoti and runner-et", "body": "\r\nWe are reverting ##539 which added more dtype tests for runner-aoti + runner-et,\r\nbecause of fails - there's no point in having failing tests. That being said, we should figure out which ones should work, and if they don't today, how to make them work.", "url": "https://github.com/pytorch/torchchat/issues/549", "state": "open", "labels": [], "created_at": "2024-04-29T16:42:19Z", "updated_at": "2024-04-29T18:01:09Z", "comments": 2, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 547, "title": "Can we make sure native runner binary commands in README work directly as written?", "body": "It would be great if\r\n\r\n```\r\ncmake-out/aoti_run model.so -z tokenizer.model -l 3 -i \"Once upon a time\"\r\n```\r\n\r\nand\r\n\r\n```\r\ncmake-out/et_run llama3.pte -z tokenizer.model -l 3 -i \"Once upon a time\"\r\n```\r\n\r\nwere changed to include a known location of a model.so and tokenizer.model file. 
For example, include download and export instructions directly before it or those downloaded before in the README file.\r\n\r\ncc @byjlw @mikekgfb ", "url": "https://github.com/pytorch/torchchat/issues/547", "state": "closed", "labels": [], "created_at": "2024-04-29T15:33:15Z", "updated_at": "2024-05-12T21:03:08Z", "comments": 1, "user": "orionr" }, { "repo": "pytorch/torchchat", "number": 546, "title": "Move legal disclaimer down to license section?", "body": "I think we can move\r\n\r\nDisclaimer: The torchchat Repository Content is provided without any guarantees about performance or compatibility. In particular, torchchat makes available model architectures written in Python for PyTorch that may not perform in the same manner or meet the same standards as the original versions of those models. When using the torchchat Repository Content, including any model architectures, you are solely responsible for determining the appropriateness of using or redistributing the torchchat Repository Content and assume any risks associated with your use of the torchchat Repository Content or any models, outputs, or results, both alone and in combination with any other technologies. Additionally, you may have other legal obligations that govern your use of other content, such as the terms of service for third-party models, weights, data, or other technologies, and you are solely responsible for complying with all such obligations.\r\n\r\ndown to the bottom of the license section? Having it at so close to the top is likely not required? Check with others, though. Thanks\r\n\r\ncc @mikekgfb ", "url": "https://github.com/pytorch/torchchat/issues/546", "state": "closed", "labels": [], "created_at": "2024-04-29T15:29:37Z", "updated_at": "2024-05-12T21:06:46Z", "comments": 1, "user": "orionr" }, { "repo": "pytorch/torchchat", "number": 544, "title": "[DOCS, TESTS] quantization option table & quantization option table testing", "body": "can we pin down the details for this, because this update is too generous and doesn't represent the swiss cheese that is the support matrix?\r\n\r\nI seem to recall some operators didn't have the full set of group sizes - the group sizes are just an enumeration of powers of 2, did we test them? (I can't say the other table was useful w.r.t to what to expect in eager, compile, AOTI, ET. (We list compile as a separate category for eager, not withstanding torch.compile is supported by a bunch of different compilers all of which may have a different answer...I guess much like ET ~ XNNPACK, compile ~ Inductor...)\r\n\r\nWe should also ensure that we have a test for each claimed supported config in periodic.yml\r\n", "url": "https://github.com/pytorch/torchchat/issues/544", "state": "closed", "labels": [], "created_at": "2024-04-29T03:37:26Z", "updated_at": "2024-05-12T22:58:14Z", "comments": 2, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 543, "title": "[PAPERCUTS] error message repeated ad nauseam", "body": "I get it -- maybe the error is `aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'`\r\n\r\nSeriously, though - I got it after the first error meesage about that, and certainly after the 5th? I'll assume it's for each call site? 
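One concrete shape for the dedup idea in the next paragraph -- a rough sketch of a `logging` filter (nothing like this exists in the codebase today, and it assumes these `INFO:root:` lines really do come from Python's root logger):

```python
import logging

class CollapseRepeats(logging.Filter):
    """Drop consecutive duplicate log messages and fold a repeat count into the next one."""

    def __init__(self):
        super().__init__()
        self._last = None
        self._repeats = 0

    def filter(self, record):
        msg = record.getMessage()
        if msg == self._last:
            self._repeats += 1
            return False  # swallow the duplicate
        repeats, self._last, self._repeats = self._repeats, msg, 0
        if repeats:
            # prepend a summary of the run of duplicates we just ended
            record.msg = f"[previous message repeated {repeats} more times]\n{msg}"
            record.args = None
        return True

# the spam in question is emitted on the root logger ("INFO:root:..."),
# so attaching the filter there would be enough for this particular case
logging.getLogger().addFilter(CollapseRepeats())
```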
It's probably onerus to keep track of every error, especially at the point where the error is emitted. But I presume the message goes thru a common reporting site... maybe we can just keep track of the previous error message, and if it's the same as the immediately precedcing, we start a counter and emit \r\n[repeated n times] when the error changes.\r\n\r\nOr we compute a hash of each message, and add up counts for all messages after the first error, dumping a second instance at the end with a count?\r\n\r\nOne more thing -- can we put a filename and an error line? I recall that Soumith said in another meeting that we have that info for IR traces?\r\n\r\n```\r\n(py311) mikekg@mikekg-mbp torchchat % python export.py --checkpoint-path ${MODEL_PATH} --temperature 0 --quantize '{\"linear:int4\": {\"groupsize\": 128}}' --output-pte mode.pte\r\n[...]\r\n %aten__weight_int4pack_mm_default_42 : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten._weight_int4pack_mm.default](args = (%aten_view_copy_default_144, %b_output_weight, 128, %b_output_scales_and_zeros), kwargs = {})\r\n %aten_view_copy_default_145 : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten.view_copy.default](args = (%aten__weight_int4pack_mm_default_42, [1, 1, 32000]), kwargs = {})\r\n return (getitem_1, getitem_2, getitem_4, getitem_5, getitem_7, getitem_8, getitem_10, getitem_11, getitem_13, getitem_14, getitem_16, getitem_17, aten_view_copy_default_145)\r\nWARNING:executorch.backends.xnnpack.partition.xnnpack_partitioner:Nothing can be partitioned!\r\nINFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'\r\nINFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'\r\nINFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'\r\nINFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. 
We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'\r\nINFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'\r\nINFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'\r\nINFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor q", "url": "https://github.com/pytorch/torchchat/issues/543", "state": "closed", "labels": [], "created_at": "2024-04-29T03:22:01Z", "updated_at": "2024-08-30T15:19:47Z", "comments": 1, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 542, "title": "linear:int4 issues - RuntimeError: Missing out variants: {'aten::_weight_int4pack_mm'}", "body": "```\r\n(py311) mikekg@mikekg-mbp torchchat % python export.py --checkpoint-path ${MODEL_PATH} --temperature 0 --quantize '{\"linear:int4\": {\"groupsize\": 128}}' --output-pte mode.pte\r\n[...]\r\nTraceback (most recent call last):\r\n File \"/Users/mikekg/qops/torchchat/export.py\", line 111, in <module>\r\n main(args)\r\n File \"/Users/mikekg/qops/torchchat/export.py\", line 91, in main\r\n export_model_et(\r\n File \"/Users/mikekg/qops/torchchat/export_et.py\", line 98, in export_model\r\n export_program = edge_manager.to_executorch(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/miniconda3/envs/py311/lib/python3.11/site-packages/executorch/exir/program/_program.py\", line 899, in to_executorch\r\n new_gm_res = p(new_gm)\r\n ^^^^^^^^^\r\n File \"/Users/mikekg/miniconda3/envs/py311/lib/python3.11/site-packages/torch/fx/passes/infra/pass_base.py\", line 40, in __call__\r\n res = self.call(graph_module)\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/mikekg/miniconda3/envs/py311/lib/python3.11/site-packages/executorch/exir/passes/__init__.py\", line 423, in call\r\n raise RuntimeError(f\"Missing out variants: {missing_out_vars}\")\r\nRuntimeError: Missing out variants: {'aten::_weight_int4pack_mm'}\r\n```\r\n\r\nCurrent fail is expected -- somewhat anyway after adding the packed call to the _weight_int4pack_mm but documented incorrectly in docs/quantization.md. I think @lucylq most recently updated the specs to streamline them but that glossed over the reality that we have a bit of a swiss cheese situation. That's sad and not pretty to show, but sadly our current reality\r\n\r\nI'll try to patch up most execution modes, but we really do need tests. And for performance, maybe the plan should be to hook up _weight_int4pack_mm to an asymmetric version of a8w4dq (as per https://github.com/pytorch/torchchat/issues/541). 
\r\nOf course that's also not quite \"correct\", but how many modes and operators can we put with how much documentation? FP operators already have a bit of a spread in terms of accruacy based on rounding effects, so maybe that's justifiable...\r\n", "url": "https://github.com/pytorch/torchchat/issues/542", "state": "open", "labels": [], "created_at": "2024-04-29T03:03:40Z", "updated_at": "2024-07-30T17:36:20Z", "comments": 0, "user": "mikekgfb" }, { "repo": "pytorch/serve", "number": 3120, "title": "If micro_batch_size of micro-batch is set to 1, then model inference is still batch processing?", "body": "### \ud83d\udcda The doc issue\n\nI set the batchSize of the registered model to 10, and then set the micro_batch_size to 1. So for model inference, will it wait for 10 requests to complete preprocessing in parallel before aggregating them for inference?\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/3120", "state": "open", "labels": [], "created_at": "2024-04-29T02:59:58Z", "updated_at": "2024-04-29T18:48:28Z", "comments": 1, "user": "pengxin233" }, { "repo": "pytorch/torchchat", "number": 533, "title": "[FEATURE REQUEST] 8b weight quantization on ET", "body": "\r\nWhat is the best we can do for int8 channel-wise quantization in XNNPACK (and elsewhere in ET) today? I see ATM we use` F.linear(x, weight.to(dtype=x.dtype)) * scales` as implementation in [ET examples](https://www.internalfb.com/code/fbsource/[7e7c1690e5ac43a50e5e17e41321005d126e3faf]/fbcode/executorch/examples/models/llama2/source_transformation/quantize.py?lines=374) and [torchchat](https://github.com/pytorch/torchchat/blob/main/quantize.py#L401). \r\n\r\nThis function works well for CUDA using AOTI (because AOTI + Triton merge the conversion into the operation), but not so much for CPUs where this forces allocation of a full buffer of float weights. Do we recognize this for XNNPACK and convert into a more efficient primitive? If not, what should we do to do this?\r\n\r\nOn PT CPU, we now have [torch.ops.aten._weight_int8pack_mm](https://github.com/pytorch/torchchat/blob/main/quantize.py#L403). If we don't already, can we recognize the idiom and convert it? Or should we generate an executorch op during quantization that is more efficient? ", "url": "https://github.com/pytorch/torchchat/issues/533", "state": "closed", "labels": [], "created_at": "2024-04-28T17:02:49Z", "updated_at": "2024-07-21T22:14:01Z", "comments": 7, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 528, "title": "[DOCS] runner documentation", "body": "\r\n1 - Add llama2/3 options to docs/runner from https://github.com/pytorch/torchchat/pull/486\r\n\r\n2 - Also does the file need a name change because it covers both build and run for the runners?\r\n\r\n3 - Do we have the necessary documentation - how to build the tokenizer.bin?\r\n That we have to use a different tokenizer for SentencePiece than the Python runners? We can grab some of that from docs/ADVANCED-USERS.md and move it here.\r\n\r\n4 - should we actually split this file?\r\n\r\n5 - we're using stories15M here, should we upgrade to llama3 .\r\n\r\n", "url": "https://github.com/pytorch/torchchat/issues/528", "state": "closed", "labels": [], "created_at": "2024-04-27T22:24:30Z", "updated_at": "2024-07-21T21:38:37Z", "comments": 1, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 526, "title": "[Better Engineering] Is no KV cache still a thing?", "body": "\r\nI put the code there originally, but... 
wondering whether running models without KV cache is still a thing?\r\nWe don't really offer a way to build it without KV Cache...\r\n\r\nhttps://github.com/pytorch/torchchat/blame/e26c5289453ccac7f4b600babcb40e30634bdeb2/runner/run.cpp#L175-L185\r\n\r\n```\r\n#ifndef __KV_CACHE__\r\n // @lint-ignore CLANGTIDY facebook-hte-LocalUncheckedArrayBounds\r\n ManagedTensor tokens_managed(\r\n &(s->toks[pos]),\r\n /*ignored*/ sizeof(int64_t) * (pos + 1),\r\n {1, 1},\r\n ScalarType::Long);\r\n#else // __KV_CACHE__\r\n ManagedTensor tokens_managed(\r\n token_buffer, sizeof(int64_t), {1, 1}, ScalarType::Long);\r\n#endif\r\n```", "url": "https://github.com/pytorch/torchchat/issues/526", "state": "closed", "labels": [], "created_at": "2024-04-27T22:07:53Z", "updated_at": "2024-04-28T14:30:48Z", "comments": 0, "user": "mikekgfb" }, { "repo": "pytorch/tutorials", "number": 2849, "title": "Transformer tutorial multiplying with sqrt(d_model)", "body": "https://github.com/pytorch/tutorials/blob/5e772fa2bf406598103e61e628a0ca0b8e471bfa/beginner_source/translation_transformer.py#L135\r\n\r\nsrc = self.embedding(src) * math.sqrt(self.d_model)\r\n\r\nshouln't this be\r\n\r\nsrc = self.embedding(src) / math.sqrt(self.d_model)\r\n\r\nat least that is the impression I got when reading the \"Attention is all you need\" paper.\r\nOr is there some new research finding that multiplying is better?\r\n\r\n\r\n\n\ncc @sekyondaMeta @svekars @kit1980 @subramen @albanD", "url": "https://github.com/pytorch/tutorials/issues/2849", "state": "closed", "labels": [ "easy", "docathon-h1-2024" ], "created_at": "2024-04-27T07:45:10Z", "updated_at": "2024-06-11T09:15:26Z", "comments": 3, "user": "RogerJL" }, { "repo": "pytorch/TensorRT", "number": 2782, "title": "\u2753 [Question] Unexpected exception _Map_base::at during PTQ", "body": "## \u2753 Question\r\n\r\nI am attempting to execute [PTQ](https://pytorch.org/TensorRT/user_guide/ptq.html). 
During the compiling process, I get the following exception: \r\n\r\n```\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Finalize: %142 : Tensor = aten::matmul(%x, %143) # /fsx_home/homes/srdecny/meaning/vocoder/hifigan/hifigan/vec2enc.py:84:0 Set kernel index: 5\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Total number of generated kernels selected for the engine: 7\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 0 CASK_STATIC\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 1 CASK_STATIC\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 2 CASK_STATIC\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 3 TRT_SERIALIZABLE:generatedNativePointwise\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 4 TRT_SERIALIZABLE:generatedNativePointwise\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 5 CASK_STATIC\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 6 CASK_STATIC\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Disabling unused tactic source: EDGE_MASK_CONVOLUTIONS\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Disabling unused tactic source: JIT_CONVOLUTIONS\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Engine generation completed in 1.64955 seconds.\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Total per-runner device persistent memory is 0\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Total per-runner host persistent memory is 73616\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Allocated activation device memory of size 33692160\r\nINFO: [Torch-TensorRT TorchScript Conversion Context] - [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +32, now: CPU 0, GPU 888 (MiB)\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - CUDA lazy loading is enabled.\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Calculating Maxima\r\nINFO: [Torch-TensorRT TorchScript Conversion Context] - Starting Calibration.\r\nINFO: [Torch-TensorRT TorchScript Conversion Context] - Post Processing Calibration data in 8.6e-07 seconds.\r\nDEBUG: [Torch-TensorRT TorchScript Conversion Context] - Assigning tensor scales: (Unnamed Layer* 164) [Concatenation]_output using (Unnamed Layer* 164) [Concatenation]_output [\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - 1: Unexpected exception _Map_base::at\r\nTraceback (most recent call last):\r\n File \"/fsx_home/homes/srdecny/meaning/vojta_notebooks/trt_quant_single_v1.py\", line 435, in <module>\r\n quanted = trt_decoder = torch_tensorrt.compile(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/fsx_home/homes/srdecny/meaning/env_bender6_3.11/lib/python3.11/site-packages/torch_tensorrt/_compile.py\", line 185, in compile\r\n compiled_ts_module: torch.jit.ScriptModule = torchscript_compile(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/fsx_home/homes/srdecny/meaning/env_bender6_3.11/lib/python3.11/site-packages/torch_tensorrt/ts/_compiler.py\", line 151, in compile\r\n compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: [Error thrown at core/conversion/conversionctx/ConversionCtx.cpp:169] Building serialized network failed in TensorRT\r\n```\r\n\r\nI don't really know how to proceed from here. 
What does this exception indicate?\r\n\r\nThe compiling code is roughly this:\r\n\r\n```\r\ncalibrator = torch_tensorrt.ptq.DataLoaderCalibrator(\r\n dloader,\r\n cache_file=\"./encoder_calibrator.cache\",\r\n use_cache=False,\r\n algo_type=torch_tensorrt.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,\r\n device=DEVICE\r\n)\r\n\r\ninputs = model.dummy_inputs()\r\ntrace = torch.jit.trace(model, inputs, check_trace=False, strict=False)\r\nsignature = torch_tensorrt.Input(shape=inputs.shape, dtype=inputs.dtype)\r\n\r\ntorch_tensorrt.compile(\r\n trace,\r\n input_signature=signature,\r\n enabled_precisions={torch.float, torch.int8, torch.half},\r\n calibrator=calibrator,\r\n truncate_long_and_double=True,\r\n)\r\n```\r\n\r\n`inputs` is a single float `Tensor` (although very large). Unfortunately, I can't share the model.\r\n\r\n\r\n## What you have already tried\r\n\r\nAll I managed to find online was [this](https://forums.developer.nvidia.com/t/tensorrt-int8-calibration-error-indexerror-map-base-at/169511/5) issue where somone indicates that the calibration dataloader might be empty. However, the following runs without any exception:\r\n\r\n```\r\ndummy_inputs = model.dummy_inputs()\r\ntrace = torch.jit.trace(model, inputs, check_trace=False, strict=False)\r\n\r\ntrace(dummy_inputs) # the traced model still works\r\nfor input in dloader:\r\n trace(input) # the model also works with batches from the calibration dataloader\r\n```\r\n\r\nAdditonally, running the c", "url": "https://github.com/pytorch/TensorRT/issues/2782", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-26T18:29:58Z", "updated_at": "2025-03-27T12:42:10Z", "user": "srdecny" }, { "repo": "pytorch/xla", "number": 6979, "title": "Support non-traceable Custom Ops", "body": "## \ud83d\ude80 Feature\r\n`torch.export` supports exporting blackbox custom ops, however, we fails to export it to StableHLO using `exported_program_to_stablehlo` API \r\nhttps://pytorch.org/tutorials/intermediate/torch_export_tutorial.html#custom-ops\r\n\r\n## Motivation\r\nif we have non-traceable python codes in the custom ops, we can't export it to stablehlo program. 
This means we won't be able to cover as much of the model when exporting through StableHLO.\r\n\r\n## Pitch\r\n\r\nHere is the example pytorch codes\r\n```\r\nimport torch\r\nfrom torch.library import Library, impl, impl_abstract\r\n\r\nm = Library(\"my_custom_library\", \"DEF\")\r\nm.define(\"custom_op(Tensor input) -> Tensor\")\r\n\r\n@impl(m, \"custom_op\", \"CompositeExplicitAutograd\")\r\ndef custom_op(x):\r\n raise Exception(\"DON'T GO HERE\")\r\n return torch.relu(x)\r\n\r\n@impl_abstract(\"my_custom_library::custom_op\")\r\ndef custom_op_meta(x):\r\n return torch.empty_like(x)\r\n\r\nclass CustomOpExample(torch.nn.Module):\r\n def forward(self, x):\r\n x = torch.sin(x)\r\n x = torch.ops.my_custom_library.custom_op(x)\r\n x = torch.cos(x)\r\n return x\r\n\r\nem = torch.export.export(CustomOpExample(), (torch.randn(3, 3),))\r\nem.graph_module.graph.print_tabular()\r\n\r\nfrom torch_xla.stablehlo import exported_program_to_stablehlo\r\nstablehlo_program = exported_program_to_stablehlo(em)\r\nprint(stablehlo_program.get_stablehlo_text())\r\n```\r\nAs you can see, `torch.export` runs fine and give us this fx graph, without caring what is inside `custom_op` impl.\r\n\r\n```\r\nopcode name target args kwargs\r\n------------- --------- ----------------------------------- ------------ --------\r\nplaceholder arg0_1 arg0_1 () {}\r\ncall_function sin aten.sin.default (arg0_1,) {}\r\ncall_function custom_op my_custom_library.custom_op.default (sin,) {}\r\ncall_function cos aten.cos.default (custom_op,) {}\r\noutput output output ((cos,),) {}\r\n```\r\n\r\n`exported_program_to_stablehlo` fails because it runs the `custom_op` and hits `Exception`.\r\n\r\nWhen I comment out the line `raise Exception(\"DON'T GO HERE\")`, `exported_program_to_stablehlo` works fine, however it traces into `custom_op` by converting `relu` to `stablehlo.maximum`,\r\n\r\n```\r\nmodule @IrToHlo.8 attributes {mhlo.cross_program_prefetches = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {\r\n func.func @main(%arg0: tensor<3x3xf32>) -> tensor<3x3xf32> {\r\n %0 = stablehlo.constant dense<0.000000e+00> : tensor<3x3xf32>\r\n %1 = stablehlo.sine %arg0 : tensor<3x3xf32>\r\n %2 = stablehlo.maximum %1, %0 : tensor<3x3xf32>\r\n %3 = stablehlo.cosine %2 : tensor<3x3xf32>\r\n return %3 : tensor<3x3xf32>\r\n }\r\n}\r\n```\r\n\r\nI wonder if we can support exporting blackbox custom ops all the way to StableHLO without executing the op. 
We want to see something like this in the output,\r\n\r\n```\r\nmodule @IrToHlo.8 attributes {mhlo.cross_program_prefetches = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {\r\n func.func @main(%arg0: tensor<3x3xf32>) -> tensor<3x3xf32> {\r\n %0 = stablehlo.constant dense<0.000000e+00> : tensor<3x3xf32>\r\n %1 = stablehlo.sine %arg0 : tensor<3x3xf32>\r\n %2 = stablehlo.custom_call {name = \"my_custom_library.custom_op\"}} : (tensor<3x3xf32>) -> tensor<3x3xf32>\r\n %3 = stablehlo.cosine %2 : tensor<3x3xf32>\r\n return %3 : tensor<3x3xf32>\r\n }\r\n}\r\n```\r\n", "url": "https://github.com/pytorch/xla/issues/6979", "state": "closed", "labels": [ "stablehlo" ], "created_at": "2024-04-26T16:53:16Z", "updated_at": "2024-09-03T04:13:05Z", "comments": 4, "user": "thong3le" }, { "repo": "pytorch/pytorch", "number": 124887, "title": "How to catch NCCL collective timeout in Python", "body": "## Issue description\r\nCurrently, there are several error handling modes ([link](https://github.com/pytorch/pytorch/blob/bc117898f18e8a698b00823f57c19b2d874b93ba/torch/csrc/distributed/c10d/ProcessGroupNCCL.hpp#L114-L126)) for when NCCL collectives timeout. These error handling modes can be set via `TORCH_NCCL_ASYNC_ERROR_HANDLING`/`NCCL_ASYNC_ERROR_HANDLING`. My current observation on single/multi-host CUDA environments using NCCL distributed backend is that when a timeout exception is raised at the C++ level (when `TORCH_NCCL_ASYNC_ERROR_HANDLING=1`), this exception propagates through a few try/catch blocks, but eventually is left unhandled, resulting in the Python processes terminating via SIGABRT/SEGFAULT.\r\n\r\nQuestion: Is it possible without many any modifications to torch to catch the error raised at the C++ level within my Python torch script?\r\n\r\nBased on digging around, I don't think it is possible (open to any suggestions). I've done some experimentation to the PyTorch source code locally by adding some logic based on [Python docs](https://docs.python.org/3.10/extending/extending.html#intermezzo-errors-and-exceptions) to [ProcessGroupNCCL.cpp](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp) such that the exception can be caught in Python. However, `#include <Python.h>` in [ProcessGroupNCCL.cpp](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp) results in `fatal error: Python.h: No such file or directory` when building torch from source. \r\n\r\nQuestion: Are there explicit reasons why I shouldn't add python to NCCL logic?\r\n\r\n## Code example\r\nTODO if needed\r\n\r\n## System Info\r\nTODO if needed\r\n\r\ncc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @tianyu-l @wconstab @yf225 @chauhang @d4l3k", "url": "https://github.com/pytorch/pytorch/issues/124887", "state": "closed", "labels": [ "needs reproduction", "oncall: distributed" ], "created_at": "2024-04-24T22:27:43Z", "updated_at": "2024-05-01T06:16:25Z", "user": "gkroiz" }, { "repo": "pytorch/torchchat", "number": 460, "title": "First generated token not being displayed in chat mode sometimes.", "body": "What is your system prompt? \r\nI am superman\r\nWhat is your prompt? \r\nHow can i save the world?\r\n, up, and away! As Superman, you're uniquely equipped\r\n\r\nSeems like 'up' in up, up, and away is being lost. This happens with most responses. 
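Purely as an illustration of the class of bug I suspect (this is not torchchat's actual code, and `prefill` / `decode_one` are hypothetical names): in a typical streaming loop the prefill pass over the prompt already yields the first new token, and if only the tokens from the incremental decode loop are handed to the printer, that first word is silently dropped.

```python
# Illustrative sketch only -- hypothetical model API, not torchchat's generate loop.
def stream_generate(model, prompt_ids, max_new_tokens, on_token):
    tok = model.prefill(prompt_ids)      # prefill already produces the first new token
    on_token(tok)                        # <-- forgetting this call loses the first word
    for _ in range(max_new_tokens - 1):
        tok = model.decode_one(tok)      # incremental decode, one token at a time
        on_token(tok)
```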
", "url": "https://github.com/pytorch/torchchat/issues/460", "state": "closed", "labels": [], "created_at": "2024-04-24T19:00:40Z", "updated_at": "2024-04-24T22:13:06Z", "comments": 0, "user": "JacobSzwejbka" }, { "repo": "pytorch/executorch", "number": 3303, "title": "How can I convert llama3 safetensors to the pth file needed to use with executorch?", "body": "Fine-tunes of Llama3 usually only have safetensors uploaded. In order to compile a Llama3 model following the tutorial, I need the original pth checkpoint file.\r\n\r\nIs there a way to convert the safetensors to the checkpoint file?", "url": "https://github.com/pytorch/executorch/issues/3303", "state": "closed", "labels": [ "enhancement", "help wanted", "high priority", "triage review" ], "created_at": "2024-04-24T14:20:17Z", "updated_at": "2024-05-30T03:29:23Z", "user": "l3utterfly" }, { "repo": "pytorch/torchchat", "number": 450, "title": "[Feature Request] Support for delegate information in torchchat", "body": "@lucylq can you please add the delegate summary info you added to ET's llama2/export_llama_lib to export_et.py?\r\nCan you add a line or two about XNNPACK delegate (probably just a link to some text on the ET website?) and how to interpret the operator stats in docs/ADVANCED-USERS.md as well?\r\n\r\nThanks so much!\r\n\r\ncc: @iseeyuan ", "url": "https://github.com/pytorch/torchchat/issues/450", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-04-24T06:19:02Z", "updated_at": "2024-04-30T00:29:06Z", "comments": 0, "user": "mikekgfb" }, { "repo": "pytorch/vision", "number": 8394, "title": "Run all torchvision models in one script.", "body": "### \ud83d\ude80 The feature\r\n\r\nIs there a test script that can run models.\r\n\r\n### Motivation, pitch\r\n\r\nHl, i am testing a model migration script from cuda to sycl and i would like to test it on torch vision model set, i would like to know do we have a test script that can run all models in torchvision? like run.py [code](https://github.com/pytorch/benchmark/blob/main/run.py) in torchbenchmark, thanks.\r\n\r\n### Alternatives\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_", "url": "https://github.com/pytorch/vision/issues/8394", "state": "closed", "labels": [], "created_at": "2024-04-24T01:39:23Z", "updated_at": "2024-04-29T10:18:17Z", "comments": 1, "user": "leizhenyuan" }, { "repo": "pytorch/torchchat", "number": 430, "title": "[Feature Request] centralize measurement code", "body": "\r\n@malfet said in https://github.com/pytorch/torchchat/pull/426\r\n\r\nThis code is repeated thrice in this PR. Can we have something like\r\n```\r\nwith report_block-time(\"Time to load model\"):\r\n model = _load_model(builder_args, only_config=True)\r\n device_sync(device=builder_args.device)\r\n```\r\n\r\nMight be a good component for build/utils.py - item for post-release.\r\n\r\ncc: @metascroy ", "url": "https://github.com/pytorch/torchchat/issues/430", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-04-23T22:26:16Z", "updated_at": "2024-05-12T21:32:58Z", "comments": 0, "user": "mikekgfb" }, { "repo": "pytorch/serve", "number": 3103, "title": "How to pass parameters from preprocessing to postprocessing when using micro-batch operations", "body": "### \ud83d\udcda The doc issue\n\nI have a variable that is obtained by parsing the image data in pre-processing, but it is not an input to the model. I want to pass it to post-processing and return it together with the results. 
Like knowing how to pass it from pre-processing to post-processing\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/3103", "state": "closed", "labels": [ "triaged" ], "created_at": "2024-04-23T03:17:05Z", "updated_at": "2024-04-29T02:49:49Z", "user": "pengxin233" }, { "repo": "pytorch/torchchat", "number": 372, "title": "[Release] Documentation is sparse.", "body": "What does \"the following models are supported\" mean? Ostensibly you can load other models like language llama, as long as you have a params.json and they fit into the architectural parameters ? \r\n\r\nthe preamble explains it supports \"Android (Devices that support XNNPACK)\" - how do I know that as a user?\r\n\r\n\"Supporting both GGUF fp32/16 \" - also Q4_0 and Q6_0\r\n\r\n\"Export\r\nCompiles a model and saves it to run later.\" - and how do I do this? it's just presented as here's export, no go figure out what to do with a DSO or a PTE?\r\n\r\nShould we say tested - where do we discuss how to add new models? ", "url": "https://github.com/pytorch/torchchat/issues/372", "state": "closed", "labels": [], "created_at": "2024-04-22T08:17:04Z", "updated_at": "2024-04-25T18:47:09Z", "comments": 1, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 364, "title": "[Release][documentation] Docs Regression: documentation for export_et / install_et broken", "body": "From chat:\r\n\r\n@iseeyuan \r\n> Separate question: When I tried python torchchat.py export stories15M --output-pte-path stories15M.pte, I got Export with executorch requested but ExecuTorch could not be loaded. \r\nIf I run the culprit line, from export_et import export_model as export_model_et, I got this stack, [P1219614729](https://www.internalfb.com/intern/paste/P1219614729/)\r\nIs it a known issue?\r\n\r\n@kimishpatel \r\n> Might be unrelated but did you run scripts/install_et.sh?\r\n\r\n@iseeyuan \r\n> I got \"scripts/install_et.sh: line 61: TORCHCHAT_ROOT: unbound variable\". Should I run it with any argument?\r\n> nvm, I should add prefix of TORCHCHAT_ROOT\r\n\r\n@kimishpatel \r\n> Yeah just export TORCHCHAT_ROOT={pwd} or something. it used to be in readme at https://github.com/pytorch/torchchat/. 
but dont see it anymore\r\n\r\n@kimishpatel Should this go in Executorch documentation or in Torchchat docs?\r\n\r\ncc: @GregoryComer @byjlw @orionr ", "url": "https://github.com/pytorch/torchchat/issues/364", "state": "closed", "labels": [], "created_at": "2024-04-22T04:01:22Z", "updated_at": "2024-04-24T02:32:50Z", "comments": 1, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 357, "title": "runner-et build documentation broken", "body": "\r\nThe runner build information in our documentation is in even worse shape than the ci.\r\n\r\n@shoumikhin \r\n\r\n> anyhow just followed the readme and then tried that cmake command, got [P1219498869 (https://www.internalfb.com/intern/paste/P1219498869/)\r\n\r\n```\r\ncmake -S ./runner-et -B et-build/cmake-out -G Ninja\r\n-- Using ET BUILD DIR: --[et-build]--\r\n-- Using ET BUILD DIR: --[et-build]--\r\n-- The C compiler identification is AppleClang 15.0.0.15000309\r\n-- The CXX compiler identification is AppleClang 15.0.0.15000309\r\n-- Detecting C compiler ABI info\r\n-- Detecting C compiler ABI info - done\r\n-- Check for working C compiler: /Applications/Xcode_15.3.0_15E204a_fb.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped\r\n-- Detecting C compile features\r\n-- Detecting C compile features - done\r\n-- Detecting CXX compiler ABI info\r\n-- Detecting CXX compiler ABI info - done\r\n-- Check for working CXX compiler: /Applications/Xcode_15.3.0_15E204a_fb.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped\r\n-- Detecting CXX compile features\r\n-- Detecting CXX compile features - done\r\n-- TORCHCHAT_ROOT=\"\"\r\n-- Looking for excutorch in /et-build/install/lib/cmake/ExecuTorch\r\nCMake Error at CMakeLists.txt:29 (find_package):\r\n Could not find a package configuration file provided by \"executorch\" with\r\n any of the following names:\r\n executorchConfig.cmake\r\n executorch-config.cmake\r\n Add the installation prefix of \"executorch\" to CMAKE_PREFIX_PATH or set\r\n \"executorch_DIR\" to a directory containing one of the above files. If\r\n \"executorch\" provides a separate development package or SDK, be sure it has\r\n been installed.\r\n-- Configuring incomplete, errors occurred!\r\n```", "url": "https://github.com/pytorch/torchchat/issues/357", "state": "closed", "labels": [], "created_at": "2024-04-21T21:37:26Z", "updated_at": "2024-05-12T21:38:59Z", "comments": 4, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 356, "title": "runner, runner-et and runner-aoti documentation", "body": "Add a description of the runner/run.cpp\r\nhighlight that it's only a few lines of C++ code that need to be different for PyTorch AOTI and PyTorch ET.\r\nMight also check how many lines of llama2.c we avoid having to write by autogenerating llama.{pte,so}\r\n\r\nmaybe @shoumikhin and Hansong (@cbilgin can you put the right git reference for him) can add some text on how to\r\nadapt / re-use the code for integerating LLMs into an app (using their iOS/Android as an example)\r\n\r\ncc: @orionr @metascroy @larryliu0820 @shoumikhin @cbilgin ", "url": "https://github.com/pytorch/torchchat/issues/356", "state": "closed", "labels": [], "created_at": "2024-04-21T21:30:59Z", "updated_at": "2024-04-25T07:57:47Z", "comments": 2, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 354, "title": "[Feature Request] add Dr. 
CI (when this repository goes public)", "body": "Now that we're a real pytorch project and in the pytorch repo, can we have @pytorch-bot build the same summaries for pytorch/torchchat as it does for pytorch/pytorch? I find those exceedingly helpful to navigate.\r\n\r\nhttps://github.com/pytorch/pytorch/pull/124570#issuecomment-2068152908\r\n\r\n\ud83d\udd17 Helpful Links\r\n\ud83e\uddea See artifacts and rendered test results at [hud.pytorch.org/pr/124570](https://hud.pytorch.org/pr/124570)\r\n\ud83d\udcc4 Preview [Python docs built from this PR](https://docs-preview.pytorch.org/pytorch/pytorch/124570/index.html)\r\n\ud83d\udcc4 Preview [C++ docs built from this PR](https://docs-preview.pytorch.org/pytorch/pytorch/124570/cppdocs/index.html)\r\n\u2753 Need help or want to give feedback on the CI? Visit the [bot commands wiki](https://github.com/pytorch/pytorch/wiki/Bot-commands) or our [office hours](https://github.com/pytorch/pytorch/wiki/Dev-Infra-Office-Hours)\r\nNote: Links to docs will display an error until the docs builds have been completed.\r\n\r\n\u23f3 33 Pending, 2 Unrelated Failures\r\nAs of commit https://github.com/pytorch/pytorch/commit/2fe671d38ce391f8de80611f1ccbcf6f3e912faf with merge base https://github.com/pytorch/pytorch/commit/fd90991790b4cdf66a076711844ca620669dcc04 (image):\r\n\r\nFLAKY - The following jobs failed but were likely due to flakiness present on trunk:\r\nThis comment was automatically generated by Dr. CI and updates every 15 minutes.\r\n\r\ncc: @seemethere @malfet ", "url": "https://github.com/pytorch/torchchat/issues/354", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-04-21T21:01:47Z", "updated_at": "2024-05-13T17:29:28Z", "comments": 4, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 347, "title": "[Release] Seems like we get a bit of a garbage output?", "body": "Maybe this has to do with how we leverage the start and end tokens for prompt and response, but I feel like I'm getting garbage output?\r\n\r\nSteps to reproduce:\r\n1. Run `python torchchat.py chat stories15M`\r\n2. Enter `Can you tell me about your day?` as the prompt\r\n3. I then see the following result\r\n```\r\nWhat is your prompt?\r\nCan you tell me about your day?\r\n was very tired and needed to sleep. No matter how hard you tried, I couldn't keep up with you,\" said the voice.\r\nLily was surprised. She had never heard such a voice before. She asked, \"What is the song?\"\r\nThe voice replied, \"It brings you joy. It brings you a star.\"\r\nLily was very excited and wanted to know more about the star. So, she asked the voice, \"What is the song?\"\r\nThe voice said, \"The song might bring you something special. You should hope and you will remember to dream. I will always remember the beautiful song you had heard and tell you.\"\r\nLily smiled in understanding. She thanked the voice and went back to sleep.\r\nThe next morning, Lily woke up and found the beautiful song she had heard earlier. She was so happy and thankful to the friendly voice. Once upon a time, there was a\r\n```\r\n4. Note that the result is clipped (` was...` as the start) and also didn't go to the end token\r\n5. Also wasn't interactive, which I called out in https://github.com/pytorch/torchchat/issues/346\r\n\r\nExpected:\r\n1. 
A reasonable chat with the LLM\r\n\r\ncc @byjlw @mikekgfb \r\n", "url": "https://github.com/pytorch/torchchat/issues/347", "state": "closed", "labels": [], "created_at": "2024-04-21T18:19:45Z", "updated_at": "2024-04-22T21:13:17Z", "comments": 1, "user": "orionr" }, { "repo": "pytorch/torchchat", "number": 346, "title": "[Release] Chat only responds to one line of text?", "body": "I would expect chat to be interactive, but it isn't for me right now.\r\n\r\nSteps to reproduce:\r\n1. Run `python torchchat.py chat stories15M`\r\n2. Enter some text like \"Hello\"\r\n3. Notice that you get a response, but then the command exits\r\n\r\nExpected:\r\n1. I'd be able to continue chatting with the model until I hit Ctrl-C or something\r\n\r\ncc @byjlw @mikekgfb \r\n\r\n", "url": "https://github.com/pytorch/torchchat/issues/346", "state": "closed", "labels": [], "created_at": "2024-04-21T18:16:01Z", "updated_at": "2024-04-25T07:58:45Z", "comments": 2, "user": "orionr" }, { "repo": "pytorch/torchchat", "number": 345, "title": "[Feature request] Allow for GPU and MPS as defaults on machines that support it?", "body": "Given that we won't see good performance without GPU enabled for machines that support CUDA, should we make sure we select `gpu`, `mps` and then `cpu` in that order for `chat` and `generate` commands?\r\n\r\nIs this potentially a blocker for full launch?\r\n\r\ncc @malfet @mikekgfb @dbort @byjlw ", "url": "https://github.com/pytorch/torchchat/issues/345", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-04-21T17:54:06Z", "updated_at": "2024-04-30T06:31:55Z", "comments": 2, "user": "orionr" }, { "repo": "pytorch/torchchat", "number": 344, "title": "[Resolve] Force requirements.txt or README.md to install PyTorch nightlies?", "body": "Given that we won't see good performance with the release version of PyTorch, should we update requirements.txt and/or README.md to have people install nightlies?\r\n\r\nIs this potentially a blocker for full launch?\r\n\r\ncc @malfet @mikekgfb @dbort @byjlw ", "url": "https://github.com/pytorch/torchchat/issues/344", "state": "closed", "labels": [], "created_at": "2024-04-21T17:52:19Z", "updated_at": "2024-04-22T13:58:30Z", "comments": 4, "user": "orionr" }, { "repo": "pytorch/torchchat", "number": 336, "title": "[Mitigated, pending confirmation/closure] Review update documentation for GPTQ", "body": "https://github.com/pytorch/torchchat/edit/main/docs/quantization.md\r\n\r\nPlease update the documentation to include all necessary options and information to use GPTQ with eager execution and export .\r\n\r\ncc: @jerryzh168 @HDCharles ", "url": "https://github.com/pytorch/torchchat/issues/336", "state": "closed", "labels": [], "created_at": "2024-04-21T08:19:38Z", "updated_at": "2024-04-25T17:13:46Z", "comments": 0, "user": "mikekgfb" }, { "repo": "pytorch/pytorch", "number": 124452, "title": "How to use system cuda/cudnn", "body": "### \ud83d\ude80 The feature, motivation and pitch\r\n\r\nI have a machine with cuda/cudnn compatible rocm device.\r\n\r\n```\r\n$ nvcc --version\r\nHIPHSA: Author SUGON\r\nHIP version: 5.4.23453\r\nCuda compilation tools, release 11.8, V11.8.89\r\nclang version 15.0.0 (http://10.15.3.7/dcutoolkit/driverruntime/llvm-project.git 1be90618e508074abc746ab4963d7ad92710d6c5)\r\nTarget: x86_64-unknown-linux-gnu\r\nThread model: posix\r\nInstalledDir: /public/software/compiler/dtk-23.10.1/llvm/bin\r\n```\r\n\r\nThe cuda/cudnn is installed:\r\n\r\n```\r\ncuda]$ ll\r\n\u603b\u7528\u91cf 30\r\ndrwxr-xr-x 3 root root 4096 
12\u6708 19 14:20 bin\r\n-rw-r--r-- 1 root root 634 12\u6708 6 20:31 env.sh\r\ndrwxr-xr-x 3 root root 4096 12\u6708 19 14:20 extras\r\nlrwxrwxrwx 1 root root 28 12\u6708 19 14:21 include -> targets/x86_64-linux/include\r\nlrwxrwxrwx 1 root root 24 12\u6708 19 14:21 lib64 -> targets/x86_64-linux/lib\r\ndrwxr-xr-x 3 root root 4096 12\u6708 19 14:20 nvvm\r\ndrwxr-xr-x 5 root root 4096 12\u6708 19 14:21 samples\r\ndrwxr-xr-x 3 root root 4096 12\u6708 19 14:21 src\r\ndrwxr-xr-x 3 root root 4096 12\u6708 19 14:21 targets\r\ndrwxr-xr-x 2 root root 4096 12\u6708 19 14:21 tools\r\n-rw-r--r-- 1 root root 20 12\u6708 6 20:31 version.txt\r\n```\r\n\r\nI then install pytorch 2.2 with cuda 11.8 by:\r\n```\r\npip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118\r\n```\r\n\r\nBut when I import torch, it can\u2019t find cuda device:\r\n\r\n```\r\n$ python\r\nPython 3.11.8 (main, Feb 26 2024, 21:39:34) [GCC 11.2.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import torch\r\n>>> torch.cuda.device_count()\r\n0\r\n```\r\nI think the problem is pytorch use the cuda/cudnn runtime lib of its own. But I want it to use system cuda.\r\n\r\nI have set CUDA_HOME and LD_LIBRARY_PATH. But it seems not work.\r\n ", "url": "https://github.com/pytorch/pytorch/issues/124452", "state": "closed", "labels": [], "created_at": "2024-04-19T03:23:41Z", "updated_at": "2024-04-19T15:13:57Z", "user": "fancyerii" }, { "repo": "pytorch/vision", "number": 8382, "title": "Regarding IMAGENET1K_V1 and IMAGENET1K_V2 weights", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI found a very strange \"bug\" while I was trying to find similiar instances in a vector database of pictures. The model I used is ResNet50. The problem occurs only when using the` IMAGENET1K_V2` weights, but does not appear when using the legacy `V1` weights (referring to https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/).\r\n\r\nWhen I calculate the **cosine similarity** with `V1` weights for two almost identical pictures I get `values > 0.95`, however when I use `V2` weights with the same pictures I get `values < 0.7`. In layman terms with `V2` identical pictures are not recognized as such anymore. I gave you two example pictures below and the code to reproduce the problem. 
Does somebody have a concise explanation for this behaviour?\r\n\r\nWhen you increase the size in your `transform.resize((x, y))` the problem gradually begins to vanish, however this is not really a good solution since it produces overhead during inference.\r\n\r\nWould be happy for any insights on this topic :)\r\n\r\n\r\n```\r\nfrom torchvision import models\r\nfrom torchvision.models import ResNet50_Weights\r\nimport torchvision.io\r\nfrom torch import nn\r\nimport numpy as np\r\nfrom numpy.linalg import norm\r\n\r\nclass Identity(nn.Module):\r\n def __init__(self):\r\n super(Identity, self).__init__()\r\n\r\n def forward(self, x):\r\n return x\r\n\r\n# Get weights\r\nweights = ResNet50_Weights.IMAGENET1K_V1\r\npreprocess = weights.transforms()\r\n\r\nmodel = models.resnet50(weights=ResNet50_Weights.IMAGENET1K_V1).to(\"cuda:0\")\r\nmodel.fc = Identity()\r\n\r\na = model(preprocess(torchvision.io.read_image(\"/raid/..../datasets/lion/lion_ori_small.jpg\").unsqueeze(dim=0).to(\"cuda:0\"))).cpu().detach().numpy().squeeze()\r\nb = model(preprocess(torchvision.io.read_image(\"/raid/.../datasets/lion/lion_fake_small.jpg\").unsqueeze(dim=0).to(\"cuda:0\"))).cpu().detach().numpy().squeeze()\r\ncosine = np.dot(a,b)/(norm(a)*norm(b))\r\n```\r\n\r\n\r\n![lion_fake](https://github.com/pytorch/vision/assets/138434950/36983e9d-61af-41bf-9e88-793d149c0188)\r\n![lion_ori](https://github.com/pytorch/vision/assets/138434950/095e9a5b-0fbe-49eb-820b-41b500f116a8)\r\n\r\n### Versions\r\n\r\ntorchvision 0.19", "url": "https://github.com/pytorch/vision/issues/8382", "state": "open", "labels": [], "created_at": "2024-04-17T09:30:50Z", "updated_at": "2024-04-17T09:33:44Z", "comments": 0, "user": "asusdisciple" }, { "repo": "pytorch/TensorRT", "number": 2759, "title": "\u2753 [Question] How should the CMakeLists look like for running .ts files in C++? ", "body": "## \u2753 Question\r\n\r\nI am trying to load a .ts model in C++ on Jetson Orin NX. I am running on this container [https://github.com/dusty-nv/jetson-containers/tree/master/packages/pytorch/torch_tensorrt](), version:[r35.3.1].\r\n```#include <torch/script.h> // One-stop header.\r\n #include <torch_tensorrt/torch_tensorrt.h>\r\n \r\n #include <iostream>\r\n #include <memory>\r\n \r\n int main(int argc, const char* argv[]) {\r\n torch::jit::Module module;\r\n try {\r\n // Deserialize the ScriptModule from a file using torch::jit::load().\r\n module = torch::jit::load(\"classificator_float.ts\");\r\n }\r\n catch (const c10::Error& e) {\r\n std::cerr << \"error loading the model\\n\";\r\n return -1;\r\n }\r\n std::cout << \"ok\\n\";\r\n }\r\n ```\r\n However, I am struggling to make CMakeLists.txt which would properly include the tensorrt runtime. 
This is what I currently have:\r\n \r\n ```\r\n cmake_minimum_required(VERSION 3.12 FATAL_ERROR)\r\nproject(custom_ops)\r\n\r\n\r\nexecute_process(\r\n COMMAND python3 -c \"import torch; print(torch.utils.cmake_prefix_path)\"\r\n OUTPUT_VARIABLE PYTORCH_CMAKE_PREFIX_PATH\r\n OUTPUT_STRIP_TRAILING_WHITESPACE\r\n)\r\n\r\nset(CMAKE_PREFIX_PATH \"${PYTORCH_CMAKE_PREFIX_PATH}\")\r\n\r\nfind_package(Torch REQUIRED)\r\n\r\nadd_executable(example-app example-app.cpp)\r\ntarget_include_directories(example-app PRIVATE \"/usr/local/lib/python3.8/dist-packages/torch_tensorrt/include\")\r\ntarget_link_libraries(example-app torch)\r\n\r\nset_property(TARGET example-app PROPERTY CXX_STANDARD 17)\r\n```\r\n \r\nIt builds without issues, however when I try to execute it I get:\r\n![image](https://github.com/pytorch/TensorRT/assets/25930120/cce8809e-49b8-4b8d-9c96-1ec81a7daf15)\r\n\r\nHow should I modify CMakeLists.txt?\r\n\r\n## What you have already tried\r\n\r\nI have looked at these tutorials, but they do not have CMakeLists for running models compiled with TensorRT:\r\n\r\nhttps://pytorch.org/tutorials/advanced/cpp_export.html\r\nhttps://pytorch.org/TensorRT/getting_started/getting_started_with_cpp_api.html\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.0.0+nv23.5\r\n - TensorRT: 8.5.2.2-1\r\n - CPU Architecture: arm64\r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): jetson-container\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.8.10\r\n - CUDA version: 11.4\r\n - GPU models and configuration: Jetson Orin NX\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2759", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-17T09:15:23Z", "updated_at": "2024-04-24T05:39:27Z", "user": "DmytroIvakhnenkov" }, { "repo": "pytorch/torchchat", "number": 211, "title": "[Feature request] Support more GGUF tensor formats", "body": "Today we support parsing for F16, F32, Q4_0, and Q6_K GGUF tensors (see gguf_util.py). We'd like to add support for more GGUF quantization formats in https://github.com/ggerganov/llama.cpp/blob/master/ggml-quants.c.\r\n\r\nAdding support for a new format should be straightforward, using Q4_0 and Q6_K as guides.\r\n\r\nFor Q4_0 and Q6_K, we convert GGUF tensors with a class that represents groupwise quantization, e.g., for Q4_0, we have a class as follows:\r\n\r\n```\r\nclass Q4_0:\r\n groupsize = 32\r\n n_bit = 4\r\n\r\n @staticmethod\r\n def unpack(gguf_tensor: gguf.gguf_reader.ReaderTensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]\r\n```\r\n\r\nThe unpack method parses the gguf tensor and returns a tuple of tensors q, s, and z, where\r\n\r\n* q is an tensor of shape (nr, nc) and of type torch.int32, with values in [0, 2^(n_bit)-1] that represents the unsigned quantized values. It has the shape of the input GGUF tensor, but its shape is reversed to align with how torch stores weights in a state_dict.\r\n\r\n* s is a tensor of shape (nr, ng) and of type torch.float32, where ng = nc // groupsize is the number of groups per row. It represents the scale per group.\r\n\r\n* z is a tensor of shape (nr, ng) and of type torch.float32, where ng = nc // groupsize is the number of groups per row. 
It represents the zero per group.\r\n\r\nTo convert q, s, and z to a float, we do the following calculation:\r\n\r\n```\r\nq_grouped = q.reshape(-1, groupsize)\r\ns = s.reshape(-1, 1) # one per group\r\nz = z.reshape(-1, 1) # one per group\r\n\r\nfloat = q_grouped.sub(2 ** (n_bit - 1)).mul(s).add(z).reshape_as(q)\r\n```\r\n\r\nNote that for Q4_0 and Q6_K, z is a zero vector because these are scale-only quantization schemes.\r\n\r\nTo add a new scheme like Q4_1, we could copy the recipe for Q4_0 nearly exactly. We need to parse the GGUF block and translate the dequantization logic from https://github.com/ggerganov/llama.cpp/blob/master/ggml-quants.c to python using the bit functions in torch.", "url": "https://github.com/pytorch/torchchat/issues/211", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-04-16T01:57:25Z", "updated_at": "2024-04-25T18:13:44Z", "comments": 0, "user": "metascroy" }, { "repo": "pytorch/pytorch", "number": 124090, "title": "Fakeifying a non-leaf subclass where inner tensor is noncontiguous incorrectly produces contiguous tensor.", "body": "Minified repro from internal:\r\n```\r\n def test_dtensor_tensor_is_not_autograd_leaf_but_local_is_noncontiguous(self):\r\n\r\n # Temporarily ignore setUp(), and use rank3 graphs during tracing\r\n dist.destroy_process_group()\r\n fake_store = FakeStore()\r\n dist.init_process_group(\r\n \"fake\", store=fake_store, rank=3, world_size=2\r\n )\r\n mesh = DeviceMesh(self.device_type, [1, 3])\r\n\r\n x = torch.randn(10, 257, 160, requires_grad=True)\r\n x_dt = DTensor.from_local(x, mesh, [_Partial()], run_check=False, shape=(10, 257, 160), stride=(41120, 160, 1))\r\n tmp_dt = x_dt.redistribute(mesh, (Shard(1),))\r\n\r\n from torch._subclasses import FakeTensorMode\r\n m = FakeTensorMode()\r\n tmp_dt_fake = m.from_tensor(tmp_dt)\r\n self.assertEqual(tmp_dt.shape, tmp_dt_fake.shape)\r\n self.assertEqual(tmp_dt.stride(), tmp_dt_fake.stride())\r\n self.assertEqual(tmp_dt._local_tensor.shape, tmp_dt_fake._local_tensor.shape)\r\n # This assert **fails**\r\n # tmp_dt._local_tensor is not contiguous, but tmp_dt_fake._local_tensor advertises as contiguous\r\n self.assertEqual(tmp_dt._local_tensor.stride(), tmp_dt_fake._local_tensor.stride())\r\n```\n\ncc @ezyang @gchanan @zou3519 @kadeng @msaroufim @anijain2305 @chauhang", "url": "https://github.com/pytorch/pytorch/issues/124090", "state": "closed", "labels": [ "high priority", "triaged", "oncall: pt2", "module: pt2-dispatcher" ], "created_at": "2024-04-15T19:11:01Z", "updated_at": "2024-05-01T21:56:06Z", "user": "bdhirsh" }, { "repo": "pytorch/serve", "number": 3086, "title": "How to modify torchserve\u2019s Python runtime from 3.8.0 to 3.10", "body": "### \ud83d\udcda The doc issue\n\nMy handle uses the syntax of Python 3.10, but the log shows Python runtime: 3.8.0. causing the model to fail to run. I would like to ask how to convert its environment to Python 3.10. 
I have introduced the dependencies of the Python 3.10 version into the corresponding dockerfile.\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/3086", "state": "closed", "labels": [ "triaged" ], "created_at": "2024-04-15T05:39:53Z", "updated_at": "2024-04-23T17:26:08Z", "user": "pengxin233" }, { "repo": "pytorch/torchchat", "number": 174, "title": "core dump in ci", "body": "\r\nWe get quite repeatable core dumps with a segmentation fault, e.g., here https://github.com/pytorch/torchat/actions/runs/8676531709/job/23791140949?pr=171\r\n\r\n/home/runner/work/_temp/aa3d75e7-8cff-4789-ba8a-71b211235396.sh: line 4: 2369 Segmentation fault (core dumped) python generate.py --dtype ${DTYPE} --checkpoint-path ${MODEL_PATH} --temperature 0 --dso-path ${MODEL_DIR}/${MODEL_NAME}.so > ./output_aoti\r\n\r\nThis is Python so even if the input programs are broken, it should not core dump but report an error. \r\n\r\nIn terms of actionabile next steps, how do we get the core dump and debug this?\r\n\r\ncc: @malfet @guangy10 @seemethere ", "url": "https://github.com/pytorch/torchchat/issues/174", "state": "closed", "labels": [], "created_at": "2024-04-14T07:39:12Z", "updated_at": "2024-04-25T08:07:14Z", "comments": 2, "user": "mikekgfb" }, { "repo": "pytorch/audio", "number": 3773, "title": "DEVICE AV-ASR WITH EMFORMER RNN-T tutorial : avsr not found", "body": "### \ud83d\udc1b Describe the bug\n\nHi, I am trying the device av-asr tutorial (https://pytorch.org/audio/stable/tutorials/device_avsr.html). When I trying to run the codes in the tutorial, it shows \"no module named avsr\" when executing the following code:\r\n\r\n`from avsr.data_prep.detectors.mediapipe.detector import LandmarksDetector`.\r\n\r\n**I have tried to locate the avsr library, but seems there is no related repository to install or include. Would like to know where can I find this avsr library? Seems it is already been removed from pip / conda?** \r\n\r\nPlus, there is a line of code : `sys.path.insert(0,\u201c/../../examples)`, I would also like to know what is the purpose of this directory? 
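On the `sys.path.insert(0, ".../examples")` line asked about just above (pytorch/audio#3773): my understanding, offered as an assumption rather than a confirmed answer, is that the tutorial's `avsr` package is not published on pip/conda but lives in the `examples/` tree of the torchaudio repository, and the `sys.path.insert` exists only to make that directory importable. Under that assumption, the directory can be any path that contains the `avsr` package from a cloned repo, for example:

```python
import os
import sys

# Hypothetical path: point this at the examples/ directory of a local torchaudio checkout,
# which is assumed to contain the avsr package used by the device AV-ASR tutorial.
sys.path.insert(0, os.path.abspath("./audio/examples"))

from avsr.data_prep.detectors.mediapipe.detector import LandmarksDetector  # noqa: E402
```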
Is it OK to change it with other directory?\r\n\r\n\n\n### Versions\n\n2024-04-13 22:29:37 (1.54 MB/s) - \u2018collect_env.py\u2019 saved [22068/22068]\r\n\r\nCollecting environment information...\r\nPyTorch version: 2.2.1+cu121\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.4 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.22.1\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-6.5.0-27-generic-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: 12.4.99\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080 SUPER\r\nNvidia driver version: 550.54.14\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 46 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 32\r\nOn-line CPU(s) list: 0-31\r\nVendor ID: GenuineIntel\r\nModel name: Intel(R) Core(TM) i9-14900K\r\nCPU family: 6\r\nModel: 183\r\nThread(s) per core: 2\r\nCore(s) per socket: 24\r\nSocket(s): 1\r\nStepping: 1\r\nCPU max MHz: 6000.0000\r\nCPU min MHz: 800.0000\r\nBogoMIPS: 6374.40\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities\r\nVirtualization: VT-x\r\nL1d cache: 896 KiB (24 instances)\r\nL1i cache: 1.3 MiB (24 instances)\r\nL2 cache: 32 MiB (12 instances)\r\nL3 cache: 36 MiB (1 instance)\r\nNUMA node(s): 1\r\nNUMA node0 CPU(s): 0-31\r\nVulnerability Gather data sampling: Not affected\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Retbleed: Not affected\r\nVulnerability Spec rstack overflow: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre 
v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\n\r\nVersions of relevant libraries:\r\n[pip3] flake8==6.", "url": "https://github.com/pytorch/audio/issues/3773", "state": "closed", "labels": [], "created_at": "2024-04-13T14:31:19Z", "updated_at": "2024-04-13T14:37:11Z", "comments": 0, "user": "sfcgta4794" }, { "repo": "pytorch/xla", "number": 6916, "title": "SPMD + Dynamo", "body": "## \u2753 Questions and Help\r\n\r\nIs there a way to get SPMD working with Dynamo/`torch.compile` to reduce the overhead of Pytorch re-tracing the module every time it gets called?", "url": "https://github.com/pytorch/xla/issues/6916", "state": "closed", "labels": [], "created_at": "2024-04-11T01:50:44Z", "updated_at": "2024-04-12T19:50:56Z", "comments": 4, "user": "BitPhinix" }, { "repo": "pytorch/vision", "number": 8372, "title": "Nightly build flaky pytorch/vision / conda-py3_11-cpu builds", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nFlaky issue on pytorch/vision / conda-py3_11-cpu builds. Has been happening for a while now.\r\nMost likely due to corrupt worker environment:\r\n\r\n```\r\n+ __conda_exe run -p /Users/ec2-user/runner/_work/_temp/pytorch_pkg_helpers_8521283920_smoke python3 pytorch/vision/test/smoke_test.py\r\n+ /opt/homebrew/Caskroom/miniconda/base/bin/conda run -p /Users/ec2-user/runner/_work/_temp/pytorch_pkg_helpers_8521283920_smoke python3 pytorch/vision/test/smoke_test.py\r\n/Users/ec2-user/.local/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?\r\n warn(\r\ntorchvision: 0.19.0a0+480eec2\r\nTraceback (most recent call last):\r\n File \"/Users/ec2-user/runner/_work/vision/vision/pytorch/vision/test/smoke_test.py\", line 103, in <module>\r\n main()\r\n File \"/Users/ec2-user/runner/_work/vision/vision/pytorch/vision/test/smoke_test.py\", line 83, in main\r\n print(f\"{torch.ops.image._jpeg_version() = }\")\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/ec2-user/runner/_work/_temp/pytorch_pkg_helpers_8521283920_smoke/lib/python3.11/site-packages/torch/_ops.py\", line 927, in __getattr__\r\n raise AttributeError(\r\nAttributeError: '_OpNamespace' 'image' object has no attribute '_jpeg_version'\r\n\r\nERROR conda.cli.main_run:execute(124): `conda run python3 pytorch/vision/test/smoke_test.py` failed. 
(See above for error)\r\ntorch.cuda.is_available: False\r\n```\r\n\r\nRerun is ususally successful \r\n\r\n### Versions\r\n\r\n0.19.0", "url": "https://github.com/pytorch/vision/issues/8372", "state": "open", "labels": [], "created_at": "2024-04-10T15:48:12Z", "updated_at": "2024-04-10T15:49:09Z", "comments": 1, "user": "atalman" }, { "repo": "pytorch/serve", "number": 3078, "title": "Serve multiple models with both CPU and GPU", "body": "Hi guys, I have a question: Can I serve several models (about 5 - 6 models) using both CPU and GPU inference?", "url": "https://github.com/pytorch/serve/issues/3078", "state": "open", "labels": [ "question", "triaged" ], "created_at": "2024-04-10T15:03:35Z", "updated_at": "2025-01-12T06:29:51Z", "user": "hungtrieu07" }, { "repo": "pytorch/torchx", "number": 875, "title": "Fix Nightly push permissions", "body": "## \u2753 Questions and Help\r\n\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nBefore submitting, please ensure you have gone through our\r\n[documentation](https://pytorch.org/torchx).\r\n\r\n\r\n### Question\r\n<!-- your question here -->\r\n\r\nIs it possible to fix the nightly push permissions? Many pr have been merged into main, but last nightly release was from 2024.2.12 (https://pypi.org/project/torchx-nightly/)\r\n\r\nCurrently nightly push is failing due to:\r\n\r\n```\r\nERROR HTTPError: 403 Forbidden from https://upload.pypi.org/legacy/ \r\n The user 'd4l3k' isn't allowed to upload to project 'torchx-nightly'. \r\n See https://pypi.org/help/#project-name for more information. \r\n```\r\n(https://github.com/pytorch/torchx/actions/runs/8614806013/job/23608993087#step:6:473)\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/875", "state": "closed", "labels": [], "created_at": "2024-04-09T19:38:04Z", "updated_at": "2024-04-10T18:26:16Z", "comments": 6, "user": "ryxli" }, { "repo": "pytorch/torchchat", "number": 77, "title": "[Feature request] Need a format for test reports and how we might track them?", "body": "Maybe we build a table, with something like\r\n\r\n| Model. | Target tested | Platform tested (*) | submitter | test date | link to test transcript |\r\n|--|--|--|--|--|--|\r\n| stories15M | generate, AOTI CPU | Ubuntu x86 24.04 | mikekgfb | 2024-04-06 | [test transcript](https://github.com/pytorch-labs/llama-fast/actions/runs/8586564185/job/23529165773?pr=74) |\r\n\r\n* may need a script to capture system info?", "url": "https://github.com/pytorch/torchchat/issues/77", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-04-07T06:04:24Z", "updated_at": "2024-04-25T18:14:04Z", "comments": 0, "user": "mikekgfb" }, { "repo": "pytorch/torchchat", "number": 70, "title": "[Usability] Clean installation and first example steps in README to standardize on stories15M?", "body": "Looking great! However, I went through the README steps on a new M1 and hit a few issues. It would be ideal if we can make this a clean list of commands that a person could cut and paste all the way through. Here are some thoughts:\r\n\r\nCan we move \"The model definition (and much more!) is adopted from gpt-fast, so we support the same models. To download llama models, go to https://huggingface.co/meta-llama/Llama-2-7b and go through steps to obtain access. 
Then, login with huggingface-cli login\" and those below into the dedicated `Installation` section referenced at https://github.com/pytorch-labs/llama-fast?tab=readme-ov-file#installation and also move it to the top?\r\n\r\nThat section (somewhat matching to https://pytorch.org/executorch/stable/getting-started-setup.html) could include:\r\n\r\n```\r\npython3 -m pip install --user virtualenv\r\npython3 -m virtualenv .llama-fast\r\nsource .llama-fast/bin/activate\r\ngit clone https://github.com/pytorch-labs/llama-fast.git\r\ncd llama-fast\r\ngit submodule sync\r\ngit submodule update --init\r\n\r\n# If we need PyTorch nightlies\r\npip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu\r\n# Otherwise\r\n# pip install torch torchvision\r\n\r\npip install sentencepiece huggingface_hub\r\n# Eventually should be (when Dave has the PyPI packages)\r\n# pip install sentencepiece huggingface_hub executorch\r\n# I had some issues with the pytorch submodule not downloading from ExecuTorch - not sure why\r\n\r\n# To download Llama 2 models, go to https://huggingface.co/meta-llama/Llama-2-7b and go through steps to obtain access.\r\n\r\n# Once approved, login with\r\nhuggingface-cli login\r\n# You will be asked for a token from https://huggingface.co/settings/tokens\r\n\r\n# Set the model and paths for stories15M as an example to test things on desktop and mobile\r\nMODEL_NAME=stories15M\r\nMODEL_PATH=checkpoints/${MODEL_NAME}/stories15M.pt\r\nMODEL_DIR=~/llama-fast-exports\r\n\r\n# Could we make this stories15 instead?\r\nexport MODEL_DOWNLOAD=meta-llama/Llama-2-7b-chat-hf\r\n./scripts/prepare.sh $MODEL_DOWNLOAD\r\npython generate.py --compile --checkpoint-path ${MODEL_PATH} --prompt \"Hello, my name is\" --device {cuda,cpu,mps}\r\n\r\n... Steps for running with AOTI and then ExecuTorch ...\r\n\r\n```\r\n\r\nUnfortunate I get the following error when trying to run generate:\r\n\r\n```\r\ngenerate.py: error: unrecognized arguments: cpu mps\r\n```\r\n\r\nTagging @mikekgfb @byjlw @GregoryComer @cbilgin @dbort @mergennachin \r\n\r\nThank you!\r\n", "url": "https://github.com/pytorch/torchchat/issues/70", "state": "closed", "labels": [], "created_at": "2024-04-06T22:13:18Z", "updated_at": "2024-04-20T01:35:39Z", "comments": 6, "user": "orionr" }, { "repo": "pytorch/torchchat", "number": 69, "title": "[Feature request] Torchchat performance comparison to gpt-fast", "body": "At present, llama-fast is 2x slower than gpt-fast when run out of the box. The root cause is we default to fp32 rather than bf16 (reducing our peak perf potential in a major way).\r\n\r\nI changed the default to fp32 because some mobile targets do not support FP16 (and not at all bfloat16), so this was the least common denominator to run out of the box. \r\n\r\nWill add additional controls for fp data width setting, beyond that we need to decide how to set this up. Do we want the default to run well everywhere, or do we optimize the default for one particular target family. 
\r\n\r\nAlternative might be different defaults for different targets but that too is confusing.\r\n\r\ncc: @chauhang @malfet @guangy10 ", "url": "https://github.com/pytorch/torchchat/issues/69", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-04-06T16:36:03Z", "updated_at": "2024-05-12T21:36:56Z", "comments": 3, "user": "mikekgfb" }, { "repo": "pytorch/tutorials", "number": 2827, "title": "Misleading example for per-sample gradient", "body": "In the example of per-sample gradient, the following line can be misleading since the `predictions` of a net are logits: \r\nhttps://github.com/pytorch/tutorials/blob/08a61b7cae9d00312d0029b1f86a248ec1253a83/intermediate_source/per_sample_grads.py#L49\r\n\r\nThe correct way should be: \r\n``` python\r\nreturn F.nll_loss(F.log_softmax(predictions, dim=-1), targets) \r\n```\r\n\r\nWould appreciate if this can be corrected. ", "url": "https://github.com/pytorch/tutorials/issues/2827", "state": "closed", "labels": [], "created_at": "2024-04-06T00:27:51Z", "updated_at": "2024-04-24T17:52:48Z", "comments": 3, "user": "mingfeisun" }, { "repo": "pytorch/TensorRT", "number": 2730, "title": "\u2753 [Question] Running LayerNorm in fp16", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\nI am trying to convert a transformer model to TRT in fp16 (fp32 works fine \ud83d\ude42). It includes bunch of LayerNorms, all of them have explicit casting of inputs to fp32, i.e:\r\n``` python\r\nclass LayerNormFP32(nn.LayerNorm):\r\n def forward(self, x):\r\n return super().forward(x.float()).type(x.dtype)\r\n``` \r\nI am getting warnings about precisions of the layers:\r\n```\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - Detected layernorm nodes in FP16: %126 : Tensor = aten::layer_norm(%input.9, %127, %self.decoder.layers.0.attn_ln.weight.1, %370, %129, %130), scope: __module.decoder/__module.decoder.layers.0/__module.decoder.layers.0.attn_ln\r\n...\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - Running layernorm after self-attention in FP16 may cause overflow. 
Exporting the model to the latest available ONNX opset (later than opset 17) to use the INormalizationLayer, or forcing layernorm layers to run in FP32 precision can help with preserving accuracy.\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT encountered issues when converting weights between types and that could affect accuracy.\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - Check verbose logs for the list of affected weights.\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - - 2 weights are affected by this issue: Detected FP32 infinity values and converted them to corresponding FP16 infinity.\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - - 27 weights are affected by this issue: Detected subnormal FP16 values.\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - - 3 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.\r\n```\r\nI checked dtype of the mentioned weights in the trace that I pass to `torch_tensorrt.compile` and they are correctly in fp32, even though the warnings state the opposite.\r\n\r\nThe warning suggets two solutions (use INormalizationLayer or force FP32 precisions) but I have no idea ho to achieve it.\r\nThis might be a related: https://github.com/pytorch/TensorRT/pull/2509 (or https://github.com/NVIDIA/TensorRT/issues/3101)\r\n\r\nAny ideas how to resolve or debug this issue?\r\n\r\n## Environment\r\n\r\n- Python 3.11.8\r\n- torch 2.2.1\r\n- torch_tensorrt 2.2.0\r\n- a100\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2730", "state": "open", "labels": [ "question" ], "created_at": "2024-04-05T09:06:28Z", "updated_at": "2025-04-25T12:01:41Z", "user": "Tomiinek" }, { "repo": "pytorch/TensorRT", "number": 2724, "title": "[Question] Model converted using TensorRT is slower than native Pytorch", "body": "Hi All,\r\nWe try to run `resent18` model faster than just running the torchvision version on GPU, therefore we planned to convert and quantize the model using TensorRT. However, we did not witness a performance boost after the conversion. \r\nWe tried to play with the `ir` mode using both `torch_compile` and `dynamo` in addition we tried varying values of `optimization_level` which also did not help. 
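One methodological note on the benchmark in the snippet that follows (pytorch/TensorRT#2724): wrapping each call in host-side `time.time()` mixes Python and launch overhead into the measurement. A hedged alternative, shown as a sketch that reuses the same `model` and input names as the snippet below, times the loop with CUDA events instead:

```python
import torch

@torch.no_grad()
def benchmark_cuda_events(model, inputs, iters=100, warmup=10):
    # Warm-up so one-time initialization and autotuning are excluded from the measurement.
    for _ in range(warmup):
        model(inputs)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        model(inputs)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # milliseconds per call
```

This does not by itself explain why a TensorRT-compiled path would be slower, but it removes one common confounder when comparing the two numbers.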
\r\nAdding here code snip:\r\n```python\r\nimport logging\r\nimport time\r\n\r\nimport torch_tensorrt\r\nimport torchvision\r\nimport torch\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom src.utils.utils import set_logger\r\n\r\nset_logger()\r\n\r\n\r\n@torch.no_grad()\r\ndef benchmark(model, inputs):\r\n times = list()\r\n for i in range(100):\r\n t = time.time()\r\n model(inputs)\r\n torch.cuda.synchronize()\r\n times.append(time.time() - t)\r\n return sum(times) / len(times), times\r\n\r\n\r\nif __name__ == '__main__':\r\n # dataset = torchvision.datasets.STL10(\r\n # root='/tmp/data',\r\n # split='train',\r\n # download=True,\r\n # transform=torchvision.transforms.ToTensor()\r\n # )\r\n # loader = DataLoader(dataset, batch_size=2)\r\n bs = 128\r\n dummy_input = torch.rand(bs, 3, 96, 96).cuda()\r\n model = torchvision.models.resnet18(pretrained=True)\r\n model.fc = torch.nn.Linear(512, 10) # Change the output layer to have 10 classes\r\n model.cuda()\r\n model.eval()\r\n\r\n ir_mode = \"dynamo\"\r\n # ir_mode = \"torch_compile\"\r\n\r\n trt_mod = torch_tensorrt.compile(\r\n model,\r\n ir=ir_mode,\r\n inputs=[torch_tensorrt.Input((bs, 3, 96, 96))],\r\n enabled_precisions={torch.float32},\r\n device=torch.device('cuda:0'),\r\n optimization_level=5,\r\n )\r\n avg_time, times = benchmark(model, dummy_input)\r\n logging.info(f\"Model pytorch 32fp: {avg_time}\")\r\n\r\n avg_time, times = benchmark(trt_mod, dummy_input)\r\n logging.info(f\"Model compiled to TensorRT 32fp: {avg_time}\")\r\n\r\n avg_time, times = benchmark(model.half(), dummy_input.half())\r\n logging.info(f\"Model 16fp: {avg_time}\")\r\n\r\n\r\n trt_mod = torch_tensorrt.compile(\r\n model.half(),\r\n ir=ir_mode,\r\n inputs=[torch_tensorrt.Input((bs, 3, 96, 96), dtype=torch.half)],\r\n enabled_precisions={torch.float16},\r\n device=torch.device('cuda:0'),\r\n optimization_level=5,\r\n )\r\n avg_time, times = benchmark(trt_mod, dummy_input.half())\r\n logging.info(f\"Model compiled to TensorRT 16fp: {avg_time}\")\r\n\r\n```\r\n\r\n**Adding Logs for running with `dynamo`**:\r\n```\r\ntorch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.\r\n _torch_pytree._register_pytree_node(\r\n/opt/conda/envs/faster-whisper/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.\r\n warnings.warn(\r\n/opt/conda/envs/faster-whisper/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. 
You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.\r\n warnings.warn(msg)\r\nINFO:torch_tensorrt.dynamo._compiler:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=0, min_block_size=5, torch_executed_ops=set(), pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=5, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False, device=Device(type=DeviceType.GPU, gpu_id=0), require_full_compilation=False, disable_tf32=False, sparse_weights=False, refit=False, engine_capability=<EngineCapability.DEFAULT: 0>, num_avg_timing_iters=1, dla_sram_size=1048576, dla_local_dram_size=1073741824, dla_global_dram_size=536870912, output_format='exported_program')\r\n\r\nINFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:TRT INetwork construction elapsed time: 0:00:00.304469\r\nINFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:Using optimization level 5\r\nINFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:Build TRT engine elapsed time: 0:00:17.935200\r\nINFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:TRT Engine uses: 113246208 bytes of Memory\r\n/opt/conda/envs/faster-whisper/lib/python3.9/site-packages/torch/_export/exported_program.py:333: UserWarning: Unable to execute the generated python source code from the graph. The graph module will no longer be directly callable, but you can still run the ExportedProgram, and if needed, you can run the graph module eagerly using torch.fx.Interpreter.\r\n warnings.warn(\r\nINFO:root:Model pytorch 32fp: 0.021189916133880615\r\nINFO:root:Model compiled to TensorRT 32fp: 0.02402569055557251\r\nINFO:root:Model ", "url": "https://github.com/pytorch/TensorRT/issues/2724", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-03T18:28:20Z", "updated_at": "2024-04-23T18:41:05Z", "user": "AvivSham" }, { "repo": "pytorch/xla", "number": 6880, "title": "test_train_mp_mnist.py failing for CUDA when GPU_NUM_DEVICES=1", "body": "## \ud83d\udc1b Bug\r\n\r\nFollowing [How to run with PyTorch/XLA:GPU](https://github.com/pytorch/xla/blob/master/docs/gpu.md#how-to-run-with-pytorchxlagpu) to test CUDA PJRT plugin. Running a model hangs when GPU_NUM_DEVICES is set to 1. For >1 values works as expected.\r\n\r\n## To Reproduce\r\n\r\n<!--\r\nIt is really important for the team to have a quick repro, which requires no setup work.\r\n\r\nThe quicker is the repro to be run, the higher the chances the bug will be addressed sooner.\r\n\r\nThe best way to create quick repros is to create a Colab based on the following template:\r\n\r\nhttps://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#using-debug_runpy-to-collect-debug-information\r\n\r\nThings to avoid in repros is the need to download datasets which require setting up keys or other login information, like Kaggle downloads for example.\r\n\r\nAnother example are Colab which mount user's Google Drive storages.\r\n\r\nUsing a fake data generator could be a solution, in case the dataset cannot be easily downloaded without setting up credentials:\r\n\r\nhttps://github.com/pytorch/xla/blob/784b4d4f21751a54be0029a95f47d3896561c2a9/test/test_train_mp_mnist.py#L65\r\n\r\n-->\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. GPU_NUM_DEVICES=1 python test/test_train_mp_mnist.py --fake_data\r\n\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well. 
Or better use the Colab template: https://github.com/pytorch/xla/blob/master/contrib/colab/issue-report.ipynb -->\r\n```\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nI0000 00:00:1712043952.582653 14258 service.cc:145] XLA service 0x556cec57f460 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\r\nI0000 00:00:1712043952.582772 14258 service.cc:153] StreamExecutor device (0): Tesla V100-SXM2-32GB, Compute Capability 7.0\r\nI0000 00:00:1712043952.582792 14258 service.cc:153] StreamExecutor device (1): Tesla V100-SXM2-32GB, Compute Capability 7.0\r\nI0000 00:00:1712043952.586167 14258 se_gpu_pjrt_client.cc:853] Using BFC allocator.\r\nI0000 00:00:1712043952.586310 14258 gpu_helpers.cc:107] XLA backend allocating 25559924736 bytes on device 0 for BFCAllocator.\r\nI0000 00:00:1712043952.586418 14258 gpu_helpers.cc:107] XLA backend allocating 25559924736 bytes on device 1 for BFCAllocator.\r\nI0000 00:00:1712043952.586488 14258 gpu_helpers.cc:147] XLA backend will use up to 8519974912 bytes on device 0 for CollectiveBFCAllocator.\r\nI0000 00:00:1712043952.586563 14258 gpu_helpers.cc:147] XLA backend will use up to 8519974912 bytes on device 1 for CollectiveBFCAllocator.\r\n/usr/local/lib/python3.8/site-packages/torch_xla/core/xla_model.py:105: UserWarning: `devkind` argument is deprecated and will be removed in a future release.\r\n warnings.warn(\"`devkind` argument is deprecated and will be removed in a \"\r\nEpoch 1 train begin 07:45:53\r\n2024-04-02 07:46:03.713411: E external/xla/xla/service/rendezvous.cc:38] This thread has been waiting for `acquire clique for rank 0; clique=devices=[0,1]; stream=0; run_id=0` for 10 seconds and may be stuck. Expected 2 threads to join the rendezvous, but not all of them arrived on time.\r\n2024-04-02 07:46:03.713778: E external/xla/xla/service/rendezvous.cc:38] This thread has been waiting for `acquire clique for rank 1; clique=devices=[0,1]; stream=0; run_id=1` for 10 seconds and may be stuck. Expected 2 threads to join the rendezvous, but not all of them arrived on time.\r\n```\r\n\r\n## Environment\r\n\r\n - Reproducible on XLA backend [CPU/TPU/CUDA]: CUDA\r\n - Image: us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_12.1\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/6880", "state": "closed", "labels": [], "created_at": "2024-04-03T09:58:30Z", "updated_at": "2024-04-08T11:27:27Z", "comments": 3, "user": "mmakevic-amd" }, { "repo": "pytorch/serve", "number": 3065, "title": "improve security doc for model security check ", "body": "### \ud83d\udcda The doc issue\n\nThe model url provided by cx potentially can contain unsafe content. Existing security lacks the summary of guidance to cx to overcome this issue.\n\n### Suggest a potential alternative/fix\n\nTorchServe provides 3 different levels security check to address this issue. TorchServe Security doc can be updated to provide guidance for cx.\r\n- option1: allowed urls\r\n- option2: cx plugin is a flexible solution which allows cx to add the security check they prefer. 
\r\n- option3: prod infra (cloud service or internal company infra) provide AOT security check.", "url": "https://github.com/pytorch/serve/issues/3065", "state": "closed", "labels": [ "documentation", "security" ], "created_at": "2024-04-02T19:14:36Z", "updated_at": "2024-04-17T18:25:42Z", "comments": 0, "user": "lxning" }, { "repo": "pytorch/rl", "number": 2053, "title": "[QUESTION] How to reset only certain nested parts of a key with TensorDictPrimer?", "body": "Hi, I have an observation spec for a multi-agent environment which looks like this:\r\n```\r\nCompositeSpec(\r\n agents: CompositeSpec(\r\n observation: UnboundedContinuousTensorSpec(\r\n shape=torch.Size([100, 2, 14]),\r\n space=None,\r\n device=cuda:0,\r\n dtype=torch.float32,\r\n domain=continuous),\r\n episode_reward: UnboundedContinuousTensorSpec(\r\n shape=torch.Size([100, 2, 1]),\r\n space=None,\r\n device=cuda:0,\r\n dtype=torch.float32,\r\n domain=continuous),\r\n edge_index: UnboundedContinuousTensorSpec(\r\n shape=torch.Size([100, 2, 2, 2]),\r\n space=None,\r\n device=cuda:0,\r\n dtype=torch.float32,\r\n domain=continuous), device=cuda:0, shape=torch.Size([100, 2])),\r\n...\r\n```\r\n\r\nHere, the key (\"agents\", \"edge_index\") is a special field that I populate once upon creating the env and never want to change.\r\n\r\nMy problem is that I would like to add a recurrent policy, which requires tracking the hidden state for each agent. I read the Recurrent DQN [tutorial](https://pytorch.org/rl/tutorials/dqn_with_rnn.html#policy), but the LSTMModule's make_tensordict_primer() does not quite work for me as it is designed for the single-agent case.\r\n\r\nThus I have tried to write a custom TensorDictPrimer transform, like so:\r\n```\r\nexisting_obs_spec = env.observation_spec\r\nhidden_state_spec = UnboundedContinuousTensorSpec(shape=(*env.observation_spec[\"agents\"].shape[:2], cfg.actor.gru.num_layers, cfg.actor.gru.hidden_size), device=cfg.env.device)\r\nexisting_obs_spec[(\"agents\", \"hidden_state\")] = hidden_state_spec\r\nenv.append_transform(TensorDictPrimer(existing_obs_spec))\r\n```\r\n\r\nHowever I notice that on environment resets, this TensorDictPrimer now overwrites all the fields in this spec with 0s. I have attempted to specify the TensorDictPrimer's input keys as solely the (\"agents\", \"hidden_state\") key I want to zero-out, but when I do so, I end up losing the other nested keys under \"agents\" on reset. \r\n\r\nAm I misunderstanding the usage of TensorDictPrimer? Any help would be appreciated.", "url": "https://github.com/pytorch/rl/issues/2053", "state": "closed", "labels": [], "created_at": "2024-04-02T02:53:19Z", "updated_at": "2024-04-18T15:04:25Z", "user": "kfu02" }, { "repo": "pytorch/TensorRT", "number": 2723, "title": "\u2753 [Question] Output shape error in deconvolution layer when model is quantized with pytorch-quantization and using torch-tensorrt via torchscript", "body": "## \u2753 Question\r\n\r\nWhile using a simple model with int8 quantization (pytorch-quantization) when the output layer is deconvolution, torchscript to torch-tensorrt conversion fails with wrong number of output channels. 
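Coming back to the TorchRL multi-agent question above (pytorch/rl#2053): one hedged reading of `TensorDictPrimer` is that it only primes (and zeroes on reset) the keys present in the spec you hand it, so passing a composite spec that contains nothing but the new `("agents", "hidden_state")` entry should leave the other nested keys alone. The sketch below assumes the `env` and `cfg` objects and the `(100, 2)` agent batch shape from the original post, and it may still run into the nested-key behaviour reported there:

```python
from torchrl.data import CompositeSpec, UnboundedContinuousTensorSpec
from torchrl.envs.transforms import TensorDictPrimer

# Build a spec that contains only the key the primer should create.
hidden_state_spec = UnboundedContinuousTensorSpec(
    shape=(100, 2, cfg.actor.gru.num_layers, cfg.actor.gru.hidden_size),
    device=cfg.env.device,
)
primer_spec = CompositeSpec(
    {"agents": CompositeSpec({"hidden_state": hidden_state_spec})}
)
env.append_transform(TensorDictPrimer(primer_spec))
```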
If a conv layer is used instead of deconv, it works without an error.\r\n\r\n## What you have already tried\r\n```ruby\r\nimport torch_tensorrt\r\nimport torch\r\nimport torch.nn as nn\r\nimport torchvision\r\nfrom tqdm import tqdm\r\nfrom torchvision import transforms\r\nfrom pytorch_quantization.tensor_quant import QuantDescriptor\r\nfrom pytorch_quantization import quant_modules\r\nfrom pytorch_quantization import nn as quant_nn\r\nfrom pytorch_quantization import calib\r\nimport torch.nn.functional as F\r\n\r\nclass customodel(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.e11 = nn.Conv2d(3, 64, kernel_size=3, padding=1) \r\n self.e12 = nn.Conv2d(64, 64, kernel_size=3, padding=1) \r\n self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2) \r\n self.upconv4 = nn.ConvTranspose2d(64,64, kernel_size=2, stride=2)\r\n self.d41 = nn.Conv2d(128, 64, kernel_size=3, padding=1)\r\n self.d42 = nn.Conv2d(64, 64, kernel_size=3, padding=1)\r\n self.outconv = nn.ConvTranspose2d(64,10, kernel_size=1) \r\n \r\n def forward(self, x):\r\n x1 = F.relu(self.e11(x))\r\n x2 = F.relu(self.e12(x1))\r\n pool1 = self.pool1(x2)\r\n up4 = self.upconv4(pool1)\r\n merge4 = torch.cat([up4, x2], dim=1) \r\n y = F.relu(self.d41(merge4))\r\n y = F.relu(self.d42(y)) \r\n y = self.outconv(y) \r\n return y\r\n\r\ndef collect_stats(model, data_loader, num_batches):\r\n for name, module in model.named_modules():\r\n if isinstance(module, quant_nn.TensorQuantizer):\r\n if module._calibrator is not None:\r\n module.disable_quant()\r\n module.enable_calib()\r\n else:\r\n module.disable()\r\n for i, (image, _) in tqdm(enumerate(data_loader), total=num_batches):\r\n model(image.cuda())\r\n if i >= num_batches:\r\n break\r\n for name, module in model.named_modules():\r\n if isinstance(module, quant_nn.TensorQuantizer):\r\n if module._calibrator is not None:\r\n module.enable_quant()\r\n module.disable_calib()\r\n else:\r\n module.enable()\r\n\r\ndef compute_amax(model, **kwargs):\r\n for name, module in model.named_modules():\r\n if isinstance(module, quant_nn.TensorQuantizer):\r\n if module._calibrator is not None:\r\n if isinstance(module._calibrator, calib.MaxCalibrator):\r\n module.load_calib_amax()\r\n else:\r\n module.load_calib_amax(**kwargs)\r\n\r\n\r\ndef main():\r\n quant_modules.initialize()\r\n quant_desc_input = QuantDescriptor(calib_method='histogram')\r\n quant_nn.QuantConv2d.set_default_quant_desc_input(quant_desc_input)\r\n quant_nn.QuantConvTranspose2d.set_default_quant_desc_input(quant_desc_input)\r\n quant_nn.QuantLinear.set_default_quant_desc_input(quant_desc_input)\r\n model = customodel().cuda()\r\n train_dataset = torchvision.datasets.CIFAR10(root = './data',\r\n train = True,\r\n transform = transforms.Compose([\r\n transforms.Resize((572,572)),\r\n transforms.ToTensor(),\r\n transforms.Normalize(mean = (0.1307,), std = (0.3081,))]),download = True)\r\n num_samples = int(0.03 * len(train_dataset))\r\n train_dataset_subset = torch.utils.data.Subset(train_dataset, range(num_samples))\r\n train_loader = torch.utils.data.DataLoader(dataset=train_dataset_subset,\r\n batch_size = 12,\r\n shuffle = True)\r\n with torch.no_grad():\r\n collect_stats(model,train_loader, num_batches=10)\r\n compute_amax(model, method=\"percentile\", percentile=99.99)\r\n\r\n quant_nn.TensorQuantizer.use_fb_fake_quant = True\r\n with torch.no_grad():\r\n data = iter(train_loader)\r\n images, _ = next(data)\r\n jit_model = torch.jit.trace(model, images.to(\"cuda\"))\r\n torch.jit.save(jit_model, \"custom.pt\")\r\ndef 
main2():\r\n model = torch.jit.load('/content/custom.pt').eval()\r\n compile_spec = {\"inputs\": [torch_tensorrt.Input([2,3,572,572])],\r\n \"enabled_precisions\":torch.int8,\r\n }\r\n\r\n trt_mod = torch_tensorrt.compile(model, **compile_spec,ir='torchscript')\r\nif __name__ == '__main__':\r\n main()\r\n main2()\r\n```\r\n\r\n```\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - 4: (Unnamed Layer* 53) [Deconvolution]: weight input tensor shape not consistent with the nbOutputMaps in addConvolutionNd/addDeconvolutionNd API. Expected output channels 64 kernel spatial dims [1,1]. But got output channels 10 kernel spatial dims [1,1]\r\nERROR: [Torch-", "url": "https://github.com/pytorch/TensorRT/issues/2723", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-01T15:39:16Z", "updated_at": "2024-05-22T18:51:32Z", "user": "oazeybekoglu" }, { "repo": "pytorch/rl", "number": 2052, "title": "[BUG?] How to handle next with custom environment and check_env_specs()", "body": "I recently starting learning TorchRL so it's possible that this is a misunderstanding on my part and not an actual bug.\r\n\r\n## Describe the bug\r\n\r\nI'm trying to setup a simple spatial arrangement problem using a custom environment. There are N blocks each with an x, y position and a size. My action consists of a block index and x and y deltas. The observation spec is setup to hold the updated positions and sizes of the blocks. The action spec is setup to hold the index and delta. For now, reward is just the distance from center for each block so the network is only trying to learn to move blocks to the center of the space. For state, I include distance from center.\r\n\r\nWhen check_env_specs() is run it fails indicating that the real tensor contains the next state but the fake tensor does not.\r\n\r\n>.venv/lib/python3.11/site-packages/torchrl/envs/utils.py:160: UserWarning: The expected key set and actual key set differ. This will work but with a slower thr$\r\nActual - Expected keys={('next', 'state', 'distance_from_center')}.\r\n warnings.warn(\r\nTraceback (most recent call last):\r\n File \"spatial-arrangement/mwe.py\", line 115, in <module>\r\n check_env_specs(env)\r\n File \".venv/lib/python3.11/site-packages/torchrl/envs/utils.py\", line 634, in check_env_specs\r\n raise AssertionError(\r\nAssertionError: The keys of the specs and data do not match:\r\n - List of keys present in real but not in fake: {('next', 'state', 'distance_from_center')},\r\n - List of keys present in fake but not in real: set().\r\n\r\n\r\n \r\n`check_env_specs` calls `env.fake_tensordict()` to create `fake_tensordict` whose keys are later compared to `real_tensordict` with keys obtained from a rollout. Unless \"next\" is explicitly added the key check will not pass because the created fake_tensordict will only contain observation, reward and done but not next. 
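A possible workaround for the deconvolution output-channel error above (pytorch/TensorRT#2723), offered as a sketch rather than a confirmed fix: for `kernel_size=1` and `stride=1`, a `ConvTranspose2d` computes the same thing as a `Conv2d` with transposed weights, so the failing output layer can be swapped while keeping the network numerically equivalent (the weight copy only matters if pretrained weights must be preserved):

```python
import torch
import torch.nn as nn

deconv = nn.ConvTranspose2d(64, 10, kernel_size=1)  # the layer TRT rejects in the issue
conv = nn.Conv2d(64, 10, kernel_size=1)             # 1x1 / stride-1 equivalent

with torch.no_grad():
    # ConvTranspose2d weights are (in, out, kH, kW); Conv2d expects (out, in, kH, kW).
    conv.weight.copy_(deconv.weight.permute(1, 0, 2, 3))
    conv.bias.copy_(deconv.bias)

x = torch.randn(2, 64, 32, 32)
assert torch.allclose(deconv(x), conv(x), atol=1e-5)
```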
\r\n\r\nhttps://github.com/pytorch/rl/blob/cd540bf96a9c998e89a59382b1961fd8a2bc57f0/torchrl/envs/common.py#L2840-L2845\r\n\r\n## To Reproduce\r\n\r\nThe following is an MWE that shows the failure.\r\n\r\n```python\r\nimport torch\r\nfrom torchrl.envs import EnvBase\r\nfrom torchrl.envs.utils import check_env_specs\r\nfrom torchrl.data import BoundedTensorSpec, CompositeSpec, UnboundedContinuousTensorSpec\r\nfrom tensordict import TensorDict\r\n\r\n\r\nNUM_BLOCKS = 4\r\n\r\nclass BlockArrangementEnv(EnvBase):\r\n\r\n def __init__(self):\r\n super().__init__()\r\n\r\n self.observation_spec = CompositeSpec({\r\n \"observation\": CompositeSpec({\r\n \"positions\": BoundedTensorSpec(\r\n low=0.0,\r\n high=1.0,\r\n shape=torch.Size([NUM_BLOCKS, 2]),\r\n dtype=torch.float32\r\n ),\r\n \"sizes\": BoundedTensorSpec(\r\n low=0.1,\r\n high=1.0,\r\n shape=torch.Size([NUM_BLOCKS, 2]),\r\n dtype=torch.float32\r\n )\r\n }),\r\n })\r\n\r\n self.state_spec = CompositeSpec({\r\n \"state\": CompositeSpec({\r\n \"distance_from_center\": UnboundedContinuousTensorSpec(\r\n shape=torch.Size([NUM_BLOCKS]),\r\n dtype=torch.float32\r\n ),\r\n })\r\n })\r\n\r\n self.action_spec = CompositeSpec({\r\n \"action\": CompositeSpec({\r\n \"index\": BoundedTensorSpec(\r\n low=0,\r\n high=NUM_BLOCKS - 1,\r\n shape=torch.Size([1]),\r\n dtype=torch.int\r\n ),\r\n \"delta\": BoundedTensorSpec(\r\n low=-1.0,\r\n high=1.0,\r\n shape=torch.Size([2]),\r\n dtype=torch.float32\r\n )\r\n })\r\n })\r\n\r\n self.reward_spec = UnboundedContinuousTensorSpec(\r\n shape=torch.Size([NUM_BLOCKS]),\r\n dtype=torch.float32\r\n )\r\n\r\n\r\n def _reset(self, td):\r\n return TensorDict({\r\n \"observation\": {\r\n \"positions\": torch.rand([NUM_BLOCKS, 2]),\r\n \"sizes\": torch.FloatTensor(NUM_BLOCKS, 2).uniform_(0.1, 1.0),\r\n },\r\n \"state\": {\r\n \"distance_from_center\": torch.rand([NUM_BLOCKS]),\r\n }\r\n }, batch_size=[])\r\n\r\n def _step(self, td, **kwargs):\r\n return TensorDict({\r\n \"observation\": {\r\n \"positions\": torch.rand([NUM_BLOCKS, 2]),\r\n \"sizes\": torch.FloatTensor(NUM_BLOCKS, 2).uniform_(0.1, 1.0),\r\n },\r\n \"state\": {\r\n \"distance_from_center\": torch.rand([NUM_BLOCKS]),\r\n },\r\n \"reward\": torch.rand([NUM_BLOCKS]),\r\n \"done\": torch.tensor(False)\r\n }, batch_size=[])\r\n\r\n def _set_seed(self, seed):\r\n pass\r\n\r\n\r\nenv = BlockArrangementEnv()\r\ncheck_env_specs(env)\r\n```\r\n\r\n## Expected behavior\r\n\r\nI'm not expecting that I need to add next explicitly anywhere since it seems", "url": "https://github.com/pytorch/rl/issues/2052", "state": "closed", "labels": [ "bug" ], "created_at": "2024-03-31T22:10:49Z", "updated_at": "2024-04-02T12:00:35Z", "user": "mneilly" }, { "repo": "pytorch/text", "number": 2253, "title": "PyTorch 2.4 is not supported by TorchText", "body": "Working on this for days trying to install torchtext with pytorch 2.4 and no luck. \r\nThe error message I receive:\r\n```\r\n torchtext 0.17.2 depends on torch==2.2.2\r\n The user requested (constraint) torch==2.4.0.dev20240324+cu121\r\n```\r\n\r\nSo it seems impossible to use torchtext with the latest version of pytorch. 
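On the custom-environment spec mismatch above (pytorch/rl#2052), a hedged interpretation of TorchRL's conventions is that everything `_step` writes ends up under `("next", ...)` in a rollout, so it needs to be declared in `observation_spec`, while `state_spec` is intended for non-action inputs to the environment; under that (unconfirmed) reading, `fake_tensordict()` misses `("next", "state", "distance_from_center")` simply because the key lives in `state_spec`. A sketch of the corresponding spec for the MWE, with `distance_from_center` folded into `observation_spec`:

```python
import torch
from torchrl.data import BoundedTensorSpec, CompositeSpec, UnboundedContinuousTensorSpec

NUM_BLOCKS = 4

# Everything returned from _step each step is declared here, so the fake and the real
# tensordicts agree on what appears under ("next", ...).
observation_spec = CompositeSpec({
    "observation": CompositeSpec({
        "positions": BoundedTensorSpec(
            low=0.0, high=1.0, shape=torch.Size([NUM_BLOCKS, 2]), dtype=torch.float32),
        "sizes": BoundedTensorSpec(
            low=0.1, high=1.0, shape=torch.Size([NUM_BLOCKS, 2]), dtype=torch.float32),
    }),
    "state": CompositeSpec({
        "distance_from_center": UnboundedContinuousTensorSpec(
            shape=torch.Size([NUM_BLOCKS]), dtype=torch.float32),
    }),
})
```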
\r\n\r\nIs there any way to solve this issue without having to downgrade to pytorch 2.2.2?", "url": "https://github.com/pytorch/text/issues/2253", "state": "open", "labels": [], "created_at": "2024-03-31T05:07:53Z", "updated_at": "2025-08-11T14:46:49Z", "comments": 2, "user": "grant541" }, { "repo": "pytorch/serve", "number": 3054, "title": "Building frontend from source in docker", "body": "### \ud83d\udcda The doc issue\r\n\r\nNot able to find a way to add frontend modelserver jar as part of docker image to host a torchserve model\r\nI was trying to learn making changes to frontend for a small fix in customizedMetadata on management api. the metadata is not json parsed. Adding the changes did not surface when i hosted the model. \r\n```\r\n[\r\n{\r\n\"modelName\": \"toy-ranker\",\r\n\"modelVersion\": \"2024-03-29-10:36\",\r\n\"modelUrl\": \"toy-ranker.mar\",\r\n\"runtime\": \"python\",\r\n\"minWorkers\": 4,\r\n\"maxWorkers\": 4,\r\n\"batchSize\": 1,\r\n\"maxBatchDelay\": 100,\r\n\"loadedAtStartup\": true,\r\n\"workers\": [\r\n.\r\n.\r\n.\r\n],\r\n\"jobQueueStatus\": {\r\n\"remainingCapacity\": 1000,\r\n\"pendingRequests\": 0\r\n},\r\n\"customizedMetadata\": \"{\\n \\\"input1-name\\\": \\\"something\\\",\\n \\\"input2-name\\\": \\\"something2\\\"\\n}\"\r\n}\r\n]\r\n```\r\n\r\n### Suggest a potential alternative/fix\r\n\r\nDocumentation on docker/ on how to build the frontend from source.\n\ncc @agunapal", "url": "https://github.com/pytorch/serve/issues/3054", "state": "closed", "labels": [ "triaged", "docker" ], "created_at": "2024-03-29T15:54:29Z", "updated_at": "2024-04-04T16:54:06Z", "comments": 0, "user": "harshita-meena" }, { "repo": "pytorch/pytorch", "number": 122959, "title": "RuntimeError with PyTorch's MultiheadAttention: How to resolve shape mismatch?", "body": "### \ud83d\udc1b Describe the bug\n\nI'm encountering an issue regarding the input shape for PyTorch's MultiheadAttention. 
I have initialized MultiheadAttention as follows: \r\n`attention = MultiheadAttention(embed_dim=1536, num_heads=4)`\r\n\r\nThe input tensors have the following shapes:\r\n- query.shape is torch.Size([1, 1, 1536])\r\n- Both key.shape and value.shape are torch.Size([1, 23, 1536])\r\n\r\nHowever, when attempting to use these inputs, I encounter the following error:\r\n`RuntimeError Traceback (most recent call last)\r\nCell In[15], [line 1](vscode-notebook-cell:?execution_count=15&line=1)\r\n----> [1](vscode-notebook-cell:?execution_count=15&line=1) _ = cal_attn_weight_embedding(attention, top_j_sim_video_embeddings_list)\r\n\r\nFile [~/main/reproduct/choi/make_embedding.py:384](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/main/reproduct/choi/make_embedding.py:384), in cal_attn_weight_embedding(attention, top_j_sim_video_embeddings_list)\r\n [381](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/main/reproduct/choi/make_embedding.py:381) print(embedding.shape)\r\n [383](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/main/reproduct/choi/make_embedding.py:383) # attention\u3092\u8a08\u7b97\r\n--> [384](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/main/reproduct/choi/make_embedding.py:384) output, attn_weights = attention(thumbnail, embedding, embedding)\r\n [385](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/main/reproduct/choi/make_embedding.py:385) # attn_weight shape: (1, 1, j+1)\r\n [387](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/main/reproduct/choi/make_embedding.py:387) attn_weights = attn_weights.squeeze(0).unsqueeze(-1) # shape: (j+1, 1)\r\n\r\nFile [~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501), in Module._call_impl(self, *args, **kwargs)\r\n [1496](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1496) # If we don't have any hooks, we want to skip the rest of the logic in\r\n [1497](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1497) # this function, and just call forward.\r\n [1498](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1498) if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n [1499](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1499) or _global_backward_pre_hooks or _global_backward_hooks\r\n 
[1500](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1500) or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> [1501](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501) return forward_call(*args, **kwargs)\r\n [1502](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1502) # Do not call functions when jit is used\r\n [1503](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1503) full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile [~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/activation.py:1205](https://vscode-remote+ssh-002dremote-002bvt003.vscode-resource.vscode-cdn.net/home/wake/main/reproduct/choi/~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/activation.py:1205), in MultiheadAttention.forward(self, query, key, value, key_padding_mask, need_weights, attn_mask, average_attn_weights, is_causal)\r\n [1191](https://vscode-remote+ssh", "url": "https://github.com/pytorch/pytorch/issues/122959", "state": "closed", "labels": [], "created_at": "2024-03-29T09:19:45Z", "updated_at": "2025-01-22T12:08:21Z", "user": "YuyaWake" }, { "repo": "pytorch/pytorch", "number": 122957, "title": "How to export torch.optim.LBFGS using torch.onnx.export", "body": "### \ud83d\ude80 The feature, motivation and pitch\r\n\r\nI have a python code that solve linear equations with torch.optim.LBFGS. And I want to make it work in C++. One posible way is to use libtorch. 
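For the MultiheadAttention shape error above (pytorch/pytorch#122959): the shapes described, `(batch, seq, embed)`, match the `batch_first=True` layout, but the module defaults to `batch_first=False`, where inputs are read as `(seq, batch, embed)` and a query batch of 1 then collides with a key/value batch of 23. Assuming the tensors really are batch-first, a minimal sketch of the fix:

```python
import torch
from torch.nn import MultiheadAttention

# batch_first=True tells the module that inputs are (batch, seq_len, embed_dim).
attention = MultiheadAttention(embed_dim=1536, num_heads=4, batch_first=True)

query = torch.randn(1, 1, 1536)         # (batch=1, target_len=1, embed)
key = value = torch.randn(1, 23, 1536)  # (batch=1, source_len=23, embed)

output, attn_weights = attention(query, key, value)
print(output.shape)        # torch.Size([1, 1, 1536])
print(attn_weights.shape)  # torch.Size([1, 1, 23])
```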
But I wander if I can export it like nn.Module with torch.onnx.export.\r\nHere is my python code:\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\nimport onnxruntime as rt\r\nfrom torch.autograd import Variable\r\n\r\ndef test(jac_t, state):\r\n n_actions = 5\r\n dt = 0.01\r\n\r\n # target = torch.randn(n_actions, 1)\r\n target = torch.tensor([[ 0.0754],\r\n [ 1.2151],\r\n [-1.4920],\r\n [ 1.1642],\r\n [ 0.2289]])\r\n mat = torch.matmul(jac_t, target) * dt + state\r\n\r\n init = torch.randn(n_actions, 1)\r\n init = torch.tensor([[-0.3018],\r\n [ 1.1070],\r\n [-1.4571],\r\n [ 1.0705],\r\n [-0.8479]])\r\n q_dot = Variable(init, requires_grad=True)\r\n v = [q_dot]\r\n optimizer = torch.optim.LBFGS(v)#, lr=0.1)\r\n for i in range(0, 10):\r\n def cost():\r\n optimizer.zero_grad()\r\n next_state = torch.matmul(jac_t, q_dot) * dt + state\r\n d = torch.pow(next_state - mat, 2).sum()\r\n d.backward()\r\n return d\r\n optimizer.step(cost)\r\n d = cost()\r\n if d < 1e-3:\r\n break\r\n return init\r\n\r\nclass Test(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.linear1 = nn.Linear(5, 1)\r\n self.linear2 = nn.Linear(1, 1)\r\n\r\n def forward(self, jac_t, state):\r\n out = self.linear1(jac_t), self.linear2(state)\r\n c = test(jac_t, state)\r\n return out, c\r\n\r\nif __name__ == '__main__':\r\n ttt = Test()\r\n ttt.eval()\r\n n_actions = 5\r\n # jac_t = torch.randn(6, n_actions)\r\n jac_t = torch.tensor([[ 2.0041, 2.2399, -0.0553, 1.4054, 0.2301],\r\n [ 1.4019, -2.3094, -1.0461, 0.7753, 1.0787],\r\n [-0.6338, 0.1553, -1.1531, 1.0613, -0.2952],\r\n [ 0.0541, -0.3652, -0.5361, 2.0200, 0.9431],\r\n [ 0.4075, 1.4435, -1.5067, -0.5096, 0.7448],\r\n [-0.6440, -0.6492, 0.3728, -2.8277, -1.1983]])\r\n # state = torch.randn(6, 1)\r\n state = torch.tensor([[-1.1193],\r\n [ 0.2084],\r\n [-1.4547],\r\n [-1.2416],\r\n [ 0.9738],\r\n [ 1.6379]])\r\n torch.onnx.export(ttt, (jac_t, state), 'ttt.onnx')\r\n\r\n a = ttt.forward(jac_t, state)\r\n print('a', a[-1])\r\n\r\n sess = rt.InferenceSession('ttt.onnx')\r\n b = sess.run(None, {\"jac_t\": jac_t.numpy(), \"state\": state.numpy()})\r\n print('b', b[-1])\r\n```\r\nThe outputs of a and b are the same (both close to the value of target), which meas that the inference with the exported onnx file do some calculation like python code. But if I comment out `init = torch.tensor([[-0.3018]...` and use `init = torch.randn(n_actions, 1)`, the output of b will be wrong.\r\nSo I guess the calculation of the exported onnx module is not dynamic. It records the way to add/multiply to the result, something like Computational Graphs. In fact I have to use `out = self.linear1(jac_t), self.linear2(state)` to put jac_t, state into Computational Graphs.\r\nWhat's the proper way to export torch.optim.LBFGS?\r\n\r\n### Alternatives\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_\n\ncc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar", "url": "https://github.com/pytorch/pytorch/issues/122957", "state": "open", "labels": [ "module: onnx", "module: optimizer", "triaged" ], "created_at": "2024-03-29T08:42:49Z", "updated_at": "2024-07-22T09:48:29Z", "user": "shekmun" }, { "repo": "pytorch/pytorch", "number": 122916, "title": "MPS torch.where() is giving objectively incorrect results, leading to critical calculation errors", "body": "### \ud83d\udc1b Describe the bug\r\n\r\n\r\nI think I have an example of how MPS can get completely different results from CPU. Hopefully the simplicity of this example will be clear and helpful. 
This may be related to a previous issue noted on this forum (#84936).\r\n\r\n```python\r\nimport numpy as np\r\nimport torch\r\nmps_device = torch.device(\"mps\")\r\n\r\n## Create a numpy matrix with many zeros\r\nnp.random.seed(0)\r\nNumpy_Test = np.random.random(200000000)\r\nindices = np.random.choice(np.arange(Numpy_Test.size), replace=False,size=int(Numpy_Test.size * 0.6))\r\nNumpy_Test[indices] = 0\r\nNumpy_Matrix = Numpy_Test.reshape((20000,10000))\r\n\r\n## Get the indices of non-zero values in the matrix, and convert these indices into a numpy array\r\nindices = np.where(Numpy_Matrix != 0)\r\nindices = np.asarray(indices)\r\n\r\n## Use numpy, torch, or a torch.mps object to find where indices[1] == 8000\r\n# Using np.where\r\nnp.where(indices[1] == 8000)[0]\r\narray([ 19165, 27061, 39165, ..., 79979029, 79987021, 79995171])\r\n\r\n# Using torch.where\r\ntorch.where(torch.from_numpy(indices)[1] == 8000)[0]\r\ntensor([ 19165, 27061, 39165, ..., 79979029, 79987021, 79995171])\r\n\r\n# Using torch.where with an NPS object\r\ntorch.where(torch.from_numpy(indices)[1].to(mps_device) == 8000)[0]\r\ntensor([ 19165, 27061, 39165, ..., 79979032, 79987024, 79995168], device='mps:0')\r\n\r\n```\r\nNotice how the first two np.where and torch.where examples give them same results, but when using the tensor converted to MPS we get different results?\r\n\r\nIf I've not made an obvious mistake, this is a clear example of how MPS completely ruins calculations, because in this case, the indexes change, and all downstream calculations become meaningless. \r\n\r\n### Versions\r\n\r\ntorch version v0.2.1 and v0.2.0\r\n\r\ncc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr", "url": "https://github.com/pytorch/pytorch/issues/122916", "state": "closed", "labels": [ "triaged", "module: 64-bit", "module: correctness (silent)", "module: mps" ], "created_at": "2024-03-28T19:56:17Z", "updated_at": "2025-03-01T16:19:53Z", "user": "aradley" }, { "repo": "pytorch/serve", "number": 3051, "title": "Can torchserve return image data?", "body": "### \ud83d\udcda The doc issue\n\nI have a model that outputs byte data of an image. I would like to ask how torchserve should return this type of data?\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/3051", "state": "closed", "labels": [ "triaged" ], "created_at": "2024-03-28T07:24:56Z", "updated_at": "2024-04-02T22:53:39Z", "comments": 1, "user": "pengxin233" }, { "repo": "pytorch/TensorRT", "number": 2720, "title": "\u2753 [Question] compiled ExportedProgram is slower than uncompiled model", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nI tried compiling a few models with `torch_tensorrt.compile(model, inputs, ir='dynamo', ...)` and each one of them was slower than the respective uncompiled model. 
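On the TorchServe question above about returning image data (pytorch/serve#3051): one hedged approach is to have the handler's `postprocess` return encoded image bytes, one entry per request in the batch. The handler class, the assumption that the model emits HxWxC uint8 tensors, and the PNG encoding are all illustrative choices, not details taken from the issue:

```python
import io

from PIL import Image
from ts.torch_handler.base_handler import BaseHandler


class ImageOutputHandler(BaseHandler):
    """Hypothetical handler for a model that produces HxWxC uint8 image tensors."""

    def postprocess(self, inference_output):
        results = []
        for tensor in inference_output:
            img = Image.fromarray(tensor.cpu().numpy())
            buf = io.BytesIO()
            img.save(buf, format="PNG")
            results.append(buf.getvalue())  # bytes entries are sent back as the response body
        return results
```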
I was wondering if I was using torch_tensorrt incorrectly.\r\n\r\n## What you have already tried\r\nA minimum example:\r\n```\r\nimport torch\r\nimport torch_tensorrt\r\nimport time\r\n\r\nmodel = torch.hub.load('pytorch/vision:v0.10.0', 'mobilenet_v2', pretrained=True)\r\nmodel.eval().cuda()\r\n\r\ninputs = [\r\n torch_tensorrt.Input(\r\n shape=torch.Size((1, 3, 480, 640)),\r\n dtype=torch.float,\r\n )\r\n]\r\ntrt_model = torch_tensorrt.compile(model, inputs=inputs, ir='dynamo', truncate_long_and_double=True, enabled_precisions={torch.half}, opt_level='max')\r\n```\r\n\r\nThe inference time was measured as below:\r\n```\r\nx = torch.rand((1, 3, 480, 640)).cuda() - 0.5\r\n\r\n# warm up \r\nfor _ in range(10):\r\n trt_model(x)\r\n\r\ntotal_time = 0\r\nfor _ in range(20):\r\n start = time.time()\r\n out = trt_model(x)\r\n total_time += time.time() - start\r\nprint(total_time / 20)\r\n\r\n```\r\n\r\nOn average the uncompiled model inference time is 4ms and compiled model 9ms.\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.2.1\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip intall torch torch_tensorrt\r\n - Build command you used (if compiling from source): \r\n - Are you using local sources or building from archives:\r\n - Python version: 3.11\r\n - CUDA version: 12.3\r\n - GPU models and configuration: NVIDIA GeForce RTX 4050\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2720", "state": "open", "labels": [ "question" ], "created_at": "2024-03-28T06:08:21Z", "updated_at": "2024-04-02T22:02:01Z", "user": "Qi-Zha0" }, { "repo": "pytorch/text", "number": 2249, "title": "Why torchtext needs to reinstall torch", "body": "Hi team, I am trying to install torchtext with torch 2.2.1-cu121 installed. But once I run `pip install torchtext` the pip will install torch 2.2.1 cpu version for me, is there any way to avoid this?\r\n\r\nThe output log:\r\n```bash\r\nSuccessfully installed torch-2.2.2+cu121 torchaudio-2.2.2+cu121 torchvision-0.17.2+cu121\r\nPS :/scratch/github/scgpt$ pip uninstall torchtext\r\nFound existing installation: torchtext 0.17.1\r\nUninstalling torchtext-0.17.1:\r\n Would remove:\r\n /anaconda/envs/scgpt/lib/python3.11/site-packages/torchtext-0.17.1.dist-info/*\r\n /anaconda/envs/scgpt/lib/python3.11/site-packages/torchtext/*\r\nProceed (Y/n)? 
\r\n Successfully uninstalled torchtext-0.17.1\r\nPS :/scratch/github/scgpt$ pip install -U torchtext --no-cache\r\nCollecting torchtext\r\n Downloading torchtext-0.17.1-cp311-cp311-manylinux1_x86_64.whl.metadata (7.6 kB)\r\nRequirement already satisfied: tqdm in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torchtext) (4.66.2)\r\nRequirement already satisfied: requests in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torchtext) (2.28.1)\r\nCollecting torch==2.2.1 (from torchtext)\r\n Downloading torch-2.2.1-cp311-cp311-manylinux1_x86_64.whl.metadata (26 kB)\r\nRequirement already satisfied: numpy in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torchtext) (1.26.3)\r\nRequirement already satisfied: torchdata==0.7.1 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torchtext) (0.7.1)\r\nRequirement already satisfied: filelock in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (3.9.0)\r\nRequirement already satisfied: typing-extensions>=4.8.0 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (4.8.0)\r\nRequirement already satisfied: sympy in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (1.12)\r\nRequirement already satisfied: networkx in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (3.2.1)\r\nRequirement already satisfied: jinja2 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (3.1.2)\r\nRequirement already satisfied: fsspec in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (2024.2.0)\r\nRequirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.105)\r\nRequirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.105)\r\nRequirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.105)\r\nRequirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (8.9.2.26)\r\nRequirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.3.1)\r\nRequirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (11.0.2.54)\r\nRequirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (10.3.2.106)\r\nRequirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (11.4.5.107)\r\nRequirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.0.106)\r\nRequirement already satisfied: nvidia-nccl-cu12==2.19.3 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (2.19.3)\r\nRequirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.105)\r\nRequirement already satisfied: triton==2.2.0 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) 
(2.2.0)\r\nRequirement already satisfied: urllib3>=1.25 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torchdata==0.7.1->torchtext) (1.26.13)\r\nRequirement already satisfied: nvidia-nvjitlink-cu12 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch==2.2.1->torchtext) (12.4.99)\r\nRequirement already satisfied: charset-normalizer<3,>=2 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from requests->torchtext) (2.1.1)\r\nRequirement already satisfied: idna<4,>=2.5 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from requests->torchtext) (3.4)\r\nRequirement already satisfied: certifi>=2017.4.17 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from requests->torchtext) (2022.12.7)\r\nRequirement already satisfied: MarkupSafe>=2.0 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from jinja2->torch=", "url": "https://github.com/pytorch/text/issues/2249", "state": "open", "labels": [], "created_at": "2024-03-27T11:19:41Z", "updated_at": "2024-03-27T11:23:04Z", "comments": 0, "user": "WhenMelancholy" }, { "repo": "pytorch/TensorRT", "number": 2718, "title": "\u2753 [Question] Can TensorRT load and run torch_tensorrt models directly? ", "body": "Can TensorRT load and run torch_tensorrt models directly? I want to export my pytorch model and deploy it with TensorRT.", "url": "https://github.com/pytorch/TensorRT/issues/2718", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-27T07:46:57Z", "updated_at": "2024-06-07T01:10:43Z", "user": "theNefelibata" }, { "repo": "pytorch/pytorch", "number": 122756, "title": "How to reduce memory usage for large matrix calculations\uff1f", "body": "\r\nA_ = torch.sigmoid(torch.matmul(x, x.t()))\r\nx is the feature of tens of thousands of nodes, the shape is 700,000*8, 8 is the number of features extracted from each node.\r\nCalculation requires several t of memory. How to reduce memory overhead?", "url": "https://github.com/pytorch/pytorch/issues/122756", "state": "open", "labels": [ "triaged" ], "created_at": "2024-03-27T02:06:03Z", "updated_at": "2024-04-01T15:59:16Z", "user": "bowensuuu" }, { "repo": "pytorch/serve", "number": 3045, "title": "gRPC Model Metadata using Open Inference Protocol", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nConsider a system where a feature service fetches model metadata that has information on what feature to fetch and finally infer from the model. In order for me fetch this metadata regarding inputs and outputs I am trying to use the recently added [Open inference protocol](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/resources/proto/open_inference_grpc.proto). \r\nwhile trying to infer using grpcurl, it shows me the name and version of the model. \r\n\r\n```\r\n grpcurl -plaintext -d '{\"name\": \"toy-ranker\"}' -proto serve/frontend/server/src/main/resources/proto/open_inference_grpc.proto localhost:79 org.pytorch.serve.grpc.openinference.GRPCInferenceService/ModelMetadata\r\n{\r\n \"name\": \"toy-ranker\",\r\n \"versions\": [\r\n \"2024-03-26-15:33\"\r\n ]\r\n}\r\n```\r\nwith simple curl, the output is REST API does not add anything[ model custom](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/java/org/pytorch/serve/http/api/rest/OpenInferenceProtocolRequestHandler.java#L58-L66) to it. 
\r\n```\r\n$ curl http://localhost:80/v2\r\n{\r\n \"name\": \"Torchserve\",\r\n \"version\": \"0.10.0\",\r\n \"extenstion\": [\r\n \"kserve\",\r\n \"kubeflow\"\r\n ]\r\n}\r\n\r\n```\r\n\r\nI was trying to understand where it sets this metadata so i can impute it accordingly. I could not find a way for it to set [inputs and outputs](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/java/org/pytorch/serve/grpcimpl/OpenInferenceProtocolImpl.java#L155-L180). \r\n\r\nDo you know of how the metadata is set if so in torchserve.\r\n\r\n### Error logs\r\n\r\nn/a\r\n\r\n### Installation instructions\r\n\r\nDockerfile on top of latest torchserve image\r\n```\r\nfrom pytorch/torchserve-nightly:latest-gpu\r\nENV TS_OPEN_INFERENCE_PROTOCOL oip\r\n\r\n```\r\n\r\n### Model Packaing\r\n\r\nmnist model can be used, independent of model type.\r\n\r\n### config.properties\r\n\r\ninference_address=http://0.0.0.0:8080\r\nmanagement_address=http://0.0.0.0:8081\r\nmetrics_address=http://0.0.0.0:8082\r\nenable_metrics_api=true\r\nmodel_metrics_auto_detect=true\r\nmetrics_mode=prometheus\r\nnumber_of_netty_threads=32\r\njob_queue_size=1000\r\nenable_envvars_config=true\r\nmodel_store=/home/model-server/model-store\r\nworkflow_store=/home/model-server/wf-store\r\nload_models=all\r\n\r\n### Versions\r\n\r\n------------------------------------------------------------------------------------------\r\nEnvironment headers\r\n------------------------------------------------------------------------------------------\r\nTorchserve branch:\r\n\r\n**Warning: torchserve not installed ..\r\n**Warning: torch-model-archiver not installed ..\r\n\r\nPython version: 3.11 (64-bit runtime)\r\nPython executable: /home/hmeena/.pyenv/versions/airflow/bin/python\r\n\r\nVersions of relevant python libraries:\r\nrequests==2.31.0\r\n**Warning: torch not present ..\r\n**Warning: torchtext not present ..\r\n**Warning: torchvision not present ..\r\n**Warning: torchaudio not present ..\r\n\r\nJava Version:\r\n\r\n\r\nOS: CentOS Linux release 7.5.1804 (Core)\r\nGCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)\r\nClang version: 3.4.2 (tags/RELEASE_34/dot2-final)\r\nCMake version: N/A\r\n\r\nEnvironment:\r\nlibrary_path (LD_/DYLD_): :/search/dist/bin:/search/dist/bin\r\n\r\n\r\n### Repro instructions\r\n\r\n[Model from old issue ](https://github.com/pytorch/serve/issues/2951#issuecomment-1984168898)i created can be used. \r\n\r\n### Possible Solution\r\n\r\nTake an input metadata file that can be exposed on both gRPC and [REST](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/java/org/pytorch/serve/http/api/rest/OpenInferenceProtocolRequestHandler.java#L58-L66) metadata endpoints. One example is on the lines of seldon metadata ep that exposes [this information](https://github.com/SeldonIO/seldon-core/blob/master/examples/models/metadata/models/init-metadata/Model.py#L15-L28). ", "url": "https://github.com/pytorch/serve/issues/3045", "state": "open", "labels": [ "OIP" ], "created_at": "2024-03-26T20:52:16Z", "updated_at": "2024-04-02T22:54:39Z", "comments": 1, "user": "harshita-meena" }, { "repo": "pytorch/xla", "number": 6822, "title": "Loading large model (e.g. LLMs)", "body": "## \u2753 Questions and Help\r\n\r\nHi, I'm trying to load large models on TPU-V4 Pod. I saw the discussions in the issues about torchdistX and meta devices. 
I'm wondering is there any good or recommended solution now?\r\n\r\nI am having trouble installing torchdistX with torch/torchXLA 2.2.0 and the LLaMA model I'm loading doesn't have reset_params as well. There are also some discussions on about reset_params in the issues. ", "url": "https://github.com/pytorch/xla/issues/6822", "state": "closed", "labels": [ "question", "dataloading" ], "created_at": "2024-03-26T18:20:16Z", "updated_at": "2025-04-18T12:43:08Z", "user": "tsb0601" }, { "repo": "pytorch/xla", "number": 6820, "title": "Help RoPE fusion ", "body": "## \u2753 Questions and Help\r\nI use the set of tools pytorch/torch xla/openxla, and I want to fuse the operator RoPE into a custom operator, so that the hardware can operate directly. Do you think which layer I should do this better? In the xla pass? Define a RoPE operator in the python layer? Or has the existing framework already implemented this problem of mine?", "url": "https://github.com/pytorch/xla/issues/6820", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-26T11:54:24Z", "updated_at": "2025-04-18T12:45:22Z", "user": "ckfgihub" }, { "repo": "pytorch/serve", "number": 3042, "title": "Custom class handler missing BaseHandler ", "body": "### \ud83d\udcda The doc issue\n\nI believe the docs for a custom class level entry point are missing the base-class `BaseHandler`. If i'm mistaken, please close this issue.\r\n\r\nLink: https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handler-with-class-level-entry-point\n\n### Suggest a potential alternative/fix\n\nReplace `class ModelHandler(object):` with `class ModelHandler(BaseHandler):`\r\n\r\n", "url": "https://github.com/pytorch/serve/issues/3042", "state": "open", "labels": [ "documentation" ], "created_at": "2024-03-26T06:54:31Z", "updated_at": "2024-03-26T20:41:02Z", "comments": 0, "user": "swstack" }, { "repo": "pytorch/examples", "number": 1242, "title": "Pytorch is insufficiently opinionated ", "body": "### \ud83d\udc1b Describe the bug\r\n\r\n## Context\r\nMachine learning models can be trained on secret, synthetic, or biased data to create seemingly authoritative probability estimates used for abusive purposes in legal contexts. In Jessica Logan's case, her 911 call was used as \"evidence\" [(when interpreted by a poorly trained and overly confident detective)](https://www.propublica.org/article/911-call-analysis-jessica-logan-evidence) that she had killed her baby.\r\n\r\nAs an example, an affiliate of Tracy Harpster [1] currently conspires to create an AI startup (Deceptio AI) to launder voodoo (via hidden datasets) to increase the odds of a wrongful conviction based on 911 call audio. Deceptio AI's website is excluded from the Internet Archive; this is a clear indication that Deceptio AI believes itself better off hidden.\r\n\r\nThis practice is [spreading throughout the law enforcement system](https://www.propublica.org/article/911-call-analysis-fbi-police-courts) faster than judges and investigators grounded in reality can possibly counter it. \r\n\r\nThe emergence of a webpage where a LEO can anonymously upload an audio clip and receive a \"guilty\" or \"not guilty\" certificate will crystallize the cost of this issue. 
\r\n\r\nFrom [3]:\r\n> A couple of years ago, he and his two business partners, including one who has decades of experience in statement analysis, decided to join forces and create software that essentially has the brain of a veteran analyst.\r\n> \r\n> \u201cWe've come up with an AI now that can detect deception in a person's written or spoken words,\u201d Carson said.\r\n> \r\n> In simple terms, Carson said a client of the business would go on their website, Deceptio.AI, and companies that purchase the software can input statements and the program determines how truthful the statement is and why it may or may not be the whole truth.\r\n> \r\n> \u201cThen we're going to simply click analyze statement and then what the section does is it gives you a probability of truthfulness,\u201d Carson said when demonstrating how Deceptio works. \u201cNow, what we see is anything that falls 85% and under means it's a highly deceptive statement.\u201d \r\n\r\nFrom [4]:\r\n> He designed the platform for widespread usage and said it requires no training. The co-founders have bootstrapped the startup over the past two years and recently opened their first funding round.\r\n> \r\n> \u201cWe\u2019re seeking the right investors,\u201d Carson explained. \u201cThose are the ones that understand the goal and vision of what a true AI tool like this will mean long-term. We\u2019ve turned down a couple already.\u201d\r\n> \r\n> Carson said the company has also collected and stored a massive amount of human behavioral data, called [pattern of life analysis](https://cambridge-intelligence.com/pattern-of-life-analysis/#:~:text=Pattern%20of%20life%20analysis%20is,large%20quantities%20of%20observed%20data.). He said Deceptio\u2019s database \u201cliterally maps\u201d deceptiveness in the human psyche.\r\n> \r\n> He noted that Cathie Wood, CEO of St. Petersburg-based ARK Invest, frequently mentions [the value of AI entrepreneurs amassing proprietary data](https://stpetecatalyst.com/generative-ai-takes-center-stage-at-synapse-summit/). Carson called Deceptio\u2019s information, which does not include personal information, \u201cexceptionally proprietary.\u201d\r\n> \r\n> \u201cTo our knowledge, there isn\u2019t anyone else on the planet doing what we\u2019re doing,\u201d he added. \u201cLet alone amassing the type of life intelligence data we\u2019re collecting.\u201d\r\n\r\n[1] Statement Analysis by Mark McClish and Tracy Harpster: https://web.archive.org/web/20231004064240/https://www.statementanalysis.com/bio/\r\n[2] Deceptio AI: https://www.deceptio.ai/\r\n[3] https://web.archive.org/web/20240325092556/https://baynews9.com/fl/tampa/news/2023/09/14/deceptio-ai-detects-lies\r\n[4] https://web.archive.org/web/20231002083318/https://stpetecatalyst.com/st-pete-startup-uses-ai-to-detect-lies/\r\n\r\nIf you believe similarly useless discrimators and their corporate reproductive organs will not be created for other abusive purposes, e.g. by banks, landlords, insurance firms, school administrators, university regents, forensic investigators, farmers, miners, doctors, or nurses, you are simply not paying attention.\r\n\r\n* Pytorch version: 2.2.1\r\n* Operating System and version: Ubuntu 20.04\r\n\r\n## Your Environment\r\n* Installed using source? [yes/no]: yes\r\n* Are you planning to deploy it using docker container? 
[yes/no]: no\r\n* Is it a CPU or GPU environment?: Both\r\n* Which example are you using: all\r\n* Link to code or data to repro [if any]:\r\n\r\n## Expected Behavior\r\nPyTorch should prohibit users from creating discriminators or generators intended for use on the real world which are trained with data not representative of the real world.\r\n\r\n## Current Behavior\r\nAnyone with an NVIDIA GPU can download PyTorch and train a model on fake datasets, then re-sell access to the model as an \"investigative service.\"\r\n\r\n## Possible Solution\r\nDestroy PyTorch.\r\n\r\n## Steps to Reproduce\r\nDeceptio.AI\r\n1. https://www.propublica.org/article/911-call-analy", "url": "https://github.com/pytorch/examples/issues/1242", "state": "closed", "labels": [], "created_at": "2024-03-25T09:13:15Z", "updated_at": "2024-03-26T07:17:14Z", "comments": 0, "user": "ghost" }, { "repo": "pytorch/examples", "number": 1241, "title": "RuntimeError in Partialconv-master", "body": "## \ud83d\udcda Documentation\r\n\r\nI am getting this error in signal_handling.py file\r\n<img width=\"426\" alt=\"image\" src=\"https://github.com/pytorch/examples/assets/126889261/0881dd8e-abb2-467f-bab4-818f3f856418\">\r\nthat is in miniconda3/lib/python3.12/site-packages/torch/utils/data/_utils/signal_handling.py\r\n\r\nHow can I fix this?\r\n", "url": "https://github.com/pytorch/examples/issues/1241", "state": "open", "labels": [], "created_at": "2024-03-24T21:37:03Z", "updated_at": "2024-03-26T07:17:49Z", "comments": 1, "user": "shaSaaliha" }, { "repo": "pytorch/PiPPy", "number": 988, "title": "How to use PiPPy for large models that won't fit on one GPU", "body": "Hello, I was wondering If someone could provide an example or some guidance on how to use PiPPy for models, that will not fit on one GPU. I want to run pipeline parallelism with Llama2 70B on a node with multiple a100 gpus. However, if I run the pippy_llama.py example, every process will just try to load the whole model on the GPU corresponding to its local rank, which will cause a CUDA out of memory error. 
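One pattern sometimes suggested for this situation — sketched here under the assumption that per-stage checkpoints are available; it is not a documented PiPPy recipe — is to build the model on the meta device and materialize only the layers each rank actually owns:

```python
import torch
import torch.nn as nn

def build_model():
    # Stand-in for a large model; the real case would be Llama-2 70B from transformers.
    return nn.Sequential(*[nn.Linear(4096, 4096) for _ in range(8)])

# 1) Build the module structure without allocating real weights.
with torch.device("meta"):
    model = build_model()                      # parameters hold shapes only, no storage

# 2) Keep only this rank's pipeline stage (placeholders; normally from torch.distributed).
rank, world_size = 0, 4
per_rank = len(model) // world_size
stage = model[rank * per_rank:(rank + 1) * per_rank]

# 3) Materialize just that stage on the local GPU, then load its weights.
stage = stage.to_empty(device="cuda")          # allocates uninitialized storage for this stage only
# stage.load_state_dict(torch.load("stage_0.pt"))   # hypothetical per-stage checkpoint
```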
", "url": "https://github.com/pytorch/PiPPy/issues/988", "state": "open", "labels": [ "high-pri" ], "created_at": "2024-03-23T15:49:18Z", "updated_at": "2024-03-30T00:08:01Z", "user": "aspiridon0v" }, { "repo": "pytorch/hub", "number": 343, "title": "How to load a custom YOLOv9 model using torch.hub.load()?", "body": "Hi,\r\n\r\nI have trained a YOLOV9-e model on a custom dataset from this repo: [https://github.com/WongKinYiu/yolov9](url)\r\n\r\nNow I tried to load it as below-\r\n![image](https://github.com/pytorch/hub/assets/30830541/cc93dba0-e4be-4ebf-a9f4-b486f3209510)\r\n\r\nBut getting the following error-\r\n![image](https://github.com/pytorch/hub/assets/30830541/b5ab21dd-f839-4b54-bf0f-c01ecbcf5ae2)\r\n\r\nIt says- `RuntimeError: Cannot find callable best.pt in hubconf`\r\n\r\nPlease share the correct way to load the model.", "url": "https://github.com/pytorch/hub/issues/343", "state": "closed", "labels": [], "created_at": "2024-03-22T10:05:20Z", "updated_at": "2024-03-22T10:28:26Z", "user": "dsbyprateekg" }, { "repo": "pytorch/pytorch", "number": 122414, "title": "`torch.compile` should result in an optimized module where `module.training` is the same as in the unoptimized module", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHi, basically what the title says.\r\nThe current behavior of `torch.compile` is imo quite unexpected and can lead users to the false belief that a model is in eval mode.\n\n### Alternatives\n\nAlternatively, it would be a good idea to add to the documentation of `torch.compile` that the resulting optimized module always is in train mode.\n\n### Additional context\n\n_No response_\n\ncc @ezyang @msaroufim @bdhirsh @anijain2305 @zou3519 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng", "url": "https://github.com/pytorch/pytorch/issues/122414", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: dynamo", "dynamo-triage-june2024" ], "created_at": "2024-03-21T15:45:52Z", "updated_at": "2024-07-25T17:43:12Z", "user": "uwu-420" }, { "repo": "pytorch/pytorch", "number": 122303, "title": "How to exclude some modules from quantization?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nHi there, I am newcomer to model quantization. I have some problems and hope to get some advice and help from community. Thanks in advance!\r\n\r\nHere is a demo model:\r\n\r\n```python\r\nclass DemoModel(nn.Module):\r\n def __init__(self):\r\n super(DemoModel, self).__init__()\r\n self.conv = nn.Conv2d(3, 3, kernel_size=(3, 3))\r\n self.bn = nn.BatchNorm2d(3)\r\n self.fc = nn.Linear(3 * 26 * 26, 10)\r\n # comment following code if we use fx mode\r\n self.quant = torch.quantization.QuantStub()\r\n self.dequant = torch.quantization.DeQuantStub()\r\n\r\n def forward(self, x):\r\n x = self.quant(x)\r\n x = self.conv(x)\r\n x = self.bn(x)\r\n x = torch.reshape(x, (1, -1))\r\n output = self.fc(torch.relu(x))\r\n output = self.dequant(output)\r\n return output\r\n```\r\nI want to quantize it and export it as onnx format I got error messge:\r\n\r\n```\r\nExporting the operator 'quantized::batch_norm2d' to ONNX opset version 17 is not supported\r\n```\r\nSo there are compatibility issues between onnx and quantized op in pytroch. Is there any way to exclude some modules from quanzation and let rest modules be quantized ? `nn.batchNorm2d` here is a example. 
I found one solution is to fuse `nn.Conv2d` and `nn.BatchNorm2D`, like:\r\n\r\n```\r\ntorch.quantization.fuse_modules(model, ['conv', 'bn'], inplace=True)\r\n```\r\nOr try to use FX mode intsead because it fuses conv and bn automatically. Howerver, I encountered another problem, FX mode would also fuse Linear and relu and then similiar error comes again :(. \r\n```\r\nExporting the operator 'quantized::linear_relu' to ONNX opset version 17 is not supported.\r\n```\r\n\r\nThe demo model **not shown here**, just add linear and relu, quantize it in FX model then export to onnx format should reproduce it. \r\n\r\nIn all, my question are:\r\n1. **Is there any way to exclude some modules from quanzation and let rest modules be quantized?** \r\n \r\n` torch.quantization.prepare` provides a `allow_list` , if I filter out `nn.BatchNorm2d` , I got error message:\r\n\r\n```\r\nAttributeError: 'BatchNorm2d' object has no attribute 'activation_post_process'\r\n```\r\n2. **How to set some modules in FX mode not be fused?** \r\n3. **why `object_type` in qconfig_dict does not work?**\r\nAccording to a SO [answer](https://stackoverflow.com/questions/72730969/pytorch-eager-quantization-skipping-modules#comment128471477_72733206), using dict like :\r\n```\r\n qconfig_dict = {\"\": torch.quantization.get_default_qconfig(backend),\r\n \"object_type\": [\r\n (torch.nn.Linear, None),\r\n (torch.nn.ReLU, None)\r\n ]}\r\n```\r\ncould skip quantize `torch.nn.Linear` and `torch.nn.ReLU` but seems does not work , I still got `Exporting the operator 'quantized::linear_relu' to ONNX`.\r\n\r\n4. **How to quantize a sophiscated model (e.g. using other model as backbone)?**\r\nIn this scenario, if I choose eager mode , do I need to insert `quantStub` and `deQuantStub` to every backbone modules? 
If it's true, so FX mode is a better choose to quantize complex model, right ?\r\n### Versions\r\n\r\nversion: 2.1.1+cu118\r\n\r\n\r\ncc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel", "url": "https://github.com/pytorch/pytorch/issues/122303", "state": "open", "labels": [ "oncall: quantization" ], "created_at": "2024-03-20T12:26:33Z", "updated_at": "2024-03-27T08:22:57Z", "user": "stricklandye" }, { "repo": "pytorch/xla", "number": 6778, "title": "Spmd pre-training llama2 multi-machine training so slow?", "body": "spmd has a normal training speed using eight blocks on a single machine, but the communication overhead increases rapidly in the case of multiple machines\r\ndevice is\uff1a\r\ngpu\uff1aA100 * 8 * 2\r\nspmd strategy is:\r\n```\r\nfor name, param in model.named_parameters():\r\n shape = (num_devices,) + (1,) * (len(param.shape) - 1)\r\n mesh = xs.Mesh(device_ids, shape)\r\n xs.mark_sharding(param, mesh, range(len(param.shape)))\r\n```\r\nprofile result is\uff1a\r\n \r\n![image](https://github.com/pytorch/xla/assets/62137145/6cba5403-e5ae-44ba-9554-acfa922a2549)\r\n", "url": "https://github.com/pytorch/xla/issues/6778", "state": "closed", "labels": [ "performance", "xla:gpu", "distributed" ], "created_at": "2024-03-20T03:31:29Z", "updated_at": "2025-04-18T12:49:34Z", "comments": 23, "user": "mars1248" }, { "repo": "pytorch/torchx", "number": 849, "title": "Missing quotes on torchx install command.", "body": "## \ud83d\udcda Documentation\r\n\r\nI was running the [TorchX Quickstart](https://pytorch.org/torchx/latest/quickstart.html) tutorial and I would get a message saying that the package couldn't be found.\r\n\r\n![image](https://github.com/pytorch/pytorch/assets/19861348/04af51ca-945f-4bc0-ae9a-a098ade6ddf3)\r\n\r\nAfter looking around, I realized the command would only work with quotes. I'll be opening a PR to add the quotes to the documentation.\r\n\r\n## Link\r\n[<!-- link to the problematic documentation -->](https://pytorch.org/torchx/latest/quickstart.html)\r\n\r\n## What does it currently say?\r\n`pip install torchx[dev]`\r\n\r\n## What should it say?\r\n`pip install \"torchx[dev]\"`\r\n\r\n## Why?\r\nBecause, otherwise, it says the package cannot be found.\r\n\r\n\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/849", "state": "closed", "labels": [], "created_at": "2024-03-18T23:56:44Z", "updated_at": "2024-03-20T15:06:34Z", "comments": 2, "user": "mdevino" }, { "repo": "pytorch/pytorch", "number": 122079, "title": "how to find the source code of the torch.linalg.eigh", "body": "### \ud83d\udcda The doc issue\n\nwhat is the iteration process of the torch.linalg.eigh\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/122079", "state": "closed", "labels": [], "created_at": "2024-03-18T07:50:05Z", "updated_at": "2024-03-19T02:27:30Z", "user": "liweiyangv" }, { "repo": "pytorch/xla", "number": 6766, "title": "How to implement parrallel training across TPU device with XLA 2.X", "body": "I found the latest opensource LLM from google: Gemma has two version of model structure.\r\n\r\n1. https://github.com/google/gemma_pytorch/blob/main/gemma/model_xla.py\r\n2. https://github.com/google/gemma_pytorch/blob/main/gemma/model.py\r\n\r\nwhere the `model_xla` version with `run_xla.sh` and `xla_model_parallel.py` seems used `XLA` 1.X version with modified Transformer network. 
\r\n\r\nBeside, I found the main modified part is related to replace official `nn.Linear` part with:\r\n\r\n```\r\nColumnParallelLinear\r\nParallelEmbedding\r\nRowParallelLinear\r\n```\r\n\r\nDo we still need to perform such job to fit the our model to be trained on `XLA` device?\r\n\r\nOr there existed such hooks inside the XLA lib and we just do similar thing like [FSDP](https://pytorch.org/xla/release/2.2/index.html#example-training-scripts-on-mnist-and-imagenet) introduced \ud83e\udd17,\r\n\r\n```\r\n fsdp_wrap = lambda m: FSDP(\r\n m,\r\n compute_dtype=getattr(torch, FLAGS.compute_dtype),\r\n fp32_reduce_scatter=FLAGS.fp32_reduce_scatter,\r\n flatten_parameters=FLAGS.flatten_parameters,\r\n shard_param_on_dim_0=FLAGS.shard_param_on_dim_0,\r\n pin_layout_in_collective_ops=FLAGS.pin_layout_in_collective_ops,\r\n auto_wrap_policy=auto_wrap_policy,\r\n auto_wrapper_callable=auto_wrapper_callable)\r\n\r\nmodel = fsdp_wrap(model)\r\n```\r\n\r\nCan we have a doc to have directly implement [Gemma](https://github.com/google/gemma_pytorch/blob/main/gemma/model.py) with XLA `pjrt` feature without heavy modification as [Gemma_XLA](https://github.com/google/gemma_pytorch/blob/main/gemma/xla_model_parallel.py) did?\r\n\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/6766", "state": "closed", "labels": [ "question", "distributed", "xla:tpu" ], "created_at": "2024-03-18T06:34:38Z", "updated_at": "2025-04-18T13:50:47Z", "user": "Mon-ius" }, { "repo": "pytorch/xla", "number": 6760, "title": "xla_model.RateTracker doesn't have a docstring and its behavior is subtle and potentially confusing.", "body": "## \ud83d\udcda Documentation\r\n\r\nThe `RateTracker` class in https://github.com/pytorch/xla/blob/fe3f23c62c747da30595cb9906d929b926aae6e4/torch_xla/core/xla_model.py doesn't have a docstring. This class is [used in lots of tests](https://github.com/search?q=repo%3Apytorch%2Fxla%20RateTracker&type=code), including [this one](https://github.com/pytorch/xla/blob/master/test/test_train_mp_mnist.py) that is referenced from the [main documentation](https://pytorch.org/xla/release/2.2/index.html), so new PyTorch/XLA users may see it as a natural and supported way to track and report training efficiency metrics.\r\n\r\n`RateTracker`'s behavior is subtle and potentially confusing, since tracking throughput can involve measuring data at different granularities (e.g. batch, example, or, for LLMs, tokens) and reporting per-accelerator, per-host, or globally. Here is what I think the answers to these are; please correct me.\r\n\r\nFollowing the examples in those tests, (where the batch size is added to the tracker at each training step), I think that `rate` measures the examples (not tokens) per second seen during the last batch (specifically, since the last time `.rate()` was called) and `global_rate` measures the same for the whole training run. Therefore the expectation is that global_rate will be slow in the beginning but after compilation and other one-time costs it will rise and typically approach the per-batch training rate, though the latter may vary.\r\n\r\nIn terms of what granularity of devices the metrics reflect, for SPMD, I think these will be both global metrics (for the whole training job), but for other distribution strategies, I think they're per-device.\r\n\r\nIs that right? 
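For readers hitting the same question, here is roughly how the referenced MNIST test uses the class, with placeholder values standing in for its flags and training loop; the method names are as I read them in xla_model.py, so treat this as an unverified sketch:

```python
import time
import torch_xla.core.xla_model as xm

tracker = xm.RateTracker()
batch_size, log_steps = 128, 10              # placeholders for the test's flags

for step in range(100):                      # stands in for the loop over training batches
    time.sleep(0.01)                         # stands in for forward/backward/optimizer work
    tracker.add(batch_size)                  # count examples handled by this process
    if step % log_steps == 0:
        # rate(): examples/sec since the previous rate() call (recent throughput)
        # global_rate(): examples/sec over the whole run, so one-time compile cost drags it down early
        print(f"rate={tracker.rate():.1f}  global_rate={tracker.global_rate():.1f}")
```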
\r\n", "url": "https://github.com/pytorch/xla/issues/6760", "state": "closed", "labels": [ "usability" ], "created_at": "2024-03-15T17:23:46Z", "updated_at": "2025-04-18T13:52:01Z", "comments": 10, "user": "ebreck" }, { "repo": "pytorch/xla", "number": 6759, "title": "Do I have to implement PjRtLoadedExecutable::GetHloModules when `XLA_STABLEHLO_COMPILE=1` ?", "body": "## \u2753 Questions and Help\r\n\r\nHi, I'm from a hardware vendor and we want to implement a PJRT plugin for our DSA accelerator. We have our own MLIR-based compiler stack and it takes StableHLO as the input IR. \r\n\r\nI'm new to PJRT, according to the [description](https://opensource.googleblog.com/2024/03/pjrt-plugin-to-accelerate-machine-learning.html), PJRT API is supposed to be compiler-agnostic and should not assume a PJRT plugin's compiler backend must be XLA. However, in `PyTorch/XLA`'s PJRT runtime: `PjRtComputationClient::Compile`, it calls `PjRtLoadedExecutable::GetHloModules` (which we left unimplemented in our `PjRtLoadedExecutable` implementation) and expects returning of valid `xla::HloModule`:\r\n\r\nhttps://github.com/pytorch/xla/blob/19b83830ac4ee3a39d99abaf154f485c2399f47a/torch_xla/csrc/runtime/pjrt_computation_client.cc#L585\r\n\r\nMy question is, does `PyTorch/XLA`'s `PjRtComputationClient` requires these `xla::HloModule` for execution? If not, when user set `XLA_STABLEHLO_COMPILE=1`, `PyTorch/XLA` should not expect the compiled `PjRtLoadedExecutable` has anything to do with XLA/HLO related stuff.\r\n", "url": "https://github.com/pytorch/xla/issues/6759", "state": "open", "labels": [ "question", "stablehlo" ], "created_at": "2024-03-15T10:59:36Z", "updated_at": "2025-04-18T13:58:24Z", "user": "Nullkooland" }, { "repo": "pytorch/vision", "number": 8317, "title": "position, colour, and background colour of text labels in draw_bounding_boxes", "body": "### \ud83d\ude80 The feature\r\n\r\nText labels from `torchvision.utils.draw_bounding_boxes` are currently always inside the box with origin at the top left corner of the box, without a background colour, and the same colour as the bounding box itself. These are three things that would be nice to control.\r\n\r\n### Motivation, pitch\r\n\r\nThe problem with the current implementation is that it makes it hard to read the label, particularly when the bounding box is filled (because the text has the same colour as the filling colour and is placed inside the box.\r\n\r\nFor example, this is the results from the current implementation:\r\n\r\n![intro-detection-R52854-JRL231711104-coco](https://github.com/pytorch/vision/assets/916140/783274e1-080f-45e7-af9a-68051c0f7e68)\r\n\r\nMoving the label to outside the box already makes things better:\r\n\r\n![intro-detection-R52854-JRL231711104](https://github.com/pytorch/vision/assets/916140/28de5dec-55e9-4293-8288-79149b45ea5c)\r\n\r\nBut by controlling those three things (placement of label, background colour behind the label, and text colour) one could fit to whatever they have. For what is worth, in the original issue for this feature, the only example image had labels outside the box, text coloured different from the box (black), and background of the same colour as the box. 
See https://github.com/pytorch/vision/issues/2556#issuecomment-671344086\r\n\r\nI'm happy to contribute this but want to know if this will be accepted and with what interface.", "url": "https://github.com/pytorch/vision/issues/8317", "state": "open", "labels": [], "created_at": "2024-03-14T13:50:17Z", "updated_at": "2025-04-17T13:28:39Z", "comments": 9, "user": "carandraug" }, { "repo": "pytorch/serve", "number": 3026, "title": "Exception when using torchserve to deploy hugging face model: java.lang.InterruptedException: null", "body": "### \ud83d\udc1b Describe the bug\n\nI followed the tutorial as https://github.com/pytorch/serve/tree/master/examples/Huggingface_Transformers\r\n\r\nFirst,\r\n```\r\npython Download_Transformer_models.py\r\n```\r\n\r\nThen,\r\n```\r\ntorch-model-archiver --model-name BERTSeqClassification --version 1.0 --serialized-file Transformer_model/pytorch_model.bin --handler ./Transformer_handler_generalized.py --extra-files \"Transformer_model/config.json,./setup_config.json,./Seq_classification_artifacts/index_to_name.json\"\r\n```\r\n\r\nFinally,\r\n```\r\n torchserve --start --model-store model_store --models my_tc=BERTSeqClassification.mar --ncs\r\n```\r\n\r\nThe system cannot start as usualy, it gives out the error log, throwing an Exception\r\n```\r\njava.lang.InterruptedException: null\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1679) ~[?:?]\r\n at java.util.concurrent.LinkedBlockingDeque.pollFirst(LinkedBlockingDeque.java:515) ~[?:?]\r\n at java.util.concurrent.LinkedBlockingDeque.poll(LinkedBlockingDeque.java:677) ~[?:?]\r\n at org.pytorch.serve.wlm.Model.pollBatch(Model.java:367) ~[model-server.jar:?]\r\n at org.pytorch.serve.wlm.BatchAggregator.getRequest(BatchAggregator.java:36) ~[model-server.jar:?]\r\n at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:194) [model-server.jar:?]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]\r\n at java.lang.Thread.run(Thread.java:833) [?:?]\r\n```\r\n\r\nI tried curl to check the model\r\n\r\n```\r\nroot@0510f3693f42:/home/model-server# curl http://127.0.0.1:8081/models \r\n{\r\n \"models\": []\r\n}\r\n```\n\n### Error logs\n\n2024-03-14T07:34:24,938 [INFO ] epollEventLoopGroup-5-17 org.pytorch.serve.wlm.WorkerThread - 9015 Worker disconnected. 
WORKER_STARTED\r\n2024-03-14T07:34:24,938 [INFO ] W-9015-my_tc_1.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9015.\r\n2024-03-14T07:34:24,938 [DEBUG] W-9015-my_tc_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED\r\n2024-03-14T07:34:24,938 [DEBUG] W-9015-my_tc_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.\r\njava.lang.InterruptedException: null\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1679) ~[?:?]\r\n at java.util.concurrent.LinkedBlockingDeque.pollFirst(LinkedBlockingDeque.java:515) ~[?:?]\r\n at java.util.concurrent.LinkedBlockingDeque.poll(LinkedBlockingDeque.java:677) ~[?:?]\r\n at org.pytorch.serve.wlm.Model.pollBatch(Model.java:367) ~[model-server.jar:?]\r\n at org.pytorch.serve.wlm.BatchAggregator.getRequest(BatchAggregator.java:36) ~[model-server.jar:?]\r\n at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:194) [model-server.jar:?]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]\r\n at java.lang.Thread.run(Thread.java:833) [?:?]\r\n2024-03-14T07:34:24,938 [DEBUG] W-9015-my_tc_1.0 org.pytorch.serve.wlm.WorkerThread - W-9015-my_tc_1.0 State change WORKER_STARTED -> WORKER_STOPPED\r\n2024-03-14T07:34:24,938 [WARN ] W-9015-my_tc_1.0 org.pytorch.serve.wlm.WorkerThread - Auto recovery failed again\r\n2024-03-14T07:34:24,939 [WARN ] W-9015-my_tc_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9015-my_tc_1.0-stderr\r\n2024-03-14T07:34:24,939 [WARN ] W-9015-my_tc_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9015-my_tc_1.0-stdout\r\n2024-03-14T07:34:24,939 [INFO ] W-9015-my_tc_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9015 in 3 seconds.\r\n2024-03-14T07:34:24,946 [INFO ] W-9015-my_tc_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9015-my_tc_1.0-stdout\r\n2024-03-14T07:34:24,946 [INFO ] W-9015-my_tc_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9015-my_tc_1.0-stderr\r\n2024-03-14T07:34:27,207 [DEBUG] W-9010-my_tc_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.9/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9010, --metrics-config, /home/venv/lib/python3.9/site-packages/ts/configs/metrics.yaml]\r\n2024-03-14T07:34:27,489 [DEBUG] W-9012-my_tc_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.9/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9012, --metrics-config, /home/venv/lib/python3.9/site-packages/ts/configs/metrics.yaml]\r\n2024-03-14T07:34:27,579 [DEBUG] W-9000-my_tc_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Wo", "url": "https://github.com/pytorch/serve/issues/3026", "state": "open", "labels": [ "help wanted", "triaged", "needs-reproduction" ], "created_at": "2024-03-14T07:56:57Z", "updated_at": "2024-03-19T16:44:51Z", "comments": 4, "user": "yolk-pie-L" }, { "repo": "pytorch/serve", "number": 3025, "title": "torchserve output customization", "body": "Hi team\r\n\r\nTo process a inference request in torchserve, there are stages like initialize, preprocess, inference, postprocess.\r\nIf I 
want to convert the output format from tensor to my custom textual format, where and how can I carry this out ?\r\n\r\nI am able to receive output in json format. But I need to make some customizations. Is it possible in torchserve ?\r\n\r\nregards\r\n\r\n\r\n", "url": "https://github.com/pytorch/serve/issues/3025", "state": "closed", "labels": [ "triaged" ], "created_at": "2024-03-13T20:37:39Z", "updated_at": "2024-03-14T21:05:42Z", "comments": 3, "user": "advaitraut" }, { "repo": "pytorch/executorch", "number": 2397, "title": "How to perform inference and gathering accuracy metrics on executorch model ", "body": "Hi, I am having trouble finding solid documentation that explains how to do the following with executorch (stable):\r\n- Load in the exported .pte model\r\n- Run inference with images\r\n- Gather accuracy\r\n\r\nI have applied quantization and other optimizations to the original model and exported it to .pte. I'd like to see the accuracy after these techniques were applied. I followed the following tutorial for exporting the model. If we can't do the above items on the directly exported .pte file, then is there a way we can based on the below steps for preparing the model for edge dialect?\r\n\r\nhttps://pytorch.org/executorch/stable/tutorials/export-to-executorch-tutorial.html\n\ncc @mergennachin @byjlw", "url": "https://github.com/pytorch/executorch/issues/2397", "state": "open", "labels": [ "module: doc", "need-user-input", "triaged" ], "created_at": "2024-03-13T14:40:01Z", "updated_at": "2025-02-04T20:21:12Z", "user": "mmingo848" }, { "repo": "pytorch/pytorch", "number": 121798, "title": "what is the match numpy verison, can not build from source ", "body": "### \ud83d\udc1b Describe the bug\n\nwhat is the match numpy verison, can not build from source \r\n\r\nafter run ` python3 setup.py develop` \r\n\r\ngot this error\r\n\r\n```\r\nerror: no member named 'elsize' in '_PyArray_Descr'\r\n```\n\n### Versions\n\nOS: macOS 14.4 (arm64)\r\nGCC version: Could not collect\r\nClang version: 15.0.0 (clang-1500.3.9.4)\r\nCMake version: version 3.22.2\r\nLibc version: N/A\r\n\r\nPython version: 3.11.7 (main, Jan 16 2024, 14:42:22) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)\r\nPython platform: macOS-14.4-arm64-arm-64bit\r\nIs CUDA available: N/A\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: N/A\r\n\r\nCPU:\r\nApple M1 Max\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==2.0.0b1\r\n[pip3] torch==2.3.0.dev20240311\r\n[pip3] torchaudio==2.2.0.dev20240311\r\n[pip3] torchvision==0.18.0.dev20240311\r\n[conda] Could not collect\n\ncc @malfet @seemethere @mruberry @rgommers", "url": "https://github.com/pytorch/pytorch/issues/121798", "state": "closed", "labels": [ "module: build", "triaged", "module: numpy" ], "created_at": "2024-03-13T09:52:46Z", "updated_at": "2024-03-14T07:10:15Z", "user": "yourmoonlight" }, { "repo": "pytorch/functorch", "number": 1142, "title": "Swapping 2 columns in a 2d tensor", "body": "I have a function ```tridiagonalization``` to tridiagonalize matrix (2d tensor), and I want to map it to batch. It involves a for loop and on each iteration a permutation of 2 columns and 2 rows inside it. I do not understand how to permute 2 columns without errors. 
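One way to exchange two columns without the size errors described above is to build a permutation index and index the last dimension with it, which also avoids aliasing between the two in-place writes. A small sketch with made-up sizes:

```python
import torch

def swap_columns(matrix, i, j):
    """Return a copy of `matrix` with columns i and j exchanged along the last dim."""
    i = torch.as_tensor(i)
    j = torch.as_tensor(j)
    perm = torch.arange(matrix.shape[-1], device=matrix.device)
    tmp = perm[i].clone()
    perm[i] = perm[j]
    perm[j] = tmp
    return matrix.index_select(-1, perm)

m = torch.arange(12.0).reshape(3, 4)
print(swap_columns(m, 1, 3))     # columns 1 and 3 exchanged
```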
So my code for rows works and looks as follows:\r\n```\r\nrow_temp = matrix_stacked[pivot[None]][0]\r\nmatrix_stacked[[pivot[None]][0]] = matrix_stacked[i+1].clone()\r\nmatrix_stacked[i+1] = row_temp\r\n```\r\nWhere ```pivot``` is a tensor and ```i``` is a Python integer variable. For columns I have something like this:\r\n```\r\ncolumn_temp = matrix_stacked[:, [pivot[None]][0]]\r\nmatrix_stacked[:, [pivot[None]][0]] = matrix_stacked[:, [i+1]].clone()\r\nmatrix_stacked[:, i+1] = column_temp\r\n```\r\nIt does not wotk because of issues with size. What should I do in order to permute ```i+1``` and ```pivot``` columns?", "url": "https://github.com/pytorch/functorch/issues/1142", "state": "open", "labels": [], "created_at": "2024-03-13T09:33:29Z", "updated_at": "2024-03-13T09:33:29Z", "comments": 0, "user": "Kreativshikkk" }, { "repo": "pytorch/xla", "number": 6710, "title": "Does XLA use the Nvidia GPU's tensor cores?", "body": "## \u2753 Questions and Help\r\n1. Does XLA use the Nvidia GPU's tensor cores?\r\n2. Is Pytorch XLA only designed to accelerate neural network training or does it accelerate their inferencing as well?", "url": "https://github.com/pytorch/xla/issues/6710", "state": "closed", "labels": [], "created_at": "2024-03-11T00:55:36Z", "updated_at": "2024-03-15T23:42:26Z", "comments": 2, "user": "Demis6" }, { "repo": "pytorch/tutorials", "number": 2797, "title": "Contradiction in `save_for_backward`, what is permitted to be saved", "body": "https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html\r\n\"ctx is a context object that can be used to stash information for backward computation. You can **cache arbitrary objects** for use in the backward pass using the ctx.save_for_backward method.\"\r\n\r\nhttps://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.save_for_backward.html\r\n\"save_for_backward should be called at most once, only from inside the forward() method, and **only with tensors**.\"\r\n\r\nMost likely the second is correct, and the first is not. I haven't checked.\r\n\r\nSuggestion: \"You can cache **tensors** for use in the backward pass using the ctx.save_for_backward method. Other miscellaneous objects can be cached using ctx.my_object_name = object.\"\n\ncc @albanD @jbschlosser", "url": "https://github.com/pytorch/tutorials/issues/2797", "state": "closed", "labels": [ "core", "medium", "docathon-h1-2025" ], "created_at": "2024-03-10T19:40:16Z", "updated_at": "2025-06-04T21:11:21Z", "user": "ad8e" }, { "repo": "pytorch/vision", "number": 8305, "title": "aarch64 build for AWS Linux - Failed to load image Python extension", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nBuilt Torch 2.1.2 and TorchVision 0.16.2 from source and running into the following problem:\r\n\r\n/home/ec2-user/conda/envs/textgen/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/ec2-user/conda/envs/textgen/lib/python3.10/site-packages/torchvision/image.so: undefined symbol: _ZNK3c1017SymbolicShapeMeta18init_is_contiguousEv'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?\r\n\r\npreviously the error was about missing libs and not undefined symbol, so I believe the libs are correctly installed now. 
Building says:\r\n\r\n ```\r\n Compiling extensions with following flags:\r\n FORCE_CUDA: False\r\n FORCE_MPS: False\r\n DEBUG: False\r\n TORCHVISION_USE_PNG: True\r\n TORCHVISION_USE_JPEG: True\r\n TORCHVISION_USE_NVJPEG: True\r\n TORCHVISION_USE_FFMPEG: True\r\n TORCHVISION_USE_VIDEO_CODEC: True\r\n NVCC_FLAGS:\r\n Compiling with debug mode OFF\r\n Found PNG library\r\n Building torchvision with PNG image support\r\n libpng version: 1.6.37\r\n libpng include path: /home/ec2-user/conda/envs/textgen/include/libpng16\r\n Running build on conda-build: False\r\n Running build on conda: True\r\n Building torchvision with JPEG image support\r\n libjpeg include path: /home/ec2-user/conda/envs/textgen/include\r\n libjpeg lib path: /home/ec2-user/conda/envs/textgen/lib\r\n Building torchvision without NVJPEG image support\r\n Building torchvision with ffmpeg support\r\n ffmpeg version: b'ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers\\nbuilt with gcc 10.2.0 (crosstool-NG 1.22.0.1750_510dbc6_dirty)\\nconfiguration: --prefix=/opt/conda/conda-bld/ffmpeg_1622823166193/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeh --cc=/opt/conda/conda-bld/ffmpeg_1622823166193/_build_env/bin/aarch64-conda-linux-gnu-cc --disable-doc --enable-avresample --enable-gmp --enable-hardcoded-tables --enable-libfreetype --enable-libvpx --enable-pthreads --enable-libopus --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --disable-nonfree --enable-gpl --enable-gnutls --disable-openssl --enable-libopenh264 --enable-libx264\\nlibavutil 56. 31.100 / 56. 31.100\\nlibavcodec 58. 54.100 / 58. 54.100\\nlibavformat 58. 29.100 / 58. 29.100\\nlibavdevice 58. 8.100 / 58. 8.100\\nlibavfilter 7. 57.100 / 7. 57.100\\nlibavresample 4. 0. 0 / 4. 0. 0\\nlibswscale 5. 5.100 / 5. 5.100\\nlibswresample 3. 5.100 / 3. 5.100\\nlibpostproc 55. 5.100 / 55. 5.100\\n'\r\n ffmpeg include path: ['/home/ec2-user/conda/envs/textgen/include']\r\n ffmpeg library_dir: ['/home/ec2-user/conda/envs/textgen/lib']\r\n Building torchvision without video codec support\r\n```\r\nSo I believe I do have things set up correctly to be able to do image calls (I don't care about video). Any idea why I would still be getting the undefined symbol warning? 
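Undefined `c10` symbols at import time usually mean the torchvision extension was compiled against a different libtorch build than the one Python actually imports. A quick check of what is being picked up — just a diagnostic sketch, not a fix:

```python
import torch
import torchvision

# If these two do not come from the same build pair, image.so can reference
# symbols that the loaded libtorch does not export.
print("torch      :", torch.__version__, torch.version.git_version)
print("torchvision:", torchvision.__version__)
print("torch from :", torch.__file__)
print("vision from:", torchvision.__file__)
```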
Thanks!\r\n\r\n### Versions\r\n\r\nCollecting environment information...\r\nPyTorch version: 2.1.2+cu121\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Amazon Linux 2023.3.20240304 (aarch64)\r\nGCC version: (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)\r\nClang version: Could not collect\r\nCMake version: version 3.28.3\r\nLibc version: glibc-2.34\r\n\r\nPython version: 3.10.9 (main, Mar 8 2023, 10:41:45) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-6.1.79-99.164.amzn2023.aarch64-aarch64-with-glibc2.34\r\nIs CUDA available: True\r\nCUDA runtime version: 12.2.140\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA T4G\r\nNvidia driver version: 550.54.14\r\ncuDNN version: Probably one of the following:\r\n/usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn.so.8.9.4\r\n/usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_adv_infer.so.8.9.4\r\n/usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_adv_train.so.8.9.4\r\n/usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_cnn_infer.so.8.9.4\r\n/usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_cnn_train.so.8.9.4\r\n/usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_ops_infer.so.8.9.4\r\n/usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_ops_train.so.8.9.4\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: aarch64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 4\r\nOn-line CPU(s) list: 0-3\r\nVendor ID: ARM\r\nModel name: Neoverse-N1\r\nModel: ", "url": "https://github.com/pytorch/vision/issues/8305", "state": "open", "labels": [], "created_at": "2024-03-09T20:13:46Z", "updated_at": "2024-03-12T18:53:04Z", "comments": 6, "user": "elkay" }, { "repo": "pytorch/serve", "number": 3008, "title": "very high QueueTime", "body": "Hi, I am seeing a very high queue time in my torchserve setup.\r\nif I am considering correctly the `QueueTime.ms:19428` means this particular request had to wait for 19 sec for processing\r\nwhile the QueTime just before that request was `QueueTime.ms:0` so why suddenly 18 sec delay\r\n\r\nIf I am wrong then what does this QueueTime parameter represent?\r\n\r\nmy env torch131+cu117, torchserve 0.7.2, and the model used is yolov5s which is a very small model, in input I am accepting an s3 uri downloading the image internally, and then processing\r\n\r\nattaching the logs here any idea what could be happening here\r\n```\r\n2024-03-08T08:44:35,261 [INFO ] W-9003-vehicledetection TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:ai-gpu-service-5b585f9b9d-x4r7z,timestamp:1709887475\r\n2024-03-08T08:44:35,261 [INFO ] W-9003-vehicledetection TS_METRICS - WorkerThreadTime.ms:0|#Level:Host|#hostname:ai-gpu-service-5b585f9b9d-x4r7z,timestamp:1709887475\r\n2024-03-08T08:44:35,261 [INFO ] W-9003-vehicledetection org.pytorch.serve.wlm.WorkerThread - Flushing req. 
to backend at: 1709887475261\r\n2024-03-08T08:44:35,262 [INFO ] W-9003-vehicledetection-stdout MODEL_LOG - Backend received inference at: 1709887475\r\n2024-03-08T08:44:35,262 [INFO ] W-9003-vehicledetection-stdout MODEL_LOG - Received backend request -> {'image_uri': 's3://mubucket/062c650b3213.jpeg', 'conf_thresh': 0.5}\r\n2024-03-08T08:44:35,282 [INFO ] W-9003-vehicledetection-stdout MODEL_LOG - completed processing results\r\n2024-03-08T08:44:35,283 [INFO ] W-9003-vehicledetection-stdout MODEL_METRICS - HandlerTime.Milliseconds:20.93|#ModelName:vehicledetection,Level:Model|#hostname:ai-gpu-service-5b585f9b9d-x4r7z,requestID:3a44a9f4-4f5d-4ead-8f1f-153ecf6b001f,timestamp:1709887475\r\n2024-03-08T08:44:35,283 [INFO ] W-9003-vehicledetection-stdout MODEL_METRICS - PredictionTime.Milliseconds:21.03|#ModelName:vehicledetection,Level:Model|#hostname:ai-gpu-service-5b585f9b9d-x4r7z,requestID:3a44a9f4-4f5d-4ead-8f1f-153ecf6b001f,timestamp:1709887475\r\n2024-03-08T08:44:35,283 [INFO ] W-9003-vehicledetection org.pytorch.serve.wlm.WorkerThread - Backend response time: 22\r\n2024-03-08T08:44:35,283 [INFO ] W-9003-vehicledetection ACCESS_LOG - /xxx.xx.xxx.xxx:18363 \"POST /predictions/vehicledetection HTTP/1.1\" 200 19450\r\n2024-03-08T08:44:35,283 [INFO ] W-9003-vehicledetection TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:ai-gpu-service-5b585f9b9d-x4r7z,timestamp:1706999000\r\n2024-03-08T08:44:35,283 [DEBUG] W-9003-vehicledetection org.pytorch.serve.job.Job - Waiting time ns: 19428751625, Backend time ns: 21770097\r\n2024-03-08T08:44:35,283 [INFO ] W-9003-vehicledetection TS_METRICS - QueueTime.ms:19428|#Level:Host|#hostname:ai-gpu-service-5b585f9b9d-x4r7z,timestamp:1709887475\r\n```\r\n\r\n", "url": "https://github.com/pytorch/serve/issues/3008", "state": "closed", "labels": [], "created_at": "2024-03-08T14:52:09Z", "updated_at": "2024-03-09T17:12:37Z", "comments": 0, "user": "PushpakBhoge512" }, { "repo": "pytorch/executorch", "number": 2293, "title": "How to analyze executorch .pte file performance?", "body": "I am looking for a way to either benchmark the .pte files performance, the final state of the ExecutorchProgramManager object, or similar after following [this](https://pytorch.org/executorch/stable/tutorials/export-to-executorch-tutorial.html) tutorial. I used the PyTorch profiler on the model before putting it through executorch. I can\u2019t find a way to use any one of the above on the profiler. I\u2019d like to use the same or similar to compare the original model to the executorch model with quantization to see the performance differences. Thanks!", "url": "https://github.com/pytorch/executorch/issues/2293", "state": "closed", "labels": [ "module: devtools" ], "created_at": "2024-03-07T12:12:41Z", "updated_at": "2025-02-03T22:04:48Z", "user": "mmingo848" }, { "repo": "pytorch/xla", "number": 6674, "title": "How to minimize memory expansion due to padding during sharding", "body": "Hello\r\n\r\nFor a model that can be sharded in model parallelization in TPUv4 (4x32) device, I am getting the error below at the beginning of the training on TPUv3 (8x16) device. There is `4x expansion` with respect to console message. Even if both both TPUv4 and TPUv3 devices have same total memory I cannot run the training on TPUv3 device.\r\n\r\n```\r\nProgram hbm requirement 15.45G:\r\n global 2.36M\r\n scoped 3.88M\r\n HLO temp 15.45G (60.9% utilization: Unpadded (9.40G) Padded (15.44G), 0.0% fragmentation (5.52M))\r\n\r\n Largest program allocations in hbm:\r\n\r\n 1. 
Size: 4.00G\r\n Shape: bf16[2048,1,2048,128]{0,1,3,2:T(4,128)(2,1)}\r\n Unpadded size: 1.00G\r\n Extra memory due to padding: 3.00G (4.0x expansion)\r\n XLA label: broadcast.6042.remat3 = broadcast(bitcast.26), dimensions={2,3}\r\n Allocation type: HLO temp\r\n ==========================\r\n\r\n 2. Size: 4.00G\r\n Shape: bf16[2048,1,2048,128]{0,1,3,2:T(4,128)(2,1)}\r\n Unpadded size: 1.00G\r\n Extra memory due to padding: 3.00G (4.0x expansion)\r\n XLA label: broadcast.6043.remat3 = broadcast(bitcast.27), dimensions={0,3}\r\n Allocation type: HLO temp\r\n ==========================\r\n```\r\n\r\nThe lines that causes `4x expansion` is below:\r\n\r\n```\r\ndef forward(self, x): # Activation map volume = 1,128,2048,1\r\n ...\r\n ...\r\n x = torch.transpose(x, 1, 3) # Activation map volume = 1,1,2048,128\r\n\r\n x_batch_0 = x.expand(2048, -1, -1, -1) # Activation map volume = 2048,1,2048,128\r\n\r\n x_batch_1 = x.repeat_interleave(2048, dim=2).reshape(2048, 1, 2048, 128) # Activation map volume = 2048,1,2048,128\r\n\r\n x_batch = torch.cat((x_batch_0, x_batch_1), dim=1) # Activation map volume = 2048,2,2048,128\r\n\r\n ...\r\n ...\r\n```\r\n\r\nHere are the sharding properties that I set.\r\n\r\n```\r\nmesh_shape = (num_devices, 1, 1, 1)\r\n\r\nmesh = xs.Mesh(device_ids, mesh_shape, ('w', 'x', 'y', 'z'))\r\npartition_spec = (0, 1, 2, 3) # Apply sharding along all axes\r\n\r\nfor name, layer in model.named_modules():\r\n if ( 'conv2d' in name ):\r\n xs.mark_sharding(layer.weight, mesh, partition_spec)\r\n```\r\n\r\nHow can I prevent `4x expansion`?\r\n\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/6674", "state": "open", "labels": [ "performance", "distributed" ], "created_at": "2024-03-06T15:23:31Z", "updated_at": "2025-04-18T18:42:38Z", "user": "mfatih7" }, { "repo": "pytorch/serve", "number": 3004, "title": "How to 'Create model archive pod and run model archive file generation script' in the \u2018User Guide\u2019 ", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI'm reading the User Guide of KServe doc. One part of the 'Deploy a PyTorch Model with TorchServe InferenceService' is hard to understand. \r\n\r\n3 'Create model archive pod and run model archive file generation script' \r\n3.1 Create model archive pod and run model archive file generation script[\u00b6](https://kserve.github.io/website/0.11/modelserving/v1beta1/torchserve/model-archiver/#31-create-model-archive-pod-and-run-model-archive-file-generation-script)\r\nkubectl apply -f model-archiver.yaml -n kserve-test\r\n(https://kserve.github.io/website/0.11/modelserving/v1beta1/torchserve/model-archiver/)\r\n\r\nIdk how to write the model-archiver.yaml and the model archive file generation script. 
I would be very grateful if anyone can help me\uff01\r\n\r\n### Error logs\r\n\r\nNot yet\r\n\r\n### Installation instructions\r\n\r\nYes \r\nyes\r\n\r\n### Model Packaing\r\n\r\nNot yet\r\n\r\n### config.properties\r\n\r\n_No response_\r\n\r\n### Versions\r\n\r\naiohttp==3.8.6\r\naiohttp-cors==0.7.0\r\naiorwlock==1.3.0\r\naiosignal==1.3.1\r\nanyio==4.0.0\r\nasync-timeout==4.0.3\r\nattrs==23.1.0\r\nazure-core==1.29.5\r\nazure-identity==1.15.0\r\nazure-storage-blob==12.18.3\r\nazure-storage-file-share==12.14.2\r\nblessed==1.20.0\r\n#boto==31.28.73\r\nbotocore==1.31.73\r\ncachetools==5.3.2\r\ncaptum==0.6.0\r\ncertifi==2023.7.22\r\ncffi==1.16.0\r\ncharset-normalizer==3.3.0\r\nclick==8.1.7\r\ncloudevents==1.10.1\r\ncolorful==0.5.5\r\ncontourpy==1.1.1\r\ncryptography==41.0.5\r\ncuda-python==12.3.0\r\ncycler==0.12.1\r\nCython==0.29.34\r\ndeprecation==2.1.0\r\ndistlib==0.3.7\r\nenum-compat==0.0.3\r\nexceptiongroup==1.1.3\r\nfastapi==0.95.2\r\nfilelock==3.12.4\r\nfonttools==4.43.1\r\nfrozenlist==1.4.0\r\nfsspec==2023.9.2\r\ngoogle-api-core==2.12.0\r\ngoogle-auth==2.23.3\r\ngoogle-cloud-core==2.3.3\r\ngoogle-cloud-storage==1.44.0\r\n#google-crc==32c1.5.0\r\ngoogle-resumable-media==2.6.0\r\ngoogleapis-common-protos==1.61.0\r\ngpustat==1.1.1\r\ngrpcio==1.51.3\r\ngrpcio-tools==1.48.2\r\n#h==110.14.0\r\nhttpcore==0.16.3\r\nhttptools==0.6.1\r\nhttpx==0.23.3\r\nhuggingface-hub==0.17.3\r\nidna==3.4\r\nimportlib-resources==6.1.0\r\nisodate==0.6.1\r\n#Jinja==23.1.2\r\njmespath==1.0.1\r\njsonschema==4.19.2\r\njsonschema-specifications==2023.7.1\r\nkiwisolver==1.4.5\r\nkserve==0.11.1\r\nkubernetes==28.1.0\r\nMarkupSafe==2.1.3\r\nmatplotlib==3.8.0\r\nmpmath==1.3.0\r\nmsal==1.24.1\r\nmsal-extensions==1.0.0\r\nmsgpack==1.0.7\r\nmultidict==6.0.4\r\nnetworkx==3.1\r\nnumpy==1.24.3\r\nnvidia-ml-py==12.535.108\r\noauthlib==3.2.2\r\nopencensus==0.11.3\r\nopencensus-context==0.1.3\r\norjson==3.9.10\r\npackaging==23.2\r\npandas==2.1.2\r\nPillow==10.0.1\r\n#pip==23.3.1\r\nplatformdirs==3.11.0\r\nportalocker==2.8.2\r\nprometheus-client==0.13.1\r\nprotobuf==3.20.3\r\npsutil==5.9.5\r\npy-spy==0.3.14\r\n#pyasn==10.5.0\r\n#pyasn==1-modules0.3.0\r\npycparser==2.21\r\npydantic==1.10.13\r\nPyJWT==2.8.0\r\npynvml==11.4.1\r\npyparsing==3.1.1\r\npython-dateutil==2.8.2\r\npython-dotenv==1.0.0\r\npython-rapidjson==1.13\r\npytz==2023.3.post1\r\nPyYAML==6.0\r\nray==2.4.0\r\nreferencing==0.30.2\r\nregex==2023.10.3\r\nrequests==2.31.0\r\nrequests-oauthlib==1.3.1\r\n#rfc==39861.5.0\r\nrpds-py==0.10.6\r\nrsa==4.9\r\n#s==3transfer0.7.0\r\nsafetensors==0.4.0\r\nsetuptools==68.2.2\r\nsix==1.16.0\r\nsmart-open==6.4.0\r\nsniffio==1.3.0\r\nstarlette==0.27.0\r\nsympy==1.12\r\ntabulate==0.9.0\r\ntiming-asgi==0.3.1\r\ntokenizers==0.14.1\r\ntorch==2.1.0\r\ntorch-model-archiver==0.9.0\r\ntorch-workflow-archiver==0.2.11\r\ntorchaudio==2.1.0\r\ntorchdata==0.7.0\r\ntorchserve==0.9.0\r\ntorchtext==0.16.0\r\ntorchvision==0.16.0\r\ntqdm==4.66.1\r\ntransformers==4.34.1\r\ntritonclient==2.39.0\r\ntyping_extensions==4.8.0\r\ntzdata==2023.3\r\n#urllib==31.26.18\r\nuvicorn==0.19.0\r\n#uvloop==0.19.0\r\nvirtualenv==20.21.0\r\nwatchfiles==0.21.0\r\nwcwidth==0.2.8\r\nwebsocket-client==1.6.4\r\nwebsockets==12.0\r\nwheel==0.40.0\r\nyarl==1.9.2\r\nzipp==3.17.0\r\n\r\n\r\n### Repro instructions\r\n\r\nNone\r\n\r\n### Possible Solution\r\n\r\n_No response_", "url": "https://github.com/pytorch/serve/issues/3004", "state": "open", "labels": [ "triaged", "kfserving" ], "created_at": "2024-03-06T07:42:50Z", "updated_at": "2024-03-07T07:06:52Z", "user": "Enochlove" }, { 
"repo": "pytorch/serve", "number": 3001, "title": "Clean up metrics documentation", "body": "### \ud83d\udcda The doc issue\n\nMetrics documentation has a lot of information and information is spread across different subsections finding it difficult to know whats the right way to use metrics\n\n### Suggest a potential alternative/fix\n\nFor older versions of TorchServe, one can always go to the tag and check the Readme.\r\n\r\nClean up the README to show only what is relevant now ", "url": "https://github.com/pytorch/serve/issues/3001", "state": "closed", "labels": [ "documentation", "internal" ], "created_at": "2024-03-05T20:49:32Z", "updated_at": "2024-04-26T21:32:45Z", "comments": 0, "user": "agunapal" }, { "repo": "pytorch/pytorch", "number": 121203, "title": "How to clear GPU memory without restarting kernel when using a PyTorch model", "body": "## Issue description\r\nI am currently using pytorch's model on my windows computer, using python scripts running on vscode. \r\nI want to be able to load and release the model repeatedly in a resident process, where releasing the model requires fully freeing the memory of the currently used GPU, including freeing the cache and cuda context.\r\n\r\nI have now tried to use del xxx, torch.cuda.empty_cache(), but this can only free up the amount of cache memory occupied by models and variables, in fact, there is still cuda context not free, so I also tried to use numba.cuda, pycuda.driver and other third-party libraries to free this part of the memory, the results show that this is effective, it can clean up the GPU memory to a clean state, but when I re-initialize the same model in the process, there is an error, so it seems that the process of freeing the cuda context is irreversible for pytorch. \r\n\r\nNow to fully free the GPU's memory, I can only shut down the current process, which is not what I want. I would like to know what pytorch does to the cuda context when initializing the model, and if there are other ways to meet my requirements?\r\n\r\n## Code example\r\n\r\n## System Info\r\n- PyTorch or Caffe2: PyTorch\r\n- How you installed PyTorch (conda, pip, source): pip\r\n- Build command you used (if compiling from source): pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu117\r\n- OS: Windows 10\r\n- PyTorch version:2.0.1\r\n- Python version:3.8.18\r\n- CUDA/cuDNN version:11.7/8.4\r\n- GPU models and configuration:\r\n- GCC version (if compiling from source):\r\n- CMake version:\r\n- Versions of any other relevant libraries:\r\n\n\ncc @ptrblck", "url": "https://github.com/pytorch/pytorch/issues/121203", "state": "open", "labels": [ "module: cuda", "triaged" ], "created_at": "2024-03-05T05:58:49Z", "updated_at": "2024-03-06T15:21:20Z", "user": "Doctor-Damu" }, { "repo": "pytorch/kineto", "number": 885, "title": "How to add customized metadata with on demand profiling ? ", "body": "When profiling with `torch.profiler.profile` , generated json file has a section called `distributedInfo` shown as below\r\n```json\r\n{\r\n \"distributedInfo\": {\"backend\": \"nccl\", \"rank\": 0, \"world_size\": 2}\r\n}\r\n```\r\nBut there's no such section in generated file when on-demand profiling is triggered. As a result, Holistic Trace Analysis cannot be used to analysis those files. \r\nIs this by design or there's something to do to make those file generated by `kineto` have `distributedInfo` as well? Hoping some one can help. Thanks. 
\r\n\r\n\r\n", "url": "https://github.com/pytorch/kineto/issues/885", "state": "closed", "labels": [ "bug" ], "created_at": "2024-03-04T09:41:04Z", "updated_at": "2024-07-08T21:53:03Z", "user": "staugust" }, { "repo": "pytorch/executorch", "number": 2226, "title": "How do you get executorch to run within Mbed OS?", "body": "Hi guys,\r\nWe serialized a PyTorch module to a .pte file for Cortex-M architecture by doing this example:\r\nhttps://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html). Additionally, we have a P-Nucleo-WB55 development platform. We want to run the module on the development platform using Mbed OS. How do we get the following \"torch::executor\"-namespace accessible in Mbed OS before we build the binaries that we flash later on the P-Nucleo-WB55? Following is an example of how we would like to do it in Mbed OS:\r\n ```\r\n using namespace torch::executor;\r\n Result<util::FileDataLoader> loader =\r\n util::FileDataLoader::from(\"/tmp/model.pte\");\r\n assert(loader.ok());\r\n\r\n Result<Program> program =\r\n torch::executor::Program::load(loader.get());\r\n assert(program.ok());\r\n ```\r\nOr is there a better way of integrating the executorch runtime into Mbed OS, or how would you accomplish this task (getting executorch running in Mbed OS on Cortex-M)?\r\nCheers,\r\nChristoph", "url": "https://github.com/pytorch/executorch/issues/2226", "state": "closed", "labels": [], "created_at": "2024-03-04T08:52:58Z", "updated_at": "2024-05-16T11:07:20Z", "user": "ChristophKarlHeck" }, { "repo": "pytorch/test-infra", "number": 4980, "title": "Provide the range of commits where a disabled test is effectively disabled", "body": "In the current implementation, disabling a test or enabling it (via a GitHub issues) take effect globally across all trunk and PR jobs. The good thing about this approach is that disabling a test is trivial. However, enabling them is still a tricky business. A common scenario is that a forward fix will address the issue and close it, but it will cause the test to fail on PRs everywhere unless people do a rebase to pull in the fix. We see this happening many times like the recent https://github.com/pytorch/pytorch/issues/114831, which is directly responsible for a large spike of force merges.\r\n\r\nAfter chatting with @clee2000 on the topic, there are several potential ideas for this:\r\n\r\n* We can provide the range of commits where a disabled test is effectively disabled. If the base commit of a PR is within the range, the test will still be disabled even if the issue has been closed. This seems like the best option.\r\n* At a coarse grain, we might be able to version the entire disabled tests JSON file. For example, a PR that has an older base commit will use an older version of the JSON file with the test still disabled\r\n\r\nThe same solution could also be applied to slow tests.\r\n\r\ncc @clee2000 ", "url": "https://github.com/pytorch/test-infra/issues/4980", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-03-02T06:58:40Z", "updated_at": "2024-03-02T06:58:40Z", "user": "huydhn" }, { "repo": "pytorch/torchx", "number": 834, "title": "HuggingFace accelerate component", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nHuggingFace accelerate is used for some OSS models. It would be great to have support for it as a component in addition to dist.ddp.\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? 
provide background context -->\r\n\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/834", "state": "open", "labels": [], "created_at": "2024-02-28T18:33:38Z", "updated_at": "2024-02-28T18:33:38Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/serve", "number": 2978, "title": "Broken example for a custom Counter metrics", "body": "### \ud83d\udcda The doc issue\n\nThe example in the section [Add Counter based metrics](https://github.com/pytorch/serve/blob/18d56ff56e05de48af0dfabe0019f437f332a868/docs/metrics.md#add-counter-based-metrics) shows how to add custom Counter metric:\r\n```\r\n# Create a counter with name 'LoopCount' and dimensions, initial value\r\nmetrics.add_counter('LoopCount', 1, None, dimensions)\r\n\r\n# Increment counter by 2 \r\nmetrics.add_counter('LoopCount', 2 , None, dimensions)\r\n\r\n# Decrement counter by 1\r\nmetrics.add_counter('LoopCount', -1, None, dimensions)\r\n```\r\n\r\nI tried to copy this example to my custom handler:\r\n```\r\ndims = [Dimension('ModelName', 'doc_model')]\r\nself.metrics.add_counter('LoopCount', 1, None, dimensions=dims)\r\n# Increment counter by 2 \r\nself.metrics.add_counter('LoopCount', 2 , None, dimensions=dims)\r\n# Decrement counter by 1\r\nself.metrics.add_counter('LoopCount', -1, None, dimensions=dims)\r\n```\r\n\r\nWhen I call API for inference I got an error in the terminal:\r\n```\r\n2024-02-28T15:23:57,011 [ERROR] W-9000-doc_model_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Failed to parse metrics line: \"[METRICS]Failed to update metric with name:LoopCount and dimensions: ModelName:doc_model,Level:Model with value: -1: Counter metric update value cannot be negative\".\r\n```\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2978", "state": "closed", "labels": [ "triaged" ], "created_at": "2024-02-28T12:26:30Z", "updated_at": "2024-03-20T21:56:12Z", "comments": 3, "user": "feeeper" }, { "repo": "pytorch/TensorRT", "number": 2665, "title": "\u2753 [Question] operator being decomposed rather than being converted when a corresponding converter exists?", "body": "## \u2753 Question\r\n\r\nFrom the debug log below, it seems that the `aten.grid_sampler_2d` operator gets decomposed into several lower-level operators. 
But isn't there a corresponding [converter](https://github.com/pytorch/TensorRT/blob/9a100b6414bee175040bcaa275ecb71df54836e4/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py#L333-L358) which should be used?\r\n\r\n## What you have already tried\r\n\r\n```py\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nimport torch_tensorrt\r\n\r\n\r\nclass MyModule(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n \r\n def forward(self, input, grid):\r\n return F.grid_sample(input, grid, mode=\"bilinear\", padding_mode=\"border\", align_corners=True)\r\n \r\nmodel = MyModule().eval().cuda()\r\n\r\ninputs = [\r\n torch.randn((1, 3, 8, 8), dtype=torch.float, device=\"cuda\"),\r\n torch.randn((1, 16, 16, 2), dtype=torch.float, device=\"cuda\")\r\n]\r\n\r\noptimized_model = torch_tensorrt.compile(\r\n model,\r\n ir=\"dynamo\",\r\n inputs=inputs,\r\n enabled_precisions={torch.float},\r\n debug=True,\r\n min_block_size=1,\r\n truncate_long_and_double=True,\r\n output_format=\"fx\",\r\n)\r\n```\r\n\r\n```\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_1 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_1 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_2 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_2 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_3 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_3 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_4 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_4 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_5 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_5 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_6 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_6 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_7 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_7 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.partitioning._global_partitioner:\r\nSupported Nodes:\r\n- torch.ops.aten.reshape.default + Operator Count: 13\r\n- torch.ops.aten.expand.default + Operator Count: 1\r\n- torch.ops.aten.select.int + Operator Count: 2\r\n- torch.ops.aten.mul.Tensor + Operator Count: 10\r\n- torch.ops.aten.add.Tensor + Operator Count: 7\r\n- torch.ops.aten.clamp.default + Operator Count: 2\r\n- torch.ops.aten.floor.default 
+ Operator Count: 2\r\n- torch.ops.aten.sub.Tensor + Operator Count: 8\r\n- torch.ops.aten.ge.Scalar + Operator Count: 8\r\n- torch.ops.aten.lt.Scalar + Operator Count: 8\r\n- torch.ops.aten.logical_and.default + Operator Count: 12\r\n- torch.ops.aten.where.self + Operator Count: 12\r\n- torch.ops.aten.index.Tensor + Operator Count: 4\r\n\r\nDEBUG:torch_tensorrt.dynamo.partitioning._global_partitioner:\r\nUnsupported or Excluded Nodes:\r\n- torch.ops.aten._to_copy.default + Operator Count: 8\r\n\r\nDEBUG:torch_tensorrt.dynamo._compiler:Detected support for 89 operators out of 97 in subgraph.\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_1 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_1 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_2 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten_ops_converters:_to_copy converter rejected node _to_copy_2 with dtype torch.int64\r\nDEBUG:torch_tensorrt.dynamo.conversion.aten", "url": "https://github.com/pytorch/TensorRT/issues/2665", "state": "closed", "labels": [ "question" ], "created_at": "2024-02-28T06:35:20Z", "updated_at": "2024-07-27T08:20:37Z", "user": "HolyWu" }, { "repo": "pytorch/audio", "number": 3750, "title": "I have some questions about RNNT loss.", "body": "\r\nhello\r\nI would like to ask you a question that may be somewhat trivial.\r\nThe shape of logits of RNN T loss is Batch, max_seq_len, max_target_len+1, class.\r\nWhy is max_target_len+1 here?\r\nShouldn't the number of classes be +1 to the size of the total vocab? Because blank is included.\r\nI don't understand at all.\r\nIs there anyone who can help?\r\n\r\nhttps://pytorch.org/audio/main/generated/torchaudio.functional.rnnt_loss.html", "url": "https://github.com/pytorch/audio/issues/3750", "state": "open", "labels": [], "created_at": "2024-02-26T11:39:39Z", "updated_at": "2024-02-26T13:09:30Z", "comments": 6, "user": "girlsending0" }, { "repo": "pytorch/examples", "number": 1235, "title": "Testing a C++ case with MPI failed. ", "body": "### \ud83d\udc1b Describe the bug\n\nI am testing the following example:\r\n\r\nhttps://github.com/pytorch/examples/blob/main/cpp/distributed/dist-mnist.cpp\r\n\r\nI get the following error:\r\n\r\n[ 50%] Building CXX object CMakeFiles/awcm.dir/xdist.cxx.o\r\n/home/alamj/TestCases/tests/xtorch/xdist/xdist.cxx:1:10: fatal error: c10d/ProcessGroupMPI.hpp: No such file or directory\r\n 1 | #include <c10d/ProcessGroupMPI.hpp>\r\n\r\nI changed the top line with full path to ensure that hpp file gets available\r\n#include </project/def-alamj/shared/libtorch/include/torch/csrc/distributed/c10d/ProcessGroupMPI.hpp>\r\n\r\nThe new error indicates something else I need to know, which is given in the tutorial.\r\n\r\n[ 50%] Building CXX object CMakeFiles/awcm.dir/xdist.cxx.o\r\n/home/alamj/TestCases/tests/xtorch/xdist/xdist.cxx:38:21: error: \u2018c10d\u2019 was not declared in this scope; did you mean \u2018c10\u2019?\r\n 38 | std::shared_ptr<c10d::ProcessGroupMPI> pg,\r\n | ^~~~\r\n | c10\r\n\r\n\r\nPlease let me know how do I get a work around to fix this. 
\n\n### Error logs\n\n_No response_\n\n### Minified repro\n\n_No response_\n\n### Versions\n\nI think this field is not needed as I am running C++ code. \n\ncc @ezyang @msaroufim @bdhirsh @anijain2305 @zou3519", "url": "https://github.com/pytorch/examples/issues/1235", "state": "open", "labels": [], "created_at": "2024-02-25T19:34:24Z", "updated_at": "2024-12-04T15:08:51Z", "comments": 1, "user": "alamj" }, { "repo": "pytorch/serve", "number": 2962, "title": "Update documentation on deprecating mac x86 support", "body": "### \ud83d\udc1b Describe the bug\n\nPyTorch is deprecating support for x86 macs. TorchServe will also do the same.\n\n### Error logs\n\nN/A\n\n### Installation instructions\n\nN/A\n\n### Model Packaing\n\nN/A\n\n### config.properties\n\n_No response_\n\n### Versions\n\nN/A\n\n### Repro instructions\n\nN/A\n\n### Possible Solution\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2962", "state": "open", "labels": [ "documentation" ], "created_at": "2024-02-22T22:53:33Z", "updated_at": "2024-03-26T20:58:19Z", "comments": 0, "user": "agunapal" }, { "repo": "pytorch/TensorRT", "number": 2653, "title": "\u2753 [Question] Can torch_tensorRT be used in C++ with multiprocessing using fork?", "body": "## \u2753 Question\r\n\r\nCan torch_tensorRT be used in C++ with multiprocessing using fork?\r\n\r\n## What you have already tried\r\n\r\nI have doubts if this library can be used in C++ multiprocessing (using fork()) where each process loads a TorchScript model compiled for Torch-TensorRT. I have the pipeline that works with no Torch-TensorRT but it fails when I try to load models from it with `torch::jit::load` (with Torch-TensorRT installed). Related issue: https://github.com/pytorch/TensorRT/issues/758. I have not put this as a bug because I have seen in forums that NVIDIA does not recommend using TensorRT with multiprocessing. Mi error is the following on `torch::jit::load`:\r\n\r\n```\r\nterminate called after throwing an instance of 'torch_tensorrt::Error'\r\n what(): [Error thrown at /home/eduardo/project/TensorRT/core/runtime/runtime.cpp:99] Expected (cudaGetDevice(reinterpret_cast<int*>(&device)) == cudaSuccess) to be true but got false\r\nUnable to get current device (runtime.get_current_device)\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.2.0\r\n - CPU Architecture: amd64\r\n - OS (e.g., Linux): Ubuntu 22.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): libtorch + Torch-TensorRT (source) compiled on tag v2.2.0\r\n - Build command you used (if compiling from source): on tag v2.2.0: `cmake -S. -Bbuild -DcuDNN_ROOT_DIR=~/Documents/project/deps/cudnn -DCMAKE_MODULE_PATH=cmake/Modules -DTorch_DIR=/usr/local/libtorch/share/cmake/Torch -DTensorRT_ROOT=~/Documents/TensorRT-8.6.1.6/ -DCMAKE_BUILD_TYPE=Debug`\r\n - Are you using local sources or building from archives: \r\n - G++ version: 11.4.0\r\n - CUDA version:12.1\r\n - GPU models and configuration: rtx 4090\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\nSorry but I am new to C++ and I may have made a mistake somewhere in the compilation or in linking the libraries.\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2653", "state": "open", "labels": [ "question" ], "created_at": "2024-02-22T14:10:57Z", "updated_at": "2024-02-23T22:04:21Z", "user": "peduajo" }, { "repo": "pytorch/serve", "number": 2955, "title": "CPP backend debugging and troubleshooting ", "body": "### \ud83d\ude80 The feature\n\nFor ease of debugging and troubleshooting for the CPP backend add following: \r\n\r\n- [ ] In the TS startup logs, add explicit log line for successful startup of CPP backend \r\n- [x] In the TS print environment add details for the CPP backend\r\n- [x] Cleanup steps for the build script\r\n- [x] FAQ page for troubleshooting\r\n- [x] Build scripts for simple example (or option to do selective build for an example only) \n\n### Motivation, pitch\n\nTo simplify the troubleshooting and debugging experience\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2955", "state": "open", "labels": [ "documentation" ], "created_at": "2024-02-22T01:34:36Z", "updated_at": "2024-03-26T20:59:22Z", "comments": 0, "user": "chauhang" }, { "repo": "pytorch/tutorials", "number": 2773, "title": "pipeline_tutorial failing due to dead torchtext link", "body": "Line 55 of https://github.com/pytorch/tutorials/blob/082c8b1bddb48b75f59860db3679d8c439238f10/intermediate_source/pipeline_tutorial.py is using torchtext to download a dataset that can\u2019t be accessed right now (maybe got taken down, I\u2019m looking for an alternative link but torchtext is no longer maintained)\r\n\r\nCan this tutorial be rewritten to use a different dataset? Can the entire tutorial be deprecated?\r\n\r\nEx: https://github.com/pytorch/tutorials/actions/runs/7992713944/job/21826864521\r\n`requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip`\r\n\r\ncc @kwen2501 @H-Huang @wconstab ", "url": "https://github.com/pytorch/tutorials/issues/2773", "state": "closed", "labels": [], "created_at": "2024-02-21T21:02:25Z", "updated_at": "2024-05-15T16:36:22Z", "comments": 3, "user": "clee2000" }, { "repo": "pytorch/TensorRT", "number": 2649, "title": "\u2753 [Question] torch_tensorrt.dynamo.compile hangs indefinitely mid compilation? 
", "body": "## \u2753 Question\r\n\r\ntorch_tensorrt.dynamo.compile hangs indefinitely mid compilation cpu usage is through the roof and having debug = True shows that there's a step where it fails\r\n\r\n## What you have already tried\r\n\r\nI tried compiling with torchscript and it works well enough but i wanted to test the dynamo backend\r\n\r\n## Environment\r\nPython 3.9.2\r\ntorch 2.2+cu118\r\ntorch_tensorrt 2.2+cu118\r\ntensorrt 8.6\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.2\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): debian 11\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip install torch torchvision torch_tensorrt --index-url https://download.pytorch.org/whl/cu118\r\n - Build command you used (if compiling from source):\r\n``` python \r\nimport torch\r\nimport torch_tensorrt\r\nfrom gfpgan.archs.gfpganv1_clean_arch import GFPGANv1Clean\r\n\r\ngfpgan = GFPGANv1Clean(\r\n out_size=512,\r\n num_style_feat=512,\r\n channel_multiplier=2,\r\n decoder_load_path=None,\r\n fix_decoder=False,\r\n num_mlp=8,\r\n input_is_latent=True,\r\n different_w=True,\r\n narrow=1,\r\n sft_half=True)\r\n\r\nmodel_path=\"./experiments/pretrained_models/GFPGANv1.3.pth\"\r\nloadnet = torch.load(model_path)\r\nif 'params_ema' in loadnet:\r\n keyname = 'params_ema'\r\nelse:\r\n keyname = 'params'\r\ngfpgan.load_state_dict(loadnet[keyname], strict=True)\r\ngfpgan = gfpgan.eval()\r\ninputs=[torch.randn([8, 3, 512, 512],dtype=torch.float32).cuda()]\r\n\r\nif torch.cuda.is_available():\r\n gfpgan = gfpgan.cuda().eval()\r\n torch.set_float32_matmul_precision('high')\r\n compiled = torch.compile(gfpgan,\r\n backend=\"aot_torch_tensorrt_aten\",\r\n options={\r\n \"truncate_long_and_double\":True,\r\n \"debug\":True\r\n })\r\n print(\"EXPORTING\")\r\n import time\r\n start= time.time()\r\n print(compiled(*inputs))\r\n print(time.time()-start)\r\n torch.save(compiled, \"compiled.ts\")\r\n\r\n```\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.9.2\r\n - CUDA version: 118 (12.3 installed on OS)\r\n - GPU models and configuration: nvidia A100 80gb and nvidia L4 both have the same behavior\r\n - Any other relevant information:\r\nprivate fork based on https://github.com/TencentARC/GFPGAN\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2649", "state": "open", "labels": [ "question" ], "created_at": "2024-02-21T16:27:28Z", "updated_at": "2024-02-26T18:07:44Z", "user": "Antonyesk601" }, { "repo": "pytorch/TensorRT", "number": 2648, "title": "\u2753 Debugger deactivate", "body": "## \u2753 Question\r\n\r\nHow can I deactivate the debugger?\r\n\r\n## What you have already tried\r\n\r\nWhen I run any executable that uses Torch-TensorRT, I get a lot of debugger messages:\r\n\r\n```log\r\n...\r\nDEBUG: [Torch-TensorRT - Debug Build] - Attempting to run engine (ID: __torch___torchvision_models_resnet_ResNet_trt_engine_)\r\nINFO: [Torch-TensorRT - Debug Build] - Execution profiling is enabled, find results here:\r\nDevice selection profile: /tmp/__torch___torchvision_models_resnet_ResNet_trt_engine__device_config_profile.trace \r\nInput packing profile: /tmp/__torch___torchvision_models_resnet_ResNet_trt_engine__input_profile.trace \r\nOutput packing profile: /tmp/__torch___torchvision_models_resnet_ResNet_trt_engine__output_profile.trace \r\nTRT enqueue profile: /tmp/__torch___torchvision_models_resnet_ResNet_trt_engine__enqueue_profile.trace \r\nEngine execution profile: /tmp/__torch___torchvision_models_resnet_ResNet_trt_engine__engine_exectuion_profile.trace \r\nDEBUG: [Torch-TensorRT - Debug Build] - Current Device: Device(ID: 0, Name: Xavier, SM Capability: 7.2, Type: GPU) \r\nDEBUG: [Torch-TensorRT - Debug Build] - Requested padding of dimensions to 1 but found 4 dimensions, not going to pad \r\nDEBUG: [Torch-TensorRT - Debug Build] - Input Name: input_0 Shape: [1, 3, 224, 224]\r\nDEBUG: [Torch-TensorRT - Debug Build] - Output Name: output_0 Shape: [1, 1000]\r\nINFO: [Torch-TensorRT - Debug Build] -\r\n...\r\n```\r\n\r\nI think for some reason I am compiling in debug/developer mode (if there is such a thing). I have tried compiling Torch-TensorRT using:\r\n```bash\r\nbazel build //:libtorchtrt --platforms //toolchains:jetpack_5.0 --linkopt=-Wl,--strip-all --copt=-O3\r\n```\r\n\r\nI hoped, with the `--linkopt=-Wl,--strip-all` option to have solved my problem. Is there any way to deactivate the debugger? I am using the C++ API. 
Is there either anything in the compilation stage, or any routine to integrate in my code that can help me run my code with the logger disabled?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version: 2.0\r\n - CPU Architecture: x64\r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): source\r\n - Build command you used (if compiling from source): [build tutorial](https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048)\r\n - Are you using local sources or building from archives: [ref tutorial](https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048)\r\n - Python version: 3.8\r\n - CUDA version: 11.4\r\n - GPU models and configuration:\r\n - Any other relevant information: Jetson AGX Xavier\r\n\r\n## Additional context\r\n\r\nTensorRT version: 1.4 Release version\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2648", "state": "closed", "labels": [ "question" ], "created_at": "2024-02-20T05:56:41Z", "updated_at": "2024-02-20T06:15:13Z", "user": "AndreasKaratzas" }, { "repo": "pytorch/pytorch", "number": 120194, "title": "model loaded with torch._export.aot_load does not report what file is not found during inference and Cuda driver error.", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nwhen I load a pt2 model exported with torch._export in one Docker container from the image `ghcr.io/pytorch/pytorch-nightly:2.3.0.dev20240211-cuda12.1-cudnn8-devel` I get a working inference. \r\n\r\nBut when I run it in another container derived from the same base image, I get a CUDA driver error. I can't track down the error because the error message doesn't give me anything to go on. I've confirmed that nvidia-smi, nvcc --version, the torch version and all environment variables from `docker inspect` are the same between the two running containers. 
I can't identify anywhere that another torch version is installed and I can't see any other cuda versions installed in `/usr/local` that might cause a conflict.\r\n\r\n```\r\nimport torch\r\n\r\nmodel = torch._export.aot_load(\"./compiled_model_satlas/satlas_pt2.so\", device=\"cuda\")\r\n\r\ndevice = torch.device(\"cuda:\" + str(torch.cuda.current_device()))\r\ntorch.cuda.set_device(device)\r\n\r\nprint(\"Current device:\", device)\r\n\r\ntest_im_ts = torch.randn((9*4, 256, 256)).to(device)\r\n\r\nx = torch.stack(6*[test_im_ts], dim=0)\r\n\r\noutputs_aot, _ = model(x)\r\n```\r\n\r\nthe error is below\r\n\r\n```\r\nError: CUDA driver error: file not found\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[10], line 3\r\n 1 test_im_ts = torch.randn((9*4,256,256)).to(device)\r\n 2 x = torch.stack(6*[test_im_ts], dim=0)\r\n----> 3 outputs_aot, _ = model(x)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/torch/_export/__init__.py:421, in aot_load.<locals>.optimized(*args, **kwargs)\r\n 419 out_spec = pytree.treespec_loads(call_spec[1])\r\n 420 flat_inputs = pytree.tree_flatten((args, reorder_kwargs(kwargs, in_spec)))[0]\r\n--> 421 flat_outputs = runner.run(flat_inputs) # type: ignore[attr-defined]\r\n 422 return pytree.tree_unflatten(flat_outputs, out_spec)\r\n\r\nRuntimeError: run_func_( container_handle_, input_handles.data(), input_handles.size(), output_handles.data(), output_handles.size(), cuda_stream_handle, proxy_executor_handle_) API call failed at ../torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 75\r\n```\r\n\r\n### Versions\r\n\r\nDetails for the container where inference fails\r\n\r\n```\r\nPyTorch version: 2.3.0.dev20240210\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.3 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.26.4\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: 12.1.105\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090\r\nNvidia driver version: 545.23.08\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 39 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 12\r\nOn-line CPU(s) list: 0-11\r\nVendor ID: GenuineIntel\r\nModel name: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz\r\nCPU family: 6\r\nModel: 158\r\nThread(s) per core: 2\r\nCore(s) per socket: 6\r\nSocket(s): 1\r\nStepping: 10\r\nCPU max MHz: 4600.0000\r\nCPU min MHz: 800.0000\r\nBogoMIPS: 6399.96\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp 
lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch", "url": "https://github.com/pytorch/pytorch/issues/120194", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: aotinductor" ], "created_at": "2024-02-19T07:12:30Z", "updated_at": "2025-02-07T08:44:15Z", "user": "rbavery" }, { "repo": "pytorch/pytorch", "number": 120079, "title": "Use sys.settrace or torch function mode to compute how much of a model was not covered by Dynamo", "body": "### \ud83d\udc1b Describe the bug\n\nSuppose you have a model with a bunch of graph breaks / WON'T CONVERT. How much of the model have you managed to capture versus not capture? There are two metrics you could use to figure this out:\r\n\r\n* When you run the model in eager mode, it will have run some number calls to torch functions. You can count how many of these calls occur outside of Dynamo compiled regions, compared to those captured in Dynamo regions. This gives you \"missing torch function call captures / total number of torch function calls in torch.compile region\"\r\n* When you run the model in eager mode, you will run some number of bytecodes. You can use sys.settrace to count how many bytecodes are processed in the eager region, and get \"number of bytecodes evaluated outside of Dynamo region / total number of bytecodes\"\r\n\r\nThis can give you a much better idea of how much of the model you've managed to capture, as opposed to just number of graph breaks.\n\n### Versions\n\nmain\n\ncc @chauhang @penguinwu @msaroufim @bdhirsh @anijain2305 @zou3519", "url": "https://github.com/pytorch/pytorch/issues/120079", "state": "open", "labels": [ "feature", "low priority", "module: logging", "triaged", "oncall: pt2" ], "created_at": "2024-02-16T14:54:04Z", "updated_at": "2025-07-11T18:03:17Z", "user": "ezyang" }, { "repo": "pytorch/text", "number": 2230, "title": "how to install libtorchtext for cpp project use? please give some operation .thanks", "body": "## \ud83d\udc1b Bug\r\n\r\n**Describe the bug** A clear and concise description of what the bug is.\r\n\r\n**To Reproduce** Steps to reproduce the behavior:\r\n\r\n1. Go to '...'\r\n2. Click on '....'\r\n3. Scroll down to '....'\r\n4. 
See error\r\n\r\n**Expected behavior** A clear and concise description of what you expected to happen.\r\n\r\n**Screenshots** If applicable, add screenshots to help explain your problem.\r\n\r\n**Environment**\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py) (or\r\nfill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\npython -c \"import torchtext; print(\\\"torchtext version is \\\", torchtext.__version__)\"\r\n```\r\n\r\n- PyTorch Version (e.g., 1.0):\r\n- OS (e.g., Linux):\r\n- How you installed PyTorch (`conda`, `pip`, source):\r\n- Build command you used (if compiling from source):\r\n- Python version:\r\n- CUDA/cuDNN version:\r\n- GPU models and configuration:\r\n- Any other relevant information:\r\n\r\n**Additional context** Add any other context about the problem here.\r\n", "url": "https://github.com/pytorch/text/issues/2230", "state": "open", "labels": [], "created_at": "2024-02-15T04:01:32Z", "updated_at": "2024-02-15T04:01:32Z", "user": "mullerhai" }, { "repo": "pytorch/audio", "number": 3746, "title": "how to install libtorchaudio for cpp project ?", "body": "### \ud83d\udc1b Describe the bug\n\nHI \uff0cI git clone audio project \uff0cthen add libtorch path to the audio CMakeTxt\uff0c try to make && make install \uff0cbut all finish \uff0cI cannot find libtorchaudio.dylib file on my macos intel, only libtorchaudio.so\tlibtorchaudio_sox.so in /usr/local/torchaudio\n\n### Versions\n\nlatest", "url": "https://github.com/pytorch/audio/issues/3746", "state": "open", "labels": [], "created_at": "2024-02-15T02:28:30Z", "updated_at": "2024-02-15T02:28:30Z", "user": "mullerhai" }, { "repo": "pytorch/torchx", "number": 824, "title": "Determine scheduler from component level", "body": "## \u2753 Questions and Help\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nBefore submitting, please ensure you have gone through our\r\n[documentation](https://pytorch.org/torchx).\r\n\r\n\r\n### Question\r\n<!-- your question here -->\r\nIs it possible to tell or fill in at runtime which scheduler gets used in component logic? For example, if I have a ddp component, within the component, before I return specs.AppDef, can I set for example a macro that would tell me which scheduler this component gets ran with?\r\n\r\nFor example, I want to be setting some environment variables but differentiate based on which scheduler gets used. ", "url": "https://github.com/meta-pytorch/torchx/issues/824", "state": "open", "labels": [], "created_at": "2024-02-14T23:01:27Z", "updated_at": "2024-02-16T01:56:46Z", "comments": 1, "user": "ryxli" }, { "repo": "pytorch/pytorch", "number": 119604, "title": "How to deal with mypy checking fx_node.args[i].meta?", "body": "# Issue\r\nIt's common in Inductor FX passes to do something like this\r\n```\r\nnode: torch.fx.Node = ...\r\narg1: torch.fx.Argument = node.args[0]\r\narg2: torch.fx.Argument = node.args[1]\r\na, b = arg1.meta, arg2.meta\r\n# do something with a & b\r\n```\r\n\r\nHowever, mypy will call this out ([see](https://mypy.readthedocs.io/en/stable/error_code_list.html#check-that-attribute-exists-in-each-union-item-union-attr)). 
It's checking that each attribute (i.e. `meta`) exists in each of the type listed for [fx.node.Argument](https://github.com/pytorch/pytorch/blob/a7f82b7d628eb2b966bc53e593dcf32049b2b10e/torch/fx/node.py#L26-L34).\r\n```\r\nItem ... of \"tuple[Any, ...] | list[Any] | dict[str, Any] | slice | range | Node | str | int | float | bool | complex | dtype | Tensor | device | memory_format | layout | OpOverload | None\" has no attribute \"meta\" [union-attr]\r\n```\r\n\r\n# Workarounds\r\n1. Do some runtime checks to assure mypy that it's okay. For eg:\r\n - `isinstance(arg1, torch.fx.Node)`\r\n - `if hasattr(arg1, 'meta):`\r\n2. Slap on a # type: ignore[union-attr]\r\n3. Surface a `node.args` getter method in fx/node.py. Illustrated in code below.\r\n```\r\ndef get_arg(arg_type: Type[T], i: int) -> T:\r\n assert(isinstance(self.args[i], arg_type))\r\n return self.args[i]\r\n```\r\n\r\n# Thoughts\r\nAt the moment, (2), the slap on approach, seems to be present in quite a few places. Here's two examples ([1](https://github.com/pytorch/pytorch/pull/119085/files#diff-800bd8ca3e84db0b1988eb1c289bbe892b2acfcd013c2ff04117ce9bd5615480L346), [2](https://github.com/pytorch/pytorch/pull/119422/files#diff-118f7e6a8110f30c6894a530eea254b6cff4338add31d83825365b6cac47bdc5R368-R374)).\r\n```\r\n$ grep -P -rn \"meta.* # type: ignore\\[union-attr\\]\" torch/_inductor | wc -l\r\n22\r\n```\r\n\r\nI think we could follow (1) everytime we want to call `node.args[0].meta` or handle this at a level lower like (3) surfacing a getter method. Or maybe there's a 4th option?", "url": "https://github.com/pytorch/pytorch/issues/119604", "state": "closed", "labels": [], "created_at": "2024-02-09T22:42:44Z", "updated_at": "2024-02-10T00:01:10Z", "user": "ColinPeppler" }, { "repo": "pytorch/pytorch", "number": 119590, "title": "Decide whether / how to ban SAC + inplace ops in eager", "body": "SAC exists as an API today (see [code](https://github.com/pytorch/pytorch/blob/main/torch/utils/checkpoint.py#L1256)), but:\r\n\r\n(1) it \"context\" fn has a pt2-specific name\r\n(1) We have a warning in the docs that it should only be used with `torch.compile`\r\n(2) We have no warning or error that gets emitted at runtime if you actually use SAC with eager mode.\r\n\r\nMy understanding is that the main issue with always-allowing SAC to be used in eager has to do with handling for inplace ops. More diagnosis was in this issue: https://github.com/pytorch/pytorch/issues/113737\r\n\r\nI think it can be summarized by this repro, where eager mode vs. 
\"eager mode + SAC\" produce different outputs, when an inplace op is involved:\r\n```\r\nimport torch\r\nfrom torch._custom_op.functional import register_functional_op\r\nimport torch.utils.checkpoint\r\nfrom torch.utils.checkpoint import checkpoint, _pt2_selective_checkpoint_context_fn_gen\r\n\r\ndef custom_policy(mode, func, *args, **kwargs):\r\n return func in [torch.ops.aten.mm.default]\r\n\r\ndef selective_checkpointing_context_fn():\r\n return _pt2_selective_checkpoint_context_fn_gen(custom_policy)\r\n\r\ndef gn(x, y):\r\n return torch.selu_(torch.matmul(x, y))\r\n\r\ndef fn(x, y):\r\n return torch.utils.checkpoint.checkpoint(\r\n gn,\r\n x,\r\n y,\r\n use_reentrant=False,\r\n context_fn=selective_checkpointing_context_fn,\r\n )\r\n\r\nx = torch.arange(16, dtype=torch.float32, requires_grad=True).reshape(4, 4).detach().requires_grad_(True)\r\ny = torch.arange(16, dtype=torch.float32, requires_grad=True).reshape(4, 4).detach().requires_grad_(True)\r\n\r\nout1 = gn(x, y)\r\nprint(out1)\r\nout1.sum().backward()\r\nprint(out1)\r\n\r\nout2 = fn(x, y)\r\nprint(out2)\r\n# With SAC + eager mode:\r\n# (1) \"out\" is an activation saved for backward\r\n# (2) selu_() is part of the recompute, which mutates out **again**, during the backward pass!\r\n# Invoking the backward will mutate out!\r\nout2.sum().backward()\r\nprint(out2)\r\n# False\r\nprint(torch.allclose(out1, out2))\r\n```\r\n\r\nJust to collect some possible options:\r\n\r\n(1) [easiest] Ban SAC completely in eager\r\n(2) [medium] Ban SAC in eager whenever there are any inplace ops\r\n(3) [hard?] figure out how to detect exactly the case when outputs/gradients would diverge without SAC, and ban those cases\r\n(4) [hard?] figure out how to functionalize away an mutations in an SAC region that would have changed numerics.\r\n\r\ncc @soulitzer @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @Lezcano @Varal7", "url": "https://github.com/pytorch/pytorch/issues/119590", "state": "closed", "labels": [ "module: activation checkpointing", "module: autograd", "triaged", "needs design" ], "created_at": "2024-02-09T20:33:05Z", "updated_at": "2024-06-27T20:13:20Z", "user": "bdhirsh" }, { "repo": "pytorch/pytorch", "number": 119479, "title": "torch._constrain_as_value and related APIs accept Tensor, but this is typically not what you want", "body": "### \ud83d\udc1b Describe the bug\n\nInternal xref: https://fb.workplace.com/groups/6829516587176185/posts/6829896033804907/\r\n\r\nBecause we are willing to call item() on scalar Tensor, these APIs will \"work\" but they will keep generating fresh unbacked symbols, so the value range ends up not getting used by anything. 
Would be good to warn or error if you try to pass in a Tensor to these APIs.\n\n### Versions\n\nmain\n\ncc @msaroufim @bdhirsh @anijain2305 @zou3519", "url": "https://github.com/pytorch/pytorch/issues/119479", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: dynamic shapes" ], "created_at": "2024-02-08T20:13:23Z", "updated_at": "2024-09-13T03:10:12Z", "user": "ezyang" }, { "repo": "pytorch/pytorch", "number": 119473, "title": "Document how to override autocast rules properly", "body": "Since autocast is implemented as a dispatcher feature, and each rule is a relatively simple kernel being registered on the right key for the right kernel.\r\n\r\nOverriding these rules can be done today by replacing the kernel registered by default with a custom one that does the appropriate casting before redispatching down in a similar way as it is done in the generic kernel we use https://github.com/pytorch/pytorch/blob/def572929b2311b769ef79e66aebc70384b0f456/aten/src/ATen/autocast_mode.h#L467-L473 .\r\n\r\nAll the tools are available to do this from a C++ extension via TORCH_LIBRARY* macros\r\nMissing pieces for python when using torch.library:\r\n- A way to call cached_cast from python https://github.com/pytorch/pytorch/blob/def572929b2311b769ef79e66aebc70384b0f456/aten/src/ATen/autocast_mode.cpp#L200C8-L200C19\r\n- A public way to disable keys in python to enable calling down\n\ncc @mcarilli @ptrblck @leslie-fang-intel @jgong5", "url": "https://github.com/pytorch/pytorch/issues/119473", "state": "open", "labels": [ "triaged", "module: amp (automated mixed precision)" ], "created_at": "2024-02-08T19:02:00Z", "updated_at": "2024-02-08T20:43:22Z", "user": "albanD" }, { "repo": "pytorch/serve", "number": 2933, "title": "https://github.com/pytorch/serve/issues/2870 - New Release Required for this Fix", "body": "### \ud83d\udc1b Describe the bug\n\nTeam,\r\n\r\nseems like worker auto recovery fix in this PR. Can we create patch release so that we can proceed with production update?\r\n\r\nThanks\r\n\r\nRegards,\r\nDeepak Kumar A\n\n### Error logs\n\nNA\n\n### Installation instructions\n\nNA\n\n### Model Packaing\n\nNA\n\n### config.properties\n\n_No response_\n\n### Versions\n\n0.8.1\n\n### Repro instructions\n\n0.8.1\n\n### Possible Solution\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2933", "state": "closed", "labels": [], "created_at": "2024-02-08T14:23:49Z", "updated_at": "2024-03-20T21:51:41Z", "comments": 2, "user": "DeepakkumarArumugam" }, { "repo": "pytorch/serve", "number": 2930, "title": "How would you deploy a new model on a torch server running within a container?", "body": "I am looking for options to use torchserve to deploy multiple models at once. However, in the documentation and guides I cannot find examples where it is done. 
The examples usually describe a scenario of starting a torchserve container for a given model.\r\n\r\nMy question is: if I have a torchserve container running, is there a way to deploy a new model to it without restarting the container and without downtime for the models already running on the server?\r\n\r\nI assume I need to copy the model archive to the proper place within the container and register it via the API, although I am not sure whether this is possible and OK to do.\r\n\r\nWhat would you advise me?\r\n\r\nPerhaps it would be nice to have some documentation on this.", "url": "https://github.com/pytorch/serve/issues/2930", "state": "closed", "labels": [], "created_at": "2024-02-07T14:51:06Z", "updated_at": "2024-02-07T16:33:20Z", "comments": 1, "user": "mihailyanchev" }, { "repo": "pytorch/vision", "number": 8259, "title": "support for convnextv2", "body": "### \ud83d\ude80 The feature\n\nIs there any plan for adding ConvNeXt-V2?\n\n### Motivation, pitch\n\nConvNeXt-V2 introduces FCMAE self-supervised pretraining and gains 0.5~1.5% top-1 accuracy.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/8259", "state": "open", "labels": [], "created_at": "2024-02-07T01:45:29Z", "updated_at": "2024-02-07T01:45:29Z", "comments": 0, "user": "chaoer" }, { "repo": "pytorch/kineto", "number": 864, "title": "Question about how to run \"make test\" correctly?", "body": "Hi guys,\r\n Following the steps in [README.md](https://github.com/pytorch/kineto/tree/main/libkineto), I have succeeded in building Libkineto. Then I tried to run the tests with the command \"make test\", but it doesn't change anything. In this [CMakeLists.txt](https://github.com/pytorch/kineto/blob/main/libkineto/CMakeLists.txt) file, it seems like you just add the test folder to this project but do not build anything, so I am very confused about how to \"make test\" and what is the meaning of \r\n\r\n> (if tests are built) \r\n\r\nAnyway... Could somebody tell me how to build and run the code in the test folder? Thanks ", "url": "https://github.com/pytorch/kineto/issues/864", "state": "open", "labels": [ "bug" ], "created_at": "2024-02-06T06:01:11Z", "updated_at": "2024-04-23T15:45:46Z", "user": "PriscillaJCorn" }, { "repo": "pytorch/examples", "number": 1229, "title": "If I am training on a SINGLE GPU, should this \"--dist-backend 'gloo'\" argument be added to the command?", "body": "@Jaiaid \r\n\r\nShould this **\"--dist-backend 'gloo'\"** be included in the terminal command when using a **SINGLE GPU**, i.e. when the machine has just one GPU?\r\n\r\nIs the following example command correct for SINGLE GPU?\r\n\r\npython main.py **--dist-backend 'gloo'** -a resnet18 [imagenet-folder with train and val folders]\r\n\r\nIs that what your new committed warning implies?", "url": "https://github.com/pytorch/examples/issues/1229", "state": "closed", "labels": [], "created_at": "2024-02-05T17:11:50Z", "updated_at": "2024-02-07T08:01:12Z", "comments": 10, "user": "HassanBinHaroon" }, { "repo": "pytorch/xla", "number": 6464, "title": "How to benchmark PyTorch XLA code properly", "body": "## \u2753 Questions and Help\r\nHi! I'm trying to benchmark some PyTorch XLA code, and can't find a way to do it correctly.\r\n\r\nFor simplicity, what I'm benchmarking is `torch.matmul(a, b)`. 
Firstly I created the most straightforward version of benchmarking, inspired by cuda & triton benchmarking code:\r\n```\r\n# create tensors\r\na = torch.randn((N, K), device=device, dtype=dtype)\r\nb = torch.randn((K, M), device=device, dtype=dtype)\r\n\r\ndef fn():\r\n torch.matmul(a, b)\r\n\r\nbenchmark(fn) # here I'm doing warmup runs/multiple fn runs\r\n```\r\nThis way it didn't work, effectively rendering benchmark to be immediate.\r\nI realized that no work is actually happening since tensors are lazy, so I've added `xm.unlazy` calls after `fn` run with `matmul` result tensor. However I still was getting numbers which look like no work is being done.\r\n\r\nMy theory was that since that structure of computation is not changing backend is reusing results. So I tried to regenerate inputs on each iteration. I tried different approaches, with full regenerate, or with some ways so prepare is faster, such as:\r\n```\r\ndef prepare():\r\n a[0, 0] += 1\r\n b[0, 0] += 1\r\n return [a.clone().detach(), b.clone().detach()]\r\n``` \r\n\r\nBut with neither of my attempts I was able to achieve proper measurement of `matmul` function. I feel like I'm either measuring compilation speed, or no-op speed. Any tips on how to write this benchmark / establish better mental model when / how to avoid recompilation of the code, but still execution of it?\r\n\r\nThanks in advance!", "url": "https://github.com/pytorch/xla/issues/6464", "state": "closed", "labels": [ "question" ], "created_at": "2024-02-05T00:55:57Z", "updated_at": "2025-04-21T13:15:33Z", "user": "ttim" }, { "repo": "pytorch/tensordict", "number": 656, "title": "[Feature Request] Docs don't mention how to install tensordict / that it's a seperate package from torch", "body": "## Motivation\r\n\r\nAs a user, the first thing I'd want to see when looking at a docs for a package is something like:\r\n\r\n```\r\npip install <package>\r\n```\r\nOr \r\n```\r\nconda install <package>\r\n```\r\nThis seems like it's currently missing from the docs [here](https://pytorch.org/tensordict). 
It is included in the GitHub README, but when googling \"tensordict\" the docs on pytorch.org come up first.\r\n\r\nThe reason it would be good to include this is that the docs appear to be sub-docs of PyTorch, so at first glance it isn't clear that `tensordict` is not included in the `pytorch` package distribution and needs to be installed separately.\r\n\r\n## Solution\r\n\r\nThis should be added to the docs.\r\n\r\n## Alternatives\r\n\r\n## Additional context\r\n\r\n## Checklist\r\n\r\n- [x] I have checked that there is no similar issue in the repo (**required**)\r\n\r\n(Happy to add this if it seems like a good idea.)", "url": "https://github.com/pytorch/tensordict/issues/656", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-02-02T17:50:58Z", "updated_at": "2024-02-05T13:49:01Z", "user": "sradc" }, { "repo": "pytorch/tutorials", "number": 2858, "title": "Better specify `torch.compile behaviour` on nested function/module", "body": "### \ud83d\udcda The doc issue\n\nCan we better specify the behavior, and possibly the best practices, when decorating a function or compiling a module, and the effect on nested modules and nested function calls?\r\n\r\nhttps://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @sekyondaMeta @svekars @kit1980 @williamwen42 @msaroufim @ezyang @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng", "url": "https://github.com/pytorch/tutorials/issues/2858", "state": "closed", "labels": [ "medium", "docathon-h1-2024" ], "created_at": "2024-02-02T12:22:05Z", "updated_at": "2024-08-30T21:40:03Z", "comments": 10, "user": "bhack" }, { "repo": "pytorch/torchx", "number": 813, "title": "Docker build verbosity", "body": "## Description\r\nChange the docker image build to use its low-level implementation so it can be more verbose.\r\n\r\n## Motivation/Background\r\nBuilding the docker image can take quite some time, and for new users this makes it seem like the program is stuck (especially since the default base image that includes torchx is so big). Making it more verbose is not only a quality-of-life improvement for all users of the docker workspace, it also gives better visibility into the build process, potentially allowing optimization of the dockerfile.\r\n\r\nOn a side note, what is the rationale for naming it Dockerfile.torchx instead of just using a normal Dockerfile? Is there a difference in the format?\r\n\r\n\r\n## Detailed Proposal\r\nReplace the current docker build API call with its low-level implementation. This would require instantiating a low-level client and processing the build event stream to show the docker build output in real time. 
Also a processing function for the stream to be printing to screen correctly.\r\n\r\n\r\n## Alternatives\r\n\r\n\r\n\r\n## Additional context/links\r\nhttps://github.com/pytorch/torchx/blob/19497eb1d2649f66cd12ca1eeed77353085f07e0/torchx/workspace/docker_workspace.py#L118\r\n\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/813", "state": "closed", "labels": [], "created_at": "2024-01-31T18:49:35Z", "updated_at": "2024-04-11T17:42:34Z", "comments": 3, "user": "ccharest93" }, { "repo": "pytorch/tutorials", "number": 2859, "title": "Correctness of when to call `set_device` in the docs for DDP", "body": "### \ud83d\udcda The doc issue\n\nIn the docs tutorial on [how to set up Multi-GPU training](https://pytorch.org/tutorials/beginner/ddp_series_multigpu.html), it is suggested that the following is the proper way to setup each process (initializing the, e.g., NCCL, process group and then calling `torch.cuda.set_device(rank)`):\r\n\r\n```python\r\ndef ddp_setup(rank: int, world_size: int):\r\n \"\"\"\r\n Args:\r\n rank: Unique identifier of each process\r\n world_size: Total number of processes\r\n \"\"\"\r\n os.environ[\"MASTER_ADDR\"] = \"localhost\"\r\n os.environ[\"MASTER_PORT\"] = \"12355\"\r\n init_process_group(backend=\"nccl\", rank=rank, world_size=world_size)\r\n torch.cuda.set_device(rank)\r\n```\r\n\r\nHowever, these issues suggest that the proper way is to call `set_device` before initializing the process group:\r\n- https://github.com/pytorch/pytorch/issues/54550#issuecomment-808703316\r\n- https://github.com/pytorch/pytorch/issues/18689#issuecomment-479042701\r\n\r\nWhich is the correct order? Are there pauses or slowdowns if the order changes?\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @tianyu-l @wconstab @yf225", "url": "https://github.com/pytorch/tutorials/issues/2859", "state": "closed", "labels": [], "created_at": "2024-01-31T18:06:42Z", "updated_at": "2024-05-07T17:10:56Z", "comments": 5, "user": "craymichael" }, { "repo": "pytorch/cpuinfo", "number": 221, "title": "How to obtain information of CPU frequency?", "body": "if (core->processor_count == 1) {\r\n\t\t\tprintf(\"\\t%\" PRIu32 \": 1 processor (%\" PRIu32 \"), Frequency: %\" PRIu64 \" Hz\\n\",\r\n\t\t\t i,\r\n\t\t\t core->processor_start,\r\n\t\t\t core->frequency);\r\n}\r\nFrequency output 0", "url": "https://github.com/pytorch/cpuinfo/issues/221", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-01-30T03:21:26Z", "updated_at": "2025-12-30T22:59:44Z", "user": "yichenchenyi" }, { "repo": "pytorch/text", "number": 2227, "title": "Fail to import torchtext KeyError: 'SP_DIR'", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n\r\nI failed to import torchtext with the following error. I tried it with a fresh conda env install (under a different python version) and still got the same issue. \r\n\r\nOriginally I was able to use torchtext (I remember installed from pip) in an env of python 3.11, but then it raised error with the dataset module, so I updated torchtext with pip and started getting kernel crush for pytorch import. So I did some uninstall and install of the pytorch and torchtext packages from different sources (conda or pip) and couldn't fix the issue. Even a new conda env using python 3.10 raised the same error. 
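\r\n\r\nAs a temporary hack, pointing the variable that `torchtext/_extension.py` reads at my site-packages directory at least gets past the KeyError (I have not verified that the C++ extension then actually loads correctly):\r\n```python\r\nimport os, site\r\n\r\n# _extension.py builds _LIB_DIR from os.environ['SP_DIR'] / 'torch' / 'lib'\r\nos.environ['SP_DIR'] = site.getsitepackages()[0]\r\n\r\nimport torchtext\r\n```\r\n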
I don't know what is messed up.\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\nCell In[3], line 1\r\n----> 1 import torchtext\r\n\r\nFile ~/miniconda3/envs/ml2/lib/python3.10/site-packages/torchtext/__init__.py:6\r\n 3 from torch.hub import _get_torch_home\r\n 5 # the following import has to happen first in order to load the torchtext C++ library\r\n----> 6 from torchtext import _extension # noqa: F401\r\n 8 _TEXT_BUCKET = \\\"https://download.pytorch.org/models/text/\\\"\r\n 10 _CACHE_DIR = os.path.expanduser(os.path.join(_get_torch_home(), \\\"text\\\"))\r\n\r\nFile ~/miniconda3/envs/ml2/lib/python3.10/site-packages/torchtext/_extension.py:7\r\n 4 import torch\r\n 5 from torchtext._internal import module_utils as _mod_utils\r\n----> 7 _LIB_DIR = Path(os.environ[\\\"SP_DIR\\\"]) / \\\"torch\\\" / \\\"lib\\\"\r\n 10 def _get_lib_path(lib: str):\r\n 11 suffix = \\\"pyd\\\" if os.name == \\\"nt\\\" else \\\"so\\\"\r\n\r\nFile ~/miniconda3/envs/ml2/lib/python3.10/os.py:680, in _Environ.__getitem__(self, key)\r\n 677 value = self._data[self.encodekey(key)]\r\n 678 except KeyError:\r\n 679 # raise KeyError with the original key value\r\n--> 680 raise KeyError(key) from None\r\n 681 return self.decodevalue(value)\r\n\r\nKeyError: 'SP_DIR'\r\n```\r\n\r\n```\r\n# packages in environment at /Users/cecilia/miniconda3/envs/ml2:\r\n#\r\n# Name Version Build Channel\r\nannotated-types 0.6.0 pyhd8ed1ab_0 conda-forge\r\nappnope 0.1.3 pyhd8ed1ab_0 conda-forge\r\nasttokens 2.4.1 pyhd8ed1ab_0 conda-forge\r\nbrotli-python 1.1.0 py310h9e9d8ca_1 conda-forge\r\nbzip2 1.0.8 h10d778d_5 conda-forge\r\nca-certificates 2023.11.17 h8857fd0_0 conda-forge\r\ncatalogue 2.0.10 py310h2ec42d9_0 conda-forge\r\ncertifi 2023.11.17 pyhd8ed1ab_0 conda-forge\r\ncharset-normalizer 3.3.2 pyhd8ed1ab_0 conda-forge\r\nclick 8.1.7 unix_pyh707e725_0 conda-forge\r\ncloudpathlib 0.16.0 pyhd8ed1ab_0 conda-forge\r\ncolorama 0.4.6 pyhd8ed1ab_0 conda-forge\r\ncomm 0.2.1 pyhd8ed1ab_0 conda-forge\r\nconfection 0.1.4 py310h1cef2ca_0 conda-forge\r\ncymem 2.0.8 py310h9e9d8ca_1 conda-forge\r\ncython-blis 0.7.10 py310hf0b6da5_2 conda-forge\r\ndebugpy 1.8.0 py310h9e9d8ca_1 conda-forge\r\ndecorator 5.1.1 pyhd8ed1ab_0 conda-forge\r\ndouble-conversion 3.3.0 he965462_0 conda-forge\r\nexceptiongroup 1.2.0 pyhd8ed1ab_2 conda-forge\r\nexecuting 2.0.1 pyhd8ed1ab_0 conda-forge\r\nfilelock 3.13.1 pyhd8ed1ab_0 conda-forge\r\nfsspec 2023.12.2 pyhca7485f_0 conda-forge\r\ngmp 6.3.0 h93d8f39_0 conda-forge\r\ngmpy2 2.1.2 py310hb691cb2_1 conda-forge\r\nicu 73.2 hf5e326d_0 conda-forge\r\nidna 3.6 pyhd8ed1ab_0 conda-forge\r\nimportlib-metadata 7.0.1 pyha770c72_0 conda-forge\r\nimportlib_metadata 7.0.1 hd8ed1ab_0 conda-forge\r\nipykernel 6.29.0 pyh3cd1d5f_0 conda-forge\r\nipython 8.20.0 pyh707e725_0 conda-forge\r\njedi 0.19.1 pyhd8ed1ab_0 conda-forge\r\njinja2 3.1.3 pyhd8ed1ab_0 conda-forge\r\njoblib 1.3.2 pyhd8ed1ab_0 conda-forge\r\njupyter_client 8.6.0 pyhd8ed1ab_0 conda-forge\r\njupyter_core 5.7.1 py310h2ec42d9_0 conda-forge\r\nlangcodes 3.3.0 pyhd8ed1ab_0 conda-forge\r\nlibabseil 20230802.1 cxx17_h048a20a_0 conda-forge\r\nlibblas 3.9.0 ", "url": "https://github.com/pytorch/text/issues/2227", "state": "closed", "labels": [], "created_at": "2024-01-30T02:50:25Z", "updated_at": "2024-02-08T02:04:18Z", "comments": 1, "user": "cecilialee" }, { "repo": "pytorch/xla", "number": 6411, "title": "SPMD Global Batch size vs. 
--per_device_train_batch_size", "body": "## \u2753 Questions and Help\r\n\r\nHey all,\r\n\r\nAm looking to solidify my understanding and seeking a clarification on the SPMD user guide: https://github.com/pytorch-tpu/transformers/blob/llama2-google-next-training/SPMD_USER_GUIDE.md \r\n\r\nI see it says:\r\n\r\n _global_batch_size: The global batch size to use. Note that this value is supplied to the per_device_train_batch_size flag, since currently HuggingFace treats SPMD as a single-device program. This will change in future releases._\r\n\r\nI'd like to ask 2 questions here, to ensure my understanding is correct:\r\n\r\n1) With respect to the blog https://pytorch.org/blog/high-performance-llama-2/ and Figure 2, where it says, notably for the V4-32 use-case: \"per device batch\" = 16, Global Batch = 256, what was the argument to run_clm.py ? Was it \r\n--per_device_train_batch_size 256 ?\r\n\r\nIf it was indeed \"--per_device_train_batch_size 256 \" , is the \"Per Device Batch\" in Figure 2 just a simple calculation of 256/16 TPUv4-32 chips, and NOT an actual argument to run_clm.py ?\r\n\r\n\r\n2) Related, am looking to understand what (future release) project is tracking a refinement of how Global Batch Size is specified for a multi-device configuration ?\r\n\r\n\r\nMany thanks,\r\nIsaac", "url": "https://github.com/pytorch/xla/issues/6411", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2024-01-30T00:16:54Z", "updated_at": "2025-04-21T13:20:54Z", "user": "isaacr" }, { "repo": "pytorch/TensorRT", "number": 2624, "title": "\u2753 undefined reference when Building Torch-TensorRT", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\nI'm trying to build **Torch-TensorRT version 2.3.0a0**.\r\nI successfully built **Torch 2.3.0.dev**.\r\n\r\nWhen building Torch-TensorRT, if I comment **http_archive** for **libtorch** and **libtorch_pre_cxx11_abi** and use the **new_local_repository** for both of them I get an undefined reference error when running **sudo PYTHONPATH=$PYTHONPATH python3 setup.py install**\r\n\r\nNow If I leave http_archive for libtorch and libtorch_pre_cxx11_abi as default I can \"successfully\" build Torch-TensorRT but when trying to import it to any python code I get: \r\n\r\nImportError: /home/nick/.local/lib/python3.8/site-packages/torch_tensorrt/lib/libtorchtrt.so: undefined symbol: _ZN3c106detail23torchInternalAssertFailEPKcS2_jS2_RKSs\r\n\r\n\r\nIn the pyproject.toml file I can see that Torch.2.3.0 is mandatory for building Torch-TensorRT and that is the version of torch installed and running in my environment.\r\n\r\nNot sure on how to proceed since it seems I have all the required packages installed.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.3.0a0+git4aa1f99\r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): source\r\n - Build command you used (if compiling from source): sudo python3 setup.py build develop \r\n - Are you using local sources or building from archives: local\r\n - Python version: 3.8\r\n - CUDA version: 12.1\r\n - GPU models and configuration: 2080 ti\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2624", "state": "open", "labels": [ "question" ], "created_at": "2024-01-29T18:26:34Z", "updated_at": "2024-11-19T08:23:07Z", "user": "nicholasguimaraes" }, { "repo": "pytorch/vision", "number": 8236, "title": "segmentation fault when importing torchvision", "body": "### \ud83d\udc1b Describe the bug\n\nGet Segment Fault when import torchvision\r\n\r\n## Platform:\r\n Macbook Pro 2018 13.3' with macOS 14.3\r\n\r\n## Pytorch Version\r\n2.1.2\r\n\r\n## Torchvision Version:\r\n0.16.2\r\n\r\n\r\n## How to Reproduce\r\ninput below in shell terminal\r\n```sh\r\npython -c 'import torchvision'\r\n```\r\nthen the output is\r\n```sh\r\nzsh: segmentation fault python -c 'import torchvision'\r\n```\r\n\n\n### Versions\n\nPyTorch version: 2.1.2\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: macOS 14.3 (x86_64)\r\nGCC version: Could not collect\r\nClang version: 15.0.0 (clang-1500.1.0.2.5)\r\nCMake version: version 3.28.1\r\nLibc version: N/A\r\n\r\nPython version: 3.11.7 (main, Dec 15 2023, 12:09:04) [Clang 14.0.6 ] (64-bit runtime)\r\nPython platform: macOS-10.16-x86_64-i386-64bit\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nIntel(R) Core(TM) i5-8259U CPU @ 2.30GHz\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.26.3\r\n[pip3] torch==2.1.2\r\n[pip3] torchaudio==2.1.2\r\n[pip3] torchdata==0.7.1\r\n[pip3] torchtext==0.16.2\r\n[pip3] torchvision==0.16.2\r\n[conda] blas 1.0 mkl https://repo.anaconda.com/pkgs/main\r\n[conda] mkl 2023.1.0 h8e150cf_43560 https://repo.anaconda.com/pkgs/main\r\n[conda] mkl-service 2.4.0 py311h6c40b1e_1 https://repo.anaconda.com/pkgs/main\r\n[conda] mkl_fft 1.3.8 py311h6c40b1e_0 https://repo.anaconda.com/pkgs/main\r\n[conda] mkl_random 1.2.4 py311ha357a0b_0 https://repo.anaconda.com/pkgs/main\r\n[conda] numpy 1.26.3 py311h728a8a3_0 https://repo.anaconda.com/pkgs/main\r\n[conda] numpy-base 1.26.3 py311h53bf9ac_0 https://repo.anaconda.com/pkgs/main\r\n[conda] torch 2.1.2 pypi_0 pypi\r\n[conda] torchaudio 2.1.2 pypi_0 pypi\r\n[conda] torchdata 0.7.1 pypi_0 pypi\r\n[conda] torchtext 0.16.2 pypi_0 pypi\r\n[conda] torchvision 0.16.2 pypi_0 pypi", "url": "https://github.com/pytorch/vision/issues/8236", "state": "closed", "labels": [], "created_at": "2024-01-29T01:02:48Z", "updated_at": "2024-01-31T17:17:50Z", "comments": 9, "user": "Romeo-CC" }, { "repo": "pytorch/pytorch", "number": 118357, "title": "How to modify this framework to support using CUDA unified memory?", "body": "### \ud83d\ude80 The feature, motivation and pitch\r\n\r\nHi all,\r\n\r\nI am a PyTorch user and use open-sourced GPU-based GNN frameworks based on PyTorch. I want to ask if the latest GPU-based Pytorch support CUDA unified memory allocation for tensors?\r\nI found a PR https://github.com/pytorch/pytorch/pull/106200 has supported this to PyTorch, but it seems that it hasn't been merged.\r\nWhat should users do to enable this mode? 
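\r\n\r\nMy current guess, based on the pluggable-allocator hook in recent PyTorch releases, is something like the sketch below (untested; `managed_alloc.so` is a hypothetical library I would compile myself, exporting a malloc/free pair built on `cudaMallocManaged`/`cudaFree`):\r\n```python\r\nimport torch\r\n\r\n# Hypothetical .so exporting C functions with the pluggable-allocator signatures:\r\n#   void* managed_malloc(ssize_t size, int device, cudaStream_t stream)          # cudaMallocManaged\r\n#   void  managed_free(void* ptr, ssize_t size, int device, cudaStream_t stream)  # cudaFree\r\numa_allocator = torch.cuda.memory.CUDAPluggableAllocator(\r\n    'managed_alloc.so', 'managed_malloc', 'managed_free')\r\n\r\n# Must be done before any CUDA tensor is allocated\r\ntorch.cuda.memory.change_current_allocator(uma_allocator)\r\n\r\nx = torch.randn(1024, 1024, device='cuda')  # would now live in unified memory\r\n```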
\r\nWould you please suggest some instructions or lines of example code?\r\n\r\nThank you very much, sir!\r\n\r\n### Alternatives\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_\r\n\r\ncc @ptrblck @0x804d8000", "url": "https://github.com/pytorch/pytorch/issues/118357", "state": "closed", "labels": [ "module: cuda", "triaged", "module: CUDACachingAllocator" ], "created_at": "2024-01-26T03:41:02Z", "updated_at": "2024-02-01T03:41:58Z", "user": "zlwu92" }, { "repo": "pytorch/vision", "number": 8232, "title": "Input Norms and Channel Order for EfficientNet", "body": "### \ud83d\udcda The doc issue\n\nThe documentation for all pretrained models lacks clear details regarding the order of color channels for input images, as well as the specific normalization mean and standard deviation values. I am particularly looking for this information in relation to the EfficientNet model.\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/8232", "state": "closed", "labels": [], "created_at": "2024-01-25T22:17:07Z", "updated_at": "2024-01-26T10:10:49Z", "comments": 2, "user": "ivanstepanovftw" }, { "repo": "pytorch/serve", "number": 2907, "title": "How to use torchserve metrics", "body": "### \ud83d\udcda The doc issue\n\nWhen I call curl http://127.0.0.1:8082/metrics, it always returns empty results, even if it is called after model inference. But there is clearly a corresponding log in model_metrics.log. I saw that the previous Issue said that prometheus is currently supported as a plug-in? I would like to ask if there is any corresponding documentation.\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2907", "state": "closed", "labels": [], "created_at": "2024-01-25T07:50:39Z", "updated_at": "2024-03-20T21:53:20Z", "user": "pengxin233" }, { "repo": "pytorch/serve", "number": 2905, "title": "Can i use multiple workers in single GPU?", "body": "Thanks for your great project.\r\n\r\nI'm newbie and this is my first experience using Torchserve for my project.\r\nI tried to deploy my model using torchserve-gpu.\r\n\r\nIf I want better performance, I can increase the number of workers.\r\nWhen processing with a single worker, GPU usage was not high, so I added more workers to get more inference throughput.\r\n\r\nI think my short experience and knowledge can affect GPU resource scheduling.\r\nBut I'm asking because I think there may be more problems than I thought.\r\n\r\n- Is it ok to use more workers in single GPU environment?\r\n- Are there any other side effects of more workers settings?\r\n\r\n", "url": "https://github.com/pytorch/serve/issues/2905", "state": "closed", "labels": [ "question", "triaged" ], "created_at": "2024-01-25T01:40:42Z", "updated_at": "2024-01-30T06:15:08Z", "user": "Twinparadox" }, { "repo": "pytorch/TensorRT", "number": 2618, "title": "\u2753 [Question] How to compile a model with A16W8?", "body": "Hi Torch-TensorRT team:\r\n\r\nI'm wondering how can I compile a model with 8 bit weights, but using 16 bit activations?\r\nThanks a lot!", "url": "https://github.com/pytorch/TensorRT/issues/2618", "state": "open", "labels": [ "question" ], "created_at": "2024-01-23T12:53:23Z", "updated_at": "2024-01-25T20:47:14Z", "user": "jiangwei221" }, { "repo": "pytorch/xla", "number": 6362, "title": "How to do multi-machine spmd training\uff1f", "body": "## \u2753 Questions and Help\r\nAt present, I have passed the single-machine spmd training, but I do not 
know how to run the multi-machine spmd training. Could you give me a running example?\r\n@vanbasten23", "url": "https://github.com/pytorch/xla/issues/6362", "state": "closed", "labels": [], "created_at": "2024-01-23T03:33:52Z", "updated_at": "2024-03-13T09:21:25Z", "user": "mars1248" }, { "repo": "pytorch/text", "number": 2223, "title": "The Future of torchtext", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n\r\n<!-- Please send questions or ask for help here. -->\r\n\r\nAs of September 2023, development efforts on torchtext have been stopped. I am wondering what the future plans are in this regard. Is the idea to opt in to Hugging Face libraries such as tokenizers? Currently, without the torchtext library it's not really clear how to work on a simple task like text classification where we don't use an LLM. I can do the preprocessing in spaCy and connect it to PyTorch, but somehow it feels different. I'd prefer to do everything in PyTorch, but so far that doesn't seem possible. I haven't invested time into torchtext since, so far, there is no future for this library and the tutorials just don't work. Perhaps an update/pointers would be nice.\r\n\r\nThanks in advance\r\n", "url": "https://github.com/pytorch/text/issues/2223", "state": "closed", "labels": [], "created_at": "2024-01-22T20:40:10Z", "updated_at": "2024-03-15T16:18:22Z", "comments": 1, "user": "lordsoffallen" }, { "repo": "pytorch/serve", "number": 2899, "title": "How torchserve uses grpc in java", "body": "### \ud83d\udcda The doc issue\n\nI want to call a TorchServe model over gRPC from a Java service, but I haven't found any relevant documentation.\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2899", "state": "closed", "labels": [], "created_at": "2024-01-22T08:54:02Z", "updated_at": "2024-03-20T21:53:35Z", "comments": 2, "user": "pengxin233" }, { "repo": "pytorch/serve", "number": 2898, "title": "Low GPU utilization due to CPU-bound preprocessing", "body": "I am running torchserve with batch size = 32 and delay = 30 ms.\r\n\r\nMy preprocessing is CPU bound and my inference is GPU bound.\r\nThe GPU cannot start until the batch is ready on the CPU.\r\n\r\nCurrently, this leads to a serialized workflow where each stage blocks on the previous one:\r\n\r\n* Wait for the batch to accumulate in the \"front end\"\r\n* Preprocessing - CPU bound\r\n* Inference - GPU bound\r\n\r\nProblem\r\n======\r\nI am getting rather low GPU utilization.\r\nThis is because the GPU is idle while the batch is being prepared on the CPU.\r\n\r\nWhat I tried\r\n=========\r\nRunning multiple workers - helps, but is limited by the number of cores and GPU memory.\r\nUsing a threadpool for preprocessing - helps, but requires at least 2-3x more cores than workers to avoid contention.\r\n\r\nQuestion\r\n=======\r\nHow can I increase GPU utilization given that I need to wait for the preprocessing on the CPU?\r\nAny best practices or rules of thumb for this case?\r\n\r\n\r\nIdea\r\n====\r\nStart processing the batch as it's being built up on the frontend, instead of sitting 
idle until the entire batch is ready on the frontend:\r\n* Start accumulating a new batch\r\n* Immediately call handle() with a *generator* rather than wait for the batch to accumulate\r\n* Start preprocessing on the CPU from the generator (block as long as payloads are not yet available)\r\n* When generator is exhausted, pass the entire batch of tensors to the GPU and infer.\r\n\r\nI don't know if this idea is possible without major changes in the core, but putting it out there..\r\n\r\n", "url": "https://github.com/pytorch/serve/issues/2898", "state": "open", "labels": [], "created_at": "2024-01-20T14:01:30Z", "updated_at": "2024-01-24T05:15:05Z", "comments": 2, "user": "assapin" }, { "repo": "pytorch/kineto", "number": 857, "title": "Why PyTorch TensorBoard Profiler (Deprecated)", "body": "What is the reson to deptecate PyTorch TensorBoard Profiler ?\r\nhttps://github.com/pytorch/kineto#pytorch-tensorboard-profiler-deprecated", "url": "https://github.com/pytorch/kineto/issues/857", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-19T11:26:43Z", "updated_at": "2024-04-11T08:51:34Z", "user": "GuWei007" }, { "repo": "pytorch/xla", "number": 6331, "title": "How to choose XRT runtime when using Torch/XLA 2.1.0?", "body": "The PJRT docs say that setting `XRT_TPU_CONFIG` would choose the XRT runtime, but even when I set it I see the following warnings in the logs, and PJRT gets enabled. My model trains faster on XRT but I'd like to upgrade to 2.1.0. Thanks!\r\n\r\n```\r\nWARNING:root:PJRT is now the default runtime. For more information, see https://github.com/pytorch/xla/blob/master/docs/pjrt.md\r\nWARNING:root:libtpu.so and TPU device found. Setting PJRT_DEVICE=TPU.\r\n```\r\n", "url": "https://github.com/pytorch/xla/issues/6331", "state": "closed", "labels": [], "created_at": "2024-01-19T02:59:11Z", "updated_at": "2024-01-19T23:56:54Z", "user": "andrey-klochkov-liftoff" }, { "repo": "pytorch/TensorRT", "number": 2606, "title": "\u2753 [Question] mlp running with torch_tensorrt slower than with inductor\uff1f", "body": "## \u2753 Question\r\nI am within the nvcr.io/nvidia/pytorch:23.12-py3 container. 
The performance of torch_tensorrt is wrose than inductor.\r\nDetails:\r\nexample code\r\n```python\r\nimport torch\r\nimport torch_tensorrt\r\nimport torch.nn as nn\r\n\r\nclass MLPBlocks(nn.Module):\r\n def __init__(self, window_dim, hidden_dim):\r\n super().__init__()\r\n \r\n self.mlp_1 = nn.Sequential(\r\n nn.Linear(window_dim, window_dim * 4),\r\n nn.ReLU(),\r\n nn.Linear(window_dim * 4, window_dim),\r\n )\r\n self.mlp_2 = nn.Sequential(\r\n nn.Linear(hidden_dim, hidden_dim),\r\n nn.ReLU(),\r\n nn.Linear(hidden_dim, hidden_dim),\r\n )\r\n \r\n def forward(self, x):\r\n x = self.mlp_1(x.transpose(1, 2)).transpose(1, 2)\r\n x = self.mlp_2(x)\r\n return x\r\n\r\nclass MLP(nn.Module):\r\n def __init__(self, *_args):\r\n super(MLP, self).__init__()\r\n self.hidden_dim = 256\r\n self.window_dim = 50\r\n self.n_feature = 800\r\n \r\n self.fc_first = nn.Linear(self.n_feature, self.hidden_dim)\r\n self.fc_last = nn.Linear(self.hidden_dim, 1)\r\n self.blocks = nn.ModuleList([MLPBlocks(window_dim=self.window_dim, hidden_dim=self.hidden_dim) for _ in range(8)])\r\n \r\n def forward(self, input_x):\r\n net_x = self.fc_first(input_x.transpose(0, 1))\r\n for mlp_block in self.blocks:\r\n net_x = mlp_block(net_x)\r\n net_x = self.fc_last(torch.mean(net_x, dim=1))\r\n return net_x\r\n \r\ndef run_model(x, model):\r\n for _ in range(10):\r\n with torch.no_grad():\r\n res = model(x)\r\n \r\n torch.cuda.synchronize()\r\n start = torch.cuda.Event(enable_timing=True)\r\n end = torch.cuda.Event(enable_timing=True)\r\n start.record()\r\n \r\n for i in range(50):\r\n with torch.no_grad():\r\n res = model(x)\r\n \r\n end.record()\r\n torch.cuda.synchronize()\r\n return start.elapsed_time(end)/50\r\n \r\ndef test_inductor(data, model):\r\n x = data.float().cuda()\r\n m = model.float().cuda()\r\n torch._dynamo.reset()\r\n opt_model = torch.compile(m)\r\n print(f\"inductor fp32 time: {run_model(x, opt_model)}\")\r\n \r\n x = x.half()\r\n m = m.half()\r\n torch._dynamo.reset()\r\n opt_model = torch.compile(m)\r\n print(f\"inductor fp16 time: {run_model(x, opt_model)}\")\r\n \r\ndef test_trt_script(data, model):\r\n x = data.float().cuda()\r\n m = model.float().cuda()\r\n script_model = torch.jit.trace(m, x)\r\n trt_ts_model = torch_tensorrt.compile(script_model, ir=\"torchscript\", inputs=[x], enabled_precisions={torch.float})\r\n print(f\"trt_script fp32 time: {run_model(x, trt_ts_model)}\")\r\n \r\n x = x.half()\r\n m = m.half()\r\n script_model = torch.jit.trace(m, x)\r\n trt_ts_model = torch_tensorrt.compile(script_model, ir=\"torchscript\", inputs=[x], enabled_precisions={torch.half})\r\n print(f\"trt script fp16 time: {run_model(x, trt_ts_model)}\")\r\n \r\ndef test_trt_dynamo(data, model):\r\n x = data.float().cuda()\r\n m = model.float().cuda()\r\n torch._dynamo.reset()\r\n opt_model = torch_tensorrt.compile(m, ir=\"torch_compile\", inputs=[x], enabled_precisions={torch.float})\r\n print(f\"trt_dynamo fp32 time: {run_model(x, opt_model)}\")\r\n \r\n x = data.half().cuda()\r\n m = model.half().cuda()\r\n torch._dynamo.reset()\r\n opt_model = torch_tensorrt.compile(m, ir=\"torch_compile\", inputs=[x], enabled_precisions={torch.half})\r\n print(f\"trt_dynamo fp16 time: {run_model(x, opt_model)}\")\r\n \r\nif __name__ == \"__main__\":\r\n model = MLP()\r\n x = torch.randn(50, 5000, 800)\r\n test_inductor(x, model)\r\n test_trt_script(x, model)\r\n test_trt_dynamo(x, 
model)\r\n```\r\nresult\r\n![8f95d9cf-d710-44fe-b1e6-d21f97e08032](https://github.com/pytorch/TensorRT/assets/38726413/73262423-aadd-4579-aa54-456319f935d5)\r\n\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.2.0a0\r\n - CPU Architecture:\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.10\r\n - CUDA version: 12.3\r\n - GPU models and configuration: A100\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2606", "state": "open", "labels": [ "question" ], "created_at": "2024-01-18T11:29:42Z", "updated_at": "2024-01-19T19:27:17Z", "user": "johnzlli" }, { "repo": "pytorch/pytorch", "number": 117602, "title": "If I use torch.compile to compile the whole graph\uff0cin the my own compiler, how to manage the memory in my own compiler? ", "body": "### \ud83d\udc1b Describe the bug\n\nif I use torch.compile to compile the whole graph\uff0cin the my own compiler \uff0cin forward stage\uff0c\r\n1.if I enable memory reuse in the forward pass\uff0chow the backwards get the activation to calcute the gradient\uff1fhas there some example in pytorch\uff1f\r\n2.if i disable memory reuse\uff0cif i enable some op fusion\uff0cA op+B op fuison to one op, so the A output value is in sram or local memory or gloabal memory , torch can\u2019t get the activation, in backwards how to calcute the gradient?has there some example in pytorch\uff1f\r\n3.how to manage the memory in my own compiler to use torch.compile to speed up the training?\r\n4.how the backwards (autogradient) get the activation from my own compiler ? the memory of every op in the graph must be in ddr?\n\n### Error logs\n\nnone\n\n### Minified repro\n\nNone\n\n### Versions\n\nNone\n\ncc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler", "url": "https://github.com/pytorch/pytorch/issues/117602", "state": "closed", "labels": [ "oncall: pt2" ], "created_at": "2024-01-17T02:23:18Z", "updated_at": "2024-01-19T17:55:06Z", "user": "mollon650" }, { "repo": "pytorch/pytorch", "number": 117490, "title": "What is the next plan of FP8 support in PyTorch?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nNow PyTorch only supports FP8 data type conversion without scaling. The accuracy is not that good.\r\n\r\nWhat is the plan of FP8 support in PyTorch? Will FP8 DelayedScaling from TransformerEngine be taken into account? 
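\r\n\r\nTo make the question concrete, the kind of scaled conversion I have in mind is roughly the following per-tensor sketch (DelayedScaling would amortize the amax computation over a history of steps instead of recomputing it every time):\r\n```python\r\nimport torch\r\n\r\nx = torch.randn(4096) * 1e-3  # small-magnitude activations\r\n\r\n# Plain cast (what exists today) -- small values lose most of their precision:\r\nx_fp8 = x.to(torch.float8_e4m3fn)\r\n\r\n# Scaled cast: rescale into the representable range and keep the scale around:\r\nscale = torch.finfo(torch.float8_e4m3fn).max / x.abs().max()\r\nx_fp8_scaled = (x * scale).to(torch.float8_e4m3fn)\r\nx_dequant = x_fp8_scaled.to(torch.float32) / scale  # dequantize with the stored scale\r\n```\r\n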
Thanks!\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @svekars @brycebortree @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @albanD @kadeng", "url": "https://github.com/pytorch/pytorch/issues/117490", "state": "closed", "labels": [ "module: docs", "oncall: quantization", "triaged", "actionable", "module: floatx (formerly float8)" ], "created_at": "2024-01-15T10:02:37Z", "updated_at": "2024-01-26T01:48:45Z", "user": "yanbing-j" }, { "repo": "pytorch/audio", "number": 3725, "title": "Resampling at arbitrary time steps", "body": "### \ud83d\ude80 The feature\n\nCurrently, `torchaudio.functional.resample` can only resample at regular time points and the period is determined by `orig_freq` and `new_freq`.\r\n\r\nIs it possible to resample at arbitrary time steps?\r\nSo rather than specifying a resampling ratio, we specify a array of time steps.\n\n### Motivation, pitch\n\nI would like to be able to model jitter in an ADC which can be modelled by a slightly varying sample rate. If you integrate a sample rate curve (which isn't constant), you get irregular time steps. A function such as the one suggested above would allow me to resample using these time steps and model a jittery ADC.\n\n### Alternatives\n\nI've rolled out my own function but it's not super efficient. Some experts might do a better job.\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/audio/issues/3725", "state": "open", "labels": [], "created_at": "2024-01-12T09:20:10Z", "updated_at": "2024-01-16T18:52:40Z", "comments": 5, "user": "pfeatherstone" }, { "repo": "pytorch/serve", "number": 2894, "title": "How can I implement batch inference in my model?", "body": "### \ud83d\udcda The doc issue\r\n\r\nI read the docs, and I see this sentence:\r\n\r\n> The frontend then tries to aggregate the batch-size number of requests and send it to the backend.\r\n\r\nHow does it work?\r\n\r\nIn my case, my batch_size is 4 and max_batch_delay is 5000. I sent 2 request simultaneously to torchserve, but in my handler log, which showed torchserve ran 2 preprocess, inference and postprocess. This situation is not as expected? How I can achieve batch inference in my model? \r\n\r\nMy model have 3 input tensors, shapes are [12568, 20, 4], [12568], [12568, 4]. When batch size is 2, shapes are [12568 x 2, 20, 4], [12568 x 2], [12568 x 2, 4]. \r\n\r\n### Suggest a potential alternative/fix\r\n\r\n_No response_", "url": "https://github.com/pytorch/serve/issues/2894", "state": "closed", "labels": [], "created_at": "2024-01-11T10:39:58Z", "updated_at": "2024-01-12T05:28:13Z", "comments": 5, "user": "steelONIONknight" }, { "repo": "pytorch/serve", "number": 2892, "title": "Setting log level of handler", "body": "### \ud83d\udcda The doc issue\n\nI need to set the logging level of handler to debug, i wanna see all the logs (of torch also). The docs dont mention much other than setting the log level for torch serve itself (log4j ones). 
\r\nI tried setting the config inside the handler, but it didnt work\r\n\r\n```python\r\nlogging.basicConfig(level=logging.DEBUG)\r\n```\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2892", "state": "closed", "labels": [ "question", "triaged" ], "created_at": "2024-01-09T18:04:49Z", "updated_at": "2024-06-07T21:39:33Z", "user": "hariom-qure" }, { "repo": "pytorch/xla", "number": 6274, "title": "Inconsistent behaviour with `xm.xrt_world_size()` and/or `xm.get_xla_supported_devices()`", "body": "## \ud83d\udc1b Bug\r\n\r\nI noticed that when I execute some code (see further below) on a TPU VM v3-8 (inside a Python venv 3.10.12 + torch 2.1.2+cu121 + torch_xla 2.1.0) uncommenting each time either the `xm.xrt_world_size()` part (**Output 1**) or `xm.get_xla_supported_devices()` (**Output 2**) or none of them - both commented - (**Output 3**) I get different outputs, warnings and sometimes errors.\r\n\r\n**P.S.: I've been trying for a few days to workout how to perform a single multicore processing on one TPU v3-8 device with the ultimate goal to perform distributed training across 5 TPU v3-8 devices but unfortunately I'm still stuck in the most basic operations and struggling to understand how things work in practice. Any help is really appreciated.**\r\n\r\n## To Reproduce\r\n\r\n1. Created 5 TPU VMs as queued resources using the following script (i=1,2,3,4,5):\r\n\r\n```\r\ngcloud alpha compute tpus queued-resources create queued-resource-v3-8-$i \\\r\n --node-id=my-tpu-vm-v3-8-$i \\\r\n --project=my-tpu-project \\\r\n --zone=europe-west4-a \\\r\n --accelerator-type=v3-8 \\\r\n --runtime-version=tpu-ubuntu2204-base \\\r\n --service-account=my-service account\r\n```\r\n\r\n2. Then, once I got access to them I checked their status and once ready I access each one using e.g. (VM1):\r\n`gcloud compute tpus tpu-vm ssh my-tpu-vm-v3-8-1 --zone=europe-west4-a`\r\n\r\n3. I connected to each Cloud TPU VM and run the following startup script (some env variables are the same for all VMs such as `MASTER_ADDR`, `MASTER_PORT` while others are specific to each VM such as `TPU_IP_ADDRESS` and `RANK`):\r\n\r\n```\r\n#!/bin/bash\r\n\r\n# Check if both TPU IP and TPU NAME arguments are provided\r\nif [ \"$#\" -ne 3 ]; then\r\n echo \"Usage: setup_tpu.sh <TPU-IP-ADDRESS> <TPU-NAME> <ENV_PATH>\"\r\n exit 1\r\nfi\r\n\r\n# Read TPU IP address, TPU NAME from the arguments and path to the virtual environment\r\nTPU_IP=$1\r\nTPU_NAME=$2\r\nENV_PATH=$3\r\n\r\n# Install python3-venv for creating virtual environments\r\nsudo apt-get update\r\nsudo apt-get install -y python3.10-venv\r\n\r\n\r\n# Check if the virtual environment already exists\r\nif [ -d \"$ENV_PATH\" ]; then\r\n echo \"Virtual environment '$ENV_PATH' already exists. Deleting it.\"\r\n sudo rm -rf $ENV_PATH\r\nfi\r\n\r\necho \"Creating a new virtual environment '$ENV_PATH'.\"\r\n\r\n# Create a Python virtual environment\r\npython3 -m venv $ENV_PATH\r\nsource $ENV_PATH/bin/activate\r\n\r\n# Upgrade pip\r\npip install --upgrade pip\r\n\r\n# Install PyTorch and Torch XLA\r\n**pip install torch~=2.1.0 torch_xla[tpu]~=2.1.0 torchvision -f https://storage.googleapis.com/libtpu-releases/index.html**\r\n\r\n# Install other dependencies\r\npip install numpy pandas notebook tensorboard tqdm altair datasets tokenizers torchmetrics jupyter ipywidgets google-cloud-storage\r\n\r\n# The script clones the PyTorch/XLA repository. We clone the branch r2.1. 
If you need a different version, adjust the branch name accordingly.\r\n**git clone -b r2.1 https://github.com/pytorch/xla.git**\r\n\r\n# empty .bash_profile Before Adding New Variables\r\n> ~/.bash_profile\r\n\r\n# Set TPU and GCS related environment variables\r\n**echo \"export PJRT_DEVICE=TPU\" >> ~/.bash_profile**\r\necho \"export TPU_NAME=$TPU_NAME\" >> ~/.bash_profile\r\necho \"export TPU_IP_ADDRESS=$TPU_IP\" >> ~/.bash_profile\r\necho \"export XRT_TPU_CONFIG='tpu_worker;0;$TPU_IP:8470'\" >> ~/.bash_profile\r\necho \"export BUCKET_NAME='my-bucket'\" >> ~/.bash_profile\r\necho \"export GCS_MOUNTED_BUCKET=\\\"/mnt/buckets/$BUCKET_NAME\\\"\" >> ~/.bash_profile\r\necho \"export HF_DATASETS_CACHE=\\\"\\$GCS_MOUNTED_BUCKET/huggingface_datasets_cache\\\"\" >> ~/.bash_profile\r\nGCSFUSE_REPO=$(lsb_release -c -s)\r\necho \"export GCSFUSE_REPO='gcsfuse-$GCSFUSE_REPO'\" >> ~/.bash_profile\r\n# Environment variables for distributed training\r\necho \"export MASTER_ADDR='10.164.0.4'\" >> ~/.bash_profile # Replace with the IP address of the master VM (in our case it is VM1) \r\necho \"export MASTER_PORT=9230\" >> ~/.bash_profile # Replace with your chosen port (see GC Console > VPC network > Firewall > Protocols/ports)\r\necho \"export WORLD_SIZE=40\" >> ~/.bash_profile # Total number of TPU cores across all VMs\r\necho \"export RANK=0\" >> ~/.bash_profile # Unique rank for this VM (0, 1, 2, ..., num_vms - 1)\r\necho \"export LOCAL_RANK=0\" >> ~/.bash_profile # Local rank (0 for a single TPU VM)\r\necho \"export XLA_IR_DEBUG=1\" # enables verbose logging in PyTorch XLA to get more detailed logs, which might help in diagnosing any issues\r\n\r\n# Apply the environment variables\r\nsource ~/.bash_profile\r\n\r\necho \"TPU VM setup is complete.\"\r\n\r\n```\r\n\r\n4. I run the most basic test on each VM to make sure everything works as it should ([see here](https://cloud.google.com/tpu/docs/run-calculation-pytorch#perform_a_simple_calculation)):\r\n\r\n```\r\nimport torch\r\nimport torch_xla.core.xla_model as xm\r\n\r\ndev = xm.xla_device()\r\nt1 = torch.randn(3,3,device=dev)\r\nt2 = torch.randn(3,3,device=dev)\r\nprint(t1 + t2)\r\n```\r\n\r\n**Note: the above code works as expected.**\r\n\r\n5. I also went through the [Troubleshooting](https://github.com/pytorch/xla/blob/master/TROUBLESHOO", "url": "https://github.com/pytorch/xla/issues/6274", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2024-01-09T05:45:42Z", "updated_at": "2025-04-23T14:42:27Z", "user": "h-sellak" }, { "repo": "pytorch/kineto", "number": 854, "title": "Is Kineto planning to support backend extensions?", "body": "Hello, there is 'PrivateUse1' in pytorch to support backend integration. 
Will Kineto provide similar features?", "url": "https://github.com/pytorch/kineto/issues/854", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-08T03:19:53Z", "updated_at": "2024-04-23T15:21:34Z", "user": "fwenguang" }, { "repo": "pytorch/executorch", "number": 1548, "title": "How to implement the \"aten.mul.Scalar\" for Qualcomm backend", "body": "The second arg of \"aten.mul.Scalar\" is const scalar value, such as float: 0.5f.\r\nThe function define_tensor/define_scalar/define_value of NodeVisitor should get the arg \"node\" as input, but how can I define one node like torch.fx.Node for const scalar value?", "url": "https://github.com/pytorch/executorch/issues/1548", "state": "closed", "labels": [ "partner: qualcomm", "triaged" ], "created_at": "2024-01-06T09:12:19Z", "updated_at": "2024-01-09T02:18:37Z", "user": "czy2014hust" }, { "repo": "pytorch/pytorch", "number": 116922, "title": "How to adapt to `at::scaled_dot_product_attention`'s routing logic for a third-party cuda-like device?", "body": "https://github.com/pytorch/pytorch/blob/f24bba1624a8bb5c920833b18fc6162db084ca09/aten/src/ATen/native/transformers/attention.cpp#L635-L642\r\n\r\nNow, I am adapting `at::scaled_dot_product_attention` to a specific type of cuda-like device and encounters a problem.\r\nIn `at::scaled_dot_product_attention`, it will choose a path between `at::_scaled_dot_product_flash_attention`, `at::_scaled_dot_product_efficient_attention` and `at::_scaled_dot_product_attention_math` for `cpu`, `cuda` and `romc` in the routing codes.\r\nBut for another cuda-like device, the routing codes will always go into the `at::_scaled_dot_product_attention_math` path.\r\n\r\nIf I have implemented all these three paths for my cuda-like device, how can I change these routing codes to fully support `at::scaled_dot_product_attention`?\r\nShould I write a new `at::scaled_dot_product_attention` only for my cuda-like device, or just change some codes in the current torch repository?\r\nI need some suggestions, thank you!\r\n\r\ncc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki", "url": "https://github.com/pytorch/pytorch/issues/116922", "state": "closed", "labels": [], "created_at": "2024-01-06T07:28:43Z", "updated_at": "2024-01-15T02:13:30Z", "user": "drslark" }, { "repo": "pytorch/serve", "number": 2890, "title": "Difference between `Custom handler with module level entry point` and `Custom handler with class level entry point`", "body": "### \ud83d\udcda The doc issue\n\n# Not an issue\r\nWhat is the difference between `Custom handler with module level entry point` and `Custom handler with class level entry point`?\r\nCan you give me any examples? 
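\r\n\r\nMy current reading of the custom-service docs is roughly the following (toy sketches, no real model loading):\r\n```python\r\n# Module level entry point: the handler file just exposes a top-level function,\r\n# e.g. torch-model-archiver --handler my_module:handle\r\ndef handle(data, context):\r\n    if data is None:\r\n        return None\r\n    return [str(item) for item in data]  # toy echo \"inference\"\r\n\r\n# Class level entry point: the handler file defines a class; TorchServe\r\n# instantiates it, calls initialize() once, then handle() per batch\r\nclass ModelHandler:\r\n    def __init__(self):\r\n        self.initialized = False\r\n\r\n    def initialize(self, context):\r\n        # load weights from context.system_properties / the model dir here\r\n        self.initialized = True\r\n\r\n    def handle(self, data, context):\r\n        if not self.initialized:\r\n            self.initialize(context)\r\n        return [str(item) for item in data]\r\n```\r\nIs that the right distinction?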
\r\nThanks for help\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2890", "state": "closed", "labels": [ "question", "triaged" ], "created_at": "2024-01-05T20:45:25Z", "updated_at": "2024-01-25T05:07:51Z", "user": "IonBoleac" }, { "repo": "pytorch/TensorRT", "number": 2579, "title": "\u2753 [Question] Support for layers with Custom C++ and CUDA Extensions", "body": "## \u2753 Question\r\nSupport for layers with Custom C++ and CUDA Extensions\r\n\r\n## What you have already tried\r\nCan I convert the LLTM class in directory `cuda` of https://github.com/pytorch/extension-cpp (below) into a tensorrt engine through Torch-TensorRT?\r\n\r\nI tried the code below:\r\n```lltm.py\r\nimport math\r\nfrom torch import nn\r\nfrom torch.autograd import Function\r\nimport torch\r\n\r\nimport lltm_cuda\r\n\r\ntorch.manual_seed(42)\r\n\r\nclass LLTMFunction(Function):\r\n @staticmethod\r\n def forward(ctx, input, weights, bias, old_h, old_cell):\r\n outputs = lltm_cuda.forward(input, weights, bias, old_h, old_cell)\r\n new_h, new_cell = outputs[:2]\r\n variables = outputs[1:] + [weights]\r\n ctx.save_for_backward(*variables)\r\n\r\n return new_h, new_cell\r\n\r\n @staticmethod\r\n def backward(ctx, grad_h, grad_cell):\r\n outputs = lltm_cuda.backward(\r\n grad_h.contiguous(), grad_cell.contiguous(), *ctx.saved_variables)\r\n d_old_h, d_input, d_weights, d_bias, d_old_cell, d_gates = outputs\r\n return d_input, d_weights, d_bias, d_old_h, d_old_cell\r\n\r\n\r\nclass LLTM(nn.Module):\r\n def __init__(self, input_features, state_size):\r\n super(LLTM, self).__init__()\r\n self.input_features = input_features\r\n self.state_size = state_size\r\n self.weights = nn.Parameter(\r\n torch.Tensor(3 * state_size, input_features + state_size))\r\n self.bias = nn.Parameter(torch.Tensor(1, 3 * state_size))\r\n self.reset_parameters()\r\n\r\n def reset_parameters(self):\r\n stdv = 1.0 / math.sqrt(self.state_size)\r\n for weight in self.parameters():\r\n weight.data.uniform_(-stdv, +stdv)\r\n\r\n def forward(self, input, state):\r\n return LLTMFunction.apply(input, self.weights, self.bias, *state)\r\n\r\nimport torch_tensorrt\r\n\r\nmodel = LLTM(64, 32).cuda()\r\nprint(\r\n model(\r\n torch.randn(2, 64).cuda(),\r\n torch.randn(2, 32).cuda(),\r\n torch.randn(2, 32).cuda(),\r\n )\r\n)\r\n\r\n\r\ntraced_model = torch.jit.trace(\r\n model,\r\n [\r\n torch.randn(2, 64).cuda(),\r\n torch.randn(2, 32).cuda(),\r\n torch.randn(2, 32).cuda(),\r\n ],\r\n)\r\n\r\nimport torch_tensorrt\r\n\r\ntrt_model = torch_tensorrt.compile(\r\n traced_model,\r\n inputs=[\r\n torch_tensorrt.Input((2, 64), dtype=torch.float32),\r\n torch_tensorrt.Input((2, 32), dtype=torch.float32),\r\n torch_tensorrt.Input((2, 32), dtype=torch.float32),\r\n ],\r\n enabled_precisions={torch.float32},\r\n)\r\n```\r\noutput:\r\n```\r\n[1] 895656 segmentation fault (core dumped) python lltm.py\r\n````\r\n\r\n\r\n\r\nI read the relevant materials, but I have no idea how to proceed at all.", "url": "https://github.com/pytorch/TensorRT/issues/2579", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-05T07:25:23Z", "updated_at": "2024-01-15T06:22:05Z", "user": "Siyeong-Lee" }, { "repo": "pytorch/TensorRT", "number": 2577, "title": "Can please somebody give a clear explanation of how to install torch-tensorrt on Windows?", "body": "## \u2753 Question\r\n\r\nHello,\r\n\r\nI've encountered problems installing torch-tensorrt on Windows 10\r\n\r\nNo matter how I try, how many sources I look up to, 
there is no clear explanation on how to do everything. The documentation is vague, and because I am used to working with python code, which does everything for you, that is pip install... python code.py, and nothing more is required, I do not have as much experience with cmake, building libraries, files, and c++, which makes it very difficult to follow along the installation process.\r\n\r\n\r\n\r\nNow I've tried to follow along instructions from the [main page](https://github.com/pytorch/TensorRT)\r\n\r\npip install torch-tensorrt doesn't work\r\ndownloaded zip file of this repository; python setup.py install also doesn't work\r\n\r\ninstalled bazel\r\nmodified the workspace, still nothing\r\n\r\ntried to directly import into code py/torch-tensorrt - nothing\r\n\r\nthen inside the py folder opened command prompt ant typed in:\r\n\r\n`bazel build //:libtorchtrt --compilation_mode=dbg`\r\n\r\nand received this error:\r\n\r\n`Starting local Bazel server and connecting to it...\r\nINFO: Repository libtorch instantiated at:\r\n D:/pyth/tensorrt-main/WORKSPACE:53:13: in <toplevel>\r\nRepository rule http_archive defined at:\r\n C:/users/tomas/_bazel_tomas/r4zfvyvs/external/bazel_tools/tools/build_defs/repo/http.bzl:372:31: in <toplevel>\r\nWARNING: Download from https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip failed: class com.google.devtools.build.lib.bazel.repository.downloader.ContentLengthMismatchException Bytes read 2210658461 but wanted 2501377827\r\nERROR: An error occurred during the fetch of repository 'libtorch':\r\n Traceback (most recent call last):\r\n File \"C:/users/tomas/_bazel_tomas/r4zfvyvs/external/bazel_tools/tools/build_defs/repo/http.bzl\", line 132, column 45, in _http_archive_impl\r\n download_info = ctx.download_and_extract(\r\nError in download_and_extract: java.io.IOException: Error downloading [https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip] to C:/users/tomas/_bazel_tomas/r4zfvyvs/external/libtorch/temp7217651597570855917/libtorch-cxx11-abi-shared-with-deps-latest.zip: Bytes read 2210658461 but wanted 2501377827\r\nERROR: D:/pyth/tensorrt-main/WORKSPACE:53:13: fetching http_archive rule //external:libtorch: Traceback (most recent call last):\r\n File \"C:/users/tomas/_bazel_tomas/r4zfvyvs/external/bazel_tools/tools/build_defs/repo/http.bzl\", line 132, column 45, in _http_archive_impl\r\n download_info = ctx.download_and_extract(\r\nError in download_and_extract: java.io.IOException: Error downloading [https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip] to C:/users/tomas/_bazel_tomas/r4zfvyvs/external/libtorch/temp7217651597570855917/libtorch-cxx11-abi-shared-with-deps-latest.zip: Bytes read 2210658461 but wanted 2501377827\r\nERROR: D:/pyth/tensorrt-main/core/util/logging/BUILD:13:11: //core/util/logging:logging depends on @libtorch//:libtorch in repository @libtorch which failed to fetch. 
no such package '@libtorch//': java.io.IOException: Error downloading [https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip] to C:/users/tomas/_bazel_tomas/r4zfvyvs/external/libtorch/temp7217651597570855917/libtorch-cxx11-abi-shared-with-deps-latest.zip: Bytes read 2210658461 but wanted 2501377827\r\nERROR: Analysis of target '//:libtorchtrt' failed; build aborted:\r\nINFO: Elapsed time: 458.697s\r\nINFO: 0 processes.\r\nFAILED: Build did NOT complete successfully (64 packages loaded, 413 targets configured)\r\n Fetching https://download.pytorch.org/...orch-cxx11-abi-shared-with-deps-latest.zip; 2.1 GiB (2,210,121,825B) 446s\r\n\r\n\r\n\r\nAnd also tried some other things, I cannot remember, but unsuccessfully.\r\n\r\n\r\n\r\nTHANK YOU FOR YOUR HELP IN ADVANCE\r\n\r\n\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version: 2.1.2+cu121\r\n - OS : Windows 10\r\n - I am running python and pytorch straight from Windows, without any environment\r\n - Python version: 3.10.13\r\n - CUDA version: 12.1 update 1\r\n - GPU models and configuration: GTX 1660 TI", "url": "https://github.com/pytorch/TensorRT/issues/2577", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-05T02:52:01Z", "updated_at": "2025-12-02T18:12:43Z", "user": "ninono12345" }, { "repo": "pytorch/executorch", "number": 1527, "title": "How to build qnn_executor_runner for linux-gcc9.3?", "body": "My requirements are that I want to compile the model on x86 host and run the inference on linux device using Qualcomm AI Engine, e.g. SA8295. So how to build `qnn_executor_runner` for linux-gcc9.3 not android? thanks~\r\nthe libQnnHtp.so is different in qnn.\r\n```\r\n$ find . -name libQnnHtp.so\r\n./lib/aarch64-oe-linux-gcc9.3/libQnnHtp.so\r\n./lib/aarch64-android/libQnnHtp.so\r\n```", "url": "https://github.com/pytorch/executorch/issues/1527", "state": "closed", "labels": [ "partner: qualcomm", "triaged" ], "created_at": "2024-01-03T09:04:08Z", "updated_at": "2024-01-29T07:49:12Z", "user": "huangzhiyuan" }, { "repo": "pytorch/pytorch", "number": 116687, "title": "How to install pytorch on", "body": "", "url": "https://github.com/pytorch/pytorch/issues/116687", "state": "closed", "labels": [], "created_at": "2024-01-03T08:12:33Z", "updated_at": "2024-01-03T08:42:38Z", "user": "Joseph513shen" }, { "repo": "pytorch/examples", "number": 1208, "title": "add examples/siamese_network with triplet loss example", "body": "<!--\r\nThank you for suggesting an idea to improve pytorch/examples\r\n\r\nPlease fill in as much of the template below as you're able.\r\n-->\r\n\r\n## Is your feature request related to a problem? 
Please describe.\r\nCan you please provide an example of Siamese network training / testing with triplet loss such that it can be used with more complex image datasets?\r\n\r\n## Describe the solution\r\nEither add an args flag to set triplet loss as the method in the existing example, or provide a separate example for triplet loss.\r\n\r\n## Describe alternatives solution\r\nI tried to do this on my own.\r\n", "url": "https://github.com/pytorch/examples/issues/1208", "state": "open", "labels": [], "created_at": "2024-01-01T19:19:35Z", "updated_at": "2024-01-01T19:19:35Z", "comments": 0, "user": "pax7" }, { "repo": "pytorch/tutorials", "number": 2724, "title": "\ud83d\udca1 Request - Tutorials for Holistic Trace Analysis", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nAdd tutorials explaining how to use features in Holistic Trace Analysis.\n\n### Existing tutorials on this topic\n\nNone\n\n### Additional context\n\nHTA eases the profiling distributed jobs in PyTorch. In order to introduce HTA to the PyTorch community it would be beneficial to add some tutorials.", "url": "https://github.com/pytorch/tutorials/issues/2724", "state": "closed", "labels": [], "created_at": "2023-12-28T21:56:27Z", "updated_at": "2024-01-02T23:03:08Z", "comments": 0, "user": "anupambhatnagar" }, { "repo": "pytorch/TensorRT", "number": 2558, "title": "How to set the input when compiling model for non-image input?", "body": "Hi, I have trained a model whose input is a set of 3D points with a shape `Nx3`, N is not a fixed number. In this case, how to set the input during compiling my model? \r\n\r\nFor image, the input shape is like this:\r\n```\r\ninputs = [torch.randn((1, 3, 224, 224)).to(\"cuda\").half()]\r\n```\r\n\r\nWhat if for my case? Thank you!\n```[tasklist]\n### Tasks\n```\n", "url": "https://github.com/pytorch/TensorRT/issues/2558", "state": "open", "labels": [ "question" ], "created_at": "2023-12-26T12:34:22Z", "updated_at": "2023-12-27T18:20:31Z", "user": "DeepDuke" }, { "repo": "pytorch/xla", "number": 6234, "title": "How to judge the input parameters in an hlo graph, which is the weight of the model", "body": "## \u2753 Questions and Help\r\nHow to judge the input parameters in an hlo graph, which is the weight of the model (that is, the parameters saved by the model and the parameters thought of the model training), is there any good way to judge it in C++ torch xla source code?\r\n \r\n for example: (one model of only linear Op) \r\n I want to find out the linear bias and weith. 
In here, %arg0: bias and %arg1: weight .\r\n\r\n```\r\n func.func @main(%arg0: tensor<5xf32>, %arg1: tensor<5x10xf32>, %arg2: tensor<1x10xf32>, %arg3: tensor<1x5xf32>) -> tuple<tensor<1x5xf32>, tensor<f32>> {\r\n %0 = mhlo.reshape %arg0 : (tensor<5xf32>) -> tensor<1x5xf32>\r\n %1 = \"mhlo.transpose\"(%arg1) {permutation = dense<[1, 0]> : tensor<2xi64>, xla_shape = \"f32[10,5]{0,1}\"} : (tensor<5x10xf32>) -> tensor<10x5xf32>\r\n %2 = \"mhlo.fusion\"(%0, %arg2, %1) ({\r\n ^bb0(%arg4: tensor<1x5xf32>, %arg5: tensor<1x10xf32>, %arg6: tensor<10x5xf32>):\r\n %10 = \"mhlo.dot\"(%arg5, %arg6) {precision_config = [#mhlo<precision DEFAULT>, #mhlo<precision DEFAULT>]} : (tensor<1x10xf32>, tensor<10x5xf32>) -> tensor<1x5xf32>\r\n %11 = mhlo.add %10, %arg4 : tensor<1x5xf32>\r\n mhlo.return %11 : tensor<1x5xf32>\r\n }) {fusion_kind = #mhlo<fusion_kind kLoop>} : (tensor<1x5xf32>, tensor<1x10xf32>, tensor<10x5xf32>) -> tensor<1x5xf32> \r\n %3 = mhlo.subtract %2, %arg3 : tensor<1x5xf32>\r\n %4 = mhlo.multiply %3, %3 : tensor<1x5xf32>\r\n %5 = mhlo.constant dense<0.000000e+00> : tensor<f32>\r\n %6 = mhlo.reduce(%4 init: %5) across dimensions = [0, 1] : (tensor<1x5xf32>, tensor<f32>) -> tensor<f32>\r\n reducer(%arg4: tensor<f32>, %arg5: tensor<f32>) {\r\n %10 = mhlo.add %arg4, %arg5 : tensor<f32>\r\n mhlo.return %10 : tensor<f32>\r\n }\r\n %7 = mhlo.constant dense<2.000000e-01> : tensor<f32>\r\n %8 = mhlo.multiply %6, %7 : tensor<f32> \r\n %9 = \"mhlo.tuple\"(%2, %8) {xla_shape = \"(f32[1,5]{1,0}, f32[])\"} : (tensor<1x5xf32>, tensor<f32>) -> tuple<tensor<1x5xf32>, tensor<f32>>\r\n return %9 : tuple<tensor<1x5xf32>, tensor<f32>>\r\n }\r\n```", "url": "https://github.com/pytorch/xla/issues/6234", "state": "closed", "labels": [], "created_at": "2023-12-25T09:22:28Z", "updated_at": "2024-01-24T06:22:24Z", "user": "ckfgihub" }, { "repo": "pytorch/TensorRT", "number": 2557, "title": "\u2753 [Question] a10 performance drop significantly", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\nI converted the gfpgan model (https://github.com/TencentARC/GFPGAN) with torch_tensorrt, and I found torch_tensorrt is twice as fast as torch in 3070. But in one a10 server, torch_tensorrt and torch are closed; In other a10 server, torch_tensorrt is even twice as slow as torch. Statics shows below. \uff08two type of a10 from two difference cloud server\uff09.\r\n\r\n| GPU | CPU | CPU core | CPU freq | memory | inference framework | CPU usage | memory usage | GPU usage | inference time |\r\n|------------|---------|----------|---------|----------|------|-----------|-----------|----------|----------|\r\n| 3070 | AMD Ryzen 7 5800X 8-Core Processor | 16 | 2200-3800MHz | 32G | pytorch | 30-35% | 160-170% | 13.5g 987.7m | 33.889511s |\r\n| 3070 | | | | | torch_tensorrt | 15-20% | 180-200% | 11.7g 1.1g | 16.259879s |\r\n| a10\uff08v1\uff09 | Intel (R) Xeon (R) Platinum 8350C CPU @ 2.60GHz | 28 | 2593MHz | 112G | pytorch | 25-30% | 190-200% | 15.1g 1.2g | 33.933190s |\r\n| a10\uff08v1\uff09 | | | | | torch_tensorrt | 15-20% | 190-200% | 13.0g 1.2g | 31.899047s |\r\n| a10\uff08v2\uff09| Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz | 28 | 2300-4600MHz | 112G | pytorch | 20-30% | 180-200% | 15.1g 1.0g | 34.027398s |\r\n| a10\uff08v2\uff09| | | | | torch_tensorrt | 10-15% | 160-170% | 13.1g 1.1g | 66.498723s |\r\n\r\nI also tried torch2trt(https://github.com/NVIDIA-AI-IOT/torch2trt) and fixed some op error, finding it's twice as fast as torch_tensorrt in 3070. 
And performance didn't drop so strangely in a10 server.\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): nvcr.io/nvidia/pytorch:23.08-py3 \r\n - CPU Architecture: as above\r\n - OS (e.g., Linux): linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): docker\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration: as above\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2557", "state": "open", "labels": [ "question" ], "created_at": "2023-12-25T08:54:43Z", "updated_at": "2024-01-05T02:12:17Z", "user": "ArtemisZGL" }, { "repo": "pytorch/tutorials", "number": 2721, "title": "[BUG] - <title>RuntimeError: CUDA error: an illegal memory access was encountered using vmap and model ensembling call for cuda system", "body": "### Add Link\n\nhttps://pytorch.org/tutorials/intermediate/ensembling.html\r\nhttps://pytorch.org/docs/stable/notes/extending.func.html#defining-the-vmap-staticmethod\n\n### Describe the bug\n\n### \ud83d\udc1b Describe the bug\r\n\r\nI want to use **vmap** to vectorize the **ensemble models** inherited from torch.autograd.Function. And torch.autograd.Function\u2019s forward/backward calls into functions from **cuda**. etc, \r\n\r\n\r\nFirstly, I set **generate_vmap_rule=True** ,which means calling the system's vmap function directly.\r\n**error: RuntimeError: Cannot access data pointer of Tensor that doesn't have storage**\r\nBecaue model calls for cuda system\uff0cI need to write the own vmap, \r\n\r\n```\r\ndef vmap(info,in_dims,input):\r\n if in_dims[0] is not None:\r\n input_B = input.shape[0]\r\n input = einops.rearrange(input,'B N C -> (B N) C') \r\n outputs,_,_ = model.apply(input)\r\n if in_dims[0] is not None:\r\n outputs = einops.rearrange(input,'(B N) C -> B N C',B = input_B)\r\n return outputs,(0)\r\n```\r\n\r\n**error: RuntimeError: CUDA error: an illegal memory access was encountered,CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.**\r\n\r\n**How can I write the vmap.py to deal the Multiple models process multiple batches of data and models call for cuda to process data?**\r\n\r\ncode follows,I simplify the model class.\r\n\r\n```\r\ndef model(torch.autograd.Function):\r\n def foward():\r\n calls for cuda forward\r\n def backward():\r\n calls for cuda backward\r\n def setup_context():\r\n @staticmethod\r\n def vmap():\r\n\r\nfrom torch.func import stack_module_state\r\nb_p = torch.randn([10,100,3]).cuda() \r\n \r\nobjs = [model() for i in range(10)]\r\npe_models = []\r\nfor obj in objs:\r\n pe_models.append(obj.pe)\r\npe_param, pe_buffer = stack_module_state(pe_models)\r\nbase_model = copy.deepcopy(pe_models[0])\r\ndef fmodel(params,buffers,x):\r\n return functional_call(base_model,(params,buffers),x)\r\nout = vmap(fmodel)(pe_param,pe_buffer,b_p)\r\n```\r\n\r\n\n\n### Describe your environment\n\n### Versions\r\n\r\npytorch2.0\r\ncuda11.7\r\npython 3.8 \r\nubuntu20.4\r\ncollect_env.py error update later\n\ncc @albanD", "url": "https://github.com/pytorch/tutorials/issues/2721", "state": "open", "labels": [ "bug", "core" ], 
"created_at": "2023-12-22T09:26:03Z", "updated_at": "2024-01-04T08:27:38Z", "comments": 2, "user": "wuyingxiong" }, { "repo": "pytorch/serve", "number": 2866, "title": "The workers works in parallelism? ", "body": "### \ud83d\udcda The doc issue\n\nThis is not an issue. How the workers works in parallelism and if they works in parallelism? \n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2866", "state": "closed", "labels": [ "triaged" ], "created_at": "2023-12-21T23:33:39Z", "updated_at": "2024-01-05T20:54:53Z", "comments": 7, "user": "IonBoleac" }, { "repo": "pytorch/audio", "number": 3720, "title": "Can't install some of the libraries", "body": "Hello, i have a problem while installing some of the libraries because i can't install module fcntl. Is there any solution because on one windows pc works but on my main it doesn't. That module is linux dependent.", "url": "https://github.com/pytorch/audio/issues/3720", "state": "open", "labels": [], "created_at": "2023-12-21T13:58:55Z", "updated_at": "2023-12-21T13:58:55Z", "comments": 0, "user": "Toplica001" }, { "repo": "pytorch/audio", "number": 3719, "title": "streamreader add_video_stream doesn't seem to accept any filter_desc options", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI'm using the following options in my streamreader:\r\n```\r\nvr.add_video_stream(\r\n frames_per_chunk=decode_size, \r\n decoder=codec, \r\n decoder_option={\"threads\": \"0\", \"gpu\": \"0\"}, \r\n hw_accel='cuda',\r\n filter_desc=f\"format=pix_fmts=rgb24\"\r\n )\r\n```\r\n\r\nUnfortunately I get the error `RuntimeError: Failed to configure the graph: Function not implemented`. \r\nIf I remove the filter_desc option the code runs normally. For me the streamreader is not very useful if the output is not in rgb24 but in yuv444p instead. 
Is there a way to fix this (without moving to the nightly build), or are there any alternatives?\r\n\r\n\r\n### Versions\r\n\r\nPyTorch version: 2.1.2+cu118\r\nIs CUDA available: True\r\n[pip3] numpy==1.24.1\r\n[pip3] torch==2.1.2+cu118\r\n[pip3] torchaudio==2.1.2+cu118\r\n[pip3] torchvision==0.16.2+cu118\r\n[pip3] triton==2.1.0\r\n", "url": "https://github.com/pytorch/audio/issues/3719", "state": "open", "labels": [], "created_at": "2023-12-21T09:58:03Z", "updated_at": "2023-12-28T07:46:49Z", "comments": 1, "user": "caspersmit-sa" }, { "repo": "pytorch/benchmark", "number": 2094, "title": "how to get the memory test job", "body": "https://arxiv.org/pdf/2304.14226.pdf the paper says torchbench can do memory test\uff0c but I can\u2018t find any test jobs for memory test\r\n\r\nhttps://github.com/pytorch/benchmark/actions", "url": "https://github.com/pytorch/benchmark/issues/2094", "state": "closed", "labels": [], "created_at": "2023-12-19T14:18:29Z", "updated_at": "2023-12-20T01:59:46Z", "user": "GuWei007" }, { "repo": "pytorch/TensorRT", "number": 2551, "title": "\u2753 [Question] Error regarding the operation of pytorch_quantization\uff1a/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found ", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\nWhen I run finetune_qat.py for vgg I get the error:\r\n\r\n```\r\npython finetune_qat.py \r\nTraceback (most recent call last):\r\n File \"/home/incar/tms/source/tensortclassicify/finetune_qat.py\", line 16, in <module>\r\n from pytorch_quantization import nn as quant_nn\r\n File \"/home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/__init__.py\", line 20, in <module>\r\n from .quant_modules import *\r\n File \"/home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/quant_modules.py\", line 23, in <module>\r\n from pytorch_quantization import nn as quant_nn\r\n File \"/home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/nn/__init__.py\", line 19, in <module>\r\n from pytorch_quantization.nn.modules.tensor_quantizer import *\r\n File \"/home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/nn/modules/tensor_quantizer.py\", line 24, in <module>\r\n from pytorch_quantization.tensor_quant import QuantDescriptor, tensor_quant, fake_tensor_quant, scaled_e4m3\r\n File \"/home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/tensor_quant.py\", line 28, in <module>\r\n from pytorch_quantization import cuda_ext\r\nImportError: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/cuda_ext.cpython-310-x86_64-linux-gnu.so)\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n '2.1.2+cu121'\r\n - CPU Architecture:\r\n intel \r\n - OS (e.g., Linux):\r\n ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n pip install torch torchvision torchaudio\r\n - Build command you used (if compiling from source):\r\n pip install nvidia-pyindex sphinx-glpi-theme prettytable pyyaml absl-py scipy\r\npip install -i https://pypi.ngc.nvidia.com pytorch-quantization\r\n - Are you using local sources or building from archives:\r\n no\r\n - Python version:\r\n 3.10\r\n - CUDA version:\r\n 12.2\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## 
Additional context\r\n\r\nso,how can i run pytorch_quantization?\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2551", "state": "open", "labels": [ "question" ], "created_at": "2023-12-19T10:16:49Z", "updated_at": "2024-02-16T02:29:47Z", "user": "tms2003" }, { "repo": "pytorch/torchx", "number": 802, "title": "Why can't tracker entrypoint be specified in .torchxconfig", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nBefore submitting, please ensure you have gone through our\r\n[documentation](https://pytorch.org/torchx).\r\n\r\n\r\n### Question\r\nThe [documentation](https://pytorch.org/torchx/main/tracker.html#user-job-configuration-advanced) is somewhat confusing and is marked for Advanced use after mentioning the mechanism to reference entrypoint, but is there a reason we can't also specify the tracker's entrypoint right in `.torchxconfig` in addition to those discoverable via `entry_points.txt`? E.g.:\r\n\r\n```\r\n[torchx:tracker]\r\nmy_tracker=my_module:my_function\r\n\r\n[tracker:my_tracker]\r\n...\r\n```\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/802", "state": "open", "labels": [], "created_at": "2023-12-18T21:26:02Z", "updated_at": "2023-12-19T17:39:10Z", "comments": 2, "user": "clumsy" }, { "repo": "pytorch/audio", "number": 3717, "title": "AV-HuBERT integration with torchaudio.pipelines.Wav2Vec2FABundle ", "body": "### \ud83d\ude80 The feature\n\nHow would someone go about configuring AV-HuBERT to work with `torchaudio.pipelines.Wav2Vec2FABundle`? It currently only supports [MMS_FA](https://pytorch.org/audio/stable/pipelines.html#pertrained-models)\n\n### Motivation, pitch\n\nCurrently the `torchaudio.pipelines.Wav2Vec2FABundle` forced aligner only supports [MMS_FA](https://pytorch.org/audio/stable/pipelines.html#pertrained-models).\r\nThis is a request to add support for an AV-ASR, namely AV-HuBERT. The feature could also be a tutorial on how to extend the list of supported models that are multimodal speech+video.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/audio/issues/3717", "state": "open", "labels": [], "created_at": "2023-12-16T01:04:05Z", "updated_at": "2023-12-16T01:04:05Z", "comments": 0, "user": "bejjani" }, { "repo": "pytorch/kineto", "number": 851, "title": "In Overview page, time unit error", "body": "Time unit error\r\n![1](https://github.com/pytorch/kineto/assets/47709353/0803d343-60a8-416d-af05-877f37e8878c)\r\n", "url": "https://github.com/pytorch/kineto/issues/851", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-15T04:15:45Z", "updated_at": "2024-04-23T15:23:24Z", "user": "Aiuan" }, { "repo": "pytorch/serve", "number": 2853, "title": "Torchserve Error: number of batch response mismatched", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nWe deployed NER Model with n1-standard-8 machine without GPU with below config properties. when we kept batch size as 1, it is taking more time to process the simultaneous requests. when we try to increase the batch size, we are getting below error. (we tried with different batch size like 16,32,64,8 etc and max workers as 1 and 8). I want to process multiple threads simultaneously. Please suggest solution. Do I need to change handler script, if yes, how? 
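In TorchServe, "number of batch response mismatched" is usually raised when the handler returns a list whose length differs from the number of requests aggregated into the batch (up to `batchSize`, waiting at most `maxBatchDelay`). So enabling batching generally does require the handler to keep one entry per request through every stage. A hedged sketch of that contract; `tokenize` and `decode` are placeholder helpers, not TorchServe APIs:

```python
from ts.torch_handler.base_handler import BaseHandler

class NERHandler(BaseHandler):
    def preprocess(self, requests):
        texts = [(r.get("data") or r.get("body")) for r in requests]
        self._batch_len = len(texts)        # number of requests in this batch
        return self.tokenize(texts)         # placeholder: build one batched model input

    def inference(self, model_input):
        return self.model(model_input)

    def postprocess(self, model_output):
        results = self.decode(model_output)     # placeholder: one dict per request
        assert len(results) == self._batch_len  # a mismatch triggers the 503 above
        return results
```

Throughput then comes from raising `batchSize` (and worker count) while preserving this one-response-per-request invariant.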
How to increase throughput?\r\n\r\n### Error logs\r\n\r\nResponse: response_data: {'code': 503, 'type': 'InternalServerException', 'message': 'number of batch response mismatched'}\r\n\r\n\r\n### Installation instructions\r\n\r\nYes, we are using docker container to deploy the model on vertex ai\r\n\r\n### Model Packaing\r\n\r\nUsing docker and creating a custom prediction container and packaging all the serving scripts like handler.py, config properties etc\r\n\r\n### config.properties\r\n\r\ninference_address=http://0.0.0.0:8090\r\nmanagement_address=http://0.0.0.0:8091\r\nmetrics_address=http://0.0.0.0:8092\r\ninstall_py_dep_per_model=true\r\nprefer_direct_buffer=true\r\njob_queue_size=10000\r\nasync_logging=true\r\nnumber_of_netty_threads=8\r\nnetty_client_threads=8\r\ndefault_workers_per_model=1\r\nmodels={\\\r\n \"description\": {\\\r\n \"1.0\": {\\\r\n \"defaultVersion\": true,\\\r\n \"marName\": \"description.mar\",\\\r\n \"minWorkers\": 1,\\\r\n \"maxWorkers\": 8,\\\r\n \"batchSize\": 16,\\\r\n \"maxBatchDelay\": 65,\\\r\n \"responseTimeout\": 100\\\r\n }\\\r\n }\\\r\n}\r\n\r\n### Versions\r\n\r\nwe are using this base image\r\n\r\npytorch/torchserve:latest-gpu\r\n\r\n### Repro instructions\r\n\r\nwe carried out performance testing using 5/10/20 simultaneous users hitting vertex ai endpoint but avg time is around 20 seconds which is very high for 20 simultaneous users.\r\n\r\n### Possible Solution\r\n\r\nHow to optimize the config parameters? Do I need to update handler script? Please suggest a way", "url": "https://github.com/pytorch/serve/issues/2853", "state": "closed", "labels": [ "triaged" ], "created_at": "2023-12-14T08:33:11Z", "updated_at": "2024-01-18T20:11:46Z", "comments": 9, "user": "rajeshmore1" }, { "repo": "pytorch/TensorRT", "number": 2541, "title": "\u2753 [Question] Is it possible to export unet's tensorrt engine as a file in stable diffusion?", "body": "## \u2753 Question\r\n\r\nHello. I am currently trying to infer the stable diffusion XL inpaint model using your package. 
\r\nmodel link : https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1\r\n\r\nI referred to your example code and modified it as follows.\r\n\r\n```python\r\nimport torch\r\n\r\nfrom diffusers import AutoPipelineForInpainting\r\nfrom diffusers.utils import load_image\r\nimport torch_tensorrt\r\n\r\nmodel_id = \"diffusers/stable-diffusion-xl-1.0-inpainting-0.1\"\r\ndevice = \"cuda\"\r\n\r\n# Instantiate Stable Diffusion Pipeline with FP16 weights\r\npipe = AutoPipelineForInpainting.from_pretrained(\r\n model_id, variant=\"fp16\", torch_dtype=torch.float16\r\n)\r\n\r\npipe = pipe.to(device)\r\nbackend = \"torch_tensorrt\"\r\n\r\n# Optimize the UNet portion with Torch-TensorRT\r\npipe.unet = torch.compile(\r\n pipe.unet,\r\n backend=backend,\r\n options={\r\n \"truncate_long_and_double\": True,\r\n \"precision\": torch.float16,\r\n },\r\n dynamic=False,\r\n)\r\n\r\n# %%\r\n# Inference\r\n# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\nimg_url = \"https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png\"\r\nmask_url = \"https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png\"\r\n\r\nimage = load_image(img_url).resize((1024, 1024))\r\nmask_image = load_image(mask_url).resize((1024, 1024))\r\n\r\nprompt = \"a tiger sitting on a park bench\"\r\n\r\nimage = pipe(\r\n prompt=prompt,\r\n image=image,\r\n mask_image=mask_image,\r\n guidance_scale=8.0,\r\n num_inference_steps=20,\r\n strength=0.99,\r\n ).images[0]\r\n\r\n\r\nimage.save(\"inpaint-result.png\")\r\n```\r\n\r\nOn my gpu machine the conversion to tensorrt takes over 15 minutes. Since I can't do this conversion every time, I'm trying to find a way to save it in file format such as \".trt\" file and use it.\r\n\r\nWhen looking in your documentation, it was difficult to find such a feature. Do you support these features? If so, please let me know.\r\n\r\n\r\n## What you have already tried\r\n\r\nDescribed above\r\n\r\n## Environment\r\n\r\ndocker container : nvcr.io/nvidia/pytorch:23.11-py3\r\ngpu : p40\r\n\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2541", "state": "open", "labels": [ "question" ], "created_at": "2023-12-14T08:13:19Z", "updated_at": "2023-12-15T22:48:48Z", "user": "0-chan-kor" }, { "repo": "pytorch/TensorRT", "number": 2530, "title": "\u2753 [Question] The stable diffusion example doesn't work", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\nhttps://github.com/pytorch/TensorRT/blob/main/examples/dynamo/torch_compile_stable_diffusion.py\r\n\r\nI tried executing the above Python code, but conversion to TensorRT failed as shown below.\r\n\r\n```bash\r\nWARNING:torch_tensorrt.dynamo.backend.backends:TRT conversion failed on the subgraph. See trace above. 
Returning GraphModule forward instead.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/torch_tensorrt/dynamo/backend/backends.py\", line 93, in _pretraced_backend\r\n trt_compiled = compile_module(\r\n File \"/usr/local/lib/python3.10/dist-packages/torch_tensorrt/dynamo/compile.py\", line 244, in compile_module\r\n trt_module = convert_module(\r\n File \"/usr/local/lib/python3.10/dist-packages/torch_tensorrt/dynamo/conversion/conversion.py\", line 33, in convert_module\r\n module_outputs = module(*torch_inputs)\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py\", line 726, in call_wrapped\r\n return self._wrapped_call(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py\", line 305, in __call__\r\n raise e\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py\", line 292, in __call__\r\n return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py\", line 1519, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py\", line 1528, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"<eval_with_key>.14\", line 6, in forward\r\n view_10 = torch.ops.aten.view.default(permute_10, [2, -1, 320]); permute_10 = None\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/_ops.py\", line 499, in __call__\r\n return self._op(*args, **kwargs or {})\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/utils/_stats.py\", line 20, in wrapper\r\n return fn(*args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py\", line 1323, in __torch_dispatch__\r\n return self.dispatch(func, types, args, kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py\", line 1621, in dispatch\r\n r = func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/_ops.py\", line 499, in __call__\r\n return self._op(*args, **kwargs or {})\r\nRuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.\r\n```\r\nIs this an example python that actually passes? Or is there an environment version that needs to be set for this example?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\nI used the latest version of the pytorch container, nvcr.io/nvidia/pytorch:23.11-py3, and pip installed the latest versions of diffusers and transformers.\r\n\r\n## Additional context\r\nNone", "url": "https://github.com/pytorch/TensorRT/issues/2530", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-12T10:35:01Z", "updated_at": "2024-10-25T10:30:09Z", "user": "0-chan-kor" }, { "repo": "pytorch/serve", "number": 2849, "title": "Broken pipe on big response tensors", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nWe have a model which essentially does image segmentation of sorts. \r\nThe output tensor is of this size: `[batch, 920, 920]`, fp32. \r\n\r\nI keep getting broken pipe errors in this: \r\n\r\nFrom my debugging, it essentially fails after I return this tensor from my `postprocess` method in base handler. \r\nIs there a limit to response size for torchserve? 
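There is a limit, though it is configurable: TorchServe caps payload sizes through `max_request_size` and `max_response_size` in config.properties (the default is on the order of 6.5 MB), and a float32 `[920, 920]` map is already about 3.4 MB per item before batching, so a batched response can exceed it. Raising `max_response_size`, or shrinking the payload in `postprocess`, are the usual workarounds. A sketch of the latter; the float16 + compressed-npz encoding is just one illustrative choice:

```python
import io
import numpy as np

def postprocess(self, output):
    responses = []
    for mask in output:  # iterate over the batch dimension
        buf = io.BytesIO()
        # Downcast and compress before sending; the client decodes with np.load
        np.savez_compressed(buf, mask=mask.detach().cpu().numpy().astype(np.float16))
        responses.append(buf.getvalue())  # bytes entries become the response body
    return responses
```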
\r\nThanks for the help!\r\n\r\n\r\n\r\n### Error logs\r\n\r\nthe main container logs: \r\n```\r\nhariomapp-torchserve-1 | java.lang.InterruptedException: null\r\nhariomapp-torchserve-1 | \tat java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1679) ~[?:?]\r\nhariomapp-torchserve-1 | \tat java.util.concurrent.LinkedBlockingDeque.pollFirst(LinkedBlockingDeque.java:515) ~[?:?]\r\nhariomapp-torchserve-1 | \tat java.util.concurrent.LinkedBlockingDeque.poll(LinkedBlockingDeque.java:677) ~[?:?]\r\nhariomapp-torchserve-1 | \tat org.pytorch.serve.wlm.Model.pollBatch(Model.java:367) ~[model-server.jar:?]\r\nhariomapp-torchserve-1 | \tat org.pytorch.serve.wlm.BatchAggregator.getRequest(BatchAggregator.java:36) ~[model-server.jar:?]\r\nhariomapp-torchserve-1 | \tat org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:194) [model-server.jar:?]\r\nhariomapp-torchserve-1 | \tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]\r\nhariomapp-torchserve-1 | \tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]\r\nhariomapp-torchserve-1 | \tat java.lang.Thread.run(Thread.java:833) [?:?]\r\n```\r\n\r\nModel logs\r\n```\r\n2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - Backend worker process died.\r\n2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - Traceback (most recent call last):\r\n2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - File \"/home/venv/lib/python3.9/site-packages/ts/model_service_worker.py\", line 258, in <module>\r\n2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - worker.run_server()\r\n2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - File \"/home/venv/lib/python3.9/site-packages/ts/model_service_worker.py\", line 226, in run_server\r\n2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - self.handle_connection(cl_socket)\r\n2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - File \"/home/venv/lib/python3.9/site-packages/ts/model_service_worker.py\", line 183, in handle_connection\r\n2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - cl_socket.sendall(resp)\r\n2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - BrokenPipeError: [Errno 32] Broken pipe\r\n2023-12-12T07:11:28,676 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - s_name_part0=/home/model-server/tmp/.ts.sock, s_name_part1=9000, p\r\n```\r\n\r\n\r\n### Installation instructions\r\n\r\nUsing docker, simply ran the stock image in dockerhub\r\n\r\ncompose file:\r\n```yml\r\nversion: '3'\r\nservices:\r\n torchserve:\r\n image: pytorch/torchserve:latest-gpu\r\n ports:\r\n - 9080:8080\r\n - 9081:8081\r\n - 9082:8082\r\n - 7070:7070\r\n - 7071:7071\r\n volumes:\r\n - ./modelstore:/home/model-server/model-store\r\n environment:\r\n - TS_METRICS_MODE=prometheus\r\n command: torchserve --model-store /home/model-server/model-store\r\n```\r\n\r\n### Model Packaing\r\n\r\nI simply take a tensor as input and return raw tensor generated by model in output. 
\r\nEssentially I get a `tuple[dict[str, Tensor], dict[str, Tensor]]` from the model, all tensor values would have the same size and have the batch size as first dimension.\r\n\r\nhandler\r\n\r\n```python\r\nfrom ts.torch_handler.base_handler import BaseHandler\r\nimport pickle\r\nimport base64\r\nimport logging\r\nimport torch\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\n\r\nclass ModelHandler(BaseHandler):\r\n def preprocess(self, data):\r\n all_tensors = [pickle.loads(d[\"body\"]) for d in data]\r\n result = torch.cat(all_tensors, 0)\r\n result.to(self.device)\r\n return result\r\n\r\n def _single_result(self, data, i):\r\n \"\"\"\r\n we get this:\r\n {\r\n \"90_rot\": tensor[1.000, 2.999, etc.],\r\n ...other keys, same structure\r\n }\r\n\r\n We take the index'th element out in value, so its tensor[1.00] but its size is torch.Size([])\r\n t[i].tolist() gives a number, the actual number we want to send back\r\n But remote expects a [number] format, so we send that\r\n \"\"\"\r\n return {\r\n k: [v[i].tolist()] for k, v in data.items()\r\n }\r\n\r\n def _get_len_batch(self, data):\r\n \"\"\"The final dict has a str[dict, tensor[length]]. The length is the batch size\r\n\r\n It is guaranteed that for each key, the length of the tensor is the same\r\n \"\"\"\r\n\r\n key = next(iter(data))\r\n return len(data[key])\r\n\r\n def _single_tuple(sel", "url": "https://github.com/pytorch/serve/issues/2849", "state": "open", "labels": [ "triaged" ], "created_at": "2023-12-12T07:30:27Z", "updated_at": "2023-12-29T11:17:16Z", "comments": 3, "user": "hariom-qure" }, { "repo": "pytorch/serve", "number": 2841, "title": "Not able to get the data for inference when using custom handler", "body": "I team, I have created my own custom handler by referencing to the base-handler and the vision-handler. What I am observing is that, when I pass data to the model for inference, the data is not reaching to the hosted model endpoint. 
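A quick way to isolate whether the request body is the problem is to post the raw image bytes directly and log what `preprocess` receives; if the handler sees anything other than PNG/JPEG bytes (for example an HTML or JSON page captured by the client), `Image.open(io.BytesIO(...))` will raise `UnidentifiedImageError`. A small sketch with an assumed endpoint name and file path:

```python
import requests

# Hypothetical model name, port and file path; sends only the raw PNG bytes as the body.
with open("test_data/0.png", "rb") as f:
    resp = requests.post("http://localhost:8080/predictions/vit_l_16", data=f.read())
print(resp.status_code, resp.text)
```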
\r\n\r\nThe exact error I am getting is: \r\n```\r\n2023-12-09T20:08:03,580 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - Invoking custom service failed.\r\n2023-12-09T20:08:03,580 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - Traceback (most recent call last):\r\n2023-12-09T20:08:03,580 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/ts/service.py\", line 120, in predict\r\n2023-12-09T20:08:03,581 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - ret = self._entry_point(input_batch, self.context)\r\n2023-12-09T20:08:03,581 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - File \"/tmp/models/6ffe80d83e5341da81fe21bda0d735e0/custom_handler.py\", line 139, in handle\r\n2023-12-09T20:08:03,581 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - model_input = self.data_preprocess(data)\r\n2023-12-09T20:08:03,582 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - File \"/tmp/models/6ffe80d83e5341da81fe21bda0d735e0/custom_handler.py\", line 91, in data_preprocess\r\n2023-12-09T20:08:03,583 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - image = Image.open(io.BytesIO(image))\r\n2023-12-09T20:08:03,585 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/PIL/Image.py\", line 3280, in open\r\n2023-12-09T20:08:03,586 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - raise UnidentifiedImageError(msg)\r\n2023-12-09T20:08:03,586 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7f7677de3ce0>\r\n``` \r\n\r\n---\r\n\r\nWhen I printed my \"data\" before passing it for preprocessing, this is what I got: \r\n\r\n```\r\n2023-12-09T19:43:42,421 [INFO ] W-9000-vit_l__1.0-stdout MODEL_LOG - data: [{'data': 
bytearray(b'{\"payload\":{\"allShortcutsEnabled\":false,\"fileTree\":{\"examples/image_classifier/mnist/test_data\":{\"items\":[{\"name\":\"0.png\",\"path\":\"examples/image_classifier/mnist/test_data/0.png\",\"contentType\":\"file\"},{\"name\":\"1.png\",\"path\":\"examples/image_classifier/mnist/test_data/1.png\",\"contentType\":\"file\"},{\"name\":\"2.png\",\"path\":\"examples/image_classifier/mnist/test_data/2.png\",\"contentType\":\"file\"},{\"name\":\"3.png\",\"path\":\"examples/image_classifier/mnist/test_data/3.png\",\"contentType\":\"file\"},{\"name\":\"4.png\",\"path\":\"examples/image_classifier/mnist/test_data/4.png\",\"contentType\":\"file\"},{\"name\":\"5.png\",\"path\":\"examples/image_classifier/mnist/test_data/5.png\",\"contentType\":\"file\"},{\"name\":\"6.png\",\"path\":\"examples/image_classifier/mnist/test_data/6.png\",\"contentType\":\"file\"},{\"name\":\"7.png\",\"path\":\"examples/image_classifier/mnist/test_data/7.png\",\"contentType\":\"file\"},{\"name\":\"8.png\",\"path\":\"examples/image_classifier/mnist/test_data/8.png\",\"contentType\":\"file\"},{\"name\":\"9.png\",\"path\":\"examples/image_classifier/mnist/test_data/9.png\",\"contentType\":\"file\"}],\"totalCount\":10},\"examples/image_classifier/mnist\":{\"items\":[{\"name\":\"screenshots\",\"path\":\"examples/image_classifier/mnist/screenshots\",\"contentType\":\"directory\"},{\"name\":\"test_data\",\"path\":\"examples/image_classifier/mnist/test_data\",\"contentType\":\"directory\"},{\"name\":\"torchdata\",\"path\":\"examples/image_classifier/mnist/torchdata\",\"contentType\":\"directory\"},{\"name\":\"Docker.md\",\"path\":\"examples/image_classifier/mnist/Docker.md\",\"contentType\":\"file\"},{\"name\":\"README.md\",\"path\":\"examples/image_classifier/mnist/README.md\",\"contentType\":\"file\"},{\"name\":\"config.properties\",\"path\":\"examples/image_classifier/mnist/config.properties\",\"contentType\":\"file\"},{\"name\":\"mnist.py\",\"path\":\"examples/image_classifier/mnist/mnist.py\",\"contentType\":\"file\"},{\"name\":\"mnist_cnn.pt\",\"path\":\"examples/image_classifier/mnist/mnist_cnn.pt\",\"contentType\":\"file\"},{\"name\":\"mnist_handler.py\",\"path\":\"examples/image_classifier/mnist/mnist_handler.py\",\"contentType\":\"file\"},{\"name\":\"mnist_ts.json\",\"path\":\"examples/image_classifier/mnist/mnist_ts.json\",\"contentType\":\"file\"}],\"totalCount\":10},\"examples/image_classifier\":{\"items\":[{\"name\":\"alexnet\",\"path\":\"examples/image_classifier/alexnet\",\"contentType\":\"directory\"},{\"name\":\"densenet_161\",\"path\":\"examples/image_classifier/densenet_161\",\"contentType\":\"directory\"},{\"name\":\"mnist\",\"path\":\"examples/image_classifier/mnist\",\"contentType\":\"directory\"},{\"name\":\"near_real_time_video\",\"path\":\"examples/image_classifier/near_real_time_video\",\"contentType\":\"directory\"},{\"name\":\"resnet_152_batch\",\"path\":\"examples/image_classifier/resnet_152_batch\",\"contentType\":\"directory\"},{\"name\":\"resnet_18\",\"path\":\"examples/image_classifier/resnet_18\",\"contentType\":\"directory\"},{\"name\":\"squeezenet\",\"path\":\"examples/image_classifier/squeezenet\",\"contentType\":\"directory\"},{\"name\":\"vgg_16\",\"path\":\"examples/image_classifier/vgg_16\",\"contentType\":\"directory\"},{\"name\":\"README.md\",\"path\":\"examples/image_classifier/README.md\",\"conten", "url": "https://github.com/pytorch/serve/issues/2841", "state": "closed", "labels": [ "triaged_wait", "support" ], "created_at": "2023-12-09T20:10:19Z", "updated_at": 
"2023-12-23T17:13:36Z", "comments": 2, "user": "yogendra-yatnalkar" }, { "repo": "pytorch/TensorRT", "number": 2525, "title": "\u2753[Question] The only valid use of a module is looking up an attribute but found...", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nHello, I have a torch scripted model that I am trying to compile with TensorRT: \r\n```py\r\nimport cv2\r\nimport numpy as np\r\nimport torch\r\nfrom torchvision.transforms import ToTensor\r\nimport torch_tensorrt\r\n\r\nif __name__ == \"__main__\":\r\n # Load the pre-trained model\r\n model = torch.jit.load('model.jit')\r\n\r\n # Define sample points and bounding box labels\r\n pts_sampled = np.array([[100, 100], [800, 800]])\r\n bbox = torch.reshape(torch.tensor(pts_sampled), [1, 1, 2, 2])\r\n bbox_labels = torch.reshape(torch.tensor([2, 3]), [1, 1, 2])\r\n\r\n # Read and preprocess the image\r\n image = cv2.imread('image.jpg')\r\n image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\r\n img_tensor = ToTensor()(image)\r\n\r\n # Compile the model with TensorRT\r\n with torch_tensorrt.logging.debug():\r\n trt_model = torch_tensorrt.compile(model, \r\n inputs=[img_tensor[None, ...].cuda(),\r\n bbox.cuda(),\r\n bbox_labels.cuda()],\r\n enabled_precisions={torch.float32},\r\n workspace_size=2000000000,\r\n truncate_long_and_double=True\r\n )\r\n```\r\n\r\nThis returns the following debug information and error:\r\n```sh\r\nINFO: [Torch-TensorRT] - ir was set to default, using TorchScript as ir\r\nDEBUG: [Torch-TensorRT] - TensorRT Compile Spec: {\r\n \"Inputs\": [\r\nInput(shape=(1,3,1080,1920,), dtype=Float, format=Contiguous/Linear/NCHW, tensor_domain=[0, 2))Input(shape=(1,1,2,2,), dtype=Long, format=Contiguous/Linear/NCHW, tensor_domain=[0, 2))Input(shape=(1,1,2,), dtype=Long, format=Contiguous/Linear/NCHW, tensor_domain=[0, 2)) ]\r\n \"Enabled Precision\": [Float, ]\r\n \"TF32 Disabled\": 0\r\n \"Sparsity\": 0\r\n \"Refit\": 0\r\n \"Debug\": 0\r\n \"Device\": {\r\n \"device_type\": GPU\r\n \"allow_gpu_fallback\": False\r\n \"gpu_id\": 0\r\n \"dla_core\": -1\r\n }\r\n\r\n \"Engine Capability\": Default\r\n \"Num Avg Timing Iters\": 1\r\n \"Workspace Size\": 2000000000\r\n \"DLA SRAM Size\": 1048576\r\n \"DLA Local DRAM Size\": 1073741824\r\n \"DLA Global DRAM Size\": 536870912\r\n \"Truncate long and double\": 1\r\n \"Allow Shape tensors\": 0\r\n \"Torch Fallback\": {\r\n \"enabled\": True\r\n \"min_block_size\": 3\r\n \"forced_fallback_operators\": [\r\n ]\r\n \"forced_fallback_modules\": [\r\n ]\r\n }\r\n}\r\nDEBUG: [Torch-TensorRT] - init_compile_spec with input vector\r\nDEBUG: [Torch-TensorRT] - Settings requested for Lowering:\r\n torch_executed_modules: [\r\n ]\r\nTraceback (most recent call last):\r\n File \"/home/jupyter/main.py\", line 79, in <module>\r\n trt_model = torch_tensorrt.compile(model, \r\n File \"/home/jupyter/venv/lib/python3.9/site-packages/torch_tensorrt/_compile.py\", line 133, in compile\r\n return torch_tensorrt.ts.compile(\r\n File \"/home/jupyter/venv/lib/python3.9/site-packages/torch_tensorrt/ts/_compiler.py\", line 139, in compile\r\n compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))\r\nRuntimeError: \r\ntemporary: the only valid use of a module is looking up an attribute but found = prim::SetAttr[name=\"W\"](%self.1, %345)\r\n```\r\n\r\nLooking to understand what my options are and what I can change to successfully compile.\r\n\r\n## Environment\r\n```sh\r\nPyTorch version: 2.0.1+cu117\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.7\r\nROCM used to build 
PyTorch: N/A\r\n\r\nOS: Debian GNU/Linux 11 (bullseye) (x86_64)\r\nGCC version: (Debian 10.2.1-6) 10.2.1 20210110\r\nClang version: Could not collect\r\nCMake version: version 3.27.9\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (64-bit runtime)\r\nPython platform: Linux-5.10.0-26-cloud-amd64-x86_64-with-glibc2.31\r\nIs CUDA available: True\r\nCUDA runtime version: 11.8.89\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA L4\r\nNvidia driver version: 525.105.17\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nAddress sizes: 46 bits physical, 48 bits virtual\r\nCPU(s): 8\r\nOn-line CPU(s) list: 0-7\r\nThread(s) per core: 2\r\nCore(s) per socket: 4\r\nSocket(s): 1\r\nNUMA node(s): 1\r\nVendor ID: GenuineIntel\r\nCPU family: 6\r\nModel: 85\r\nModel name: Intel(R) Xeon(R) CPU @ 2.20GHz\r\nStepping: 7\r\nCPU MHz: 2200.222\r\nBogoMIPS: 4400.44\r\nHypervisor vendor: KVM\r\nVirtualization type: full\r\nL1d cache: 128 KiB\r\nL1i cache: 128 KiB\r\nL2 cache: ", "url": "https://github.com/pytorch/TensorRT/issues/2525", "state": "closed", "labels": [ "question", "component: lowering" ], "created_at": "2023-12-08T23:09:04Z", "updated_at": "2024-06-11T18:33:42Z", "user": "edmuthiah" }, { "repo": "pytorch/torchx", "number": 798, "title": "Combine / rename `dist.ddp` and `dist.spmd` into `dist.torchrun`", "body": "## Description\r\nCurrently, `dist.ddp` and `dist.spmd` are basically identical (the latter being a lightweight wrapper on the former). Also, they could be named more explicitly \u2014 `dist.ddp` doesn't actually involve Distributed Data Parallel, it just calls `torchrun`.\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\nAll else equal, simplification and explicit naming are good. For example, users leveraging Fully Sharded Data Parallel instead of DDP may find it confusing that they should be using `dist.ddp`.\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\nRefactor `components/dist.py` by combining the methods for `ddp` and `spmd` into one method called `torchrun`. Update docs, tests, examples, and callsites as appropriate.\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n1. Leave thing as-is.\r\n2. Remove `ddp` by rolling it into `spmd` and keep the `spmd` method, so `dist.spmd` is the only available command and it has a \"good enough\" name.\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n@danielbear", "url": "https://github.com/meta-pytorch/torchx/issues/798", "state": "open", "labels": [], "created_at": "2023-12-08T21:23:31Z", "updated_at": "2023-12-08T21:31:54Z", "comments": 0, "user": "schmidt-ai" }, { "repo": "pytorch/xla", "number": 6032, "title": "/content/content/q-e/bin/pw.x: error while loading shared libraries: libmkl_scalapack_lp64.so: cannot open shared object file: No such file or directory", "body": "I am using google colab and in the code section:\r\nI wrote: \r\n! 
/content/content/q-e/bin/pw.x < 01.vc-relax.in < 01.vc-relax.out\r\n\r\ngot an output like this:\r\n\r\n/content/content/q-e/bin/pw.x: error while loading shared libraries: libmkl_scalapack_lp64.so: cannot open shared object file: No such file or directory\r\n\r\nCan you help me solve it?", "url": "https://github.com/pytorch/xla/issues/6032", "state": "open", "labels": [ "question" ], "created_at": "2023-12-06T13:48:03Z", "updated_at": "2025-04-24T14:53:55Z", "user": "safinmahmood" }, { "repo": "pytorch/kineto", "number": 847, "title": "How does kineto work actually?", "body": "Hello, everyone. \r\nI took a quick look at the source code of Kineto and it seems the most important part of Kineto is [CUPTI](https://docs.nvidia.com/cupti/r_main.html#r_main). I am curious how Kineto works, and I have tried some examples of CUPTI. I have some questions and hope someone could give me some insights.\r\n1. How does Kineto get PyTorch function names?\r\nFrom my short experience with CUPTI programming, I know I can get the CUDA runtime function name from CUPTI; here is a code snippet:\r\n\r\n```c++\r\n if (cbInfo->callbackSite == CUPTI_API_ENTER)\r\n {\r\n traceData->functionName = cbInfo->functionName; // get cuda function name \r\n CUPTI_CALL(cuptiGetTimestamp(&startTimestamp));\r\n traceData->startTimestamp = startTimestamp;\r\n traceData->memcpy_bytes = ((cudaMemcpy_v3020_params *)(cbInfo->functionParams))->count;\r\n traceData->memcpy_kind = ((cudaMemcpy_v3020_params *)(cbInfo->functionParams))->kind;\r\n }\r\n```\r\nWhat confuses me is how Kineto gets PyTorch function names (e.g. `torch::autograd::AccumulateGrad`). Is this supported by CUPTI, or do you use other means to implement it? \r\n\r\n2. What is the purpose of `KINETO_USE_DAEMON=1`?\r\nQuoting a piece of a [blog post](https://pytorch.org/blog/automated-trace-collection/):\r\n > First, we modified PyTorch to register with the Dynolog daemon on start up. This feature is switched on by setting the environment variable KINETO_USE_DAEMON=True. With this environment variable set to True, the PyTorch Profiler periodically polls Dynolog to check for on-demand tracing requests.\r\n\r\n So does it mean that if the env variable is not set, the PyTorch Profiler is still enabled and simply doesn't send the trace info it captures to the user? In other words, the env variable does not affect whether the PyTorch Profiler is enabled. Am I right? \r\n\r\n I have also opened a similar [issue](https://github.com/facebookincubator/dynolog/issues/195) in the dynolog repo but haven't gotten any feedback yet.
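On question 1, my understanding is that the PyTorch-level names do not come from CUPTI at all: they are emitted by PyTorch's own host-side instrumentation (the RECORD_FUNCTION callbacks used by the profiler), and Kineto correlates those operator events with the GPU kernel/memcpy activity reported by CUPTI. Both kinds of events are visible from the Python profiler; a small sketch:

```python
import torch
from torch.profiler import ProfilerActivity, profile

x = torch.randn(1024, 1024, device="cuda", requires_grad=True)
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    (x @ x).sum().backward()

# Operator rows (aten::mm, autograd nodes, ...) come from PyTorch's own hooks;
# kernel and memcpy rows come from CUPTI, merged by Kineto into one timeline.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
prof.export_chrome_trace("trace.json")
```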
I would appreciate if someone could answer these questions.", "url": "https://github.com/pytorch/kineto/issues/847", "state": "closed", "labels": [ "documentation", "question" ], "created_at": "2023-12-06T06:48:28Z", "updated_at": "2023-12-28T16:46:47Z", "user": "stricklandye" }, { "repo": "pytorch/audio", "number": 3711, "title": "_pickle.UnpicklingError: invalid load key, 'v'.", "body": "### \ud83d\udc1b Describe the bug\n\n### ISSUE\r\nWhen I run \r\n`python preprocess_lrs3.py --data-dir=D:/BaiduNetdiskDownload/LRS3 --detector=retinaface --dataset=lrs3 --root-dir=D:/pycharmProject/audio_vision/audio-main/examples/avsr/predata --subset=test --seg-duration=16 --groups=4 --job-index=0`\r\nThe following appears\r\n`D:\\anaconda3\\envs\\davsr\\lib\\site-packages\\torchaudio\\backend\\utils.py:62: UserWarning: No audio backend is available.\r\n warnings.warn(\"No audio backend is available.\")\r\nTraceback (most recent call last):\r\n File \"preprocess_lrs3.py\", line 68, in <module>\r\n vid_dataloader = AVSRDataLoader(modality=\"video\", detector=args.detector, resize=(96, 96))\r\n File \"D:\\pycharmProject\\audio_vision\\audio-main\\examples\\avsr\\data_prep\\data\\data_module.py\", line 19, in __init__\r\n self.landmarks_detector = LandmarksDetector(device=\"cuda:0\")\r\n File \"D:\\pycharmProject\\audio_vision\\audio-main\\examples\\avsr\\data_prep\\detectors\\retinaface\\detector.py\", line 17, in __init__ \r\n self.face_detector = RetinaFacePredictor(\r\n File \"D:\\pycharmProject\\audio_vision\\audio-main\\examples\\avsr\\data_prep\\face_detection\\ibug\\face_detection\\retina_face\\retina_face_predictor.py\", line 28, in __init__\r\n pretrained_dict = torch.load(model.weights, map_location=self.device)\r\n File \"D:\\anaconda3\\envs\\davsr\\lib\\site-packages\\torch\\serialization.py\", line 795, in load\r\n return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)\r\n File \"D:\\anaconda3\\envs\\davsr\\lib\\site-packages\\torch\\serialization.py\", line 1002, in _legacy_load\r\n magic_number = pickle_module.load(f, **pickle_load_args)\r\n_pickle.UnpicklingError: invalid load key, 'v'.\r\n`\r\nMay I ask why this problem occurs? 
How to solve it\n\n### Versions\n\nCollecting environment information...\r\nPyTorch version: 1.13.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.7\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Microsoft Windows 11 Home China\r\nGCC version: Could not collect\r\nClang version: Could not collect\r\nCMake version: version 3.27.7\r\nLibc version: N/A\r\n\r\nPython version: 3.8.18 (default, Sep 11 2023, 13:39:12) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)\r\nPython platform: Windows-10-10.0.22621-SP0\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Laptop GPU\r\nNvidia driver version: 517.18\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture=9\r\nCurrentClockSpeed=1992\r\nDeviceID=CPU0\r\nFamily=198\r\nL2CacheSize=2048\r\nL2CacheSpeed=\r\nManufacturer=GenuineIntel\r\nMaxClockSpeed=1992\r\nName=Intel(R) Core(TM) i7-10700T CPU @ 2.00GHz\r\nProcessorType=3\r\nRevision=\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.24.3\r\n[pip3] torch==1.13.1\r\n[pip3] torchaudio==0.13.1\r\n[pip3] torchvision==0.14.1\r\n[conda] blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main \r\n[conda] mkl 2023.1.0 h6b88ed4_46358 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main \r\n[conda] mkl-service 2.4.0 py38h2bbff1b_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main \r\n[conda] mkl_fft 1.3.8 py38h2bbff1b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main \r\n[conda] mkl_random 1.2.4 py38h59b6b97_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main \r\n[conda] numpy 1.24.3 py38h79a8e48_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main \r\n[conda] numpy-base 1.24.3 py38h8a87ada_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main \r\n[conda] pytorch 1.13.1 py3.8_cuda11.7_cudnn8_0 pytorch\r\n[conda] pytorch-cuda 11.7 h16d0643_5 pytorch\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] torchaudio 0.13.1 pypi_0 pypi\r\n[conda] torchvision 0.14.1 pypi_0 pypi\r\n", "url": "https://github.com/pytorch/audio/issues/3711", "state": "open", "labels": [], "created_at": "2023-12-04T15:32:55Z", "updated_at": "2024-11-12T15:06:54Z", "comments": 1, "user": "YuQing2000" }, { "repo": "pytorch/xla", "number": 6015, "title": "Kaggle TPU Finetuning Roberta Help", "body": "## \u2753 Questions and Help\r\nI have pretrained roberta-base on dna promoter sequences of plants (working on a project). I am currently trying to finetune it on a downstream task of predicting gene expression values, basically a list of 8 values (corresponding to various tissues) from a single promoter sequence. \r\n\r\nThis wasn't possible on kaggle's gpu (due to memory restrictions), so I tried to do the same on TPU using pytorch-xla (figured that was the best option). The link to the notebook as well as the datasets used are as follows:\r\n\r\n1. [Main Kaggle Notebook](https://www.kaggle.com/code/gurveersinghvirk/florabert-2/) \r\n2. [Dataset containing code and data](https://www.kaggle.com/datasets/gurveersinghvirk/florabert-base)\r\n3. [Dataset on github](https://github.com/gurveervirk/florabert/) (contains old code but has the correct structure)\r\n\r\nVersion 43 is the one using the pytorch-xla code (as far as I could figure out). 
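Regarding the `xmp.spawn` errors mentioned further below ("expected device xla:1 but got xla:0", "Check failed: data()->tensor_data"): the usual pattern is to construct the model and data loader inside the per-process function and move them to the local XLA device there, rather than passing a live model through `xmp.spawn`. A minimal sketch; `build_model()` and `make_loader()` are placeholders, not part of the linked notebook:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    device = xm.xla_device()
    model = build_model().to(device)              # build inside the spawned process
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    loader = pl.MpDeviceLoader(make_loader(), device)
    model.train()
    for batch in loader:
        optimizer.zero_grad()
        loss = model(**batch).loss                # assumes a HF-style model returning .loss
        loss.backward()
        xm.optimizer_step(optimizer)              # all-reduce gradients, then step

if __name__ == "__main__":
    xmp.spawn(_mp_fn, args=(), start_method="fork")
```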
The data's format is as follows:\r\n\r\nsequence \\t labels\r\ndna_promoter_seq_here list_of_8_values_here\r\n\r\neg: CTCAAGCTGAGCAGTGGGTTTGCTCTGGAGGGGAAGCTCAACGGTGGCGACAAGGAAGAATCTGCTTGCGAGGCGAGCCCTGACGCCGCTGATAGCGACCAAAGGTGGATTAAACAACCCATTTCATCATTCTTCTTCCTTGTTAGTTATGATTCCCACGCTTGCCTTTCATGAATCATGATCCTATATGTATATTGATATTAATCAGTTCTAGAAAGTTCAACAACATTTGAGCATGTCAAAACCTGATCGTTGCCTGTTCCATGTCAACAGTGGATTATAACACGTGCAAATGTAGCTATTTGTGTGAGAAGACGTGTGATCGACTCTTTTTTTATATAGATAGCATTGAGATCAACTGTTTGTATATATCTTGTCATAACATTTTTACTTCGTAGCAACGTACGAGCGTTCACCTATTTGTATATAAGTTATCATGATATTTATAAGTTACCGTTGCAACGCACGGACACTCACCTAGTATAGTTTATGTATTACAGTACTAGGAGCCCTAGGCTTCCAATAACTAGAAAAAGTCCTGGTCAGTCGAACCAAACCACAATCCGACGTATACATTCTGGTTCCCCCACGCCCCCATCCGTTCGATTCA\t[54.679647, 60.646678, 54.9113, 78.878474, 21.326259, 27.973276, 17.419968, 40.465529]\r\n\r\nThere's 7,22,000 examples of this kind, ~722 mb in total divided into ~400 mb train, 200 mb test and 100 mb eval. When running the code \"finetune.py\", all goes well till the training starts (datasets are loaded, processed, etc). But, the latest run took 3+ hrs to get to the next step and the RAM usage kept on increasing. It looked the TPU run was very slow and the run then crashed as it ran out of memory. I have tried accelerate and trainer but those efforts were in vain.\r\n\r\nFew questions: \r\n\r\n1. Is my approach correct?\r\n2. What changes should I make? \r\n3. Can I run this code using HuggingFace Trainer (was originally used in the code)? If so, how?\r\n4. Is the RAM usage normal?\r\n5. Should it take this long?\r\n\r\nIf I pass the model as an arg to xmp.spawn, I end up seeing either of \"Check failed: data()->tensor_data\" or \"RuntimeError: Function AddcmulBackward0 returned an invalid gradient at index 1 - expected device xla:1 but got xla:0\". Why?\r\n\r\nKindly guide.", "url": "https://github.com/pytorch/xla/issues/6015", "state": "open", "labels": [ "question", "performance", "xla:tpu" ], "created_at": "2023-12-04T14:07:43Z", "updated_at": "2025-04-24T14:56:25Z", "user": "gurveervirk" }, { "repo": "pytorch/xla", "number": 6014, "title": "How to add a new third-party Backend", "body": "## \u2753 Questions and Help\r\n1 We see PyTorch/XLA now pulls XLA from OpenXLA, is that means we just need to adapt OpenXLA to add a new backend?\r\n2 Will collective operations work with third-party backend?\r\n", "url": "https://github.com/pytorch/xla/issues/6014", "state": "closed", "labels": [], "created_at": "2023-12-04T10:10:18Z", "updated_at": "2023-12-28T22:31:11Z", "user": "dinghaodhd" }, { "repo": "pytorch/xla", "number": 5959, "title": "how pytorch NCHW TO XLA HWOI format \uff1f help", "body": "## \u2753 Questions and Help\r\nI have a request to make the pytorch input model in NCHW format by default, and convert it to HWOI format during the training process, which is conducive to hardware processing data. I wonder if there is a way to uniformly convert this model to the HWOI format when it is sent to XLA. 
In addition, when sending back from torch_xla to torch, should the HWOI format be converted to the default format NCHW of torch, is there an existing method?", "url": "https://github.com/pytorch/xla/issues/5959", "state": "open", "labels": [ "question" ], "created_at": "2023-12-01T06:18:20Z", "updated_at": "2025-04-28T11:44:59Z", "user": "ckfgihub" }, { "repo": "pytorch/serve", "number": 2814, "title": "[question] How to properly handle client request cancelation during inference?", "body": "Hey all,\r\n\r\nMy model's inference is quite long-running (around 50 seconds per request), so it would be great if closed client connections are handled properly by interrupting the inference that's currently in progress. I'm currently implementing `initialize`, `preprocess`, `inference` and `postprocess` methods in my custom handler class. What's the proper place for detecting closed connection, if possible?\r\n\r\nThanks,\r\nMiro", "url": "https://github.com/pytorch/serve/issues/2814", "state": "closed", "labels": [], "created_at": "2023-11-30T18:34:49Z", "updated_at": "2024-03-20T22:14:27Z", "user": "miroslavLalev" }, { "repo": "pytorch/xla", "number": 5953, "title": "xla NCHW to HWOI", "body": "## \u2753 Questions and Help\r\nIs there a simple way to modify the tensor layout (NCHW) in the entire xla computation graph to convert it to HWOI format, and continue to convert it to NCHW format when it is returned to torch? If there is no simple and unified modification method, how can we change it? For example, modifying each operator one by one is also a method. How should we achieve this goal?", "url": "https://github.com/pytorch/xla/issues/5953", "state": "closed", "labels": [ "duplicate", "question" ], "created_at": "2023-11-30T06:59:00Z", "updated_at": "2025-04-28T11:53:34Z", "user": "ckfgihub" }, { "repo": "pytorch/executorch", "number": 1313, "title": "How to run the pte model on GPU", "body": "Hello,\r\n\r\nI would like to konw if ExecuTorch supports GPU.\r\nNow I could export model into pte format and execute runtime for xnnpack backend in Intel device.\r\nThe device has GPU.\r\n\r\nBut when I check GPU usage while running the application, GPU wasn't utilized.\r\nIf ExecuTorch supports GPU, can you please share me how to use GPU?\r\n\r\n### Environment\r\n```\r\n$ lscpu\r\nArchitecture: x86_64\r\n CPU op-mode(s): 32-bit, 64-bit\r\n Address sizes: 39 bits physical, 48 bits virtual\r\n Byte Order: Little Endian\r\nCPU(s): 4\r\n On-line CPU(s) list: 0-3\r\nVendor ID: GenuineIntel\r\n Model name: Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz\r\n CPU family: 6\r\n Model: 142\r\n Thread(s) per core: 2\r\n Core(s) per socket: 2\r\n Socket(s): 1\r\n Stepping: 9\r\n CPU max MHz: 3600.0000\r\n CPU min MHz: 400.0000\r\n BogoMIPS: 4599.93\r\n Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon\r\n pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe p\r\n opcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad \r\n fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window \r\n hwp_epp md_clear flush_l1d 
arch_capabilities\r\n```\r\n \r\nThanks,", "url": "https://github.com/pytorch/executorch/issues/1313", "state": "closed", "labels": [ "need-user-input" ], "created_at": "2023-11-30T05:19:12Z", "updated_at": "2023-12-14T23:54:25Z", "user": "EarthMu" }, { "repo": "pytorch/pytorch", "number": 114822, "title": "convert to onnx with the dynamic shape and onnx convert to tensorrt, but couldn't get the dynamic engine of tensorrt. dims.d[0]==1 !!! what is wrong with the model??? please give me some help. thanks", "body": "### \ud83d\udc1b Describe the bug\r\n\r\n- I converted my model to ONNX with dynamic shapes and then converted the ONNX model to TensorRT, but I couldn't get a dynamic TensorRT engine: dims.d[0] == 1. However, when I converted a YOLOv8 model to ONNX and then to TensorRT, I got dims.d[0] == -1 and it worked well. What is wrong with the model? \r\n\r\n\r\n### pth to onnx\r\n```python\r\ntorch.onnx.export(\r\n model,\r\n dummy_input,\r\n args.output,\r\n verbose=False,\r\n export_params=True,\r\n input_names=input_names,\r\n output_names=output_names,\r\n keep_initializers_as_inputs=False,\r\n opset_version=13, \r\n dynamic_axes = {\r\n \"input_image\":{0:\"batch\"},\r\n \"bases\":{0:\"batch\"},\r\n \"pred\":{0:\"batch\"}\r\n } if args.dynamic else None\r\n )\r\n```\r\n![image](https://github.com/pytorch/pytorch/assets/71381036/4ac882a2-ab38-4b30-9125-927d145ca040)\r\n\r\n### onnx to tensorrt\r\n```bash\r\n./trtexec --onnx=model_0364999-dy-op13.onnx \\\r\n --saveEngine=model_0364999-dy-op13 \\\r\n --minShapes=input_image:1x1x2048x2048 \\\r\n --optShapes=input_image:10x1x2048x2048 \\\r\n --maxShapes=input_image:10x1x2048x2048 \\\r\n --fp16 \\\r\n --device=0 \\\r\n --workspace=10240 \\\r\n --preview=+fasterDynamicShapes0805 \\\r\n```\r\n![image](https://github.com/pytorch/pytorch/assets/71381036/4874b949-6847-4e74-96a1-113beaa81d83)\r\n\r\n### Versions\r\nubuntu:20.04\r\ncuda:11.1\r\ncudnn:8.2\r\ntensorrt:8.5.2\r\npython: 3.6\r\npytorch:1.7.1\r\n\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/114822", "state": "closed", "labels": [], "created_at": "2023-11-30T02:21:23Z", "updated_at": "2023-11-30T03:22:52Z", "user": "tianlan6767" }, { "repo": "pytorch/text", "number": 2217, "title": "how to run this code", "body": "## How to run this code \r\nI need the output of `pip list` (the required package versions) to run this code. ", "url": "https://github.com/pytorch/text/issues/2217", "state": "open", "labels": [], "created_at": "2023-11-29T02:15:16Z", "updated_at": "2024-08-05T12:51:43Z", "user": "ygqrc" }, { "repo": "pytorch/TensorRT", "number": 2486, "title": "\u2753 [Question] Using dynamic shapes with FX frontend", "body": "I tried to use dynamic shapes in the FX path with the following code. 
It seems that the `input_specs` argument passed to `LowerSetting` has no effect and TRT gives an error message.\r\n\r\n```python\r\nimport torch\r\nimport torch.nn as nn\r\nfrom torch_tensorrt.fx import InputTensorSpec, LowerSetting\r\nfrom torch_tensorrt.fx.lower import Lowerer\r\nfrom torch_tensorrt.fx.utils import LowerPrecision\r\n\r\n\r\nclass MyModule(nn.Module):\r\n def __init__(self):\r\n super(MyModule, self).__init__()\r\n self.conv = nn.Sequential(nn.Conv2d(1, 20, 5), nn.PReLU())\r\n\r\n def forward(self, input):\r\n return self.conv(input)\r\n\r\n\r\nwith torch.inference_mode():\r\n device = torch.device(\"cuda\")\r\n mod = MyModule().eval().to(device).half()\r\n\r\n lower_setting = LowerSetting(\r\n lower_precision=LowerPrecision.FP16,\r\n min_acc_module_size=1,\r\n input_specs=[\r\n InputTensorSpec(\r\n shape=(1, 1, -1, -1),\r\n dtype=torch.half,\r\n device=device,\r\n shape_ranges=[((1, 1, 16, 16), (1, 1, 32, 32), (1, 1, 64, 64))],\r\n )\r\n ],\r\n dynamic_batch=False,\r\n )\r\n lowerer = Lowerer.create(lower_setting=lower_setting)\r\n mod_trt = lowerer(mod, [torch.rand((1, 1, 16, 16), dtype=torch.half, device=device)])\r\n\r\n print(mod_trt(torch.rand((1, 1, 16, 16), dtype=torch.half, device=device)).shape)\r\n print(mod_trt(torch.rand((1, 1, 32, 32), dtype=torch.half, device=device)).shape)\r\n```\r\n\r\n```\r\nWARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:MyModule__AccRewrittenModule does not have attribute _compiled_call_impl\r\nWARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:Sequential__AccRewrittenModule does not have attribute _compiled_call_impl\r\nWARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:Conv2d__AccRewrittenModule does not have attribute _compiled_call_impl\r\nWARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:PReLU__AccRewrittenModule does not have attribute _compiled_call_impl\r\nC:\\Python311\\Lib\\site-packages\\torch\\overrides.py:110: UserWarning: 'has_cuda' is deprecated, please use 'torch.backends.cuda.is_built()'\r\n torch.has_cuda,\r\nC:\\Python311\\Lib\\site-packages\\torch\\overrides.py:111: UserWarning: 'has_cudnn' is deprecated, please use 'torch.backends.cudnn.is_available()'\r\n torch.has_cudnn,\r\nC:\\Python311\\Lib\\site-packages\\torch\\overrides.py:117: UserWarning: 'has_mps' is deprecated, please use 'torch.backends.mps.is_built()'\r\n torch.has_mps,\r\nC:\\Python311\\Lib\\site-packages\\torch\\overrides.py:118: UserWarning: 'has_mkldnn' is deprecated, please use 'torch.backends.mkldnn.is_available()'\r\n torch.has_mkldnn,\r\nWARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:GraphModule.__new__.<locals>.GraphModuleImpl__AccRewrittenModule does not have attribute _compiled_call_impl\r\nWARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:Module__AccRewrittenModule does not have attribute _compiled_call_impl\r\nWARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:Module__AccRewrittenModule does not have attribute _compiled_call_impl\r\nWARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:Module__AccRewrittenModule does not have attribute _compiled_call_impl\r\nINFO:torch_tensorrt.fx.passes.pass_utils:== Log pass <function fuse_permute_matmul at 0x000001E8B08F0E00> before/after graph to C:\\Users\\HOLYWU~1\\AppData\\Local\\Temp\\tmpgbz4qw6c, before/after are the same = True, time elapsed = 0:00:00.026858\r\nINFO:torch_tensorrt.fx.passes.pass_utils:== Log pass <function fuse_permute_linear at 0x000001E8B08F0B80> before/after graph to C:\\Users\\HOLYWU~1\\AppData\\Local\\Temp\\tmpp8c1a1dw, before/after are 
the same = True, time elapsed = 0:00:00.000981\r\nINFO:torch_tensorrt.fx.passes.pass_utils:== Log pass <function fix_clamp_numerical_limits_to_fp16 at 0x000001E8B08F1440> before/after graph to C:\\Users\\HOLYWU~1\\AppData\\Local\\Temp\\tmp43sia5pv, before/after are the same = True, time elapsed = 0:00:00\r\n\r\nSupported node types in the model:\r\nacc_ops.conv2d: ((), {'input': torch.float16, 'weight': torch.float16, 'bias': torch.float16})\r\n\r\nUnsupported node types in the model:\r\nacc_ops.prelu: ((), {'input': torch.float16, 'weight': torch.float16})\r\n\r\nGot 1 acc subgraphs and 1 non-acc subgraphs\r\nINFO:torch_tensorrt.fx.passes.lower_pass_manager_builder:Now lowering submodule _run_on_acc_0\r\nINFO:torch_tensorrt.fx.lower:split_name=_run_on_acc_0, input_specs=[InputTensorSpec(shape=torch.Size([1, 1, 16, 16]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]\r\nINFO:torch_tensorrt.fx.lower:Timing cache is used!\r\nINFO:torch_tensorrt.fx.fx2trt:TRT INetwork construction elapsed time: 0:00:00.001014\r\nINFO:torch_tensorrt.fx.fx2trt:Build TRT engine elapsed time: 0:00:00.993050\r\nINFO:torch_tensorrt.fx.passes.lower_pass_manager_builder:Lowering submodule _run_on_acc_0 elapsed time 0:00:05.996300\r\ntorch.Size([1, 20, 12, 12])\r\n[11/25/2023-13:55:00] [TRT] [E] 3: [executionContext.cpp::nvinfer1::rt::ExecutionContext::validat", "url": "https://github.com/pytorch/TensorRT/issues/2486", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-25T06:52:12Z", "updated_at": "2024-02-22T13:30:13Z", "user": "HolyWu" }, { "repo": "pytorch/TensorRT", "number": 2485, "title": "How may I install torch_tensorrt with my own local version of torch?", "body": "## \u2753 Question\r\n\r\nHow may I install `torch_tensorrt` with my own local version of torch?\r\n\r\n## What you have already tried\r\n\r\npip install torch-tensorrt --no-deps resulted in \r\n\r\n```\r\nImportError: /home/jonch/.local/lib/python3.10/site-packages/torch_tensorrt/lib/libtorchtrt.so: undefined symbol: _ZN3c106detail23torchInternalAssertFailEPKcS2_jS2_RKSs\r\n```\r\nSeems like it tries to link to torch shared library but fails. I guess I can't configure it to point to my existing installation of torch.\r\n\r\nFor instance, what if I want to use torch_tensorrt with torch nightly?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2485", "state": "open", "labels": [ "question" ], "created_at": "2023-11-25T05:25:35Z", "updated_at": "2023-11-28T19:50:29Z", "user": "jon-chuang" }, { "repo": "pytorch/rl", "number": 1708, "title": "[Question] What is ESS in PPO?", "body": "Here [ppo.py](https://github.com/pytorch/rl/blob/main/torchrl/objectives/ppo.py#L649) from PPO source code is the definition.\r\n<img width=\"983\" alt=\"Screenshot 2023-11-22 at 1 21 12 AM\" src=\"https://github.com/pytorch/rl/assets/22335780/3ec3663e-7140-4353-a65a-8b13f761fab2\">\r\n\r\nDoes ESS stand for **Effective Sample Size** or something else? \r\nWhat is the purpose logging this info?\r\nA reference for 'ESS' would be helpful. Thank you.", "url": "https://github.com/pytorch/rl/issues/1708", "state": "closed", "labels": [], "created_at": "2023-11-22T06:13:37Z", "updated_at": "2023-11-23T03:07:41Z", "user": "gitfourteen" }, { "repo": "pytorch/executorch", "number": 1252, "title": "What is the codegen really done at the Executorch flow?", "body": "Hi,\r\n\r\nAlthough I study the https://pytorch.org/executorch/stable/concepts.html#codegen about codegen part, I do not understand very well about this part.\r\n![Screenshot from 2023-11-21 16-38-38](https://github.com/pytorch/executorch/assets/87454575/669a120d-714a-4861-9b5c-1d822bfd29dd)\r\n\r\nAbove the concepts map, after I export the model.pte file which is the binary file.\r\nCan I directly select the kernel op to run the model with Executorch Runtime library ?\r\n\r\nAnd there is another branch of model.pte file which do the codegen to gen the Kernel Registration Library. I do not understand very well about this part.\r\n\r\nMy question is that if I can run with model.pte file with kernel op run time library, why need to codegen again?\r\nOr what is the codegen output at real flow? Is it a c code about the graph of the model with ops and the weight? ", "url": "https://github.com/pytorch/executorch/issues/1252", "state": "closed", "labels": [ "need-user-input", "module: kernels", "triaged" ], "created_at": "2023-11-21T08:38:57Z", "updated_at": "2024-02-14T00:53:21Z", "user": "kris-himax" }, { "repo": "pytorch/serve", "number": 2801, "title": "When is initialize method called?", "body": "### \ud83d\udcda The doc issue\n\nI've created a custom handler with the following initialize method\r\n```python\r\nclass CustomHandler(VisionHandler):\r\n def initialize(self, context):\r\n print(\"Got here 000!\")\r\n time.sleep(20)\r\n print(\"Got here 111!\")\r\n super(VisionHandler, self).__init__()\r\n```\r\n\r\nI spin up the server using a single runner by running `torchserve --start --ncs --ts-config model-store/config.properties`, where config.properties looks like:\r\n```python\r\ninference_address=http://127.0.0.1:8080\r\nmanagement_address=http://127.0.0.1:8081\r\nmetrics_address=http://127.0.0.1:8082\r\nmodel_store=/home/inaki/code/animal_classifier/model-store\r\nload_models=animal.mar\r\nmin_workers=1\r\nmax_workers=1\r\ndefault_workers_per_model=1\r\nmodel_snapshot={\"name\":\"startup.cfg\", \"modelCount\":1, \"models\":{\"animal\":{\"1.0\":{\"defaultVersion\":true, \"marName\":\"animal.mar\", \"minWorkers\":1, \"maxWorkers\":1, \"batchSize\":2, \"maxBatchDelay\":2000, \"responseTimeout\":30000}}}}\r\n\r\n```\r\n\r\nI notice the \"Got here\" logs don't show up during the initial phase, where I assumed the model was loaded. 
Instead, they show up when I submit the first request to the server (`curl -X POST http://localhost:8080/predictions/animal -T ./data/cats_and_dogs/frames/2.png`), but not for subsequent requests. And there's no sleep time in between the two prints.\r\n\r\nMy assumption is that printing the logs is somehow cached? I'd like to know if there's a diagram to better understand the flow.\r\n\r\nI noticed too that in the model_service_worker, there seem to be two routes for handling incoming requests based on this [branching](https://github.com/pytorch/serve/blob/aa96cf60c044087e75a1472f3bd090422d4d349c/ts/model_service_worker.py#L180-L195). Can somebody explain what is the distinction between cmd == b\"I\" and cmd == b\"L\"?\n\n### Suggest a potential alternative/fix\n\nIncluding a diagram/explanation with the spin-up flow in the documentation", "url": "https://github.com/pytorch/serve/issues/2801", "state": "closed", "labels": [], "created_at": "2023-11-20T12:03:07Z", "updated_at": "2023-11-23T20:47:00Z", "comments": 4, "user": "InakiRaba91" }, { "repo": "pytorch/serve", "number": 2800, "title": "When is initialize method called?", "body": "### \ud83d\udcda The doc issue\n\nI've created a custom handler with the following initialize method\r\n```python\r\nclass CustomHandler(VisionHandler):\r\n def initialize(self, context):\r\n print(\"Got here 000!\")\r\n time.sleep(20)\r\n print(\"Got here 111!\")\r\n super(VisionHandler, self).__init__()\r\n```\r\n\r\nI spin up the server using a single runner by running `torchserve --start --ncs --ts-config model-store/config.properties`, where config.properties looks like:\r\n```python\r\ninference_address=http://127.0.0.1:8080\r\nmanagement_address=http://127.0.0.1:8081\r\nmetrics_address=http://127.0.0.1:8082\r\nmodel_store=/home/inaki/code/animal_classifier/model-store\r\nload_models=animal.mar\r\nmin_workers=1\r\nmax_workers=1\r\ndefault_workers_per_model=1\r\nmodel_snapshot={\"name\":\"startup.cfg\", \"modelCount\":1, \"models\":{\"animal\":{\"1.0\":{\"defaultVersion\":true, \"marName\":\"animal.mar\", \"minWorkers\":1, \"maxWorkers\":1, \"batchSize\":2, \"maxBatchDelay\":2000, \"responseTimeout\":30000}}}}\r\n\r\n```\r\n\r\nI notice the \"Got here\" logs don't show up during the initial phase, where I assumed the model was loaded. Instead, they show up when I submit the first request to the server (`curl -X POST http://localhost:8080/predictions/animal -T ./data/cats_and_dogs/frames/2.png`), but not for subsequent requests. And there's no sleep time in between the two prints.\r\n\r\nMy assumption is that printing the logs is somehow cached? I'd like to know if there's a diagram to better understand the flow.\r\n\r\nI noticed too that in the model_service_worker, there seem to be two routes for handling incoming requests based on this [branching](https://github.com/pytorch/serve/blob/aa96cf60c044087e75a1472f3bd090422d4d349c/ts/model_service_worker.py#L180-L195). Can somebody explain what is the distinction between cmd == b\"I\" and cmd == b\"L\"?\n\n### Suggest a potential alternative/fix\n\nIncluding a diagram/explanation with the spin-up flow in the documentation", "url": "https://github.com/pytorch/serve/issues/2800", "state": "closed", "labels": [], "created_at": "2023-11-20T11:46:16Z", "updated_at": "2023-11-20T12:02:53Z", "comments": 0, "user": "irabanillo91" }, { "repo": "pytorch/executorch", "number": 1239, "title": "How to access to result of tensor after inference", "body": "Hi, \r\nI am implementing executorch by following step.\r\n1. 
Exporting resnet18 including softmax layer.\r\n2. Implementing executor_runner.cpp to access to result of tensor after inference.\r\n \r\nI expected that I could get each classes' result like [0,0,0,0.1,0.9] after inference(including softmax).\r\nBut when I try to access to each result and stdout, the outputs were like following:\r\n```\r\nOutputTensor 0 1: <Unknown EValue tag 1915577445>\r\nOutputTensor 0 2: <Unknown EValue tag -284942848>\r\n.\r\n.\r\nOutputTensor 0 14: None\r\n``` \r\nI expected that I could get each classes' probability like following.\r\n```\r\nOutputTensor 0 1: 0\r\nOutputTensor 0 2: 0.5\r\n.\r\n.\r\nOutputTensor 0 14: 0.25\r\nOutputTensor 0 15: 0.25\r\n```\r\n\r\n### model export file\r\n`export-model-resnet18.py`\r\n```py\r\nimport torch\r\nimport torchvision.models as models\r\nimport torch.nn.functional as F\r\n\r\nfrom torchvision.models.resnet import ResNet18_Weights\r\n\r\nfrom torch._export import capture_pre_autograd_graph\r\nfrom torch.export import export, ExportedProgram\r\nimport executorch.exir as exir\r\n\r\n# ========== resnet18 + softmax layer ============\r\nresnet18 = models.resnet18(weights=ResNet18_Weights.DEFAULT).eval()\r\nresnet18.fc = torch.nn.Sequential(\r\n resnet18.fc,\r\n torch.nn.Softmax(dim=1)\r\n )\r\nexample_args = (torch.randn(1, 3, 224, 224), )\r\n# ====================================\r\n\r\n## export to exir\r\npre_autograd_aten_dialect = capture_pre_autograd_graph(resnet18, example_args)\r\n## export to aten dialect\r\naten_dialect: ExportedProgram = export(pre_autograd_aten_dialect, example_args)\r\n## export to edge\r\nedge_program: exir.EdgeProgramManager = exir.to_edge(aten_dialect)\r\n## export to executorch\r\nfrom executorch.exir import ExecutorchBackendConfig, ExecutorchProgramManager\r\nexecutorch_program: exir.ExecutorchProgramManager = edge_program.to_executorch(\r\n ExecutorchBackendConfig(\r\n passes=[], # User-defined passes\r\n )\r\n)\r\n## save pte model\r\nprint(\"save pte file\")\r\nwith open(\"exported-resnet18.pte\", \"wb\") as file:\r\n file.write(executorch_program.buffer)\r\n```\r\n### executor_runner.cpp\r\n```cpp\r\n for (int i = 0; i < outputs.size(); ++i) {\r\n std::cout << \"Output \" << i << \": \" << outputs[i] << std::endl;\r\n printTypeName<decltype(outputs[i])>();\r\n for (int j = 0; j < 1001; ++j) {\r\n // address\r\n //std::cout << \"OutputTensor 0 \" << j << \": \" << &outputs[0,j] << std::endl;\r\n // value\r\n std::cout << \"OutputTensor 0 \" << j << \": \" << outputs[j] << std::endl;\r\n }\r\n``` \r\n### output while running executor_runner.cpp\r\n```sh\r\n(executorch) root@c2aef39cb16e:~/test/executorch# ./cmake-out/executor_runner --model_path ./exported-resnet18.pte --img_path test.jpg\r\nNumber of arguments: 5\r\nArgument 0: ./cmake-out/executor_runner\r\nArgument 1: --model_path\r\nArgument 2: ./exported-resnet18.pte\r\nArgument 3: --img_path\r\nArgument 4: test.jpg\r\nI 00:00:00.356738 executorch:executor_runner.cpp:139] Model file ./exported-resnet18.pte is loaded.\r\nI 00:00:00.356793 executorch:executor_runner.cpp:148] Using method forward\r\nI 00:00:00.356799 executorch:executor_runner.cpp:196] Setting up planned buffer 0, size 64348896.\r\nI 00:00:00.406562 executorch:executor_runner.cpp:219] Method loaded.\r\nI 00:00:00.406674 executorch:executor_runner.cpp:225] Inputs prepared.\r\nI 00:02:09.169815 executorch:executor_runner.cpp:234] Model executed successfully.\r\nI 00:02:09.169871 executorch:executor_runner.cpp:238] 1 outputs: \r\nOutputTensor 0 0: tensor(sizes=[1, 1000], 
[\r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n ...,\r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \r\n])\r\nOutputTensor 0 1: <Unknown EValue tag 1915577445>\r\nOutputTensor 0 2: <Unknown EValue tag -284942848>\r\nOutputTensor 0 3: <Unknown EValue tag 64348896>\r\nOutputTensor 0 4: <Unknown EValue tag -284943136>\r\nOutputTensor 0 5: <Unknown EValue tag -284942944>\r\nOutputTensor 0 6: <Unknown EValue tag 939732227>\r\nOutputTensor 0 7: <Unknown EValue tag -117183485>\r\nOutputTensor 0 8: <Unknown EValue tag -754662396>\r\nOutputTensor 0 9: <Unknown EValue tag -989481723>\r\nOutputTensor 0 10: <Unknown EValue tag -788086778>\r\nOutputTensor 0 11: <Unknown EValue tag 176>\r\nOutputTensor 0 12: <Unknown EValue tag -287441008>\r\nOutputTensor 0 1", "url": "https://github.com/pytorch/executorch/issues/1239", "state": "closed", "labels": [ "need-user-input" ], "created_at": "2023-11-20T09:03:14Z", "updated_at": "2023-11-22T19:24:01Z", "user": "EarthMu" }, { "repo": "pytorch/audio", "number": 3704, "title": "Random cropping for variable length sequences", "body": "### \ud83d\ude80 The feature\n\nI am proposing to add a `torch.nn.Module` transform that automatically crops/pads signals (with different options for padding such as constant/mirroring). I have the implementation already local so I would push it myself if this is alright.\r\n\r\nThe interface would like as follows:\r\n\r\n```python\r\nclass RandomCrop(torch.nn.Module):\r\n def __init__(\r\n self,\r\n output_size, # number of samples to be enforced on output signal\r\n axis=-1, # axis over which to crop\r\n pad=\"silence\", # a string controlling the behavior of padding (constant vs reflection)\r\n )\r\n def forward(self, signal): # signal of arbitrary size\r\n signal = ...\r\n return signal # signal now has a fixed size of `output_size` at `axis`\r\n```\r\n\r\nI am looking for feedback to see if this is also needed/desired by others and whether I should open a PR to add it.\n\n### Motivation, pitch\n\nThis feature is needed for datasets with variable lengths (a common occurrence for audio). By default, this mismatch in lengths now needs to be handled in the collate function of the dataloader. \r\n\r\nWith the proposed transform, the user can add it directly to their transform pipeline and/or make it part of their model if they so wish. Moreover, they could simply utilize it in their `collate_fn` if they want to crop based on the particular batch statistics (e.g. 
crop/pad to the shortest/longest sample in the batch).\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\nA reference implementation and interface can be seen [here](https://github.com/audeering/audtorch/blob/d7144a4b5a6cd7da1c5b570a8e86f047a2170890/audtorch/transforms/transforms.py#L113). As it is implemented with `numpy`, I would update to `torch`.", "url": "https://github.com/pytorch/audio/issues/3704", "state": "open", "labels": [], "created_at": "2023-11-17T10:37:24Z", "updated_at": "2024-05-23T06:24:00Z", "comments": 4, "user": "ATriantafyllopoulos" }, { "repo": "pytorch/pytorch", "number": 113933, "title": "How to re-use torch.compile results in different python processes?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI'm trying to compile my custom vision transformer-based model. The compiled version is indeed faster than the traditional one.\r\n\r\nHowever, as scaled_dot_product_attention does not support dynamic shapes, the program compiles the transformer block for every input size. Thus, the TEST program takes ~15-20 minutes to compile the model and then processes hundreds to thousends pictures, which is ~10 times slower than the eager mode.\r\n\r\nI wonder if there's some api to save the intermediate states, so that when I run the same code again, I can reuse the compilation results in /tmp/torchinductor_$user and skip the boring compilation stage? \n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @ezyang @gchanan @zou3519 @kadeng @msaroufim @bdhirsh @anijain2305 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @wconstab @aakhundov", "url": "https://github.com/pytorch/pytorch/issues/113933", "state": "closed", "labels": [ "high priority", "feature", "triaged", "months", "oncall: pt2", "module: dynamic shapes", "module: dynamo" ], "created_at": "2023-11-17T08:22:11Z", "updated_at": "2024-08-30T06:47:28Z", "user": "flishwang" }, { "repo": "pytorch/benchmark", "number": 2040, "title": "How to run test_bench.py with ROCM?", "body": "Hi @xuzhao9, \r\n\r\nI don't know how to create a dockerfile for AMD ROCM, is there any example? \r\n\r\nBest Regards\r\n", "url": "https://github.com/pytorch/benchmark/issues/2040", "state": "closed", "labels": [ "module: rocm", "ciflow/rocm" ], "created_at": "2023-11-15T14:16:59Z", "updated_at": "2024-03-18T22:00:08Z", "user": "jinsong-mao" }, { "repo": "pytorch/TensorRT", "number": 2471, "title": "\u2753 [Question] How to compile model when input is a list of tensors", "body": "## \u2753 Question\r\n\r\nI am trying to follow the tutorial [here](https://pytorch.org/TensorRT/tutorials/serving_torch_tensorrt_with_triton.html) and am stuck at compiling the model with tensor-rt. 
The model i am using takes a list of tensors as inputs and hence i could not get the following compile code to work as i cannot get the shape of a list:\r\n```\r\ntrt_model = torch_tensorrt.compile(self.model,\r\n inputs= [torch_tensorrt.Input(inputs.shape)], \r\n enabled_precisions= { torch.half} # Run with FP32\r\n )\r\n```\r\nInputs have the following tensors:\r\n\r\n> ic| i.shape: torch.Size([1, 3, 256, 256])\r\n> ic| i.shape: torch.Size([1, 98, 3])\r\n> ic| i.shape: torch.Size([1, 3, 3])\r\n\r\n## What you have already tried\r\n\r\nI have tried using `(3,)` but i am getting the following errror:\r\n```\r\n File \"/home/default/anaconda3/envs/driverstate_ttrt/lib/python3.10/site-packages/torch/jit/_recursive.py\", line 397, in create_methods_and_properties_from_stubs\r\n concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)\r\nRuntimeError: \r\n\r\nforward(__torch__.spiga.models.cnn.layers.___torch_mangle_24.Residual self, Tensor x) -> Tensor:\r\nKeyword argument core unknown.\r\n:\r\n File \"/home/default/driver-state-detection/Fabian/headpose/SPIGA/spiga/models/cnn/hourglass.py\", line 45\r\n low1 = self.low1(pool1)\r\n if self.n > 1:\r\n low2, core = self.low2(low1, core=core)\r\n ~~~~~~~~~ <--- HERE\r\n else:\r\n low2 = self.low2(low1)\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.0.1+cu118\r\n - OS (e.g., Linux): WSL2 on Windows11\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip \r\n - Python version: 3.10.12\r\n - GPU models and configuration: 2070Super\r\n - Any other relevant information: torch-tensorrt 1.4.0\r\n\r\n## Additional context\r\n\r\nBasically asking what should i used as inputs shape if it is a list of tensors. Should i instead look to [this](https://github.com/pytorch/TensorRT/tree/main/examples/dynamo)?\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2471", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-15T09:50:36Z", "updated_at": "2025-11-24T17:44:36Z", "user": "HeChengHui" }, { "repo": "pytorch/vision", "number": 8118, "title": "missing labels in FER2013 test data", "body": "### \ud83d\udc1b Describe the bug\n\nThe file **test.csv** has no label column, so the labels in the test split all have value None:\r\n```\r\nfrom torchvision.datasets import FER2013\r\ndat = FER2013(root='./', split='test')\r\nprint(dat[0][1])\r\n```\r\nAdding labels to the file raises a RuntimeError, presumably because of a resulting different md5 hash. The code above assumes the data has been downloaded from kaggle, as described in the [source code](https://github.com/pytorch/vision/blob/main/torchvision/datasets/fer2013.py). 
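\r\n\r\nFor reference, here is the minimal check I am running (a sketch that assumes the Kaggle layout the dataset class appears to expect, i.e. `root/fer2013/train.csv` and `root/fer2013/test.csv`; the exact paths and expected columns are my assumption):\r\n\r\n```python\r\nimport pandas as pd\r\nfrom torchvision.datasets import FER2013\r\n\r\n# assumed Kaggle layout: train.csv carries a label column, test.csv does not\r\nprint(pd.read_csv('fer2013/train.csv').columns.tolist())  # expected: ['emotion', 'pixels']\r\nprint(pd.read_csv('fer2013/test.csv').columns.tolist())   # expected: ['pixels'] only\r\n\r\n# hence every target in the test split comes back as None\r\ndat = FER2013(root='./', split='test')\r\nimg, target = dat[0]\r\nprint(target)  # None\r\n```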
\r\n\n\n### Versions\n\nPyTorch version: 2.1.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.8\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Debian GNU/Linux 11 (bullseye) (x86_64)\r\nGCC version: (Debian 10.2.1-6) 10.2.1 20210110\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-5.10.0-26-amd64-x86_64-with-glibc2.31\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650 Ti\r\nNvidia driver version: 520.61.05\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nAddress sizes: 39 bits physical, 48 bits virtual\r\nCPU(s): 12\r\nOn-line CPU(s) list: 0-11\r\nThread(s) per core: 2\r\nCore(s) per socket: 6\r\nSocket(s): 1\r\nNUMA node(s): 1\r\nVendor ID: GenuineIntel\r\nCPU family: 6\r\nModel: 165\r\nModel name: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz\r\nStepping: 2\r\nCPU MHz: 1944.273\r\nCPU max MHz: 5000.0000\r\nCPU min MHz: 800.0000\r\nBogoMIPS: 5199.98\r\nVirtualization: VT-x\r\nL1d cache: 192 KiB\r\nL1i cache: 192 KiB\r\nL2 cache: 1.5 MiB\r\nL3 cache: 12 MiB\r\nNUMA node0 CPU(s): 0-11\r\nVulnerability Gather data sampling: Vulnerable: No microcode\r\nVulnerability Itlb multihit: KVM: Mitigation: VMX disabled\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable\r\nVulnerability Retbleed: Mitigation; Enhanced IBRS\r\nVulnerability Spec rstack overflow: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\r\nVulnerability Srbds: Mitigation; Microcode\r\nVulnerability Tsx async abort: Not affected\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.26.0\r\n[pip3] torch==2.1.1\r\n[pip3] torchaudio==2.1.1\r\n[pip3] torchvision==0.16.1\r\n[pip3] triton==2.1.0\r\n[conda] blas 1.0 mkl \r\n[conda] ffmpeg 4.3 hf484d3e_0 pytorch\r\n[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch\r\n[conda] mkl 2023.1.0 h213fc3f_46344 \r\n[conda] mkl-service 2.4.0 py311h5eee18b_1 \r\n[conda] mkl_fft 1.3.8 py311h5eee18b_0 \r\n[conda] mkl_random 1.2.4 py311hdb19cb5_0 \r\n[conda] numpy 
1.26.0 py311h08b1b3b_0 \r\n[conda] numpy-base 1.26.0 py311hf175353_0 \r\n[conda] pytorch ", "url": "https://github.com/pytorch/vision/issues/8118", "state": "closed", "labels": [ "enhancement", "help wanted", "module: datasets" ], "created_at": "2023-11-15T09:01:24Z", "updated_at": "2024-06-04T10:21:51Z", "comments": 8, "user": "dtafler" }, { "repo": "pytorch/TensorRT", "number": 2468, "title": "\u2753 [Question] New release of torch-tensorRT with PyTorch 2.1", "body": "## \u2753 Question\r\n\r\nNew release of torch-tensort with PyTorch 2.1\r\n\r\n## What you have already tried\r\n\r\nIs there going to be a new release? or is this supported now through torch.compile only?\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2468", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-14T23:42:50Z", "updated_at": "2025-01-21T17:21:34Z", "user": "agunapal" }, { "repo": "pytorch/TensorRT", "number": 2465, "title": "ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin Mod version 1", "body": "I want to use tensorrt to accelerate VisionEncoderDecoderModel. Use the following code to convert it to onnx and it was successful.\r\n```\r\n\r\nfrom transformers import VisionEncoderDecoderModel\r\ndef model_converter():\r\n model = VisionEncoderDecoderModel.from_pretrained(\"./examples/data\")\r\n model.to(device)\r\n model.eval()\r\n tokenizer = NougatTokenizerFast.from_pretrained(r'./examples/data')\r\n latex_processor = NougatLaTexProcessor.from_pretrained(r'./examples/data')\r\n task_prompt = tokenizer.bos_token\r\n decoder_input_ids = tokenizer(task_prompt, add_special_tokens=False,\r\n return_tensors=\"pt\").input_ids.to(device)\r\n # Create dummy inputs with the correct shapes for both inputs\r\n dummy_pixel_values = torch.randn(1, 3, 224, 560, device=device)\r\n # Provide names for the inputs\r\n input_names = ['pixel_values', 'decoder_input_ids']\r\n output_names = ['output']\r\n # Export the model to ONNX\r\n torch.onnx.export(\r\n model,\r\n (dummy_pixel_values, decoder_input_ids),\r\n './examples/test2.onnx',\r\n export_params=True,\r\n verbose=True,\r\n input_names=input_names,\r\n output_names=output_names\r\n )\r\n```\r\n\r\nThen, when changing onnx into trt, an error occurred:\r\n\r\n> Loading ONNX file from path ./examples/test.onnx...\r\nBeginning ONNX file parsing\r\n[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.\r\n[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 1400793072\r\n[TensorRT] WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. 
Attempting to cast down to INT32.\r\n[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin Mod version 1\r\nERROR: Failed to parse the ONNX file.\r\nIn node -1 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && \"Plugin not found, are the plugin name, version, and namespace correct?\"\r\nCompleted parsing of ONNX file\r\n[TensorRT] ERROR: Network must have at least one output\r\n[TensorRT] ERROR: Network validation failed.\r\nTraceback (most recent call last):\r\nFile \"create_onnx.py\", line 350, in <module>\r\nf.write(engine.serialize())\r\nAttributeError: 'NoneType' object has no attribute 'serialize'\r\n\r\nthe code is\r\n\r\n```\r\nimport os\r\nimport tensorrt as trt\r\n \r\nTRT_LOGGER = trt.Logger()\r\nmodel_path = './examples/test.onnx'\r\nengine_file_path = \"./examples/test.trt\"\r\nEXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH) # batchsize=1\r\n \r\nwith trt.Builder(TRT_LOGGER) as builder, builder.create_network(EXPLICIT_BATCH) \\\r\n as network, trt.OnnxParser(network, TRT_LOGGER) as parser:\r\n builder.max_workspace_size = 1 << 28\r\n builder.max_batch_size = 1\r\n if not os.path.exists(model_path):\r\n print('ONNX file {} not found.'.format(model_path))\r\n exit(0)\r\n print('Loading ONNX file from path {}...'.format(model_path))\r\n with open(model_path, 'rb') as model:\r\n print('Beginning ONNX file parsing')\r\n if not parser.parse(model.read()):\r\n print('ERROR: Failed to parse the ONNX file.')\r\n for error in range(parser.num_errors):\r\n print(parser.get_error(error))\r\n\r\n network.get_input(0).shape = [1, 3, 224, 560]\r\n network.get_input(1).shape = [1,1]\r\n print('Completed parsing of ONNX file')\r\n engine = builder.build_cuda_engine(network)\r\n with open(engine_file_path, \"wb\") as f:\r\n f.write(engine.serialize())\r\n\r\n```\r\n\r\n> TensorRT-7.2.3.4", "url": "https://github.com/pytorch/TensorRT/issues/2465", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-14T09:37:12Z", "updated_at": "2023-11-15T01:34:35Z", "user": "lin-lcx" }, { "repo": "pytorch/executorch", "number": 1203, "title": "How to load original images for model inference", "body": "Hi, I am invsetigating in `examples/portable/executor_runner/executor_runner.cpp.` \r\nAnd on the [PrepareInputTensors](https://github.com/pytorch/executorch/blob/47900c96388453c83d9a6706151c0c2157fbfabd/examples/portable/executor_runner/executor_runner.cpp#L154), [method of PrepareInputTensor](https://github.com/pytorch/executorch/blob/9682172576d5d9a10f3162ad91e0a32b384a3b7c/util/util.h#L65-L137) generated just ones-initialized inputs. \r\n \r\nSo I would like to know how to load original dataset images and set them in Aten.\r\nIs it using opencv or other way?\r\n\r\nThanks\r\n\r\n", "url": "https://github.com/pytorch/executorch/issues/1203", "state": "closed", "labels": [ "need-user-input", "triaged" ], "created_at": "2023-11-14T05:11:42Z", "updated_at": "2024-01-15T07:12:37Z", "user": "EarthMu" }, { "repo": "pytorch/serve", "number": 2785, "title": "How to batch process in the intermediate node in touchserve workflow", "body": "Hi, I need some help with the TouchServe workflow. 
Currently, I use TorchServe to orchestrate my server model and logic to work together, which could be represented in the graph below.\r\n\r\n```mermaid\r\nstateDiagram-v2\r\n [*] --> PreProcess\r\n PreProcess --> Model_A\r\n Model_A --> IntermediaProcess\r\n PreProcess --> IntermediaProcess\r\n IntermediaProcess --> Model_B\r\n Model_B --> PostProcess\r\n PostProcess --> [*]\r\n```\r\n\r\nMy problem is that the result from **IntermediaProcess** is a batch output. When I try to send a batch output to **Model_B**, it raises an error about `one input cannot have multiple output`, so I solve this problem by packing the result from **IntermediaProcess** into 1 payload and then sending it to **Model_B** with batch processing inside **Model_B**, which does solve the problem. However, it affects the performance of the overall pipeline because **Model_B** has to handle a lot of inference work within each single request of the pipeline.\r\n\r\nMy question is: is there an alternative method to configure the pipeline to batch process at the node level on **Model_B**? I think it might increase the concurrency of the **Model_B** node, like this\r\n\r\n```mermaid\r\nstateDiagram-v2\r\n [*] --> PreProcess\r\n PreProcess --> Model_A\r\n Model_A --> IntermediaProcess\r\n PreProcess --> IntermediaProcess\r\n IntermediaProcess --> Model_B\r\n IntermediaProcess --> Model_B\r\n IntermediaProcess --> Model_B\r\n Model_B --> PostProcess\r\n PostProcess --> [*]\r\n```", "url": "https://github.com/pytorch/serve/issues/2785", "state": "closed", "labels": [], "created_at": "2023-11-11T02:47:56Z", "updated_at": "2023-11-27T08:18:12Z", "user": "RTae" }, { "repo": "pytorch/tutorials", "number": 2670, "title": "\ud83d\udca1 [REQUEST] - Tutorial of USB for Semi-Supervised Learning", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\r\n\r\nThis tutorial helps people get a basic usage understanding of the Semi-Supervised Learning benchmark codebase [USB](https://github.com/microsoft/Semi-supervised-learning). We will show how to use the API provided in USB to train Semi-Supervised Algorithms, e.g., FixMatch, on different data. \r\n\r\n### Existing tutorials on this topic\r\n\r\nCategory: Extending PyTorch\r\nCategory: Image and Video\r\n\r\n\r\n### Additional context\r\n\r\nInvited by @carljparker as part of the PyTorch Docathon H2 2023. \r\nLabel: docathon-h2-2023", "url": "https://github.com/pytorch/tutorials/issues/2670", "state": "closed", "labels": [], "created_at": "2023-11-10T16:02:32Z", "updated_at": "2023-12-07T15:57:32Z", "comments": 0, "user": "Hhhhhhao" }, { "repo": "pytorch/tutorials", "number": 2669, "title": "\ud83d\udca1 [REQUEST] - A Tutorial on Whole Slide Image Classification using PyTorch and TIAToolbox", "body": "### \ud83d\ude80 Describe the improvement or the new tutorial\r\n\r\nWhole Slide Images are the digital data format that pathologists and computational pathology researchers use to investigate cancer growth. Due to their enormous image resolutions and file sizes (in the order of several gigabytes), conventional image processing methods do not work effectively. This is why we propose writing this tutorial: to (a) explain how to load WSIs using TIAToolbox, which helps process such slides with speed and efficiency using its pyramid stack structure, and (b) show how you can use `torchvision` models to analyse WSIs. 
We believe this tutorial will be useful to the PyTorch community, especially who is interested in using PyTorch models tackle cancer tissue research.\r\n\r\n### Existing tutorials on this topic\r\n\r\nThe tutorial will be adapted from our [WSI classification example](https://tia-toolbox.readthedocs.io/en/latest/_notebooks/jnb/05-patch-prediction.html). \r\n\r\n### Additional context\r\n\r\n**Category: Image and Video**\r\n\r\nWritten by Tissue Image Analytics Centre (TIA) and invited by @carljparker as part of the PyTorch Docathon H2 2023.\r\ncc @datumbox @nairbv @fmassa @NicolasHug @YosuaMichael @sekyondaMeta @svekars @carljparker @kit1980 @subramen @measty @behnazelhaminia @DavidBAEpstein @shaneahmed @msaroufim", "url": "https://github.com/pytorch/tutorials/issues/2669", "state": "closed", "labels": [ "module: vision", "docathon-h2-2023" ], "created_at": "2023-11-10T14:32:47Z", "updated_at": "2023-12-19T06:57:38Z", "comments": 1, "user": "Abdol" }, { "repo": "pytorch/vision", "number": 8107, "title": "cannot install torch==2.0.0 torchvision==0.15.2", "body": "### \ud83d\udc1b Describe the bug\n\nFor some reason, I cannot do:\r\n\r\n```\r\npip install torch==2.0.0 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118\r\n```\r\n\r\nBut I can install them separately with `--no-deps` and `torchvision` seems to work just fine. Why is this the case? Isn't `torchvision==0.15` supposed to be compatible with `torch==2.0`?\n\n### Versions\n\n```\r\nCollecting environment information...\r\nPyTorch version: 2.0.0+cu118\r\nIs debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64)\r\nGCC version: Could not collect\r\nClang version: Could not collect\r\nCMake version: version 3.25.0\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.8.18 | packaged by conda-forge | (default, Oct 10 2023, 15:44:36) [GCC 12.3.0] (64-bit runtime)\r\nPython platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.10\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual\r\nCPU(s): 48\r\nOn-line CPU(s) list: 0-47\r\nThread(s) per core: 2\r\nCore(s) per socket: 12 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6\r\nModel: 63\r\nModel name: Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz\r\nStepping: 2\r\nCPU MHz: 2299.882\r\nCPU max MHz: 2300.0000\r\nCPU min MHz: 1200.0000\r\nBogoMIPS: 4599.76\r\nVirtualization: VT-x\r\nL1d cache: 768 KiB\r\nL1i cache: 768 KiB\r\nL2 cache: 6 MiB\r\nL3 cache: 60 MiB\r\nNUMA node0 CPU(s): 0-11,24-35\r\nNUMA node1 CPU(s): 12-23,36-47\r\nVulnerability Itlb multihit: KVM: Mitigation: Split huge pages\r\nVulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable\r\nVulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable\r\nVulnerability Meltdown: Mitigation; PTI\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling\r\nVulnerability 
Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi\r\nmmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm arat pln pts md_clear flush_l1d\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.24.1\r\n[pip3] torch==2.0.0+cu118\r\n[pip3] torchaudio==2.0.1+cu118\r\n[pip3] torchvision==0.15.1+cu118\r\n[pip3] triton==2.0.0\r\n[conda] numpy 1.24.1 pypi_0 pypi\r\n[conda] torch 2.0.0+cu118 pypi_0 pypi\r\n[conda] torchaudio 2.0.1+cu118 pypi_0 pypi\r\n[conda] torchvision 0.15.2+cu118 pypi_0 pypi\r\n[conda] triton ", "url": "https://github.com/pytorch/vision/issues/8107", "state": "closed", "labels": [], "created_at": "2023-11-10T02:13:06Z", "updated_at": "2023-11-10T14:26:40Z", "comments": 1, "user": "wemoveon2" }, { "repo": "pytorch/serve", "number": 2780, "title": "example of integrating deepspeed fastgen into TorchServe", "body": "### \ud83d\ude80 The feature\n\nProvide an example of integrating deepspeed fastgen in TorchServe.\n\n### Motivation, pitch\n\ndeepspeed fastgen was published in mii.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2780", "state": "open", "labels": [ "future", "example" ], "created_at": "2023-11-09T19:32:46Z", "updated_at": "2023-11-09T19:32:46Z", "comments": 0, "user": "lxning" }, { "repo": "pytorch/xla", "number": 5784, "title": "Is there a Bug with AllGather backprop algorithm?", "body": "https://github.com/pytorch/xla/blob/d5d023063bfa8ecb4629f621f9b5890bc8396f58/torch_xla/core/functions.py#L66C1-L66C1\r\n\r\nIn the aforementioned line, we see the class \r\n```\r\nclass AllGather(torch.autograd.Function):\r\n\r\n @staticmethod\r\n def forward(ctx, input, dim):\r\n ctx.dim = dim\r\n ctx.ordinal = xm.get_ordinal()\r\n ctx.world_size = xm.xrt_world_size()\r\n return xm.all_gather(input, dim=dim)\r\n\r\n @staticmethod\r\n def backward(ctx, grad_output):\r\n slice_size = grad_output.size(ctx.dim) // ctx.world_size\r\n return torch.narrow(grad_output.clone(), ctx.dim, ctx.ordinal * slice_size,\r\n slice_size), None\r\n\r\n```\r\n\r\nI went to test this method with the following: \r\n\r\n```\r\nimport torch\r\nimport os\r\nimport torch.distributed as dist\r\nimport torch_xla.core.xla_model as xm\r\nimport torch_xla.distributed.xla_backend \r\n\r\nif __name__ == \"__main__\": \r\n \r\n dist.init_process_group('xla') \r\n device = xm.xla_device()\r\n rank = xm.get_ordinal()\r\n xla_ = True\r\n\r\n t = torch.arange(0,8,1,dtype=torch.float,requires_grad=True,device=device).view(2,4).contiguous()\r\n t.retain_grad()\r\n \r\n #t1 = torch.narrow(t,0,rank,1).contiguous()\r\n #t2 = torch.narrow(t,0,1,1).contiguous()\r\n \r\n t2 = torch.arange(0,8,1,dtype=torch.float,requires_grad=True,device=device).view(2,4).contiguous()\r\n t2.retain_grad()\r\n tout = torch.matmul(t,t2.T)\r\n loss=tout.sum()\r\n loss.backward()\r\n res_t = t.grad.detach().cpu() \r\n \r\n tnew = 
torch.arange(0,8,1,dtype=torch.float,requires_grad=True,device=device).view(2,4).contiguous() \r\n tnew = torch.narrow(tnew,0,rank,1)\r\n tnew = tnew.clone()\r\n tnew.retain_grad()\r\n t2n = torch.arange(4*rank,(rank+1)*4,device=device,requires_grad=True,dtype=torch.float).contiguous()\r\n t2n.retain_grad()\r\n tnew2 = AllGather.apply(tnew)\r\n ton = torch.matmul(tnew2,t2n.T)\r\n loss=ton.sum()\r\n loss.backward()\r\n \r\n rest_tn = tnew.grad.detach().cpu()\r\n xm.rendezvous('completed')\r\n print(res_t)\r\n print(rest_tn)\r\n```\r\n\r\nI noticed that the results are not the same,\r\n\r\nHowever, if I run \r\n```\r\nclass AllGather(torch.autograd.Function):\r\n\r\n @staticmethod\r\n def forward(ctx, input, dim):\r\n ctx.dim = dim\r\n ctx.ordinal = xm.get_ordinal()\r\n ctx.world_size = xm.xrt_world_size()\r\n return xm.all_gather(input, dim=dim)\r\n\r\n @staticmethod\r\n def backward(ctx, grad_output):\r\n slice_size = grad_output.size(ctx.dim) // ctx.world_size\r\n xm.reduce(xm.REDUCE_SUM,grad_output.contiguous())\r\n return torch.narrow(grad_output.clone(), ctx.dim, ctx.ordinal * slice_size,\r\n slice_size), None\r\n\r\n```\r\n\r\nThen they are the same. Is there an issue with my code? I am trying to confirm that backprop is working properly?\r\n", "url": "https://github.com/pytorch/xla/issues/5784", "state": "open", "labels": [ "question", "distributed" ], "created_at": "2023-11-09T18:30:24Z", "updated_at": "2025-04-28T12:21:19Z", "user": "mathephysicist" }, { "repo": "pytorch/pytorch", "number": 113370, "title": "Incorrect stride when permuting shapes where a zero dimension is present.", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI ran into a problem while permuting the following tensor (to convert into a complex dtype):\r\n\r\n```python\r\n>>> torch.view_as_complex(torch.empty(1,0,2,100,100).permute(0,1,3,4,2).contiguous())\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nRuntimeError: Tensor must have a last dimension with stride 1\r\n```\r\n\r\nUpon further investigation I found that strides behave oddly when permuting with a zero dimension present.\r\n\r\nContrast the difference when `tensor.size(1) == 0` and `tensor.size(1) == 99`:\r\n\r\n```python\r\n>>> torch.empty(1,0,2,100,100).stride()\r\n(20000, 20000, 10000, 100, 1)\r\n\r\n>>> torch.empty(1,0,2,100,100).permute(0,1,3,4,2).contiguous().stride()\r\n(20000, 20000, 100, 1, 10000)\r\n\r\n>>> torch.empty(1,99,2,100,100).permute(0,1,3,4,2).contiguous().stride()\r\n(1980000, 20000, 200, 2, 1)\r\n```\r\n\r\nIs this expected behavior? \r\n\r\n**Notes:** \r\n\r\nI am aware that there is no data at all if a dim is 0, I wouldn't have been surprised to observe a stride tuple containing all 0's or 1's. 
The latter - which would work with `view_as_complex` - would obviously be most convenient for me.\r\n\r\n(My motivation for using 0 sized tensors is that it's often easier to work with an empty array `[]` value than working with a `None` value which requires null-checks all over the place.)\r\n\r\n### Versions\r\n\r\nPyTorch version: 1.12.1+cu116\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.6\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: NixOS 22.11 (Raccoon) (x86_64)\r\nGCC version: (GCC) 11.3.0\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.6 (main, Aug 1 2022, 20:38:21) [GCC 11.3.0] (64-bit runtime)\r\nPython platform: Linux-5.15.114-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: \r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060\r\nNvidia driver version: 520.56.06\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nAddress sizes: 48 bits physical, 48 bits virtual\r\nByte Order: Little Endian\r\nCPU(s): 32\r\nOn-line CPU(s) list: 0-31\r\nVendor ID: AuthenticAMD\r\nModel name: AMD Ryzen 9 5950X 16-Core Processor\r\nCPU family: 25\r\nModel: 33\r\nThread(s) per core: 2\r\nCore(s) per socket: 16\r\nSocket(s): 1\r\nStepping: 2\r\nFrequency boost: enabled\r\nCPU(s) scaling MHz: 72%\r\nCPU max MHz: 5083.3979\r\nCPU min MHz: 2200.0000\r\nBogoMIPS: 6787.42\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm\r\nVirtualization: AMD-V\r\nL1d cache: 512 KiB (16 instances)\r\nL1i cache: 512 KiB (16 instances)\r\nL2 cache: 8 MiB (16 instances)\r\nL3 cache: 64 MiB (2 instances)\r\nNUMA node(s): 1\r\nNUMA node0 CPU(s): 0-31\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Retbleed: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\n\r\nVersions of relevant librarie", "url": "https://github.com/pytorch/pytorch/issues/113370", "state": 
"open", "labels": [ "triaged", "module: edge cases", "module: empty tensor" ], "created_at": "2023-11-09T17:16:14Z", "updated_at": "2024-02-23T18:06:34Z", "user": "rehno-lindeque" }, { "repo": "pytorch/executorch", "number": 1162, "title": "How to deploy llama2 on Qualcomm Snapdragon chips through ExecuTorch\uff1f", "body": "Excuse me, if I need to deploy llama2 on Qualcomm Snapdragon chip through ExecuTorch and want to use NPU computing power as an inference computing unit, what do I need to do?\r\n\r\nThe chip specs I'm currently using are SG885G-WF https://www.quectel.com/product/wi-fi-bt-sg885g-wf-smart-module\u3002", "url": "https://github.com/pytorch/executorch/issues/1162", "state": "closed", "labels": [ "need-user-input", "partner: qualcomm", "triaged" ], "created_at": "2023-11-07T12:32:59Z", "updated_at": "2025-02-03T18:21:13Z", "user": "tensorflowt" }, { "repo": "pytorch/tutorials", "number": 2655, "title": "Why multiply sqrt(d_model) before TransformerEncoderLayer?", "body": "Hi,\r\n\r\nThank you so much for the tutorial! I notice that in https://github.com/pytorch/tutorials/blob/main/beginner_source/transformer_tutorial.py#L92, you multiply sqrt(d_model) before TransformerEncoderLayer. May I ask why we need to do this?\r\n\r\nThanks!", "url": "https://github.com/pytorch/tutorials/issues/2655", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-06T19:48:45Z", "updated_at": "2023-11-06T20:13:28Z", "user": "yuzhenmao" }, { "repo": "pytorch/audio", "number": 3688, "title": "Why does `transforms.TimeStretch` return of type `complex64`?", "body": "### \ud83d\udc1b Describe the bug\n\nGood day!\r\n\r\nhttps://pytorch.org/audio/2.1.0/generated/torchaudio.transforms.TimeStretch.html#torchaudio.transforms.TimeStretch.forward:\r\n\r\n> Stretched spectrogram. The resulting tensor is of the same dtype as the input spectrogram, but the number of frames is changed to `ceil(num_frame / rate)`.\r\n\r\nBut:\r\n```\r\ns = torchaudio.transforms.Spectrogram()(x)\r\ns.dtype # => torch.float32\r\n\r\nt = torchaudio.transforms.TimeStretch(fixed_rate=0.9)(s)\r\nt.dtype # => torch.complex64\r\n```\r\n\r\nShould I collect a bug report or don't I understand time stretching?\r\n\r\n(previously posted [at the forum](https://discuss.pytorch.org/t/why-does-transforms-timestretch-return-complex64/191208))\n\n### Versions\n\ntorchaudio 2.1.1 from Google Colab", "url": "https://github.com/pytorch/audio/issues/3688", "state": "closed", "labels": [], "created_at": "2023-11-05T12:02:57Z", "updated_at": "2023-11-10T10:25:51Z", "comments": 4, "user": "kuraga" }, { "repo": "pytorch/xla", "number": 5768, "title": "How to provide sharding annotation for MpDeviceLoader when data has different dimensions", "body": "## \u2753 Questions and Help\r\n\r\nLet's say my dataloader yields a dict when iterating over and the members of this dict has different dimensions\r\n```python\r\n{\r\n \"input_ids\": shape = (batch, seq),\r\n \"masks\": shape = (batch, seq, seq),\r\n}\r\n```\r\n\r\n`pl.MpDeviceLoader` appears to only able to provide one sharding annotation. I'm currently using it like this:\r\n```python\r\ndata_loader = pl.MpDeviceLoader(\r\n data_loader,\r\n dev,\r\n input_sharding=xs.ShardingSpec(mesh, ('data', None, None)))\r\n```\r\nObviously, ('data', None, None) is not valid for `input_ids` which has only 2 dimensions. But this seems to work. I wonder what's the proper way of using `MpDeviceLoader` in this case. 
", "url": "https://github.com/pytorch/xla/issues/5768", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2023-11-03T20:43:19Z", "updated_at": "2025-04-28T12:30:11Z", "user": "hanzhi713" }, { "repo": "pytorch/pytorch", "number": 112876, "title": "How to handle CVE vulnerabilities in underlying operating system?", "body": "Hello,\r\n\r\nThe base images for Cuda are pretty old (2.1.0-cuda11.8 was pushed more than a month ago) how should we act to get latest security updates from the Ubuntu base image?", "url": "https://github.com/pytorch/pytorch/issues/112876", "state": "open", "labels": [ "triaged", "module: docker", "security" ], "created_at": "2023-11-03T17:32:14Z", "updated_at": "2023-11-06T22:34:04Z", "user": "bjorn-ali-goransson" }, { "repo": "pytorch/serve", "number": 2766, "title": "How to auto-scale model replicas in a single GPU based EC2 instance based on number-of-requests-in-queue ?", "body": "Hi team, I mainly had 1 question and 1 observation:\r\n---\r\n\r\n### **Question:**\r\n\r\n- **I was not able to locate any resource explaining ways to auto-scale ML model in torch-serve on single GPU instance.** \r\n- I did had a look at the model configuration documentation which explained the 2 parameters: min-workers and max-worker where each worker will have 1 model loaded. I also had a look at this issue: https://github.com/pytorch/serve/issues/714 where **ts_queue_latency_microseconds** flag was explained to auto-scale in a Kubernetes cluster. \r\n\r\n#### **_But what I need is:_** \r\n\r\n> A way to load more replicas of the model in the same instance based on certain conditions like: number-of-requests-in-queue or something similar.\r\n\r\n> **Assumption**: There is sufficient amount of GPU memory remaining and GPU utilization is not 100%\r\n\r\n---\r\n\r\n### **Observation:** The Problem I faced: \r\n\r\n- I hosted the simple **MNIST classifier example** provided in torch-serve tutorials on a **T4 GPU (G4dn EC2 Instance**) and load tested it using the **Locust Application**\r\n- I had set the max-workers to 2 and min-workers to 1. The batch-size was set to 1. \r\n- With the help of Locust Application, I gradually sent 1000 requests per sec to the model server. \r\n- I observed the GPU and CPU memory and compute utilization:\r\n - GPU memory utilization was less than 3% because the ML model is very small. The compute utilization was also less than 5%. \r\n - Even CPU memory and compute was not utilized at max (was higher than GPU but less than 20% of total availability) \r\n- **Problem**: \r\n - **The model server did not process all the requests. At any given point, it only responded to ~700-750 requests and the remaining requests were discarded/dropped** \r\n - I dont think the model got replicated as 2nd worker because the GPU memory and compute utilization was very small. \r\n---\r\n\r\nPlease let me know if there are any good resources to refer and how to auto-scale based on **ts_queue_latency_microseconds** flag in a single GPU instance. 
", "url": "https://github.com/pytorch/serve/issues/2766", "state": "closed", "labels": [ "triaged" ], "created_at": "2023-11-02T16:03:49Z", "updated_at": "2023-11-26T18:39:03Z", "user": "yogendra-yatnalkar" }, { "repo": "pytorch/serve", "number": 2765, "title": "How to auto-scale model replicas in a single GPU based EC2 instance based on time_of_request_in_queue ", "body": "", "url": "https://github.com/pytorch/serve/issues/2765", "state": "closed", "labels": [], "created_at": "2023-11-02T15:39:16Z", "updated_at": "2023-11-02T17:46:34Z", "user": "yogendra-yatnalkar" }, { "repo": "pytorch/vision", "number": 8090, "title": "to_pil_image different results depending on numpy/torch input", "body": "### \ud83d\udc1b Describe the bug\n\nto_pil_image has different behaviour depending on torch or numpy input. This is not documented as far as I can see. There is a note that numpy is expected to be HWC, whereas torch is expected to be CHW, but that's not relevant here.\r\n\r\n```python\r\nimport torch\r\nfrom torchvision.transforms.functional import to_pil_image\r\na = torch.rand((100, 101))\r\nprint(to_pil_image(a).mode)\r\n# L\r\nprint(to_pil_image(a.numpy()).mode)\r\n# F\r\n```\r\nThis is not documented, nor is there any warning, so errors due to this are hard to track down. The problematic code is this section:\r\n```python\r\n if isinstance(pic, torch.Tensor):\r\n if pic.is_floating_point() and mode != \"F\":\r\n pic = pic.mul(255).byte()\r\n```\r\nin which the torch.tensor is rescaled. Can we mirror functionality for numpy arrays? `(pic * 255).round().astype(np.uint8)`\n\n### Versions\n\nall versions", "url": "https://github.com/pytorch/vision/issues/8090", "state": "closed", "labels": [], "created_at": "2023-11-02T12:46:29Z", "updated_at": "2023-11-08T08:51:45Z", "comments": 5, "user": "rb-synth" }, { "repo": "pytorch/xla", "number": 5762, "title": "how to use torch-xla with huggingface transformers", "body": "## \u2753 Questions and Help\r\nI am fine-tuning the model provided by huggingface, modify a model from pytorch to torch-xla and run it. but it will freeze when running. 
Is there something wrong here?\r\n\r\ndataset as follows:\r\nhttps://github.com/zyds/transformers-code/blob/master/01-Getting%20Started/04-model/ChnSentiCorp_htl_all.csv\r\n\r\npytorch code as follows:\r\n```\r\nimport pandas as pd\r\n\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\r\nimport torch\r\nfrom torch.optim import Adam\r\nfrom torch.utils.data import Dataset\r\nfrom torch.utils.data import random_split\r\nfrom torch.utils.data import DataLoader\r\n\r\nclass MyDataset(Dataset):\r\n\r\n def __init__(self, data_path) -> None:\r\n super().__init__()\r\n self.data = pd.read_csv(data_path)\r\n self.data = self.data.dropna()\r\n\r\n def __getitem__(self, index):\r\n return self.data.iloc[index][\"review\"], self.data.iloc[index][\"label\"]\r\n\r\n def __len__(self):\r\n return len(self.data)\r\n\r\nif __name__ == \"__main__\":\r\n dataset = MyDataset('./ChnSentiCorp_htl_all.csv')\r\n trainset, validset = random_split(dataset, lengths=[0.9, 0.1])\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(\"rbt3\")\r\n\r\n def collate_func(batch):\r\n texts, labels = [], []\r\n for item in batch:\r\n texts.append(item[0])\r\n labels.append(item[1])\r\n inputs = tokenizer(texts, max_length=128, padding=\"max_length\", truncation=True, return_tensors=\"pt\")\r\n inputs[\"labels\"] = torch.tensor(labels)\r\n return inputs\r\n\r\n trainloader = DataLoader(trainset, batch_size=32, shuffle=True, collate_fn=collate_func)\r\n validloader = DataLoader(validset, batch_size=64, shuffle=False, collate_fn=collate_func)\r\n\r\n model = AutoModelForSequenceClassification.from_pretrained(\"./rbt3/\")\r\n\r\n if torch.cuda.is_available():\r\n model = model.cuda()\r\n optimizer = Adam(model.parameters(), lr=2e-5)\r\n def evaluate():\r\n model.eval()\r\n acc_num = 0\r\n with torch.inference_mode():\r\n for batch in validloader:\r\n if torch.cuda.is_available():\r\n batch = {k: v.cuda() for k, v in batch.items()}\r\n output = model(**batch)\r\n pred = torch.argmax(output.logits, dim=-1)\r\n acc_num += (pred.long() == batch[\"labels\"].long()).float().sum()\r\n return acc_num / len(validset)\r\n\r\n def train(epoch=3, log_step=100):\r\n global_step = 0\r\n for ep in range(epoch):\r\n model.train()\r\n for batch in trainloader:\r\n if torch.cuda.is_available():\r\n batch = {k: v.cuda() for k, v in batch.items()}\r\n optimizer.zero_grad()\r\n output = model(**batch)\r\n output.loss.backward()\r\n optimizer.step()\r\n if global_step % log_step == 0:\r\n print(f\"ep: {ep}, global_step: {global_step}, loss: {output.loss.item()}\")\r\n global_step += 1\r\n acc = evaluate()\r\n print(f\"ep: {ep}, acc: {acc}\")\r\n train()\r\n```\r\n\r\nmodel by torch-xla as follows, run cmd is 'PJRT_DEVICE=CUDA python classification_demo_xla.py'\r\n```\r\nimport pandas as pd\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\r\n\r\nimport torch\r\nfrom torch.optim import Adam\r\nfrom torch.utils.data import Dataset\r\nfrom torch.utils.data import random_split\r\nfrom torch.utils.data import DataLoader\r\nimport torch_xla\r\nfrom torch_xla import runtime as xr\r\nimport torch_xla.core.xla_model as xm\r\nimport torch_xla.distributed.xla_multiprocessing as xmp\r\nimport torch_xla.distributed.parallel_loader as pl\r\n\r\nclass MyDataset(Dataset):\r\n\r\n def __init__(self, data_path) -> None:\r\n super().__init__()\r\n self.data = pd.read_csv(data_path)\r\n self.data = self.data.dropna()\r\n\r\n def __getitem__(self, index):\r\n return self.data.iloc[index][\"review\"], 
self.data.iloc[index][\"label\"]\r\n\r\n def __len__(self):\r\n return len(self.data)\r\n\r\nif __name__ == \"__main__\":\r\n dataset = MyDataset('./ChnSentiCorp_htl_all.csv')\r\n trainset, validset = random_split(dataset, lengths=[0.9, 0.1])\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(\"rbt3\")\r\n\r\n def collate_func(batch):\r\n texts, labels = [], []\r\n for item in batch:\r\n texts.append(item[0])\r\n labels.append(item[1])\r\n inputs = tokenizer(texts, max_length=128, padding=\"max_length\", truncation=True, return_tensors=\"pt\")\r\n inputs[\"labels\"] = torch.tensor(labels)\r\n return inputs\r\n\r\n train_loader = DataLoader(trainset, batch_size=32, shuffle=True, collate_fn=collate_func)\r\n valid_loader = DataLoader(validset, batch_size=64, shuffle=False, collate_fn=collate_func)\r\n\r\n model = AutoModelForSequenceClassification.from_pretrained(\"./rbt3/\")\r\n\r\n device = xm.xla_device()\r\n model = model.to(device)\r\n print('model device:', model.device)\r\n\r\n optimizer = Ada", "url": "https://github.com/pytorch/xla/issues/5762", "state": "closed", "labels": [], "created_at": "2023-11-02T08:43:46Z", "updated_at": "2023-11-03T01:34:51Z", "user": "markc-614" }, { "repo": "pytorch/tutorials", "number": 2630, "title": "\ud83d\udca1 [REQUEST] - <title>An inbuilt function to retrieve a list of datasets categorised by problem type (e.g., classification, regression, clustering).", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nPyTorch has inbuilt function to list all datasets.\r\n\r\n`import torchvision.datasets as datasets\r\n\r\n//Get a list of all datasets\r\nall_datasets = datasets.__all__\r\n\r\n//Print the list of datasets\r\nprint(all_datasets)\r\n`\r\nRather than focusing on getting all the dataset, we can include a parameter. Parameter will take the type of task person wants to do e.g Clustering, Regression, Classification. After putting parameter all the related dataset according to task will be shown.\r\n\r\n\r\n\r\nOverall, a built-in function to retrieve a list of datasets categorised by problem type would be a valuable addition to PyTorch. It would make it easier for users to find, discover, use, and share datasets.\r\n\r\n### Existing tutorials on this topic\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_\r\n```[tasklist]\r\n### Tasks\r\n```\r\n\r\n```[tasklist]\r\n### Tasks\r\n- [ ] Add a draft title or issue reference here\r\n```\r\n\r\n```[tasklist]\r\n### Tasks\r\n```\r\n", "url": "https://github.com/pytorch/tutorials/issues/2630", "state": "open", "labels": [], "created_at": "2023-10-31T09:08:51Z", "updated_at": "2023-11-01T16:06:56Z", "comments": 1, "user": "xd932" }, { "repo": "pytorch/executorch", "number": 1117, "title": "[build Error initializing DaemonStateData] how to fix it", "body": "hi,\r\n\r\nI reference [the tutorial](https://pytorch.org/executorch/stable/getting-started-setup.html#building-a-runtime) to install the buck2-x86_64-unknown-linux-musl.zst on my PC.\r\nAnd I want to build \r\n```\r\n/tmp/buck2 build //examples/portable/executor_runner:executor_runner --show-output\r\n```\r\nand face the build failed.\r\n\r\nAnd I try to use `killall` reference from https://stackoverflow.com/questions/76771689/buck2-cant-create-inotify-watchers\r\nand try build again.\r\nBut it is still build failed. 
Could somebody help me?\r\n\r\n![Screenshot from 2023-10-31 13-44-29](https://github.com/pytorch/executorch/assets/87454575/ac9e9225-c7b8-4924-8a55-a474c0e4e49a)\r\n\r\nOS: Linux Ubuntu 20.04.4 LTS x86_64 \r\nbuck2 version: 2023-07-18\r\n\r\nThanks,\r\nKris", "url": "https://github.com/pytorch/executorch/issues/1117", "state": "closed", "labels": [], "created_at": "2023-10-31T05:49:12Z", "updated_at": "2024-01-23T10:08:57Z", "user": "kris-himax" }, { "repo": "pytorch/pytorch", "number": 112454, "title": "Inductor chooses too large of a block size in cases where the `YBLOCK` dimension is too large.", "body": "### \ud83d\udc1b Describe the bug\r\n\r\n```python\r\nimport torch\r\n\r\ntorch.set_default_device('cuda')\r\n\r\n@torch.compile\r\ndef f(x, y):\r\n return x.t() + y\r\n\r\nf(torch.randn(2**25, 128), torch.randn(128, 2**25))\r\n```\r\n\r\nThe concrete issue is that this results in us potentially choosing a config like `XBLOCK=256, YBLOCK=512`, which requires too much shared memory.\r\n\r\nThe reason we end up in this situation is: https://github.com/pytorch/pytorch/blob/main/torch/_inductor/triton_heuristics.py#L810\r\n\r\nBasically, because we are limited to launching 65536 blocks on the second/third dim, `triton_config` will elect to scale up `YBLOCK` until we \"fit\" within the limit.\r\n\r\nIn this case, we start with a config like `XBLOCK=256, YBLOCK=32`, but we end up scaling `YBLOCK` to 512.\r\n\r\nPossible solutions are:\r\n1. Stop launching 2d configs, and just flatten it down to one axis of threadblocks.\r\n2. Choose the XBLOCK axis to be the \"large\" one.\r\n3. Scale down XBLOCK if `XBLOCK * RBLOCK` is too large.\n\ncc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler", "url": "https://github.com/pytorch/pytorch/issues/112454", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: inductor" ], "created_at": "2023-10-31T00:18:12Z", "updated_at": "2023-11-07T01:48:02Z", "user": "Chillee" }, { "repo": "pytorch/pytorch", "number": 112369, "title": "In the func Tensor.to, how can I make privateuse lazy init", "body": "### \ud83d\udc1b Describe the bug\n\nI\u2019m using privateuse1 to add our backend. My customer find the following code is working in cuda, but not working in my backend.\r\nUse `Tensor.to()` with device message which not has a index, for example \"cuda\". \r\n```\r\nimport torch\r\ntensor_a = torch.rand(2).to(\"cuda\")\r\n```\r\nPrivateuse1 uses the same logic but fails.\r\n```\r\nimport torch\r\n# assumption my device is privateuseone\r\nimport torch_privateuseone\r\n\r\ntensor_a = torch.rand(2).to(\"privateuseone\")\r\n```\r\nAbove code will fail with `impl->getDevice()` because of the lack of lazy_init for the privateuseone device.\r\nhttps://github.com/pytorch/pytorch/blob/bbd5b935e49a54578ac88cb23ca962ab896a8c7a/aten/src/ATen/native/TensorConversions.cpp#L210-L216\r\ncuda will init in `THPVariable_to`\r\nhttps://github.com/pytorch/pytorch/blob/bbd5b935e49a54578ac88cb23ca962ab896a8c7a/tools/autograd/templates/python_variable_methods.cpp#L958-L980\r\n\r\nWhere I can add `privateuseone_init` is in my own `to_impl` after Dispatcher, but by then it was too late. 
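To make the scaling described in pytorch #112454 concrete, here is the arithmetic the heuristic effectively performs for the `(2**25, 128)` transpose-add above, using the 65536-blocks-per-grid-dimension limit quoted in the report. This only reproduces the numbers from the issue; it is not Inductor's actual code:

```python
ynumel = 2 ** 25          # length of the y axis of the pointwise kernel
xblock, yblock = 256, 32  # initial tile proposed by triton_config

grid_limit = 65536        # max blocks allowed on the 2nd/3rd grid dimension
# 2**25 / 32 = 1,048,576 programs on the y axis, far above the limit,
# so YBLOCK keeps doubling until the grid fits.
while ynumel // yblock > grid_limit:
    yblock *= 2

print(yblock)            # 512
print(xblock * yblock)   # 131072 elements per tile -> too much shared memory
```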
Any advice for this case?\r\n\r\n\n\n### Versions\n\nCollecting environment information...\r\nPyTorch version: 2.1.0a0+git7bcf7da\r\nIs debug build: True\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.6 LTS (aarch64)\r\nGCC version: (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0\r\nClang version: Could not collect\r\nCMake version: version 3.23.1\r\nLibc version: glibc-2.27\r\n\r\nPython version: 3.8.17 (default, Jul 5 2023, 20:40:03) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-4.15.0-29-generic-aarch64-with-glibc2.26\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: False\r\n\r\nCPU:\r\nArchitecture: aarch64\r\nByte Order: Little Endian\r\nCPU(s): 192\r\nOn-line CPU(s) list: 0-191\r\nThread(s) per core: 1\r\nCore(s) per socket: 48\r\nSocket(s): 4\r\nNUMA node(s): 4\r\nVendor ID: 0x48\r\nModel: 0\r\nStepping: 0x1\r\nBogoMIPS: 200.00\r\nL1d cache: 64K\r\nL1i cache: 64K\r\nL2 cache: 512K\r\nL3 cache: 24576K\r\nNUMA node0 CPU(s): 0-47\r\nNUMA node1 CPU(s): 48-95\r\nNUMA node2 CPU(s): 96-143\r\nNUMA node3 CPU(s): 144-191\r\nFlags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.23.4\r\n[pip3] torch==2.1.0a0+git7bcf7da\r\n[pip3] torch-npu==2.1.0+gitf565f75\r\n[pip3] torchair==0.1\r\n[pip3] torchvision==0.15.2\r\n[conda] numpy 1.23.4 pypi_0 pypi\r\n[conda] torch 2.1.0a0+git7bcf7da pypi_0 pypi\r\n[conda] torch-npu 2.1.0+gitf565f75 pypi_0 pypi\r\n[conda] torchair 0.1 pypi_0 pypi\r\n[conda] torchvision 0.15.2 pypi_0 pypi\r\n\n\ncc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh", "url": "https://github.com/pytorch/pytorch/issues/112369", "state": "closed", "labels": [ "module: internals", "triaged" ], "created_at": "2023-10-30T06:39:18Z", "updated_at": "2024-01-09T20:12:12Z", "user": "huihoaan" }, { "repo": "pytorch/TensorRT", "number": 2419, "title": "\u2753 [Question] How do the dtypes work with torch.compile(backend=\"torch_tensorrt\"). 
Getting error.", "body": "## \u2753 Question\r\n\r\nI tried the following script to load a resnet50 model and test a sample input - \r\n\r\n```python\r\nimport torch_tensorrt\r\nimport torch\r\n\r\n# Load a pre-trained ResNet50 model\r\nx = torch.randn(1, 3, 224, 224, device='cuda').half()\r\nmodel = torch.hub.load(\r\n 'pytorch/vision:v0.6.0', 'resnet50', pretrained=True\r\n).cuda().half().eval()\r\n\r\nmodel_opt = torch.compile(model, backend=\"torch_tensorrt\", dynamic=False, options={\"debug\": True, \"min_block_size\": 1, \"enabled_precisions\": {torch.half}})\r\n\r\n# Check correctness\r\ntorch.testing.assert_close(actual=model_opt(x), expected=model(x), rtol=1e-2, atol=1e-2)\r\n```\r\n\r\nand I am getting the following error - \r\n\r\n```\r\nUsing cache found in /home/shreyansh/.cache/torch/hub/pytorch_vision_v0.6.0\r\n/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.\r\n warnings.warn(\r\n/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet50_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet50_Weights.DEFAULT` to get the most up-to-date weights.\r\n warnings.warn(msg)\r\n[2023-10-28 09:37:06,703] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing forward\r\n[2023-10-28 09:37:08,530] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo done tracing forward (RETURN_VALUE)\r\n[2023-10-28 09:37:08,552] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function torch_tensorrt_backend\r\n[10/28/2023-09:37:36] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See \"Lazy Loading\" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading\r\nINFO:torch_tensorrt.fx.fx2trt:TRT INetwork construction elapsed time: 0:00:00.008624\r\nINFO:torch_tensorrt.fx.fx2trt:Build TRT engine elapsed time: 0:01:02.300433\r\n[10/28/2023-09:38:38] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See \"Lazy Loading\" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading\r\n[10/28/2023-09:38:38] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See \"Lazy Loading\" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading\r\nINFO:torch_tensorrt.fx.fx2trt:TRT INetwork construction elapsed time: 0:00:00.004251\r\nINFO:torch_tensorrt.fx.fx2trt:Build TRT engine elapsed time: 0:00:01.587664\r\n[10/28/2023-09:38:40] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See \"Lazy Loading\" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading\r\n[10/28/2023-09:38:40] [TRT] [W] CUDA lazy loading is not enabled. 
Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See \"Lazy Loading\" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading\r\nINFO:torch_tensorrt.fx.fx2trt:TRT INetwork construction elapsed time: 0:00:00.004451\r\nINFO:torch_tensorrt.fx.fx2trt:Build TRT engine elapsed time: 0:00:01.805693\r\n[10/28/2023-09:38:42] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See \"Lazy Loading\" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading\r\nERROR:torch_tensorrt.dynamo.backend.backends:FX2TRT conversion failed on the subgraph. See trace above. Returning GraphModule forward instead.\r\nTraceback (most recent call last):\r\n File \"/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torch_tensorrt/dynamo/backend/backends.py\", line 74, in _pretraced_backend\r\n trt_compiled = _compile_module(\r\n File \"/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torch_tensorrt/dynamo/backend/backends.py\", line 129, in _compile_module\r\n submodule_inputs = get_submod_inputs(\r\n File \"/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torch_tensorrt/dynamo/backend/lowering/_partition.py\", line 207, in get_submod_inputs\r\n mod(*inputs)\r\n File \"/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torch/fx/graph_module.py\", line 662, in call_wrapped\r\n return self._wrapped_call(self, *args, **k", "url": "https://github.com/pytorch/TensorRT/issues/2419", "state": "closed", "labels": [ "question" ], "created_at": "2023-10-28T16:48:28Z", "updated_at": "2023-10-30T17:24:55Z", "user": "shreyansh26" }, { "repo": "pytorch/vision", "number": 8071, "title": "How to tell if Faster RCNN Detection model is overfitting", "body": "I'm confused as to how I can tell if the Faster RCNN Detection model I'm training is overfitting or not given that the validation loss is not computed in the `evaluate` function seen [here](https://github.com/pytorch/vision/blob/main/references/detection/engine.py#L75C1-L115C26) and below.\r\n\r\nAny help would be greatly appreciated.\r\n\r\n```\r\n@torch.inference_mode()\r\ndef evaluate(model, data_loader, device):\r\n n_threads = torch.get_num_threads()\r\n # FIXME remove this and make paste_masks_in_image run on the GPU\r\n torch.set_num_threads(1)\r\n cpu_device = torch.device(\"cpu\")\r\n model.eval()\r\n metric_logger = utils.MetricLogger(delimiter=\" \")\r\n header = \"Test:\"\r\n\r\n coco = get_coco_api_from_dataset(data_loader.dataset)\r\n iou_types = _get_iou_types(model)\r\n coco_evaluator = CocoEvaluator(coco, iou_types)\r\n\r\n for images, targets in metric_logger.log_every(data_loader, 100, header):\r\n images = list(img.to(device) for img in images)\r\n\r\n if torch.cuda.is_available():\r\n torch.cuda.synchronize()\r\n model_time = time.time()\r\n outputs = model(images)\r\n\r\n outputs = [{k: v.to(cpu_device) for k, v in t.items()} for t in outputs]\r\n model_time = time.time() - model_time\r\n\r\n res = {target[\"image_id\"]: output for target, output in zip(targets, outputs)}\r\n evaluator_time = time.time()\r\n coco_evaluator.update(res)\r\n evaluator_time = time.time() - evaluator_time\r\n metric_logger.update(model_time=model_time, evaluator_time=evaluator_time)\r\n\r\n # gather the stats from all processes\r\n 
metric_logger.synchronize_between_processes()\r\n print(\"Averaged stats:\", metric_logger)\r\n coco_evaluator.synchronize_between_processes()\r\n\r\n # accumulate predictions from all images\r\n coco_evaluator.accumulate()\r\n coco_evaluator.summarize()\r\n torch.set_num_threads(n_threads)\r\n return coco_evaluator\r\n```", "url": "https://github.com/pytorch/vision/issues/8071", "state": "open", "labels": [], "created_at": "2023-10-27T00:03:39Z", "updated_at": "2025-12-22T11:12:36Z", "user": "1andDone" }, { "repo": "pytorch/tutorials", "number": 2624, "title": " ~ PyTorch Docathon H2 2023 ~", "body": "# ~ PyTorch Docathon H2 2023 ~\r\nWe have a large backlog of issues that we want to address and it's a great opportunity for you to start contributing to PyTorch. We have limited this docathon to the [pytorch/tutorials](https://github.com/pytorch/tutorials/pulls?q=is%3Apr+is%3Aopen+label%3Adocathon-h2-2023+) and [pytorch/pytorch](https://github.com/pytorch/pytorch/pulls?q=is%3Apr+is%3Aopen+label%3Adocathon-h2-2023+) repositories, so please work on the issues from these two repositories.\r\n\r\n**NOTE**: This issue outlines the work in the pytorch/tutorials repo. If you would prefer to work on the PyTorch docstrings issues, please go to the [pytorch/pytorch Docathon issue](https://github.com/pytorch/pytorch/issues/112176).\r\n\r\n# Date and location\r\n**WHEN:** The docathon starts on November 1st 10 AM PST. Please do not work on tasks until then. We will continue accepting new submissions until 5 PM PST on November 12th.\r\n**WHERE:** Virtual\r\n**WHAT:** Issues with the **docathon-h2-2023** label - will be posted on November 1st.\r\n\r\nWatch our intro video to learn more details about the event.\r\n\r\n[![Watch the docathon intro](https://github-production-user-asset-6210df.s3.amazonaws.com/5317992/242342554-2a0d5489-0f16-4db0-b3c7-67a9ada9abe6.png)](https://youtu.be/IhTjsRKqjtA?si=OdRvcjDj_82axD2I)\r\n\r\n\r\n# Can everyone participate?\r\n\r\nWe encourage everyone to consider participating in the docathon but there are a few things we expect from the participants:\r\n\r\n- You must have a GitHub account and know how to use Git and GitHub, how to submit or rebase your PR on the latest main branch, how to fork or clone the repo. We reserve the right to reject incorrectly submitted PRs.\r\n- You must be familiar with Python, the basics of Machine Learning, and have at least a basic knowledge of PyTorch. Familiarity with Sphinx, sphinx-gallery, and reStructuredText is a plus.\r\n\r\nBefore you start contributing make sure to read [Linux Foundation Code of Conduct](https://events.linuxfoundation.org/about/code-of-conduct/).\r\n\r\n# What contributions are we looking for?\r\n\r\nAll issues for this docathon are tagged with the **docathon-h2-2023** label. Please note that contributions that address other issues won't be counted. 
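Back to the overfitting question in vision #8071: the reference `evaluate` only reports COCO metrics because torchvision detection models return their loss dict only when called in training mode with targets. A common workaround, sketched below rather than taken from the reference scripts, is a separate no-grad pass in train mode; the default ResNet-FPN backbones use `FrozenBatchNorm2d`, so this does not corrupt BN statistics:

```python
import torch

@torch.no_grad()
def validation_loss(model, data_loader, device):
    model.train()  # detection models only return losses in train mode
    total, batches = 0.0, 0
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)
        total += sum(loss.item() for loss in loss_dict.values())
        batches += 1
    model.eval()
    return total / max(batches, 1)
```

Tracking this value next to the training loss gives the usual train/validation gap to judge overfitting.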
We are primarily looking for the following contributions:\r\n\r\n**NOTE:** Please avoid working on issues with **intel**, **amd**, and **nvidia** labels which are reserved for our partners.\r\n\r\n- Bug fixes in the [pytorch/tutorials](https://github.com/pytorch/tutorials) repo tagged with the docathon-h2-2023 label - see [the list](https://github.com/pytorch/tutorials/issues?q=is%3Aopen+is%3Aissue+label%3Adocathon-h2-2023) repo.\r\n- Docstring fixes in the [pytorch/pytorch](https://github.com/pytorch/pytorch) repo tagged with the docathon-h2-2023 label - see [this list](https://github.com/pytorch/pytorch/issues?q=is%3Aopen+is%3Aissue+label%3Adocathon-h2-2023) repo.\r\n\r\n**NOTE:** Due to the large number of RSVPs, the tasks are provided on a first come first serve basis \u2014 please don't hoard the tasks!\r\n\r\n# Difficulty Levels\r\n\r\nThe issues have three levels of difficulty: **easy**, **medium**, and **advanced**. If this is your first time contributing to PyTorch, we recommend that you start with an issue that is tagged as **easy** or **medium**.\r\n\r\n# How to contribute to tutorials?\r\n\r\n1. Read [pytorch/tutorials/CONTRIBUTING.md](https://github.com/pytorch/tutorials/blob/main/CONTRIBUTING.md) for general guidelines on how the submission process works and overall style and voice.\r\n\r\n2. Pick an issue that is labeled as **docathon-h2-2023**.\r\n3. In the issue, add a comment with the text **/assigntome**. If the issue is already assigned, please find another issue to work on. We ask that you assign one issue at a time - we want to give everyone a fair chance to participate. When you are done with one issue and get it approved, you can assign another one to yourself and start working on it.\r\n4. If you are submitting a new tutorial, use [this template](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py).\r\n5. Fork or clone the PyTorch repository to your computer. For simple fixes, like incorrect URLs, you could use the GitHub UI as well.\r\n6. Create a branch and work on the fix.\r\n7. Test your fix by running the single tutorial locally. Don't run the whole build as it takes hours and requires a GPU. You can run one tutorial as a script python3 <tutorial-name.py> or GALLERY_PATTERN=\"neural_style_transfer_tutorial.py\" make html\r\n8. After you fix all the issues, you are ready to submit your PR.\r\n\r\n# Submit Your PR\r\n\r\n1. Submit your PR referencing the issue you've picked. For example:\r\n<img width=\"1058\" alt=\"docathonsubmission\" src=\"https://github.com/pytorch/tutorials/assets/127536312/3096037c-14d8-46ba-bb48-4a7314b463eb\">\r\n\r\n\r\n2. Pick an issue that is labeled as **docathon-h2-2023**.\r\n3. If you have not yet, sign the Contributor License Agreement (CLA) - prompted as a check in the PR. We can't accept any PRs without a signed CLA.\r\n4", "url": "https://github.com/pytorch/tutorials/issues/2624", "state": "open", "labels": [ "docathon-h2-2023" ], "created_at": "2023-10-26T16:14:39Z", "updated_at": "2023-11-06T17:50:19Z", "comments": 3, "user": "sekyondaMeta" }, { "repo": "pytorch/executorch", "number": 1101, "title": "How to virtualize the qte model?", "body": "Hi,\r\n\r\nI am now working on executorch. I want to see the model architecture of qte, which is easy for us to debug.\r\nHowever, I cannot find a virtualizing tool. Netron does not support qte format now.\r\nCould executorch support to virtualize the qte format model? 
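On the visualization question in executorch #1101: independent of Netron support for the serialized format, the captured graph is usually easiest to inspect right after export, before it is lowered and serialized. A minimal sketch with a toy module (the module, shapes, and names here are invented for illustration, and the exact export entry point has changed across releases):

```python
import torch
from torch.export import export

class Tiny(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

ep = export(Tiny(), (torch.randn(2, 8),))
# Prints the captured ATen-level graph node by node, which also shows which
# ops the original PyTorch model was translated into before delegation.
ep.graph_module.print_readable()
```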
\r\n\r\nBesides, I wonder whether the export function will translate the ops in the Pytorch model to specifics ops in qte format? \r\n\r\nThanks!!!\n```[tasklist]\n### Tasks\n```\n", "url": "https://github.com/pytorch/executorch/issues/1101", "state": "closed", "labels": [ "need-user-input" ], "created_at": "2023-10-26T12:52:41Z", "updated_at": "2023-10-27T13:48:47Z", "user": "liang1232018" }, { "repo": "pytorch/TensorRT", "number": 2415, "title": "\u2753 [Question] Examples not working in nvcr.io/nvidia/pytorch:23.09-py3.", "body": "## \u2753 Question\r\n\r\nI am within the `nvcr.io/nvidia/pytorch:23.09-py3` container. Trying out some snippets from:\r\nhttps://youtu.be/eGDMJ3MY4zk?si=MhkbgwAPVQSFZEha. \r\n\r\nBoth JIT and AoT examples failed. For JIT, it complained that \"tensorrt\" backend isn't available, for AoT, it complained that \"The user code is using a feature we don't support. Please try torchdynamo.explain() to get possible the reasons\". \r\n\r\nI am on an A100. What's going on? ", "url": "https://github.com/pytorch/TensorRT/issues/2415", "state": "closed", "labels": [ "question" ], "created_at": "2023-10-26T09:53:16Z", "updated_at": "2025-11-24T17:42:35Z", "user": "sayakpaul" }, { "repo": "pytorch/torchx", "number": 782, "title": "Workspace patch is applied only on role[0] image", "body": "## \u2753 Questions and Help\r\n\r\nPer https://github.com/pytorch/torchx/blob/main/torchx/runner/api.py#L362-L370, we assume that patch needs to be applied only for a single role. Effectively assumes that:\r\n\r\n1. role0 is the only image that needs to be updated \r\n2. workspace is mapped to image of role0.\r\n\r\nThis issue has surfaced for an internal Meta user.\r\n\r\n### Question\r\nShould we treat this as a bug an apply patch to all the roles or introduce proper mapping between workspaces and roles? This hasn't surfaced since most of our customers use single role, but looks like it is broken. 
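For the failing JIT workflow in TensorRT #2415: a quick diagnostic is to import the package explicitly and list the registered Dynamo backends, since the `"tensorrt"` backend only becomes available once the import succeeds. A sketch; the exact registered names can differ between releases:

```python
import torch
import torch._dynamo as dynamo

import torch_tensorrt  # backend registration happens as a side effect of the import

# The Torch-TensorRT backend should appear here if the import worked.
print(sorted(dynamo.list_backends()))
```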
My personal preference is to provide warning to users when multiple roles are defined first, then add non-default option to specify workspace for each role name.\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/782", "state": "open", "labels": [ "enhancement", "question" ], "created_at": "2023-10-22T23:26:32Z", "updated_at": "2023-10-23T19:56:21Z", "comments": 5, "user": "kurman" }, { "repo": "pytorch/tutorials", "number": 2610, "title": "[BUG] - <title>When I use fsdp, Because the flattened parameters, I always meet some question", "body": "### Add Link\n\nWhen I use fsdp, Because the flattened parameters, I always meet some question.\r\nfor examples:\r\n`\r\nRuntimeError: mat2 must be a matrix, got 1-D tensor\r\n`\r\nand\r\n`\r\nRuntimeError: weight should have at least three dimensions\r\n`\r\nIt always occurred in some flattened model weights, sucn as conv, linear etc.\r\nHow can I solve this problem?\n\n### Describe the bug\n\nWhen I use fsdp, Because the flattened parameters, I always meet some question\r\nfor examples:\r\n`\r\nRuntimeError: mat2 must be a matrix, got 1-D tensor\r\n`\r\nand\r\n`\r\nRuntimeError: weight should have at least three dimensions\r\n`\r\nIt always occurred in some flattened model weights, sucn as conv, linear etc.\r\nHow can I solve this problem?\n\n### Describe your environment\n\nPytorch 2.1.0\n\ncc @osalpekar @H-Huang @kwen2501", "url": "https://github.com/pytorch/tutorials/issues/2610", "state": "closed", "labels": [ "bug", "distributed" ], "created_at": "2023-10-19T14:18:09Z", "updated_at": "2025-05-12T15:33:13Z", "comments": 4, "user": "sqzhang-lazy" }, { "repo": "pytorch/vision", "number": 8053, "title": "When will torch and torchvision support Python 3.12?", "body": "### \ud83d\ude80 The feature\n\nPython 3.11 is the latest version that is supported by torch and torchvision. Python 3.12 was released this month and I'd like to know when we'll be able to use torch & torchvision packages with Python 3.12.\n\n### Motivation, pitch\n\nI'm not specifically having any troubles, it's just that I personally like to stay up to date, and since ya'll have an astonishing library, I'm trying to contribute to it from my side by first raising an issue, and see if there's any other technical contribution that I would be able to make in this specific goal for your package \ud83e\udd1d\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/8053", "state": "closed", "labels": [], "created_at": "2023-10-18T10:44:17Z", "updated_at": "2023-10-18T12:10:37Z", "comments": 1, "user": "AlirezaShk" }, { "repo": "pytorch/xla", "number": 5709, "title": "how can I debug in openxla xla source. in pytorch xla . ", "body": "I build pytorch and pytorch xla install in my computer. and I can debug in pytorch xla\uff0c but I dont known \uff0chow debug in openxla xla source code.\r\nThe compilation of xla depends on openxla. The openxla xla compiled source code can be seen here, xla/build/temp.linux-x86_64-cpython-310/bazel-xla/external. How should I set it so that the debug version on vscode can breakpoint to openxla? 
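The `mat2 must be a matrix, got 1-D tensor` and `weight should have at least three dimensions` errors in tutorials #2610 typically mean a flattened FSDP shard is being consumed outside the wrapped module's forward (manual `F.linear`/`F.conv*` calls, direct weight inspection, and so on). A sketch of the usual remedy, gathering the unsharded parameters around such code; `fsdp_model` and the attribute path are placeholders for the user's own model:

```python
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Inside this context the parameters are restored to their original shapes,
# so code that touches weights directly sees 2-D/4-D tensors again.
with FSDP.summon_full_params(fsdp_model):
    print(fsdp_model.module.linear.weight.shape)  # illustrative attribute path
```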
What about the source code of xla in it?\r\n", "url": "https://github.com/pytorch/xla/issues/5709", "state": "closed", "labels": [ "question", "openxla" ], "created_at": "2023-10-17T12:02:34Z", "updated_at": "2025-04-29T13:07:15Z", "user": "ckfgihub" }, { "repo": "pytorch/vision", "number": 8050, "title": "Any plans to implement the functions in opencv?", "body": "### \ud83d\ude80 The feature\n\nExpect an implementation of some of the apis available in opencv (e.g. cv2.findContours(), cv2.connectedComponents(), ...)\n\n### Motivation, pitch\n\nJust want torchvision to be able to do these things faster using gpus, and make these api faster.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/8050", "state": "open", "labels": [], "created_at": "2023-10-17T07:59:40Z", "updated_at": "2023-10-18T18:24:54Z", "comments": 1, "user": "mortal-Zero" }, { "repo": "pytorch/examples", "number": 1194, "title": "resume train", "body": "when I try to resume trainImagenet\uff0cthis happens\uff0cHow to solve this problem\uff1f\r\n\r\n![Snipaste_2023-10-12_19-38-46](https://github.com/pytorch/examples/assets/52640516/dd20ff95-bdde-4448-847b-e5d73779191d)\r\n![image](https://github.com/pytorch/examples/assets/52640516/5a5a3f0c-9209-41fa-9339-dc64ee693298)\r\n", "url": "https://github.com/pytorch/examples/issues/1194", "state": "open", "labels": [], "created_at": "2023-10-12T11:39:48Z", "updated_at": "2024-05-31T06:03:55Z", "comments": 2, "user": "hefangnan" }, { "repo": "pytorch/xla", "number": 5687, "title": "Through step_trace api profile xla program, but the result cannot be opened using Tensorboard", "body": "## \u2753 Questions and Help\r\ntensorboard will report this error: Failed to load libcupti (is it installed and accessible?)\r\nbut I think load libcupti is success\u3002I use the blew command\uff0cwill get correct load info\r\nlsof -p 430621 | grep cup\r\npython 430621 root mem REG 253,17 7199856 104860301 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcupti.so.2022.3.0\r\n", "url": "https://github.com/pytorch/xla/issues/5687", "state": "open", "labels": [ "question" ], "created_at": "2023-10-09T08:06:30Z", "updated_at": "2025-04-29T13:11:27Z", "user": "mars1248" }, { "repo": "pytorch/vision", "number": 8026, "title": "How to make the RegionProposalNetwork generate more proposals in FasterRCNN?", "body": "I'm trying to update the proposal losses function of MaskRCNN to increase the recall. 
I'm trying to do this by adding a positive weight to the BCE function\r\n\r\nHow I create my proposal losses function:\r\n```\r\nCLASS_WEIGHTS = torch.tensor([50])\r\n\r\ndef compute_loss(\r\n objectness: Tensor, pred_bbox_deltas: Tensor, labels: List[Tensor], regression_targets: List[Tensor]\r\n ) -> Tuple[Tensor, Tensor]:\r\n \"\"\"\r\n Args:\r\n objectness (Tensor)\r\n pred_bbox_deltas (Tensor)\r\n labels (List[Tensor])\r\n regression_targets (List[Tensor])\r\n\r\n Returns:\r\n objectness_loss (Tensor)\r\n box_loss (Tensor)\r\n \"\"\"\r\n\r\n sampled_pos_inds, sampled_neg_inds = model.rpn.fg_bg_sampler(labels)\r\n sampled_pos_inds = torch.where(torch.cat(sampled_pos_inds, dim=0))[0]\r\n sampled_neg_inds = torch.where(torch.cat(sampled_neg_inds, dim=0))[0]\r\n\r\n sampled_inds = torch.cat([sampled_pos_inds, sampled_neg_inds], dim=0)\r\n\r\n objectness = objectness.flatten()\r\n\r\n labels = torch.cat(labels, dim=0)\r\n regression_targets = torch.cat(regression_targets, dim=0)\r\n\r\n box_loss = F.smooth_l1_loss(\r\n pred_bbox_deltas[sampled_pos_inds],\r\n regression_targets[sampled_pos_inds],\r\n beta=1 / 9,\r\n reduction=\"sum\",\r\n ) / (sampled_inds.numel())\r\n\r\n objectness_loss = F.binary_cross_entropy_with_logits(objectness[sampled_inds], labels[sampled_inds],\r\n pos_weight=CLASS_WEIGHTS # USE CLASS WEIGHT HERE\r\n )\r\n return objectness_loss, box_loss\r\n```\r\n\r\nThen how I set the model to use this proposal losses function:\r\n```\r\nmodel = maskrcnn_resnet50_fpn(weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT)\r\nmodel.rpn.compute_loss = compute_loss\r\n```\r\n\r\nWhen I train the model now:\r\n- the **loss** increases significantly (e.g. before it was 1, now it is like 50, which is expected)\r\n- BUT the **recall** stays around the same (e.g. stagnates around 0.55 after training for several epochs)\r\n\r\nWhy is this the case? How do I get the recall to improve (i.e. how do I generate more proposals)?\r\n\r\n*FYI: I already tried setting the score threshold to 0, this didn't do anything either\u2026*", "url": "https://github.com/pytorch/vision/issues/8026", "state": "open", "labels": [], "created_at": "2023-10-07T00:06:53Z", "updated_at": "2023-10-08T08:36:19Z", "user": "darian69" }, { "repo": "pytorch/pytorch", "number": 110630, "title": "Memory efficient attention for tensors where the last dimension is not divisible by 8", "body": "### \ud83d\ude80 The feature, motivation and pitch\r\n\r\nCurrently, using `scaled_dot_product_attention` and the memory efficient kernel requires that the last dimension of the inputs is divisible by 8. Typically, this corresponds to the dimension per head in multihead attention, for example when using the `[batch, head, seq, dim]` convention.\r\n\r\nUsing inputs that do not conform to this requirement results in a `RuntimeError: No available kernel. Aborting execution.` and a warning: `UserWarning: Mem efficient attention requires last dimension of inputs to be divisible by 8.`\r\n\r\nIt would be great if this requirement could be relaxed, for example by only being divisible by 2. 
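Until the divisibility requirement discussed in pytorch #110630 is relaxed, one common caller-side workaround is to zero-pad the head dimension up to the next multiple of 8 and slice the output back, keeping the original softmax scale explicit. A sketch, assuming PyTorch ≥ 2.1 for the `scale` argument:

```python
import torch
import torch.nn.functional as F

def sdpa_padded(q, k, v, multiple=8):
    d = q.shape[-1]
    pad = (-d) % multiple
    if pad:
        # Zero columns change neither the q.k^T scores nor the valid output
        # channels; passing scale explicitly preserves the 1/sqrt(d) scaling.
        q, k, v = (F.pad(t, (0, pad)) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, scale=d ** -0.5)
    return out[..., :d]

q = k = v = torch.rand(10, 128, 123, 2)   # head dim 2, not divisible by 8
print(sdpa_padded(q, k, v).shape)          # torch.Size([10, 128, 123, 2])
```

On a CUDA device this lets the padded call satisfy the memory-efficient kernel's head-dimension check at the cost of a small amount of extra compute.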
The [TPU implementation associated with the paper](https://github.com/google-research/google-research/tree/master/memory_efficient_attention) appears to work with arbitrary dimensions, but this might not be the case for GPUs.\r\n\r\nIt would also be helpful if these requirements would be documented (the [documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) appears to be missing in this regard).\r\n\r\n### Alternatives\r\n\r\nThe Flash attention kernel supports this feature, but it is missing some others, e.g. attention masks.\r\n\r\n### Additional context\r\n\r\nA minimal example:\r\n\r\n```python\r\nimport torch\r\nimport torch.nn.functional as F\r\n\r\nqkv_size = (10, 128, 123, 2)\r\n\r\nQ = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)\r\nK = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)\r\nV = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)\r\n\r\nwith torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):\r\n O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)\r\n```\r\n\r\nThe output\r\n```\r\n[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Memory efficient kernel not used because: (Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:350](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:350).)\r\n O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)\r\n[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Mem efficient attention requires last dimension of inputs to be divisible by 8. Got Query.size(-1): 2, Key.size(-1): 2, Value.size(-1): 2 instead. (Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:128](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:128).)\r\n O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)\r\n[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Flash attention kernel not used because: (Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:352](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:352).)\r\n O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)\r\n[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Flash attention has been runtime disabled. 
(Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/sdp_utils_cpp.h:439](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/sdp_utils_cpp.h:439).)\r\n O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[34], line 2\r\n 1 with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):\r\n----> 2 O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)\r\n\r\nRuntimeError: No available kernel. Aborting execution.\r\n```\r\n\r\nThis is using PyTorch 2.2.0.dev20231001, CUDA 11.8, and an Ampere GPU.\n\ncc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki", "url": "https://github.com/pytorch/pytorch/issues/110630", "state": "open", "labels": [ "triaged", "module: sdpa" ], "created_at": "2023-10-05T18:23:58Z", "updated_at": "2024-11-27T20:11:39Z", "user": "davidbuterez" }, { "repo": "pytorch/vision", "number": 8024, "title": "How to update RegionProposalNetwork loss function in FasterRCNN to generate MORE proposals?", "body": "", "url": "https://github.com/pytorch/vision/issues/8024", "state": "closed", "labels": [], "created_at": "2023-10-05T14:52:06Z", "updated_at": "2023-10-07T00:26:21Z", "user": "darian69" }, { "repo": "pytorch/TensorRT", "number": 2356, "title": "\u2753 [Question] How do you find the exact line of python code that triggers a backend compiler error?", "body": "I was trying to compile the huggingface Llama 2 model using the following code:\r\n\r\n```python\r\nimport os\r\nimport torch\r\nimport torch_tensorrt\r\nimport torch.backends.cudnn as cudnn\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch._dynamo as dynamo\r\nfrom optimum.onnxruntime import ORTModelForCausalLM\r\n\r\nbase_model = 'llama-2-7b'\r\ncomp_method = 'magnitude_unstructured'\r\ncomp_degree = 0.2\r\n\r\nmodel_path = f'vita-group/{base_model}_{comp_method}'\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_path,\r\n revision=f's{comp_degree}',\r\n torch_dtype=torch.float16,\r\n low_cpu_mem_usage=True,\r\n device_map=\"auto\")\r\nmodel.save_pretrained(\"model_ckpt/\")\r\nmodel.eval()\r\n\r\n# setting\r\n# torch._dynamo.config.suppress_errors = True\r\nenabled_precisions = {torch.float, torch.int, torch.long}\r\ndebug = False\r\nworkspace_size = 20 << 30\r\nmin_block_size = 7\r\ntorch_executed_ops = {}\r\n\r\ncompilation_kwargs = {\r\n \"enabled_precisions\": enabled_precisions,\r\n \"debug\": debug,\r\n \"workspace_size\": workspace_size,\r\n \"min_block_size\": min_block_size,\r\n \"torch_executed_ops\": torch_executed_ops,\r\n}\r\n\r\n\r\nwith torch.no_grad():\r\n optimized_model = torch.compile(\r\n model.generate,\r\n backend=\"torch_tensorrt\",\r\n dynamic=True,\r\n options=compilation_kwargs,\r\n )\r\n\r\n tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')\r\n input_ids = tokenizer('Hello! 
I am a VITA-compressed-LLM chatbot!', return_tensors='pt').input_ids.cuda()\r\n\r\n #outputs = model.generate(input_ids, max_new_tokens=128)\r\n outputs = optimized_model(input_ids, max_new_tokens=128)\r\n```\r\n\r\nAnd here is the complete log:\r\n\r\n```text\r\nINFO:torch_tensorrt.dynamo.utils:Using Default Torch-TRT Runtime (as requested by user)\r\nINFO:torch_tensorrt.dynamo.utils:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=21474836480, min_block_size=7, torch_executed_ops={}, pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False)\r\n\r\nWARNING:torch_tensorrt.dynamo.compile:0 supported operations detected in subgraph containing 0 computational nodes. Skipping this subgraph, since min_block_size was detected to be 7\r\nINFO:torch_tensorrt.dynamo.utils:Using Default Torch-TRT Runtime (as requested by user)\r\nINFO:torch_tensorrt.dynamo.utils:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=21474836480, min_block_size=7, torch_executed_ops={}, pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False)\r\n\r\nWARNING:torch_tensorrt.dynamo.compile:0 supported operations detected in subgraph containing 0 computational nodes. Skipping this subgraph, since min_block_size was detected to be 7\r\nINFO:torch_tensorrt.dynamo.utils:Using Default Torch-TRT Runtime (as requested by user)\r\nINFO:torch_tensorrt.dynamo.utils:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=21474836480, min_block_size=7, torch_executed_ops={}, pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False)\r\n\r\nWARNING:torch_tensorrt.dynamo.compile:0 supported operations detected in subgraph containing 0 computational nodes. Skipping this subgraph, since min_block_size was detected to be 7\r\nINFO:torch_tensorrt.dynamo.utils:Using Default Torch-TRT Runtime (as requested by user)\r\nINFO:torch_tensorrt.dynamo.utils:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=21474836480, min_block_size=7, torch_executed_ops={}, pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False)\r\n\r\nWARNING:torch_tensorrt.dynamo.compile:0 supported operations detected in subgraph containing 0 computational nodes. 
Skipping this subgraph, since min_block_size was detected to be 7\r\nINFO:torch_tensorrt.dynamo.utils:Using Default Torch-TRT Runtime (as requested by user)\r\nINFO:torch_tensorrt.dynamo.utils:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=21474836480, min_block_size=7, torch_executed_ops={}, pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False", "url": "https://github.com/pytorch/TensorRT/issues/2356", "state": "open", "labels": [ "question", "No Activity" ], "created_at": "2023-10-02T01:15:22Z", "updated_at": "2024-01-02T00:02:08Z", "user": "BDHU" }, { "repo": "pytorch/TensorRT", "number": 2352, "title": "\u2753 [Question] How do you build Torch-TensorRT from origin/main with dependence on tensorrt 8.5.2 from Jetpack5.1?", "body": "## \u2753 Question\r\n\r\nWhen compiling the latest version of Torch-TensorRT from `origin/main` (`2.2.0.dev0+76de80d0`) on Jetpack5.1 using the latest locally compiled PyTorch (`2.2.0a0+a683bc5`) (so that I can use the latest v2 transforms in TorchVision (`0.17.0a0+4cb3d80`)), the resulting python package has a dependence on `tensorrt` version `8.6.1`, but Jetpack5.1 only supports version `8.5.2.2-1+cuda11.4` and is thus not installable.\r\nIs it possible to compile the latest Torch-TensorRT with dependence on the installed version of `tensorrt`?\r\n\r\n## Environment\r\n\r\n<details>\r\n<summary>\r\nEnvironment details\r\n</summary>\r\n\r\n```\r\nbr@nx:~/github/torch$ python /tmp/collect_env.py \r\nCollecting environment information...\r\nPyTorch version: 2.2.0a0+a683bc5\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.4\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.6 LTS (aarch64)\r\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\r\nClang version: 10.0.0-4ubuntu1 \r\nCMake version: version 3.27.5\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)\r\nPython platform: Linux-5.10.104-tegra-aarch64-with-glibc2.29\r\nIs CUDA available: True\r\nCUDA runtime version: 11.4.315\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: aarch64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 6\r\nOn-line CPU(s) list: 0-3\r\nOff-line CPU(s) list: 4,5\r\nThread(s) per core: 1\r\nCore(s) per socket: 2\r\nSocket(s): 2\r\nVendor ID: Nvidia\r\nModel: 0\r\nModel name: ARMv8 Processor rev 0 (v8l)\r\nStepping: 0x0\r\nCPU max MHz: 1907,2000\r\nCPU min MHz: 115,2000\r\nBogoMIPS: 62.50\r\nL1d cache: 256 KiB\r\nL1i cache: 512 KiB\r\nL2 cache: 4 MiB\r\nL3 cache: 4 MiB\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not 
affected\r\nVulnerability Spec store bypass: Not affected\r\nVulnerability Spectre v1: Mitigation; __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Branch predictor hardening\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\nFlags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm dcpop\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy==1.5.1\r\n[pip3] mypy-extensions==1.0.0\r\n[pip3] numpy==1.24.4\r\n[pip3] numpy-quaternion==2022.4.3\r\n[pip3] pytorch-ranger==0.1.1\r\n[pip3] tensorrt==8.5.2.2\r\n[pip3] torch==2.2.0a0+a683bc5\r\n[pip3] torch-optimizer==0.3.0\r\n[pip3] torchmetrics==0.11.3\r\n[pip3] torchvision==0.17.0a0+4cb3d80\r\n[conda] Could not collect\r\n```\r\nTorch and TorchVision are built with\r\n```bash\r\nexport BUILD_TEST=OFF\r\nexport USE_FBGEMM=OFF # Fails to build\r\nexport USE_NCCL=OFF # Fails to build\r\nexport USE_KINETO=OFF # Fails to build\r\nexport BUILD_SPLIT_CUDA=ON # Required so that Torch-TensorRT finds the libraries it needs.\r\nexport _GLIBCXX_USE_CXX11_ABI=1 # Use the new C++ ABI\r\n```\r\n```bash\r\ncd ~/github/torch/pytorch\r\npython3 -m build -n\r\npip install dist/torch-<version>.whl\r\n```\r\n```bash\r\ncd ~/github/torch/vision\r\npython3 setup.py bdist_wheel # Doesn't support the newer build module.\r\npip install dist/torchvision-<version>.whl\r\nmkdir -p build; cd build\r\nTorch_DIR=~/github/torch/pytorch/torch/share/cmake/Torch cmake -DCMAKE_BUILD_TYPE=Release -Wno-dev -DWITH_CUDA=on -GNinja -DCMAKE_INSTALL_PREFIX=~/.local ..\r\nninja install\r\n```\r\n</details>\r\n\r\n[WORKSPACE](https://github.com/pytorch/TensorRT/files/12749136/WORKSPACE.txt) file used to build Torch-TensorRT on Jetpack5.1. Built with\r\n```bash\r\ncd ~/github/torch/Torch-TensorRT\r\nbazel build //:libtorchtrt -c opt\r\nsudo tar -xvzf bazel-bin/libtorchtrt.tar.gz -C /usr/local/\r\npython3 setup.py bdist_wheel --use-cxx11-abi # Doesn't support the newer build module.\r\npip install dist/torch_tensorrt-<version>.whl # <-- fails to install due to tensorrt==8.6 dependency\r\n```\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2352", "state": "open", "labels": [ "question", "No Activity" ], "created_at": "2023-09-28T20:25:41Z", "updated_at": "2024-01-01T00:02:42Z", "user": "BrettRyland" }, { "repo": "pytorch/data", "number": 1201, "title": "Loading `.tfrecords` files that require a deserialization method", "body": "### \ud83d\udc1b Describe the bug\n\nHi,\r\n\r\nI have a dataset in TFRecords format and am trying to move to TorchData's API for loading tfrecords files.\r\nThis is the minimal example:\r\n```python3\r\ndatapipe1 = IterableWrapper(['path/to/my/tfrecords/file.tfrecords'])\r\ndatapipe2 = FileOpener(datapipe1, mode=\"b\")\r\ntfrecord_loader_dp = datapipe2.load_from_tfrecord()\r\n\r\nfor d in tfrecord_loader_dp:\r\n pass\r\n```\r\nIt fails, as the datapipe does not know how to properly deserialize the tfrecord file.\r\n```\r\nFile ~/.conda/envs/bend/lib/python3.10/site-packages/torchdata/datapipes/iter/util/tfrecordloader.py:245, in TFRecordLoaderIterDataPipe.__iter__(self)\r\n 243 pathname, data_stream = data\r\n 244 try:\r\n--> 245 for example_bytes in iterate_tfrecord_file(data_stream):\r\n 246 example = example_pb2.SequenceExample() # type: ignore\r\n 247 example.ParseFromString(example_bytes) # type: ignore\r\n\r\nFile ~/.conda/envs/bend/lib/python3.10/site-packages/torchdata/datapipes/iter/util/tfrecordloader.py:83, in iterate_tfrecord_file(data)\r\n 81 (length,) = 
struct.unpack(\"<Q\", length_bytes)\r\n 82 if length > len(data_bytes):\r\n---> 83 data_bytes = data_bytes.zfill(int(length * 1.5))\r\n 84 data_bytes_view = memoryview(data_bytes)[:length]\r\n 85 if data.readinto(data_bytes_view) != length:\r\n\r\nOverflowError: Python int too large to convert to C ssize_t\r\nThis exception is thrown by __iter__ of TFRecordLoaderIterDataPipe(datapipe=FileOpenerIterDataPipe, length=-1, spec=None)\r\n```\r\n\r\n\r\nIn the legacy tensorflow codebase, I would have to specify a function to deserialize the tfrecord, by doing\r\n```python3\r\nimport tensorflow as tf\r\nimport tensorflow_datasets as tfds\r\n\r\ndataset = tf.data.Dataset.from_tensor_slices(['path/to/my/tfrecords/file.tfrecords'])\r\ndataset = dataset.interleave(lambda fp: tf.data.TFRecordDataset(fp, compression_type=compression_type), cycle_length=1, block_length=1, num_parallel_calls=tf.data.AUTOTUNE)\r\n\r\nfeatures = tfds.features.FeaturesDict.from_json(json.load(json_file)) # this file contains info about the .tfrecords file i'm trying to load\r\ndataset = dataset.map(features.deserialize_example, num_parallel_calls=tf.data.AUTOTUNE)\r\n\r\niterator = dataset.as_numpy_iterator()\r\nfor d in iterator:\r\n pass #this works, returning a dict of tf tensors\r\n```\r\n\r\nThe problem is basically that I have to deserialize the tfrecord, but I can't apply anything to the `TFRecordLoaderIterDataPipe` before it fails.\r\n\r\nIs there a workaround? I tried just wrapping the tensorflow dataset object in an `IterableWrapper`, but the tensorflow dataset can't be pickled so fails in `DataLoader2`.\r\n\r\nThanks!\r\n\r\n\n\n### Versions\n\nCollecting environment information...\r\nPyTorch version: 2.0.1+cu117\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.7\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.5 LTS (x86_64)\r\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.27.4\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-1027-aws-x86_64-with-glibc2.31\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nAddress sizes: 46 bits physical, 48 bits virtual\r\nCPU(s): 16\r\nOn-line CPU(s) list: 0-15\r\nThread(s) per core: 2\r\nCore(s) per socket: 8\r\nSocket(s): 1\r\nNUMA node(s): 1\r\nVendor ID: GenuineIntel\r\nCPU family: 6\r\nModel: 85\r\nModel name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz\r\nStepping: 7\r\nCPU MHz: 2499.994\r\nBogoMIPS: 4999.98\r\nHypervisor vendor: KVM\r\nVirtualization type: full\r\nL1d cache: 256 KiB\r\nL1i cache: 256 KiB\r\nL2 cache: 8 MiB\r\nL3 cache: 35.8 MiB\r\nNUMA node0 CPU(s): 0-15\r\nVulnerability Itlb multihit: KVM: Mitigation: VMX unsupported\r\nVulnerability L1tf: Mitigation; PTE Inversion\r\nVulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\r\nVulnerability Meltdown: Mitigation; PTI\r\nVulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\r\nVulnerability Retbleed: Vulnerable\r\nVulnerability Spec store bypass: Vulnerable\r\nVulnerability Spectre v1: 
Mitigation; usercopy/s", "url": "https://github.com/meta-pytorch/data/issues/1201", "state": "open", "labels": [], "created_at": "2023-09-26T09:17:39Z", "updated_at": "2024-10-21T16:25:37Z", "comments": 1, "user": "fteufel" }, { "repo": "pytorch/TensorRT", "number": 2348, "title": "\u2753 [Question] How do you build and use PytorchTRT on Windows 10? ", "body": "## \u2753 Question\r\n\r\nAfter trying even using MSVC instead of Ninja, I kind was able to generate some dll files. The files are torchtrt.dll, torch_plugins.dll, torchtrt_runtimes.dll, torchtrtc.exe.\r\nNow what do I do with these. I just assumed, I put them in the lib folder \"C:\\Users\\{Username}\\AppData\\Local\\Programs\\Python\\Python310\\Lib\\site-packages\\torch\\lib\" and now this does not work.\r\n\r\n## What you have already tried\r\n\r\nI have read everything and tried literrally everything and the building process is literaly broken.\r\nhttps://pytorch.org/TensorRT/getting_started/getting_started_with_windows.html\r\nand then tried a few diffrerent things and somehow was able to this.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (torch-2.0.1+cu118.dist-info):\r\n - CPU Architecture: Intel x86 10500H\r\n - OS (e.g., Linux): Windows\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): Pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.10\r\n - CUDA version:11.8\r\n - GPU models and configuration: RTX 3060 Laptop\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\nThe building and then also using of PytorchRT Tensor is not easy and very problematic and outdated it seems. \r\n\r\nAnd even if you manage to get it build somehow, you do not know, what to expect. Are those 4 dll files enough or did I miss something? \r\n\r\nWhat do I do with these dll files? \r\n\r\n\r\nIs there a simple example on Windows starting python files, that will run on TensorRT Pytorch like a Hello World Tensort TRT like.\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2348", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-26T04:48:04Z", "updated_at": "2023-09-29T03:15:04Z", "user": "jensdraht1999" }, { "repo": "pytorch/audio", "number": 3619, "title": "torchaudio/compliance/kaldi.py FBank _get_window function can not support multiprocessing?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\ni use torchaudio 0.13.0+cu117 to get Fbank, if i use it in one thread is ok, but i want to use multiprocessing, like this\r\n`p = multiprocessing.Pool(1)\r\nxx = p.apply_async(audio_functiong, arg=(audio_in,))\r\np.close()\r\np.join()\r\nemb = xx.get()`\r\nthe code will hold on, and get nothing, i use debug found this function _get_window in kaldi.py can not run, so please help fix it,thanks!\r\n\r\n### Versions\r\npython 3.8", "url": "https://github.com/pytorch/audio/issues/3619", "state": "closed", "labels": [], "created_at": "2023-09-26T02:06:19Z", "updated_at": "2023-10-09T05:39:47Z", "comments": 1, "user": "haha010508" }, { "repo": "pytorch/tutorials", "number": 2569, "title": "\ud83d\udca1 [REQUEST] - <title>", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nThis tutorial \u201cA GENTLE INTRODUCTION TO TORCH.AUTOGRAD\u201d, the gradients of the error w.r.t. 
parameters, Q w.r.t a, I think the result should be a 2x2 matrix but not a 2-d vector, according to the matrix calculus.\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @albanD", "url": "https://github.com/pytorch/tutorials/issues/2569", "state": "closed", "labels": [ "question", "core" ], "created_at": "2023-09-24T11:24:53Z", "updated_at": "2023-10-27T19:23:44Z", "user": "haoyunliang" }, { "repo": "pytorch/vision", "number": 7987, "title": "How to update RegionProposalNetwork loss function in Faster RCNN? ", "body": "Excuse me if this question is stupid, but I can't seem to figure out how to do this\u2026\r\n\r\nI want to update the loss function of the RPN in FasterRCNN. See these lines [here](https://github.com/pytorch/vision/blob/beb4bb706b5e13009cb5d5586505c6d2896d184a/torchvision/models/detection/generalized_rcnn.py#L104-L105), which calls the `compute_loss` function [here](https://github.com/pytorch/vision/blob/main/torchvision/models/detection/rpn.py#L298). I want to modify the `compute_loss` function (the second link).\r\n\r\nI\u2019m trying to update this `compute_loss` function in my code like so:\r\n\r\n```rpn.RegionProposalNetwork.compute_loss = custom_loss```\r\n\r\nHowever, this is not working i.e. it has no effect. Any idea how to update the RPN\u2019s loss function?", "url": "https://github.com/pytorch/vision/issues/7987", "state": "closed", "labels": [], "created_at": "2023-09-24T09:16:17Z", "updated_at": "2023-10-05T14:46:37Z", "user": "darian69" }, { "repo": "pytorch/pytorch", "number": 109958, "title": "How to compile torch 2.0.1 version from source?", "body": "### \ud83d\udc1b Describe the bug\n\nWhile I was using 'git clone --branch v2.0.1 https://github.com/pytorch/pytorch.git & python setup.py develop', and 'Building wheel torch-1.14.0a0+410ce96' version was being built. \n\n### Versions\n\nI also checked the version.txt, it shows '2.0.0a0' which should be the version in v2.0.1 tag branch.\r\n\r\nSo how should I compile torch 2.0.1 version from source? 
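On overriding the RPN loss in vision #7987 (and #8026 above): since `compute_loss` is resolved as a bound method, a more robust approach than patching the class after the fact is to bind the replacement to the concrete `model.rpn` instance. A sketch reusing the structure of the custom loss posted above; the replacement must accept `self` (the `RegionProposalNetwork`) as its first argument:

```python
import types
import torch
import torch.nn.functional as F
from torchvision.models.detection import fasterrcnn_resnet50_fpn

POS_WEIGHT = torch.tensor([50.0])

def custom_compute_loss(self, objectness, pred_bbox_deltas, labels, regression_targets):
    # Mirrors torchvision's RPN loss, adding pos_weight to the objectness term.
    sampled_pos_inds, sampled_neg_inds = self.fg_bg_sampler(labels)
    sampled_pos_inds = torch.where(torch.cat(sampled_pos_inds, dim=0))[0]
    sampled_neg_inds = torch.where(torch.cat(sampled_neg_inds, dim=0))[0]
    sampled_inds = torch.cat([sampled_pos_inds, sampled_neg_inds], dim=0)

    objectness = objectness.flatten()
    labels = torch.cat(labels, dim=0)
    regression_targets = torch.cat(regression_targets, dim=0)

    box_loss = F.smooth_l1_loss(
        pred_bbox_deltas[sampled_pos_inds],
        regression_targets[sampled_pos_inds],
        beta=1 / 9,
        reduction="sum",
    ) / sampled_inds.numel()

    objectness_loss = F.binary_cross_entropy_with_logits(
        objectness[sampled_inds],
        labels[sampled_inds],
        pos_weight=POS_WEIGHT.to(objectness.device),
    )
    return objectness_loss, box_loss

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
# Bind to the concrete RPN instance so the replacement is what actually runs.
model.rpn.compute_loss = types.MethodType(custom_compute_loss, model.rpn)
```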
Thanks!", "url": "https://github.com/pytorch/pytorch/issues/109958", "state": "open", "labels": [ "oncall: releng", "triaged" ], "created_at": "2023-09-24T00:53:04Z", "updated_at": "2023-09-25T11:01:11Z", "user": "tonylin52" }, { "repo": "pytorch/TensorRT", "number": 2340, "title": "\u2753 [Question] Why import torch_tensorrt set log level to info automatically?", "body": "## \u2753 Question\r\n\r\nThe default log level of python is warning.\r\nWhy import torch_tensorrt set log level to info automatically?\r\nHow could I set log level back to warning?\r\n\r\n```\r\nimport logging\r\nimport torch_tensorrt\r\n\r\nlogging.info(\"INFO\")\r\nlogging.warning(\"WARNING\")\r\nlogging.error(\"ERROR\")\r\n```\r\n\r\nstderr outputs:\r\n```\r\nINFO:root:INFO\r\nWARNING:root:WARNING\r\nERROR:root:ERROR\r\n```\r\n\r\nwhat I want:\r\n```\r\nWARNING:root:WARNING\r\nERROR:root:ERROR\r\n```\r\n\r\n## What you have already tried\r\n\r\nBelow statements doesn't work\r\n\r\n```\r\ntorch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Warning)\r\n# or\r\nlogging.basicConfig(level=logging.WARNING)\r\n```\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\nDocker image from: nvcr.io/nvidia/pytorch:23.05-py3\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2340", "state": "open", "labels": [ "question", "No Activity" ], "created_at": "2023-09-23T13:51:10Z", "updated_at": "2024-01-01T00:02:44Z", "user": "KindRoach" }, { "repo": "pytorch/pytorch", "number": 109880, "title": "[FSDP ]How to convert sharded_state_dict files into full_state_dict offline without distributed process", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nCurrently, if I use FSDP with 128 gpus and save checkpoints with sharded_state_dict to avoid gathering the full_state_dict on rank0 for saving, there is no way to obtain the full_state_dict ckpt offline. \r\n\r\nThe only way to obtain full_state_dict is to launch the exact 128GPU distributed process with FSDP to load that sharded_state_dict model, then switch to full_state_dict config and save the ckpt to files, which is originally problem we wanted to avoid.\r\n\r\nI cannot read the sharded_state_dict file (with `torch.load()`) individually either, except if I launch a 128gpu distributed process to read it. The file contain `ShardedTensor` which requires the same world_size=128 to load.\r\n\r\nI would like to have an offline script to read each sharded file and write iterative to a pytorch_model_0.bin, pytorch_model_1.bin, pytorch_model_2.bin...\r\n\r\nAnd then we can load the model with `AutoModelForCausalLM.from_pretrained(...)` by loading each `.bin`\r\n\r\nThanks!\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin", "url": "https://github.com/pytorch/pytorch/issues/109880", "state": "closed", "labels": [ "oncall: distributed", "triaged", "module: fsdp" ], "created_at": "2023-09-22T13:44:11Z", "updated_at": "2024-05-16T01:16:12Z", "user": "nxphi47" }, { "repo": "pytorch/tutorials", "number": 2566, "title": "[BUG] - Per sample gradients using function transforms not working for RNN", "body": "### Add Link\n\nHello!\r\nI'm working on a optimization algorithm that requires computing the per sample gradients. 
Assuming the batch size is $N$ and the number of model parameters is $M$, I want to calculate $\\partial \\log p(\\mathbf{x}^{(i)};\\theta)/\\partial \\theta_j$, which is an $N \\times M$ matrix. I found the [[PER-SAMPLE-GRADIENTS](https://pytorch.org/tutorials/intermediate/per_sample_grads.html)](https://pytorch.org/tutorials/intermediate/per_sample_grads.html) tutorial and began my own experiments. As a proof of concept, I defined a generative model with a tractable likelihood, such as MADE (Masked Autoencoder for Distribution Estimation), PixelCNN, RNN, etc., and sepcified the `log_prob` and `sample` methods. I utilized the function transforms methods mentioned in the tutorial, but currently, it only works for MADE (I believed it would work for NADE and PixelCNN too, since these models need only one forward pass to calculate the log likelihood of $\\mathbf{x}$. For RNN however, both sampling and inference require $N$ forward pass). \r\nBelow, I've provided my code snippets, and I'm interested in figuring out why it's not working for RNN. Making it work for RNN would significantly reduce the number of parameters for my research purpose. \r\nThank you!\n\n### Describe the bug\n\n```python\r\nimport math\r\n\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\n\r\ntorch.manual_seed(0)\r\n\r\n\r\nclass MADE(nn.Module):\r\n '''A simple one-layer MADE (Masked Autoencoder for Distribution Estimation)'''\r\n\r\n def __init__(self, n=10, device='cpu', *args, **kwargs):\r\n super().__init__()\r\n self.n = n\r\n self.device = device\r\n\r\n self.weight = nn.Parameter(torch.randn(self.n, self.n) / math.sqrt(self.n))\r\n self.bias = nn.Parameter(torch.zeros(self.n))\r\n mask = torch.tril(torch.ones(self.n, self.n), diagonal=-1)\r\n self.register_buffer('mask', mask)\r\n\r\n def pred_logits(self, x):\r\n return F.linear(x, self.mask * self.weight, self.bias)\r\n\r\n def forward(self, x):\r\n logits = self.pred_logits(x)\r\n log_probs = - F.binary_cross_entropy_with_logits(logits, x, reduction='none')\r\n return log_probs.sum(-1)\r\n\r\n @torch.no_grad()\r\n def sample(self, batch_size):\r\n x = torch.zeros(batch_size, self.n, dtype=torch.float, device=self.device)\r\n for i in range(self.n):\r\n logits = self.pred_logits(x)[:, i]\r\n x[:, i] = torch.bernoulli(torch.sigmoid(logits))\r\n return x\r\n\r\n\r\nclass GRUModel(nn.Module):\r\n '''GRU for density estimation'''\r\n\r\n def __init__(self, n=10, input_size=2, hidden_size=8, device='cpu'):\r\n super().__init__()\r\n self.n = n\r\n self.input_size = input_size # input_size=2 when x is binary\r\n self.hidden_size = hidden_size\r\n self.device = device\r\n self.gru_cell = nn.GRUCell(self.input_size, self.hidden_size)\r\n self.fc_layer = nn.Linear(self.hidden_size, 1)\r\n\r\n def pred_logits(self, x, h=None):\r\n x = torch.stack([x, 1 - x], dim=1) # 1 -> (1, 0), 0 -> (0, 1), (batch_size, 2)\r\n h_next = self.gru_cell(x, h) # h_{i+1}\r\n logits = self.fc_layer(h_next).squeeze(1)\r\n return h_next, logits\r\n\r\n def forward(self, x):\r\n log_prob_list = []\r\n x = torch.cat([torch.zeros(x.shape[0], 1, dtype=torch.float, device=self.device), x], dim=1) # cat x_0\r\n h = torch.zeros(x.shape[0], self.hidden_size, dtype=torch.float, device=self.device) # h_0\r\n for i in range(self.n):\r\n h, logits = self.pred_logits(x[:, i], h)\r\n log_prob = - F.binary_cross_entropy_with_logits(logits, x[:, i + 1], reduction='none')\r\n log_prob_list.append(log_prob)\r\n return torch.stack(log_prob_list, dim=1).sum(dim=1)\r\n\r\n 
@torch.no_grad()\r\n def sample(self, batch_size):\r\n x = torch.zeros(batch_size, self.n + 1, dtype=torch.float, device=self.device)\r\n for i in range(self.n):\r\n h, logits = self.pred_logits(x[:, i], h=None if i == 0 else h)\r\n x[:, i + 1] = torch.bernoulli(torch.sigmoid(logits))\r\n return x[:, 1:]\r\n\r\n\r\nif __name__ == '__main__':\r\n model = MADE()\r\n # model = GRUModel()\r\n\r\n # Sample from the generative model\r\n samples = model.sample(128)\r\n\r\n # Then I use the function transforms methods mentioned in the tutorial\r\n # to calculate the per sample mean\r\n from torch.func import functional_call, grad, vmap\r\n params = {k: v.detach() for k, v in model.named_parameters()}\r\n\r\n def loss_fn(log_probs):\r\n return log_probs.mean(0)\r\n\r\n def compute_loss(params, sample):\r\n batch = sample.unsqueeze(0)\r\n log_prob = functional_call(model, (params,), (batch,))\r\n loss = loss_fn(log_prob)\r\n return loss\r\n\r\n ft_compute_grad = grad(compute_loss)\r\n ft_compute_sample_grad = vmap(ft_compute_grad, in_dims=(None, 0))\r\n ft_per_sample_grads = ft_compute_sample_grad(params, samples)\r\n\r\n print(ft_pe", "url": "https://github.com/pytorch/tutorials/issues/2566", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-22T02:15:18Z", "updated_at": "2023-10-26T16:03:36Z", "user": "bnuliujing" }, { "repo": "pytorch/TensorRT", "number": 2335, "title": "\u2753 [Question] Bert lost a lot of accuracy when using fp16", "body": "## \u2753 Question\r\n\r\nBERT Text Classification model run in fp16 gets huge different result compared to fp32\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.13\r\n - CPU Architecture:\r\n - OS (e.g., Linux): REHL8\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.8.10\r\n - CUDA version:11.7\r\n - GPU models and configuration: Tesla T4\r\n - Any other relevant information:\r\n\r\nTorch-TensorRT Version: 1.3\r\n\r\n## Additional context\r\nModel converted from TouchScript to TensorRT\r\n\r\n```\r\nenabled_precisions= {torch.half} # run with 16-bit precision \r\ntrt_model = torch_tensorrt.compile(model, inputs=inputs, enabled_precisions=enabled_precisions,\r\n truncate_long_and_double=True, require_full_compilation=False\r\n )\r\n```\r\n\r\nThe logs\r\n``` shell\r\nWARNING: [Torch-TensorRT] - For input input_ids.1, found user specified input dtype as Long, however when inspecting the graph, the input type expected was inferred to be Float\r\nThe compiler is going to use the user setting Long\r\nThis conflict may cause an error at runtime due to partial compilation being enabled and therefore\r\ncompatibility with PyTorch's data type convention is required.\r\nIf you do indeed see errors at runtime either:\r\n- Remove the dtype spec for input_ids.1\r\n- Disable partial compilation by setting require_full_compilation to True\r\nWARNING: [Torch-TensorRT] - For input token_type_ids.1, found user specified input dtype as Long, however when inspecting the graph, the input type expected was inferred to be Float\r\nThe compiler is going to use the user setting Long\r\nThis conflict may cause an error at runtime due to partial compilation being enabled and 
therefore\r\ncompatibility with PyTorch's data type convention is required.\r\nIf you do indeed see errors at runtime either:\r\n- Remove the dtype spec for token_type_ids.1\r\n- Disable partial compilation by setting require_full_compilation to True\r\nWARNING: [Torch-TensorRT] - For input attention_mask.1, found user specified input dtype as Long, however when inspecting the graph, the input type expected was inferred to be Double\r\nThe compiler is going to use the user setting Long\r\nThis conflict may cause an error at runtime due to partial compilation being enabled and therefore\r\ncompatibility with PyTorch's data type convention is required.\r\nIf you do indeed see errors at runtime either:\r\n- Remove the dtype spec for attention_mask.1\r\n- Disable partial compilation by setting require_full_compilation to True\r\nWARNING: [Torch-TensorRT] - Data types for input tensors have been modified by inserting aten::to operations which cast INT64 inputs to INT32. To disable this, please recompile using INT32 inputs\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. 
See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars\r\nWARNING: [Torch-TensorRT] - Truncating weight (constant in the graph) from Float64 to Float32\r\nWARNING: [Torch-TensorRT] - There may be undefined behavior using dynamic shape and aten::size without setting allow_shape_tensors\r\nWARNING: [Torch-TensorRT] - Truncating weight (constant in the graph) from Int64 to Int32\r\nWARNING: [Torch-TensorRT] - There may be undefined behavior using dynamic shape and aten::size without setting allow_shape_tensors\r\nWARNING: [Torch-TensorRT] - There may be undefined behavior using dyn", "url": "https://github.com/pytorch/TensorRT/issues/2335", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-09-21T07:50:12Z", "updated_at": "2024-05-07T06:37:23Z", "user": "HenryYuen128" }, { "repo": "pytorch/text", "number": 2205, "title": "Declaring _MapStyleDataset inside function makes it unpicklable", "body": "## \ud83d\udc1b Bug\r\n\r\n**Describe the bug** \r\nWhen trying to use a Dataset that was converted to map-style using `data.functional.to_map_style_dataset`, I encountered the following error message: \r\n> ...\r\n> File \"/usr/lib/python3.8/multiprocessing/reduction.py\", line 60, in dump\r\n> ForkingPickler(file, protocol).dump(obj)\r\n> AttributeError: Can't pickle local object 'to_map_style_dataset.<locals>._MapStyleDataset'\r\n\r\nAfter some research, I found the list of what is picklable [here](https://docs.python.org/3/library/pickle.html#what-can-be-pickled-and-unpickled) and found that for a class to be picklable, it has to be from the top level of a module\r\n\r\nThis isn't the case for `_MapStyleDataset` as it is declared within the `to_map_style_dataset` function\r\n\r\nThe fix seems simple enough (declare `_MapStyleDataset` outside the function) so I would like to know if there was anything making it undesireable ? If not, I'll create a PR for it but I would like some opinions on it\r\n", "url": "https://github.com/pytorch/text/issues/2205", "state": "open", "labels": [], "created_at": "2023-09-20T12:27:34Z", "updated_at": "2023-09-20T12:27:34Z", "comments": 0, "user": "AnthoJack" }, { "repo": "pytorch/TensorRT", "number": 2327, "title": "\u2753 [Question] dynamc engines & interpolation align_corners=True", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\nI used the latest docker with tag 23.08-py3. When converting model doing interpolation with align_corners=True and dynamic input, I got error as below.\r\n```\r\nRuntimeError: [Error thrown at core/conversion/converters/impl/interpolate.cpp:412] Expected !(align_corners && ctx->input_is_dynamic) to be true but got false \r\nTorch-TensorRT currently does not support the compilation of dynamc engines from code using PyTorch [bi/tri]linear interpolation via scale factor and align_corners=True \r\n```\r\n\r\nAnd I found this check did exist in code with tag v1.4.0, but not in main branch. Will I need to clone the latest code and recompile torch-tensorrt to escape frome this error and will it work? Or any other simple way ? \r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\nnvcr.io/nvidia/pytorch:23.08-py3\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2327", "state": "open", "labels": [ "question", "component: converters" ], "created_at": "2023-09-20T07:25:34Z", "updated_at": "2023-11-30T10:57:37Z", "user": "ArtemisZGL" }, { "repo": "pytorch/tutorials", "number": 2563, "title": "Multiple GPU example limited to one GPU", "body": "https://github.com/pytorch/tutorials/blob/646c8b6368e4f43acc808e0ddddc569153d6a30f/beginner_source/blitz/data_parallel_tutorial.py#L60\r\n\r\nIsn't this line limiting the example to **one** GPU no matter how many GPUs are available?\n\ncc @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/2563", "state": "closed", "labels": [ "question", "easy", "docathon-h2-2023" ], "created_at": "2023-09-18T13:13:55Z", "updated_at": "2023-11-06T17:51:57Z", "user": "9cpluss" }, { "repo": "pytorch/xla", "number": 5599, "title": "Stubs or wheels for other OSes/architectures", "body": "## \u2753 Questions and Help\nI'm new to torch/xla. One development pattern which I use, and which I expect to be common, is to write software on one system (eg M-series Mac laptop) which is intended to be run elsewhere. Project docs for torch/xla regarding installation specify downloading a wheel which is Linux x86 specific. \n\nEven if my training and inference will run on Linux x86 systems, efficient and correct development strongly benefits from tools like type checkers, pylint, etc, which can quickly catch errors like incorrect methods or arguments -- but only work if _some_ amenable representation of libraries is available in the development environment.\n\nIn my current attempts to use torch/xla so far, merely following public docs has let me exercise distributed xla training in target environments, but my local branch, being unable to install the library, cannot do basic static analysis checks and certainly cannot run unit tests on modules which import xla code.\n\nAt a bare minimum the project could at least produce documentation recommending how developers on other platforms can develop against the library even if they cannot run its full range of behaviors, without having to do all development in a container.\n", "url": "https://github.com/pytorch/xla/issues/5599", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-17T19:03:47Z", "updated_at": "2025-04-29T13:22:51Z", "user": "abeppu" }, { "repo": "pytorch/torchx", "number": 766, "title": "Is this repository no longer maintained?", "body": "## \u2753 Questions and Help\r\n\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nBefore submitting, please ensure you have gone through our\r\n[documentation](https://pytorch.org/torchx).\r\n\r\n\r\n### Question\r\nTorch elastic redirects to this repository but it doesn't seem very active, is there a slack/ discord channel? I want to run DDP on kubernetes, is there another way i am not aware of? If torchx is best way, i'd like to contribute! 
Any pointers where i could start?\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/766", "state": "closed", "labels": [], "created_at": "2023-09-15T10:37:43Z", "updated_at": "2023-09-15T22:03:01Z", "comments": 4, "user": "ccharest93" }, { "repo": "pytorch/TensorRT", "number": 2320, "title": "\u2753 [Question] How to use C++ bindings for torch tensorrt with CMake?", "body": "## \u2753 Question\r\n\r\nI would like to know how to use the examples provided [here](https://github.com/pytorch/TensorRT/tree/v1.4.0/examples/torchtrt_runtime_example) with CMake. The instructions seem to indicate only how to use it with a makefile. CMake is not able to find `torchtrt`, exactly as described in #1207, but unfortunately that issue has been closed without actually resolving it.\r\n\r\nI get the following error:\r\n```\r\nCMake Error at CMakeLists.txt:6 (find_package):\r\n By not providing \"Findtorchtrt.cmake\" in CMAKE_MODULE_PATH this project has\r\n asked CMake to find a package configuration file provided by \"torchtrt\",\r\n but CMake did not find one.\r\n\r\n Could not find a package configuration file provided by \"torchtrt\" with any\r\n of the following names:\r\n\r\n torchtrtConfig.cmake\r\n torchtrt-config.cmake\r\n\r\n Add the installation prefix of \"torchtrt\" to CMAKE_PREFIX_PATH or set\r\n \"torchtrt_DIR\" to a directory containing one of the above files. If\r\n \"torchtrt\" provides a separate development package or SDK, be sure it has\r\n been installed.\r\n```\r\n\r\n## What you have already tried\r\n\r\nI noticed that there is a `python3.8/dist-packages/torch/share/cmake/Torch/TorchConfig.cmake`, but there are no cmake files at all in my torch_tensorrt installation, which otherwise works perfectly fine:\r\n```\r\nroot@jetson:/opt/inference/TensorRT# find /usr/local/lib/python3.8/dist-packages -name *.cmake | grep Torch\r\n/usr/local/lib/python3.8/dist-packages/torch/share/cmake/Torch/TorchConfigVersion.cmake\r\n/usr/local/lib/python3.8/dist-packages/torch/share/cmake/Torch/TorchConfig.cmake\r\nroot@jetson:/opt/inference/TensorRT# find /usr/local/lib/python3.8/dist-packages -name *.cmake | grep tensorrt\r\nroot@jetson:/opt/inference/TensorRT# \r\n```\r\n\r\n I noticed that `torchtrtConfig.cmake` is [mentioned in the CMakeLists.txt](https://github.com/pytorch/TensorRT/blob/v1.4.0/CMakeLists.txt#L38), but it doesn't exist anywhere in my installation. 
Am I supposed to install Torch TensorRT with CMake in order to use the C++ API in a CMake project?\r\n \r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.0\r\n - CPU Architecture: aarch64\r\n - OS (e.g., Linux): L4T\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): L4T docker container\r\n - Build command you used (if compiling from source): Building from source as per instructions via bazel\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.8\r\n - CUDA version: \r\n - GPU models and configuration: Jetson Orin NX 16GB\r\n - Any other relevant information:\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2320", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-09-14T18:42:13Z", "updated_at": "2023-12-28T22:10:34Z", "user": "janblumenkamp" }, { "repo": "pytorch/TensorRT", "number": 2319, "title": "\u2753 [Question] How do I load the torch tensorRT model on multiple gpus", "body": "## \u2753 Question\r\n\r\nIn [TorchServe](https://github.com/pytorch/serve), we have this concept of workers. In a multi-GPU node, we can assign each GPU to a worker.\r\n\r\nI am noticing that tensorRT model is getting loaded on GPU 0 even though we specify the correct GPU ID \r\n for each worker.```torch.jit.load(model_pt_path, map_location=self.device)``` \r\n \r\n How do we load a tensorRT model in a a device id which is not 0 ?\r\n\r\n## What you have already tried\r\n\r\nI have tried loading a torchscript model, Here, it loads on all 4 GPUs\r\n\r\nUsing ```torch.jit.load(model_pt_path, map_location=self.device)``` to load the same model on each of the 4 GPUs\r\n\r\n```\r\n2023-09-14T18:32:19,333 [INFO ] W-9000-resnet-18_1.0-stdout MODEL_LOG - cuda:1\r\n2023-09-14T18:32:19,333 [INFO ] W-9000-resnet-18_1.0-stdout MODEL_LOG - !!!!!!!!!!!!!!!!!!!\r\n2023-09-14T18:32:19,355 [INFO ] W-9003-resnet-18_1.0-stdout MODEL_LOG - Torch TensorRT enabled\r\n2023-09-14T18:32:19,356 [INFO ] W-9003-resnet-18_1.0-stdout MODEL_LOG - cuda:0\r\n2023-09-14T18:32:19,356 [INFO ] W-9003-resnet-18_1.0-stdout MODEL_LOG - !!!!!!!!!!!!!!!!!!!\r\n2023-09-14T18:32:19,357 [INFO ] W-9002-resnet-18_1.0-stdout MODEL_LOG - Torch TensorRT enabled\r\n2023-09-14T18:32:19,357 [INFO ] W-9002-resnet-18_1.0-stdout MODEL_LOG - cuda:3\r\n2023-09-14T18:32:19,357 [INFO ] W-9002-resnet-18_1.0-stdout MODEL_LOG - !!!!!!!!!!!!!!!!!!!\r\n2023-09-14T18:32:19,359 [INFO ] W-9001-resnet-18_1.0-stdout MODEL_LOG - Torch TensorRT enabled\r\n2023-09-14T18:32:19,359 [INFO ] W-9001-resnet-18_1.0-stdout MODEL_LOG - cuda:2\r\n2023-09-14T18:32:19,359 [INFO ] W-9001-resnet-18_1.0-stdout MODEL_LOG - !!!!!!!!!!!!!!!!!!!\r\n```\r\n\r\n<img width=\"843\" alt=\"Screenshot 2023-09-14 at 11 39 36 AM\" src=\"https://github.com/pytorch/TensorRT/assets/16617092/c5f9c16b-1866-4c80-b105-9fca3219a78d\">\r\n\r\n### Have a simpler repro\r\n\r\n```\r\nimport torch\r\nimport torch_tensorrt\r\nmodel = torch.jit.load(\"trt_model_fp16.pt\",\"cuda:1\")\r\n```\r\n<img width=\"839\" alt=\"Screenshot 2023-09-14 at 1 28 20 PM\" src=\"https://github.com/pytorch/TensorRT/assets/16617092/f5be8d91-491f-4efd-ad09-3e22118cc56a\">\r\n\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):3.9\r\n - CPU Architecture: \r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): 
pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives: pip\r\n - Python version: 3.9\r\n - CUDA version: 11.7\r\n - GPU models and configuration: T4\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2319", "state": "closed", "labels": [ "question", "component: runtime", "bug: triaged [verified]" ], "created_at": "2023-09-14T18:41:36Z", "updated_at": "2023-09-27T19:55:28Z", "user": "agunapal" }, { "repo": "pytorch/xla", "number": 5569, "title": "Questions about the return value of lazyTensor pytorch xla subgraph", "body": "Using lazyTensor, pytorch xla will generate an xla subgraph, and its subgraph will add relevant conditions to the liveTensor trained in the current step as the return of the subgraph.\r\nmy question is:\r\n1. What does the LiveTensor here mean, and what is the design basis for the value returned by the xla diagram? That is, what can be returned as an xla diagram. The return here refers to the ROOT node in xla.\r\n2. What is the concept of xla image return here? Is it the return from the XLA device to the HOST device? Or what does it mean?\r\nBelow I have given a section on using xm.mark_step() to trigger a compile and run code in the training process.\r\n\r\nstd::shared_ptr<XLAGraphExecutor::Async>\r\nXLAGraphExecutor::SyncTensorsGraphInternal(\r\n std::vector<XLATensorPtr>* tensors, absl::Span<const std::string> devices,\r\n const SyncTensorsConfig& config, bool warm_up_cache_only) {\r\n tensorflow::profiler::TraceMe activity(\r\n \"SyncTensorsGraphInternal\", tensorflow::profiler::TraceMeLevel::kInfo);\r\n SyncTensorCollection coll = CollectSyncTensors(*tensors, config);\r\n if (coll.indices.empty()) {\r\n /* Enure previous execution is complete before exiting this\r\n * function */\r\n TensorCollectionBarrier(&coll);\r\n return nullptr;\r\n }\r\n DebugUtil::SaveTensorsGraphInfo(\"ScheduleSyncTensorsGraph\", *tensors,\r\n &coll.indices);\r\n std::vector<torch::lazy::Value> ir_values;\r\n std::vector<torch::lazy::BackendDataPtr> tensor_data_vec;\r\n ExtractIRAndPrepareXlaData_(tensors, coll.config, coll.indices, ir_values,\r\n tensor_data_vec);\r\n PostOrderData po_data = RunPostOrder(ir_values, &coll);\r\n coll.hash = torch::lazy::HashCombine(\r\n coll.hash, torch::lazy::Hash(po_data.parameter_sequence));\r\n TF_VLOG(4) << \"Parameter sequence graph hash \"\r\n << torch::lazy::HashToString(coll.hash);\r\n std::shared_ptr<Async> async =\r\n TryRunCachedSync(tensors, &coll, &po_data, tensor_data_vec);\r\n if (async != nullptr) {\r\n return async;\r\n }\r\n CompilationResult compile_result =\r\n Compile(*tensors, devices, coll, &po_data, ir_values);\r\n TORCH_LAZY_VALUE_METRIC(\"TensorsGraphSize\", compile_result.emitted_nodes);\r\n TF_VLOG(5) << \"TensorsGraphSize=\" << compile_result.emitted_nodes;\r\n\r\n auto cached_computation = std::make_shared<CachedComputation>(\r\n std::move(compile_result.computation), compile_result.is_sharded);\r\n GetComputationCache()->Add(coll.hash, cached_computation);\r\n\r\n if (warm_up_cache_only) {\r\n return nullptr;\r\n } else {\r\n return ScheduleSyncTensorsGraph(\r\n tensors, &coll, std::move(compile_result.parameters_data),\r\n compile_result.device.toString(), std::move(cached_computation),\r\n tensor_data_vec);\r\n }\r\n}\r\n", "url": "https://github.com/pytorch/xla/issues/5569", "state": "open", "labels": [ "question", "runtime" 
], "created_at": "2023-09-14T13:08:17Z", "updated_at": "2025-04-29T13:46:42Z", "user": "ckfgihub" }, { "repo": "pytorch/TensorRT", "number": 2318, "title": "\u2753 Why cant't I compile Torch-TensorRT 1.0.0? ", "body": "## \u2753Why cant't I compile Torch-TensorRT 1.0.0? \r\n\r\n## What you have already tried\r\n\r\nI've been trying to compile versions 1.0.0 and 1.1.0 of Torch-TensorRT in my Jetson Xavier NX 16GB, I had followed the official guides of installation mentioned in this [issue](https://github.com/pytorch/TensorRT/discussions/1077).\r\n\r\n## Environment\r\nI have the next environment:\r\n- Jetpack 4.6\r\n- Python 3.6.9\r\n- Pytorch 1.10.0\r\n- Torchvision 0.11.1\r\n- CUDA 10.2\r\n- CUDNN 8.2.1\r\n- TensorRT 8.0.1.6\r\n- GPU models and configuration: Jetson Xavier NX with JetPack 4.6\r\n\r\n\r\n## The error\r\nFinally, when I launch python3 py/setup.py install --use-cxx11-abi the error happend\r\n\r\n`running install\r\nusing CXX11 ABI build\r\nJetpack version: 4.6\r\nbuilding libtorchtrt\r\nINFO: Analyzed target //:libtorchtrt (0 packages loaded, 0 targets configured).\r\nINFO: Found 1 target...\r\nERROR: /home/iovi/SW/TensorRT/core/lowering/BUILD:10:11: Compiling core/lowering/register_trt_placeholder_ops.cpp failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 61 arguments skipped)\r\n\r\nUse --sandbox_debug to see verbose messages from the sandbox\r\ncore/lowering/register_trt_placeholder_ops.cpp:16:34: error: invalid user-defined conversion from 'torch::jit::<lambda(torch::jit::Stack&)>' to 'torch::jit::OperationCreator {aka std::function<void(std::vector<c10::IValue>*)> (*)(const torch::jit::Node*)}' [-fpermissive]\r\n aliasAnalysisFromSchema()),\r\n ^\r\ncore/lowering/register_trt_placeholder_ops.cpp:15:24: note: candidate is: torch::jit::<lambda(torch::jit::Stack&)>::operator void (*)(torch::jit::Stack&)() const <near match>\r\n [](Stack& stack) { /*noop*/ },\r\n ^\r\ncore/lowering/register_trt_placeholder_ops.cpp:15:24: note: no known conversion from 'void (*)(torch::jit::Stack&) {aka void (*)(std::vector<c10::IValue>&)}' to 'torch::jit::OperationCreator {aka std::function<void(std::vector<c10::IValue>*)> (*)(const torch::jit::Node*)}'\r\nIn file included from external/libtorch/include/torch/csrc/jit/runtime/custom_operator.h:5:0,\r\n from core/lowering/register_trt_placeholder_ops.cpp:1:\r\nexternal/libtorch/include/torch/csrc/jit/runtime/operator.h:98:3: note: initializing argument 2 of 'torch::jit::Operator::Operator(std::__cxx11::string, torch::jit::OperationCreator, c10::AliasAnalysisKind)'\r\n Operator(\r\n ^~~~~~~~\r\nTarget //:libtorchtrt failed to build\r\nUse --verbose_failures to see the command lines of failed build steps.\r\nINFO: Elapsed time: 225,037s, Critical Path: 54,01s\r\nINFO: 46 processes: 13 internal, 33 processwrapper-sandbox.\r\nFAILED: Build did NOT complete successfully\r\n`\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2318", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-09-14T09:30:20Z", "updated_at": "2024-01-01T00:02:46Z", "user": "VictorIOVI" }, { "repo": "pytorch/kineto", "number": 804, "title": " Will PyTorch Profiler TensorBoard Plugin continue to evolve? 
It seems that it cannot support PyTorch 2.0", "body": "", "url": "https://github.com/pytorch/kineto/issues/804", "state": "closed", "labels": [ "question", "plugin" ], "created_at": "2023-09-14T02:21:09Z", "updated_at": "2023-12-28T16:44:59Z", "user": "BadTrasher" }, { "repo": "pytorch/rl", "number": 1522, "title": "[BUG] It's not clear how to call an advantage module with batched envs and pixel observations.", "body": "## Describe the bug\r\n\r\nWhen you get a tensordict rollout of shape `(N_envs, N_steps, C, H, W)` out of a collector and you want to apply an advantage module that starts with `conv2d` layers:\r\n1. directly applying the module will crash with the `conv2d` layer complaining about the input size e.g. `RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [2, 128, 4, 84, 84]`\r\n2. flattening the tensordict first with `rollout.reshape(-1)` so that it has shape `[B, C, H, W]` and then calling the advantage module will run but issue the warning `torchrl/objectives/value/advantages.py:99: UserWarning: Got a tensordict without a time-marked dimension, assuming time is along the last dimension.` leaving you unsure of wether the advantages were computed correctly.\r\n\r\nSo it's not clear how one should proceed.\r\n\r\n- [x] I have checked that there is no similar issue in the repo (**required**)\r\n- [x] I have read the [documentation](https://github.com/pytorch/rl/tree/main/docs/) (**required**)\r\n- [x] I have provided a minimal working example to reproduce the bug (**required**)\r\n", "url": "https://github.com/pytorch/rl/issues/1522", "state": "open", "labels": [ "bug" ], "created_at": "2023-09-13T21:04:29Z", "updated_at": "2024-03-27T16:37:49Z", "user": "skandermoalla" }, { "repo": "pytorch/examples", "number": 1190, "title": "main.py: TensorBoard in case of Multi-processing Distributed Data Parallel Training", "body": "Dear developers\r\nIt is so great that you've provided a examples/imagenet/main.py script which looks amazing. \r\nI'm looking how to setup a _Multi-processing Distributed Data Parallel Training_, for instance 8 GPUs on a single node but I can also use multi-nodes multi-gpus. I must say that I have never had so great infrastructure that I'm discovering at the same times. \r\n\r\nNow, I was used to view the evolution of the Accuracies (Top 1, Top 5, train/val) during the training (rather common isn't it), but looking at the code (main.py) I do not see the \r\n```python \r\nfrom torch.utils.tensorboard import SummaryWriter\r\n...\r\n writer = SummaryWriter(logs_dir)\r\n...\r\n```\r\nand similar code used in the train/validate routines like\r\n```python\r\n if writer is not None:\r\n suffix = \"train\"\r\n writer.add_scalar(f'top5_{suffix}', top5.avg, global_step=epoch)\r\n writer.add_scalar(f'top1_{suffix}', top1.avg, global_step=epoch)\r\n```\r\nNow, in the multi-gpus processing I would imagine that one has to deal with \"which gpu among the whole sets of gpus should/must do the job\". But I am pretty sure that many experts are doing such things routinely. \r\n\r\nIs there a foreseen new version of main.py that would integrate such TensorBoard features in case of Multi-processing Distributed Data Parallel Training? 
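For the TensorBoard question above (examples#1190): since every spawned worker runs the same script in multi-processing DDP, the usual pattern is to create the `SummaryWriter` on global rank 0 only. A minimal sketch, assuming it is dropped into the worker after `dist.init_process_group`:

```python
import torch.distributed as dist
from torch.utils.tensorboard import SummaryWriter

def make_writer(log_dir):
    # only global rank 0 writes event files; every other worker gets None and skips logging
    if not dist.is_available() or not dist.is_initialized() or dist.get_rank() == 0:
        return SummaryWriter(log_dir)
    return None

# in train()/validate(), guarded exactly as in the snippet quoted above:
#   if writer is not None:
#       writer.add_scalar("top1_train", top1.avg, global_step=epoch)
```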
In the mean while may be someone can help to setup such modifications.\r\n\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/1190", "state": "open", "labels": [], "created_at": "2023-09-13T11:19:44Z", "updated_at": "2023-09-13T11:19:44Z", "comments": 0, "user": "jecampagne" }, { "repo": "pytorch/vision", "number": 7947, "title": "Why image shape different between Image.open and torchvision.io.read_image", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nEXIF image:\r\n![1](https://github.com/pytorch/vision/assets/28288770/65fe56e2-6724-4996-8fa6-04e51e110b90)\r\n\r\nI have a JPEG image above with EXIF information and I tried to load this image into pytorch for augmentation.\r\n\r\n1. try with opencv\r\n```\r\nimport cv2\r\nimg = cv2.imread(\"1.jpg\")\r\nprint(img.shape[0], img.shape[1])\r\n```\r\n\r\nthe result is\r\n```\r\n201 151\r\n```\r\n\r\n2. try with pillow\r\n```\r\nfrom PIL import Image\r\nimg3 = Image.open(\"1.jpg\")\r\nprint(img3.size)\r\n```\r\n\r\nthe result is\r\n```\r\n(201, 151)\r\n```\r\n\r\n3. try with torchvison.io\r\n```\r\nimport torchvision as tv\r\nimg4 = tv.io.read_image(\"1.jpg\")\r\nprint(img4.shape)\r\n```\r\n\r\nthe result is\r\n```\r\ntorch.Size([3, 151, 201])\r\n```\r\n\r\nThe result of torchvison.io is in [image_channels, image_height, image_width] format, which means the image is not rotated. However, opencv and pillow will deal with the EXIF information and rotate the image to the correct orientation.\r\n\r\nI wonder if torchvision.io.read_image misses the EXIF information in jpeg or not?\r\n\r\n### Versions\r\n\r\nName: torchvision\r\nVersion: 0.9.1\r\nSummary: image and video datasets and models for torch deep learning\r\nHome-page: https://github.com/pytorch/vision\r\n\r\nName: Pillow\r\nVersion: 9.4.0\r\nSummary: Python Imaging Library (Fork)\r\nHome-page: https://python-pillow.org", "url": "https://github.com/pytorch/vision/issues/7947", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-08T10:17:45Z", "updated_at": "2023-09-25T09:40:25Z", "user": "kero-ly" }, { "repo": "pytorch/tutorials", "number": 2554, "title": "Autograd - M factor missing in Matrix Vector Multiplication?", "body": "In [this](https://github.com/pytorch/tutorials/blob/main/beginner_source/blitz/autograd_tutorial.py) tutorial, once the vector v is multiplied by the Jacobian, shouldn't there be an additional factor of M in the results? \n\ncc @albanD @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/2554", "state": "closed", "labels": [ "question", "core", "medium" ], "created_at": "2023-09-08T08:51:18Z", "updated_at": "2023-11-02T19:30:44Z", "user": "sudz123" }, { "repo": "pytorch/serve", "number": 2569, "title": "Failure in loading Deepspeed large model example", "body": "### \ud83d\udc1b Describe the bug\n\nI am trying to follow the example to perform inference with the OPT-30B model according to this example: https://github.com/pytorch/serve/tree/master/examples/large_models/deepspeed\r\n\r\nHowever, as specified in the [model-config.yaml](https://github.com/pytorch/serve/blob/master/examples/large_models/deepspeed/opt/model-config.yaml) file, a `checkpoints.json` file is required. This file gets used here: https://github.com/pytorch/serve/blob/master/ts/handler_utils/distributed/deepspeed.py#L40\r\n\r\nAs a result, the model fails to load. 
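On the DeepSpeed example failure above (serve#2569): the missing `checkpoints.json` is typically just a small manifest listing the downloaded weight shards, and the meta-tensor error is what DeepSpeed raises when it cannot materialize weights from a checkpoint, so providing the manifest is the first thing to try. A sketch of how such a file is often generated in DeepSpeed inference scripts — the directory path is hypothetical and the "type" value is an assumption, not something the TorchServe example confirms:

```python
import glob
import json
import os

model_dir = "opt/model"  # hypothetical: the directory holding the downloaded *.bin shards
shards = sorted(os.path.basename(p) for p in glob.glob(os.path.join(model_dir, "*.bin")))

manifest = {
    "type": "ds_model",   # assumption: the tag DeepSpeed expects for a generic HF checkpoint
    "checkpoints": shards,
    "version": 1.0,
}
with open(os.path.join(model_dir, "checkpoints.json"), "w") as f:
    json.dump(manifest, f)
```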
The error logs are attached below.\n\n### Error logs\n\n```\r\n2023-09-05T23:22:14,652 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - Failed to load model opt, exception Cannot copy out of meta tensor; no data!\r\n2023-09-05T23:22:14,652 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - Traceback (most recent call last):\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py\", line 131, in load_model\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - service = model_loader.load(\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/ts/model_loader.py\", line 135, in load\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - initialize_fn(service.context)\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/home/model-server/tmp/models/c1130e4b01c345b9be913ef8414518cb/custom_handler.py\", line 55, in initialize\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - ds_engine = get_ds_engine(self.model, ctx)\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/ts/handler_utils/distributed/deepspeed.py\", line 35, in get_ds_engine\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - ds_engine = deepspeed.init_inference(\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/deepspeed/__init__.py\", line 342, in init_inference\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - engine = InferenceEngine(model, config=ds_inference_config)\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/deepspeed/inference/engine.py\", line 154, in __init__\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - self.module.to(device)\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 2053, in to\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - return super().to(*args, **kwargs)\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1145, in to\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - return self._apply(convert)\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 797, in _apply\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - module._apply(fn)\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 797, in _apply\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - module._apply(fn)\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 797, in _apply\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - module._apply(fn)\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 820, in _apply\r\n2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - param_applied = 
fn(param)\r\n2023-09-05T23:22:14,654 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1143, in convert\r\n2023-09-05T23:22:14,654 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\r\n2023-09-05T23:22:14,654 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - NotImplementedError: Cannot copy out of meta tensor; no data!\r\n```\n\n### Installation instructions\n\nDocker image URI: `763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:2.0.1-gpu-py310-cu118-ubuntu20.04-ec2`\r\nEC2 instance: `g5dn.24xlarge`\n\n### Model Packaing\n\nCreated model artifact by following this example:\r\nhttps://github.com/pytorch/serve/tree/master/examples/large_models/deepspeed\n\n### config.properties\n\n_No response_\n\n### Versions\n\n```\r\n---------------------------", "url": "https://github.com/pytorch/serve/issues/2569", "state": "open", "labels": [ "question", "triaged", "example" ], "created_at": "2023-09-05T23:35:46Z", "updated_at": "2023-09-11T17:35:14Z", "user": "sachanub" }, { "repo": "pytorch/TensorRT", "number": 2284, "title": "\u2753 [Question] Timeline for TensorRT 9.0 support", "body": "## \u2753 Question\r\n\r\nWhat is the timeline to support TensorRT 9.0 ?\r\n\r\n## What you have already tried\r\n\r\nUsing Nvidia's 9.0 TensorRT [release](https://github.com/NVIDIA/TensorRT/tree/release/9.0) is incompatible with the latest version of torch-tensorrt (which requires TensorRT 8.6).\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2284", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-04T07:26:02Z", "updated_at": "2023-09-06T16:56:33Z", "user": "tdeboissiere" }, { "repo": "pytorch/serve", "number": 2564, "title": "[Docs] More information regarding text generation & LLM inference", "body": "### \ud83d\udcda The doc issue\n\nI am new to TorchServe and was looking for some features that I need to be able to consider using TorchServe for LLM text generation.\r\n\r\nToday, there are a couple inference serving solutions out there, including [text-generation-inference](https://github.com/huggingface/text-generation-inference) and [vLLM](https://vllm.ai). It would be great if the documentation can mention how TorchServe compares with these at the moment. For instance,\r\n\r\n- Does TorchServe support continuous batching?\r\n- Does TorchServe support paged attention?\r\n- Does TorchServe support streaming generated text through its inference API?\r\n- What are some LLMs that TorchServe is known to work well with, e.g. Llama2, Falcon? 
Apart from the Hugging Face integration example provided.\n\n### Suggest a potential alternative/fix\n\nA dedicated page for text generation and LLM inference could make sense given that there would be a lot of people interested in this.", "url": "https://github.com/pytorch/serve/issues/2564", "state": "open", "labels": [ "documentation", "question", "llm" ], "created_at": "2023-09-03T17:40:16Z", "updated_at": "2023-09-05T17:45:08Z", "user": "jaywonchung" }, { "repo": "pytorch/xla", "number": 5525, "title": "Query bazel deps of XLAC.so?", "body": "## \u2753 Questions and Help\r\nI'm trying to see bazel dependencies of `//:_XLAC.so` target by running the following command (as described in [bazel guide](https://bazel.build/query/guide))\r\n```\r\nbazel query \"deps(//:_XLAC.so)\"\r\n```\r\nIt shows me the following errors:\r\n```bash\r\nERROR: An error occurred during the fetch of repository 'mkl_dnn_acl_compatible'\r\nERROR: no such package '@mkl_dnn_acl_compatible//': Unable to load package for @tsl//tensorflow/third_party/mkl_dnn:mkldnn_acl.BUILD: BUILD file not found in directory 'tensorflow/third_party/mkl_dnn' of external repository @tsl.\r\nERROR: Evaluation of query \"deps(//:_XLAC.so)\" failed\r\n```\r\nFull output:\r\n```bash\r\nroot@dd45b88976fe:~/workspace/pytorch/xla# bazel query \"deps(//:_XLAC.so)\"\r\nStarting local Bazel server and connecting to it...\r\nDEBUG: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/xla/third_party/repo.bzl:132:14: \r\nWarning: skipping import of repository 'tf_runtime' because it already exists.\r\nDEBUG: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/xla/third_party/repo.bzl:132:14: \r\nWarning: skipping import of repository 'llvm-raw' because it already exists.\r\nDEBUG: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl:132:14: \r\nWarning: skipping import of repository 'pybind11_bazel' because it already exists.\r\nDEBUG: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl:132:14: \r\nWarning: skipping import of repository 'pybind11' because it already exists.\r\nINFO: Repository mkl_dnn_acl_compatible instantiated at:\r\n /root/workspace/pytorch/xla/WORKSPACE:76:15: in <toplevel>\r\n /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/xla/workspace2.bzl:90:19: in workspace\r\n /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/workspace2.bzl:636:21: in workspace\r\n /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/workspace2.bzl:165:20: in _tf_repositories\r\n /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl:136:21: in tf_http_archive\r\nRepository rule _tf_http_archive defined at:\r\n /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl:89:35: in <toplevel>\r\nERROR: An error occurred during the fetch of repository 'mkl_dnn_acl_compatible':\r\n Traceback (most recent call last):\r\n File \"/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl\", line 55, column 31, in _tf_http_archive_impl\r\n link_dict = _get_link_dict(ctx, ctx.attr.link_files, ctx.attr.build_file)\r\n File \"/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl\", line 47, column 54, in _get_link_dict\r\n link_dict[ctx.path(\"BUILD.bazel\")] = ctx.path(Label(build_file))\r\nError 
in path: Unable to load package for @tsl//tensorflow/third_party/mkl_dnn:mkldnn_acl.BUILD: BUILD file not found in directory 'tensorflow/third_party/mkl_dnn' of external repository @tsl. Add a BUILD file to a directory to mark it as a package.\r\nERROR: /root/workspace/pytorch/xla/WORKSPACE:76:15: fetching _tf_http_archive rule //external:mkl_dnn_acl_compatible: Traceback (most recent call last):\r\n File \"/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl\", line 55, column 31, in _tf_http_archive_impl\r\n link_dict = _get_link_dict(ctx, ctx.attr.link_files, ctx.attr.build_file)\r\n File \"/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl\", line 47, column 54, in _get_link_dict\r\n link_dict[ctx.path(\"BUILD.bazel\")] = ctx.path(Label(build_file))\r\nError in path: Unable to load package for @tsl//tensorflow/third_party/mkl_dnn:mkldnn_acl.BUILD: BUILD file not found in directory 'tensorflow/third_party/mkl_dnn' of external repository @tsl. Add a BUILD file to a directory to mark it as a package.\r\nERROR: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/xla/xla/service/cpu/BUILD:1008:11: no such package '@mkl_dnn_acl_compatible//': Unable to load package for @tsl//tensorflow/third_party/mkl_dnn:mkldnn_acl.BUILD: BUILD file not found in directory 'tensorflow/third_party/mkl_dnn' of external repository @tsl. Add a BUILD file to a directory to mark it as a package. and referenced by '@xla//xla/service/cpu:runtime_matmul_mkl'\r\nERROR: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/xla/xla/service/cpu/BUILD:944:11: no such package '@mkl_dnn_acl_compatible//': Unable to load package for @tsl//tensorflow/third_party/mkl_dnn:mkldnn_acl.BUILD: BUILD file not found in directory 'tensorflow/third_party/mkl_dnn' of external repository @tsl. Add a BUILD file to a directory to mark it as a package. and referenced by '@xla//xla/service/cpu:runtime_con", "url": "https://github.com/pytorch/xla/issues/5525", "state": "open", "labels": [ "question", "build" ], "created_at": "2023-08-30T21:27:58Z", "updated_at": "2025-04-30T12:34:57Z", "user": "apivovarov" }, { "repo": "pytorch/xla", "number": 5510, "title": "Kaggle Pytorch/XLA notebooks. 
How to import torch_xla?", "body": "I tried to use Kaggle [Pytorch/XLA notebooks](https://www.kaggle.com/code/aivovarov/pytorch-xla-2-0-on-kaggle/edit) with \"Pin to original env\" and \"Always use the latest env\" (in notebook options).\r\n- pin to original env (2023-04-04_ uses python 3.7 , pytorch 1.13.0-cpu \r\n- the latest env uses python 3.10, pytorch 2.0.0-cpu\r\n\r\nBoth envs do not have torch_xla package .\r\n\r\nI tried to download [torch_xla-nightly wheel](https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp310-cp310-linux_x86_64.whl) but got error `wget: unable to resolve host address \u2018storage.googleapis.com\u2019`\r\n\r\nDo we have any proven solution on how to use Pytorch/XLA with Kaggle?", "url": "https://github.com/pytorch/xla/issues/5510", "state": "open", "labels": [ "question" ], "created_at": "2023-08-28T20:15:19Z", "updated_at": "2025-04-29T13:52:29Z", "user": "apivovarov" }, { "repo": "pytorch/rl", "number": 1473, "title": "[Feature Request] How to create a compound actor?", "body": "## Motivation\r\n\r\nI created an environment with a compound action space: a list of continuous values (robot joint angles) and a boolean value (suction gripper on or off).\r\n\r\nIn [the PPO tutorial](https://pytorch.org/rl/tutorials/coding_ppo.html) the policy_module is a ProbabilisticActor which takes \"loc\" and \"scale\" inputs. I want to make an actor which is a combination of this (for the joint angles) and something else that uses a Bernoulli distribution to generate boolean action values for the gripper.\r\n\r\nIt kind of looks like this may already be supported by using a TensorDictSequential, but it's not clear how that would work.\r\n\r\n## Solution\r\n\r\nI would like to see an example in the docs of a compound action space like this.\r\n\r\n## Alternatives\r\n\r\nMaybe there's another way where one actor is created for each type of action space? 
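On the compound-actor question above (rl#1473): one way to sketch it is two `ProbabilisticActor` heads — a `TanhNormal` for the joint angles and a `Bernoulli` for the gripper — chained in a `TensorDictSequential`, which can then be handed to a collector like any other TensorDict module. Key names and sizes below are made up, and combining the two log-probabilities for a PPO loss still needs care; this only shows the wiring.

```python
import torch
from torch import nn
from tensordict.nn import TensorDictModule, TensorDictSequential
from tensordict.nn.distributions import NormalParamExtractor
from torchrl.modules import ProbabilisticActor, TanhNormal

n_obs, n_joints = 16, 7  # hypothetical sizes

# head 1: continuous joint angles sampled from TanhNormal(loc, scale)
joint_net = TensorDictModule(
    nn.Sequential(nn.Linear(n_obs, 2 * n_joints), NormalParamExtractor()),
    in_keys=["observation"],
    out_keys=["loc", "scale"],
)
joint_actor = ProbabilisticActor(
    joint_net,
    in_keys=["loc", "scale"],
    out_keys=["joint_action"],
    distribution_class=TanhNormal,
)

# head 2: boolean suction-gripper flag sampled from a Bernoulli over one logit
grip_net = TensorDictModule(nn.Linear(n_obs, 1), in_keys=["observation"], out_keys=["logits"])
grip_actor = ProbabilisticActor(
    grip_net,
    in_keys=["logits"],
    out_keys=["gripper_action"],
    distribution_class=torch.distributions.Bernoulli,
)

# a single policy module that writes both action entries into the tensordict
policy = TensorDictSequential(joint_actor, grip_actor)
```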
Then how to combine them for use with a DataCollector?\r\n\r\n## Additional context\r\n\r\nThe environment is a robot arm manipulation scenario using box2d.\r\n\r\n## Checklist\r\n\r\n- [x] I have checked that there is no similar issue in the repo (**required**)\r\n", "url": "https://github.com/pytorch/rl/issues/1473", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-08-27T15:49:38Z", "updated_at": "2023-11-03T17:54:54Z", "user": "hersh" }, { "repo": "pytorch/pytorch", "number": 107580, "title": "Doc is unclear on how to install pytorch with Cuda via pip", "body": "### \ud83d\udcda The doc issue\n\n![image](https://github.com/pytorch/pytorch/assets/35759490/17b506aa-ff3a-40cf-baac-63bb66c486ac)\r\n\r\nI've been looking on how to install torch with CUDA via pip for almost one day and the doc is absolutely not helping on how to do so.\n\n### Suggest a potential alternative/fix\n\nExplain clearly how to install pytorch using pip with CUDA or not.\r\n\r\n```\r\nTo install pytorch with CUDA using pip, you first need to install CUDA on your system if it is compatible with it and then install pytorch with the following command in your shell:\r\n\r\n`pip install ...........`\r\n```", "url": "https://github.com/pytorch/pytorch/issues/107580", "state": "open", "labels": [ "triaged", "topic: docs" ], "created_at": "2023-08-21T09:57:56Z", "updated_at": "2023-08-22T08:42:08Z", "user": "MidKnightXI" }, { "repo": "pytorch/torchx", "number": 753, "title": "Feature: Support for Multiple NodeSelectors and Tolerations in TorchX for Kubernetes", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nI\u2019m currently working with TorchX in conjunction with Volcano scheduling for my training jobs on an Amazon EKS cluster. I\u2019ve also integrated Karpenter autoscaler for effective node scaling. Additionally, I\u2019m using managed node groups with labeled nodes that have specific taints applied.\r\n\r\nOur internal data and machine learning teams have the requirement to specify NodeSelectors and Tolerations to target jobs on particular nodes or managed node groups. While referring to the documentation provided here: [TorchX Specifications](https://pytorch.org/torchx/main/specs.html), I observed that capabilities={\u201c[node.kubernetes.io/instance-type](http://node.kubernetes.io/instance-type)\u201d: \u201c\u201d} are used as NodeSelectors when the job is created through Volcano. However, this approach doesn\u2019t seem to allow for sending a list of labels, which our use case demands.\r\n\r\nFurthermore, I\u2019m also interested in incorporating tolerations into these jobs to ensure proper scheduling and execution in our environment. If any of you have experience in implementing NodeSelectors and Tolerations in TorchX within an Amazon EKS setup, I would highly appreciate your insights and advice.\r\n\r\nIf there\u2019s no previous experience with this scenario, I\u2019m considering raising a feature request to address these needs. Your guidance and input would be greatly valued.\r\n\r\n**_NOTE TO MAINTAINERS_**\r\n_I'm eager to contribute by creating a pull request for this exciting new feature, even though I'm still getting familiar with the repository and the whole PyTorch environment. Since I'm new to the process, I'd really appreciate some guidance on how to set up and run TorchX locally, as well as how to carry out unit and integration tests. 
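For the pip installation question above (pytorch#107580): the selector on pytorch.org resolves to an index-URL form of the command — for example `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118` for CUDA 11.8 wheels, or `--index-url https://download.pytorch.org/whl/cpu` for CPU-only wheels — and on Linux a plain `pip install torch` from PyPI already ships a CUDA-enabled build, provided a compatible NVIDIA driver is present (the wheels bundle the CUDA runtime libraries).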
This knowledge will be invaluable in making sure my contributions align well with the existing code and testing procedures. Thanks a lot for your support!_\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\nIn our current setup, we are utilizing TorchX, Volcano scheduling, and Karpenter autoscaling to manage training jobs on our Amazon EKS cluster. We have specific requirements to target jobs on nodes with certain labels and taints due to the nature of our workloads. However, the existing TorchX functionality only allows for specifying a single NodeSelector label, which is limiting for our use case. Additionally, we need the ability to incorporate tolerations into our job specifications for effective scheduling.\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\nI propose enhancing the TorchX functionality to allow users to provide multiple `NodeSelector` labels as a `Dict[str, str]` and `tolerations` as a list of `V1Toleration` in the pod definition. This will enable users to precisely target nodes and managed node groups based on a wider range of labels and handle scheduling constraints effectively.\r\n\r\nThe changes will involve modifying the `role_to_pod` method to accept two new parameters:\r\n\r\n**node_selectors: Dict[str, str]**: This parameter will allow users to provide multiple node selector labels for their jobs. Modifying the existing one to accept more than one.\r\n**tolerations: List[V1Toleration]**: This parameter will allow users to provide tolerations to handle node taints effectively.\r\n\r\nThese parameters will be included in the pod specification when creating a new pod using TorchX and Volcano.\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\nAn alternative approach would be to manually modify the generated pod specification after it's created using TorchX. However, this approach would require additional steps and could lead to inconsistencies between the job definition and the actual pod specification.\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->", "url": "https://github.com/meta-pytorch/torchx/issues/753", "state": "open", "labels": [], "created_at": "2023-08-15T21:55:30Z", "updated_at": "2023-08-15T22:02:33Z", "comments": 0, "user": "vara-bonthu" }, { "repo": "pytorch/pytorch", "number": 107238, "title": "How to export GNN with dict inputs correctly?", "body": "## Problem description\r\n\r\nI am having an issue when exporting of PyTorch GNN model to ONNX. 
Here is my export code:\r\n\r\n```\r\ntorch.onnx.export(\r\n model=model,\r\n args=(x_dict, edge_index_dict, edge_attr_dict, {}),\r\n f=save_path,\r\n verbose=False,\r\n input_names=[\"x_dict\", \"edge_index_dict\", \"edge_attr_dict\"],\r\n output_names=[\"out\"],\r\n)\r\n```\r\n\r\n`x_dict, edge_index_dict, edge_attr_dict` are of type `Dict[str, torch.Tensor]` (hetero_data is formed [like this](https://github.com/emnigma/VSharp/blob/408ba9800362285f420b3d9b51116f4b2cbb3391/VSharp.ML.AIAgent/ml/data_loader_compact.py#L30))\r\n\r\nIn addition to 3 inputs in my [model](https://github.com/emnigma/VSharp/blob/408ba9800362285f420b3d9b51116f4b2cbb3391/VSharp.ML.AIAgent/ml/models.py#L654)'s [forward](https://github.com/emnigma/VSharp/blob/408ba9800362285f420b3d9b51116f4b2cbb3391/VSharp.ML.AIAgent/ml/models.py#L659) , torch.onnx.export generates 4 additional inputs and when I try to use exported model with onnxruntime I get ValueError:\r\n\r\n`ValueError: Required inputs (['edge_index', 'edge_index.5', 'edge_index.3', 'onnx::Reshape_9']) are missing from input feed (['x_dict', 'edge_index_dict', 'edge_attr_dict']).`\r\n\r\nI am getting a feeling I am doing something wrong, how can i export my model correctly?\r\n\r\n## Reproduction\r\n\r\nhere is a minimal reproduction script and dummy_data for it:\r\n\r\nscript: https://gist.github.com/emnigma/0b98cfbf3fff47be417c64489d83a2a2\r\n\r\ndata: https://gist.github.com/emnigma/e3ea559fe4db0adde886708f402473bb\r\n\r\n## JIT trace output\r\n\r\nI also tried to trace model compilation, here is the jit trace results with strict=False .code output:\r\n\r\n```\r\ndef forward(self,\r\n argument_1: Dict[str, Tensor],\r\n argument_2: Dict[str, Tensor],\r\n argument_3: Dict[str, Tensor]) -> Dict[str, Tensor]:\r\n state_encoder = self.state_encoder\r\n x = argument_1[\"game_vertex\"]\r\n x0 = argument_1[\"state_vertex\"]\r\n edge_index = argument_2[\"game_vertex to game_vertex\"]\r\n edge_index0 = argument_2[\"game_vertex in state_vertex\"]\r\n edge_index1 = argument_2[\"game_vertex history state_vertex\"]\r\n edge_index2 = argument_2[\"state_vertex parent_of state_vertex\"]\r\n edge_weight = argument_3[\"game_vertex history state_vertex\"]\r\n _0 = (state_encoder).forward(x, edge_index, x0, edge_index2, edge_index1, edge_weight, edge_index0, )\r\n _1 = {\"state_vertex\": _0, \"game_vertex\": x}\r\n return _1\r\n```\r\n\r\n## System Info\r\nPyTorch version: 2.0.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: macOS 13.4.1 (arm64)\r\nGCC version: Could not collect\r\nClang version: 14.0.3 (clang-1403.0.22.14.1)\r\nCMake version: version 3.26.4\r\nLibc version: N/A\r\n\r\nPython version: 3.11.4 (main, Jul 5 2023, 08:40:20) [Clang 14.0.6 ] (64-bit runtime)\r\nPython platform: macOS-13.4.1-arm64-arm-64bit\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nApple M1\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy-extensions==1.0.0\r\n[pip3] numpy==1.25.0\r\n[pip3] torch==2.0.1\r\n[pip3] torch-geometric==2.3.1\r\n[pip3] torch-scatter==2.1.1\r\n[pip3] torch-sparse==0.6.17\r\n[pip3] torchaudio==2.0.2\r\n[pip3] torchvision==0.15.2a0\r\n[conda] numpy 1.25.0 py311he598dae_0 \r\n[conda] numpy-base 1.25.0 py311hfbfe69c_0 \r\n[conda] pytorch 2.0.1 py3.11_0 
pytorch\r\n[conda] torch-geometric 2.3.1 pypi_0 pypi\r\n[conda] torch-scatter 2.1.1 pypi_0 pypi\r\n[conda] torch-sparse 0.6.17 pypi_0 pypi\r\n[conda] torchaudio 2.0.2 py311_cpu pytorch\r\n[conda] torchvision 0.15.2 cpu_py311he74fb5d_0\r\n", "url": "https://github.com/pytorch/pytorch/issues/107238", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2023-08-15T15:43:12Z", "updated_at": "2024-03-27T21:47:06Z", "user": "emnigma" }, { "repo": "pytorch/pytorch", "number": 107225, "title": "Is pytorch version 1.10.2 still maintained? What is the official EOM(End of Maintenance) date?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nIs pytorch version 1.10.2 still maintained? What is the official EOM(End of Maintenance) date?\r\n\r\n### Versions\r\n\r\npytorch v1.10.2\n\ncc @seemethere @malfet @svekars @carljparker", "url": "https://github.com/pytorch/pytorch/issues/107225", "state": "closed", "labels": [ "module: binaries", "module: docs", "oncall: releng", "triaged" ], "created_at": "2023-08-15T12:36:25Z", "updated_at": "2023-08-15T18:49:57Z", "user": "reBiocoder" }, { "repo": "pytorch/benchmark", "number": 1825, "title": "how to run torchbenchmark in dynamo mode", "body": "Hi,\r\n 1. I want to test benchmark in dynamo mode, how can I run test_bench.py script?\r\n 2. When I add code:\r\n `self.model = torch.compile(self.model)`\r\n in BERT_pytorch __init__.py, then run:\r\n`pytest test_bench.py -k \"test_train[BERT_pytorch-cuda-eager]\" --ignore_machine_config --benchmark-autosave`, it raises below errors:\r\n![image](https://github.com/pytorch/benchmark/assets/68674291/e92121ae-aa89-4558-bff3-17ee3ec10213)\r\nhow can I fix it? Thank you for you help~ @ezyang @orionr @romovpa @kostmo @zdevito ", "url": "https://github.com/pytorch/benchmark/issues/1825", "state": "closed", "labels": [], "created_at": "2023-08-15T12:12:20Z", "updated_at": "2023-08-16T05:46:53Z", "user": "Godlovecui" }, { "repo": "pytorch/pytorch", "number": 107146, "title": "\u3010libtorch c++ \u3011 how to make libtorch model distribute train and infer \uff0cplease show me one tutorial or example", "body": "### \ud83d\udc1b Describe the bug\n\nHI\uff0c for libtorch I found distribute package ,but I don't know how to declare distribute param to make the libtorch model train and infer on distribute machines .need our team help, thanks,pleaase show me one example distribute train model code. thanks\n\n### Versions\n\nlibtorch 2.0", "url": "https://github.com/pytorch/pytorch/issues/107146", "state": "closed", "labels": [], "created_at": "2023-08-14T15:54:56Z", "updated_at": "2023-08-14T18:34:09Z", "user": "mullerhai" }, { "repo": "pytorch/kineto", "number": 799, "title": "pytorch.profiler cannot profile aten:mm on GPU", "body": "I use pytorch.profiler to profile a program of matmul on GPU, it seems profiler does not record aten.mm correctly. 
There is stats in GPU kernel View,\r\n\r\n<img width=\"2118\" alt=\"image\" src=\"https://github.com/pytorch/kineto/assets/11534916/dc126d48-1517-4af2-9200-8fd37aeaa6a4\">\r\n\r\n but no GPU kernel stats in Trace view.\r\n\r\n<img width=\"1903\" alt=\"image\" src=\"https://github.com/pytorch/kineto/assets/11534916/d4d747af-9b88-47b3-88eb-a3e3a9d00ef1\">\r\n\r\nSample code:\r\n```python\r\nimport torch\r\na = torch.rand([1, 1024, 2048], device='cuda')\r\nb = torch.rand([2048, 2048], device='cuda')\r\nwith torch.profiler.profile(\r\n activities=[\r\n torch.profiler.ProfilerActivity.CPU,\r\n torch.profiler.ProfilerActivity.CUDA,\r\n ],\r\n on_trace_ready=torch.profiler.tensorboard_trace_handler(\"./mm-profile\")\r\n):\r\n torch.matmul(a, b)\r\n```", "url": "https://github.com/pytorch/kineto/issues/799", "state": "closed", "labels": [ "question", "plugin" ], "created_at": "2023-08-10T08:13:15Z", "updated_at": "2024-04-23T15:50:55Z", "user": "scse-l" }, { "repo": "pytorch/xla", "number": 5424, "title": "How can I use torch_xla fsdp with AMP on GPU?", "body": "## \u2753 Questions and Help\r\nHello, how can I ues torch_xla fsdp + AMP on GPU? Does the torch_xla fsdp support AMP\uff1f\r\n\r\nI've read the the following code carefully. Can I forcibly fuse them together ?\r\n\r\ntest/test_train_mp_imagenet_fsdp.py\r\ntest/test_train_mp_imagenet_amp.py\r\n\r\nThanks.", "url": "https://github.com/pytorch/xla/issues/5424", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2023-08-09T08:21:40Z", "updated_at": "2025-04-29T13:58:58Z", "user": "Pluto1944" }, { "repo": "pytorch/android-demo-app", "number": 331, "title": "What is IValue type? It is a Tensor?", "body": "What is the diff of IValue and Tensor?\r\nCould you please share some references?\r\n\r\nThx.", "url": "https://github.com/pytorch/android-demo-app/issues/331", "state": "open", "labels": [], "created_at": "2023-08-08T00:30:45Z", "updated_at": "2023-08-08T00:30:45Z", "user": "NeighborhoodCoding" }, { "repo": "pytorch/text", "number": 2197, "title": "Does DataLoader(shuffle=True) really shuffle DBpedia dataset correctly?", "body": "According to [the docs][1], DBpedia dataset has 14 classes (labels) and 40000 texts for each class. Hence, if I create batches using `DataLoader(shuffle=True)` as follows:\r\n\r\n```python\r\nimport torchtext.datasets as d\r\nfrom torch.utils.data.dataloader import DataLoader\r\n\r\ntrain = DataLoader(\r\n d.DBpedia(split=\"train\", root=\".cache\"),\r\n batch_size=10000,\r\n shuffle=True,\r\n)\r\n```\r\n\r\nthe labels should be uniformly distributed in each batch. But in practice, it seems that only a few labels are in each batch.\r\n\r\n```python\r\nfor labels, texts in train:\r\n print(len(set(labels.tolist())))\r\n```\r\nThe output of the above code is:\r\n```\r\n1\r\n1\r\n1\r\n2\r\n2\r\n2\r\n2\r\n3\r\n3\r\n3\r\n3\r\n4\r\n4\r\n3\r\n3\r\n.\r\n.\r\n.\r\n```\r\n\r\nHow can I fix this? 
Or is my implementation wrong?\r\n\r\nP.S.\r\nInteractive code is available on [GoogleColab][2]\r\n\r\n [1]: https://pytorch.org/text/stable/datasets.html#dbpedia\r\n [2]: https://colab.research.google.com/drive/10524PcR3_spf3fAh37hNbXdLeRVD6Sog?usp=sharing", "url": "https://github.com/pytorch/text/issues/2197", "state": "open", "labels": [], "created_at": "2023-08-04T10:34:52Z", "updated_at": "2023-08-04T10:37:18Z", "comments": 0, "user": "fujidaiti" }, { "repo": "pytorch/text", "number": 2196, "title": "torchtext.datasets - requests.exceptions.ConnectionError", "body": "## \ud83d\udc1b Bug\r\n\r\n**Description of the bug**\r\n\r\nWhen I try to use Multi30k dataset, I get this error:\r\n\r\n```\r\nrequests.exceptions.ConnectionError:\r\nThis exception is thrown by __iter__ of HTTPReaderIterDataPipe(skip_on_error=False, source_datapipe=OnDiskCacheHolderIterDataPipe, timeout=None)\r\n```\r\n\r\n**To Reproduce**\r\n\r\n```\r\nfrom torchtext.datasets import Multi30k\r\n\r\nSRC_LANGUAGE = 'de'\r\nTGT_LANGUAGE = 'en'\r\n\r\ntrain_iter = Multi30k(split='train', language_pair=(SRC_LANGUAGE, TGT_LANGUAGE))\r\n\r\nnext(iter(train_iter))\r\n```\r\n\r\n**Expected behavior**\r\n\r\nReturn a proper iterable where I can iterate over the dataset.\r\n\r\n**Environment**\r\n\r\nPyTorch version: 1.13.1+cpu\r\nIs debug build: False\r\nCUDA used to build PyTorch: Could not collect\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Microsoft Windows 11 Enterprise\r\nGCC version: Could not collect\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: N/A\r\n\r\nPython version: 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 1 2023, 18:18:15) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)\r\nPython platform: Windows-10-10.0.22621-SP0\r\nIs CUDA available: False\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: GPU 0: GeForce GTX 1650\r\nNvidia driver version: 442.23\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture=9\r\nCurrentClockSpeed=2592\r\nDeviceID=CPU0\r\nFamily=198\r\nL2CacheSize=1536\r\nL2CacheSpeed=\r\nManufacturer=GenuineIntel\r\nMaxClockSpeed=2592\r\nName=Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz\r\nProcessorType=3\r\nRevision=\r\n\r\nVersions of relevant libraries:\r\n[pip3] flake8==6.0.0\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.23.5\r\n[pip3] numpydoc==1.5.0\r\n[pip3] torch==1.13.1\r\n[pip3] torchdata==0.5.1\r\n[pip3] torchtext==0.14.1\r\n[conda] Could not collect\r\n\r\n**Additional context**\r\n\r\nI've been running into issues with the Multi30K dataset for some time now. The issue that was occurring before was resolved by installing specific versions and combinations of the relevant torch libraries I specified. However, even this solution doesn't work anymore. Can you please fix what's broken with this cursed dataset?\r\n\r\nThank you.\r\n", "url": "https://github.com/pytorch/text/issues/2196", "state": "open", "labels": [], "created_at": "2023-08-04T09:25:28Z", "updated_at": "2024-01-11T07:53:51Z", "comments": 2, "user": "afurkank" }, { "repo": "pytorch/TensorRT", "number": 2167, "title": "\u2753 [Question] Is a INT8 calibrator specific to a given model or just specific to a dataset?", "body": "## \u2753 Question\r\n\r\nIs a INT8 calibrator specific to a given model or just specific to a dataset?\r\n\r\nINT8 calibrators can be cached to accelerate further usage, which is nice. 
However, it's not clear from the documentation if the cached calibrator can only be used to calibrate the model it was used for TensorRT conversion or any model that uses the same calibration dataset.\r\n\r\nAs a practical example, let say that I'm training and comparing two classification neural networks A and B on the same dataset and with the same data preprocessing. I converted network A for TensorRT using INT8 quantization and saved the calibrator cache file. to disk. Can I use this calibrator to convert model B to TensorRT (which otherwise would have used the same calibration dataset as A)?\r\n\r\nMy intuition is that a calibrator is specific to given dataset **and** network and it cannot be reused for a different network.", "url": "https://github.com/pytorch/TensorRT/issues/2167", "state": "closed", "labels": [ "question" ], "created_at": "2023-08-03T11:38:16Z", "updated_at": "2023-08-15T19:53:12Z", "user": "laclouis5" }, { "repo": "pytorch/torchx", "number": 749, "title": "Passing additional build arguments to Dockerfile.torchx", "body": "## \u2753 Questions and Help\r\n\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nBefore submitting, please ensure you have gone through our\r\n[documentation](https://pytorch.org/torchx).\r\n\r\n\r\n### Question\r\nUse case:\r\nMy team uses torchx to submit the job to remote scheduler such as AWS Batch. While building the docker image, we want to use a private PyPi repository to install the python dependncies.\r\n\r\n\r\nIt seems that Dockerfile doesn't allow passing additional build arguments, besides `Image` and `Workspace` ([reference](https://github.com/pytorch/torchx/blob/966c96f092bc89ad067b0bdb9eed8f7002dbcb46/torchx/workspace/docker_workspace.py#L122-L125)). We need to pass additional build arguments such as pip `index-url` to point to our private PyPi repository during the image build process.\r\n\r\nDoes the torchx team have any recommendations on how to achieve our use case of passing additional build args, while building the docker", "url": "https://github.com/meta-pytorch/torchx/issues/749", "state": "open", "labels": [], "created_at": "2023-08-02T20:05:02Z", "updated_at": "2023-10-04T22:35:48Z", "comments": 4, "user": "anjali-chadha" }, { "repo": "pytorch/examples", "number": 1179, "title": "How to load Transformer model once using FSDP", "body": "## \ud83d\udcda Documentation\r\n@HamidShojanazeri, I'm following your [FSDP example](https://github.com/pytorch/examples/tree/main/distributed/FSDP) and swapped in a bigger model, `google/flan-t5-xxl`, and am a little unclear on what happens when the script starts up. I'm running on a server with 8 V100s so I run the launch command as listed in the README.md file:\r\n`torchrun --nnodes 1 --nproc_per_node 8 T5_training.py`\r\n\r\nNext, I was having trouble downloading the model weights because I think with 8 processes, each one was trying to download the weights and they were removing each others' file locks, so I changed the [`setup_model`](https://github.com/pytorch/examples/blob/741de70c4a20d9c83f811b946c186c4f83abcccb/distributed/FSDP/utils/train_utils.py#L99-L102) function so that only rank 0 downloads the weights and then all other processes will read from the local cache.\r\n\r\nFinally, my big question for you is - as the `setup_model` function is currently written, is it fair to say that we're loading a copy of the model weights for every process running (e.g. in my case, 8 processes)? 
If so, how can we load the model once and broadcast the weights to all other processes? I ask because this will become a blocker at bigger model scales because we'll eventually run out of CPU memory trying to do this.\r\n\r\nHere's my modified `setup_model` function for reference:\r\n```\r\ndef setup_model(model_name, model_max_length=512, cache_dir=None, rank=None):\r\n # TODO: is this loading the model on all processes?\r\n # 1) this seems time consuming, and 2) it seems like it would use way too much memory\r\n # ensure weights are only downloaded by one process\r\n if rank == 0:\r\n model = T5ForConditionalGeneration.from_pretrained(model_name, cache_dir=cache_dir)\r\n # set model_max_length to avoid warnings\r\n tokenizer = T5Tokenizer.from_pretrained(model_name, model_max_length=model_max_length, cache_dir=cache_dir)\r\n dist.barrier()\r\n if rank != 0:\r\n model = T5ForConditionalGeneration.from_pretrained(model_name, cache_dir=cache_dir)\r\n # set model_max_length to avoid warnings\r\n tokenizer = T5Tokenizer.from_pretrained(model_name, model_max_length=model_max_length, cache_dir=cache_dir)\r\n return model, tokenizer\r\n```\r\n\r\nI imagine this all gets easier and more memory efficient once we start saving the model in the formats you've specified in the model_checkpointing directory but we have to get there in the first place.\r\n\r\nI should also note, in case it makes a difference, that I'm setting up the distributed process group (within `T5_training.py`) before calling `setup_model`, whereas you call `setup_model` before setting up the distributed process group in your example. ", "url": "https://github.com/pytorch/examples/issues/1179", "state": "open", "labels": [], "created_at": "2023-08-01T22:01:24Z", "updated_at": "2023-08-01T22:01:24Z", "user": "ToddMorrill" }, { "repo": "pytorch/TensorRT", "number": 2159, "title": "\u2753 [Question] Could torch-tensorrt support mixed-precision inference?", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nHello, in my PyTorch inference, I initially set the entire model to fp16 and provided fp16 inputs. Considering the output will become `NAN` (transformer model) , and then I used `.to()` to switch certain weight layers and inference parameters back to fp32. \r\n\r\nHowever, if I export it to ONNX and convert to TensorRT, I would need to make those settings again in TensorRT, which can be quite complicated.\r\n\r\n I would like to know if the torch_tensorrt export includes these dependence and if it can automatically perform mixed-precision export to TensorRT based on my settings. Thank you!", "url": "https://github.com/pytorch/TensorRT/issues/2159", "state": "closed", "labels": [ "question" ], "created_at": "2023-08-01T10:32:24Z", "updated_at": "2023-08-16T01:34:08Z", "user": "sanbuphy" }, { "repo": "pytorch/cpuinfo", "number": 169, "title": "How to cross-compile arm64 on linux", "body": "", "url": "https://github.com/pytorch/cpuinfo/issues/169", "state": "closed", "labels": [], "created_at": "2023-07-21T03:11:25Z", "updated_at": "2023-07-21T19:09:06Z", "user": "HongxiaoMa" }, { "repo": "pytorch/TensorRT", "number": 2122, "title": "\u2753 Why the speed (time) in PTQ and QAT are different? ", "body": "## \u2753 Why the speed (time) in PTQ and QAT are different? \r\n\r\n\r\nI used your sample notebook.\r\nThe link is https://github.com/pytorch/TensorRT/blob/main/notebooks/qat-ptq-workflow.ipynb.\r\nI also performed this approach on some other models. 
In all cases like your example the PTQ converted model is faster than QAT converted model.\r\n\r\nI think they must have the same speed because their process is the same just some weights are different. Speeds must be the same. Is this for your implementation or this is typical?\r\nCan I make QAT converted model faster like PTQ?", "url": "https://github.com/pytorch/TensorRT/issues/2122", "state": "closed", "labels": [ "question", "No Activity", "component: quantization" ], "created_at": "2023-07-18T13:18:58Z", "updated_at": "2023-11-02T00:02:20Z", "user": "panahikhas" }, { "repo": "pytorch/pytorch.github.io", "number": 1410, "title": "Website front page does not say what PyTorch is", "body": "## \ud83d\udcda Documentation\r\n\r\nI came across PyTorch because I was installing some software and it appeared in the logs, so I decided to look it up and arrived on https://pytorch.org/. Unfortunately this was not enlightening, as the front page of the website does not clarify what PyTorch is. It does list: membership availability notice; links to featured reads, PyTorch 2.0, upcoming events; feature highlights; installation instructions; featured projects; community discussion channel links; but nowhere does it actually say what PyTorch is, which seems to me like quite important information for the front page of a project.\r\n", "url": "https://github.com/pytorch/pytorch.github.io/issues/1410", "state": "closed", "labels": [], "created_at": "2023-07-16T23:21:04Z", "updated_at": "2023-07-21T15:11:52Z", "user": "zopsicle" }, { "repo": "pytorch/TensorRT", "number": 2117, "title": "\u2753 Unable to freeze tensor of type Int64/Float64 into constant layer, try to compile model with truncate_long_and_double enabled? ", "body": "## \u2753 RuntimeError: [Error thrown at core/conversion/converters/converter_util.cpp:251] Unable to freeze tensor of type Int64/Float64 into constant layer, try to compile model with truncate_long_and_double enabled:\r\n1. A pre-trained Torch model like Resnet18 was loaded\r\n2. The model was quantized using `pytorch_quantization.quant_modules.initialize()`\r\n3. The quantized model was calibrated\r\n4. The model was fine-tuned (QAT)\r\n5. I tried to convert the fine-tuned model to TensorRT using\r\n`trt_mod = torch_tensorrt.compile(qat_model,\r\n inputs=[torch_tensorrt.Input([32, 3, 32, 32])],\r\n enabled_precisions={torch.int8})`\r\nbut I encountered the error below:\r\n\r\n`File \"/home/i2027/anaconda3/envs/p/lib/python3.10/site-packages/torch_tensorrt/_compile.py\", line 133, in compile`\r\n` return torch_tensorrt.ts.compile(`\r\n`File \"/home/i2027/anaconda3/envs/p/lib/python3.10/site-packages/torch_tensorrt/ts/_compiler.py\", line 139, in compile`\r\n` compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))`\r\n`RuntimeError: [Error thrown at core/conversion/converters/converter_util.cpp:251] Unable to freeze tensor of type Int64/Float64 into constant layer, try to compile model with truncate_long_and_double enabled`\r\n\r\nI have checked the model parameters and all of them were of type float32. I don't know why TorchTensorRT complains about Int64/Float64! Please note that I have managed to convert a simple CNN to TensorRT using the method described above successfully. However, I failed to convert an existing torchvision model using the steps above. 
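For reference, the error text itself points at the `truncate_long_and_double` switch; a minimal sketch of the same compile call with it turned on (same input shape and precision as above, nothing else changed) would be:\r\n\r\n```python\r\nimport torch\r\nimport torch_tensorrt\r\n\r\n# Same compile call as above, but letting Torch-TensorRT truncate any Int64/Float64\r\n# constants it finds in the TorchScript graph down to Int32/Float32.\r\ntrt_mod = torch_tensorrt.compile(\r\n    qat_model,\r\n    inputs=[torch_tensorrt.Input([32, 3, 32, 32])],\r\n    enabled_precisions={torch.int8},\r\n    truncate_long_and_double=True,\r\n)\r\n```\r\n\r\n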
I will be grateful for any hint.\r\n\r\n## Environment\r\n\r\n - PyTorch Version: 2.0.1+cu118\r\n - CPU Architecture: x86\r\n - OS: Ubuntu 20.04\r\n - How you installed PyTorch: pip\r\n - Python version: 3.10.11\r\n - CUDA version: 12.1\r\n - GPU models and configuration: GeForce GTX 1080 - 12 GB\r\n - All pckages versions:\r\n\r\n- - torch==2.0.1+cu118\r\n- - torch_tensorrt==1.4.0\r\n- - torchvision==0.15.2+cu118\r\n- - pytorch_quantization==2.1.2\r\n- - torchvision==0.15.2+cu118\r\n\r\n## Additional context\r\n\r\nThe code is available at https://github.com/panahikhas/TensorRT-QAT/blob/main/torch-tensorrt-QAT.py to reproduce the results.\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2117", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-15T14:46:23Z", "updated_at": "2023-07-18T13:00:07Z", "user": "panahikhas" }, { "repo": "pytorch/xla", "number": 5307, "title": "About deepspeed support for \"xla\"", "body": "## \u2753 Questions and Help\r\n\r\n[distributed support of deepspeed on xla] Hello, does deepspeed support distributed training for xla? If not, can you provide support in this regard?\r\n", "url": "https://github.com/pytorch/xla/issues/5307", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-14T02:56:46Z", "updated_at": "2025-04-29T14:03:05Z", "user": "zhuziaaa" }, { "repo": "pytorch/TensorRT", "number": 2108, "title": "\u2753 [Question] How can I learn to convert an intermediate format IR to the TensorRT target?", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\nHello, I am also currently working on something similar to PyTorch FX. I would like to convert an intermediate format graph into a target engine (which can be any inference framework, using TensorRT as an example). I wanted to ask how Torch TRT accomplishes this operation. Are there any source code or documentation resources that I can refer to? Thank you very much!\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2108", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-13T04:58:16Z", "updated_at": "2023-08-11T08:08:23Z", "user": "sanbuphy" }, { "repo": "pytorch/pytorch", "number": 105047, "title": "I don't how to bulid pytorch in my cpu", "body": "If you have a question or would like help and support, please ask at our\r\n[forums](https://discuss.pytorch.org/).\r\n\r\nIf you are submitting a feature request, please preface the title with [feature request].\r\nIf you are submitting a bug report, please fill in the following details.\r\n\r\n\r\nmy cpu is ppcle64\r\n\r\n\r\n- PyTorch or Caffe2: i want to bulid pytorch\r\n- How you installed PyTorch (conda, pip, source): pip\r\n- Build command you used (if compiling from source):\r\n- OS: Contens8\r\n- PyTorch version:\r\n- Python version:\r\n- CUDA/cuDNN version:\r\n- GPU models and configuration:\r\n- GCC version (if compiling from source):\r\n- CMake version:\r\n- Versions of any other relevant libraries:\r\n", "url": "https://github.com/pytorch/pytorch/issues/105047", "state": "closed", "labels": [], "created_at": "2023-07-12T08:09:31Z", "updated_at": "2023-07-14T03:15:15Z", "user": "miaowahexiaohuolong" }, { "repo": "pytorch/text", "number": 2190, "title": "Missing documentation for T5 model", "body": "## \ud83d\udcda Documentation\r\n\r\n**Description**\r\n\r\n<!-- A clear and concise description of what content in https://pytorch.org/text/stable/index.html is an issue. -->\r\n\r\nAs per title. 
There is no documentation on T5 model although it exists\r\n\r\nhttps://pytorch.org/text/stable/models.html\r\n", "url": "https://github.com/pytorch/text/issues/2190", "state": "open", "labels": [], "created_at": "2023-07-11T10:40:37Z", "updated_at": "2023-07-11T10:40:37Z", "comments": 0, "user": "gau-nernst" }, { "repo": "pytorch/pytorch", "number": 104764, "title": "How to integrate the new cpp file with Pytorch geometric? ", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI am using neighbour loader function in my code, which uses sample_adj_cpu function to sample neighbours. I am making some changes in this function which is present in the following file.\r\n\r\nFile link:\r\n[[pytorch_sparse](https://github.com/rusty1s/pytorch_sparse/tree/master)/[csrc](https://github.com/rusty1s/pytorch_sparse/tree/master/csrc)/[cpu](https://github.com/rusty1s/pytorch_sparse/tree/master/csrc/cpu)\r\n/sample_cpu.cpp](url)\r\n\r\nHow to integrate these changes in Pytorch geometric?\r\n\r\nAlternatives\r\nNo response\r\n\r\nAdditional context\r\nNo response\r\n\r\ncc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/104764", "state": "closed", "labels": [], "created_at": "2023-07-07T07:48:04Z", "updated_at": "2023-07-07T16:32:01Z", "user": "shivanisankhyan" }, { "repo": "pytorch/TensorRT", "number": 2082, "title": "\u2753 [Question] How to decrease the latency of the inference? ", "body": "## \u2753 Question\r\nHi. I convert pytorch retinaface and arcface model to TensorRT via torch_tensorrt library. Everything is okay but after some iterations inference is freezing and the time for handling the image is badly increased (>10x).\r\nSnippet of inference simulation is here:\r\n\r\n## Environment\r\n\r\nTensorRT Version: 8.4.2\r\nGPU Type: A100\r\nNvidia Driver Version: 465.19.01\r\nCUDA Version: 11.3\r\nCUDNN Version: 8\r\nOperating System + Version: SLES \u201c15-SP2\u201d in host machine\r\nPython Version (if applicable): 3.8\r\nPyTorch Version (if applicable): 1.13.0a0+d321be6\r\nBaremetal or Container (if container which image + tag): [nvcr.io/nvidia/pytorch:22.08-py3](http://nvcr.io/nvidia/pytorch:22.08-py3)\r\n\r\n## Code\r\n```\r\n\r\nimport torch\r\nimport torch_tensorrt\r\nimport time\r\n\r\nDEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'\r\n\r\n\r\nretinaface_model = torch.jit.load('../jit_retinaface_trt.torch-tensorrt') \r\nretinaface_model.eval()\r\nretinaface_model.to(DEVICE)\r\n\r\n\r\narcface_model = torch.jit.load('../arcface_bs1_torch.float32.torch-tensorrt')\r\narcface_model.eval()\r\narcface_model.to(DEVICE)\r\n\r\nretinaface_tensor = torch.rand(1, 3, 360, 640).to(DEVICE)\r\narcface_tensor = torch.rand(1, 3, 112, 112).to(DEVICE)\r\n\r\nfor _ in range(100):\r\n global_start = time.time()\r\n start_time = time.time()\r\n with torch.no_grad():\r\n ret_out = retinaface_model(retinaface_tensor)\r\n torch.cuda.synchronize()\r\n end_time = time.time()\r\n ret_time = end_time - start_time\r\n start_time = time.time()\r\n with torch.no_grad():\r\n arc_out = arcface_model(arcface_tensor)\r\n torch.cuda.synchronize()\r\n end_time = time.time()\r\n arc_time = end_time - start_time\r\n global_end = time.time()\r\n global_time = global_end - global_start\r\n # if global_time > 0.1:\r\n print(f'ret time is : {ret_time}')\r\n print(f'arc time is : {arc_time}')\r\n print(f'global time is : {global_end-global_start}')\r\n 
print('-'*40)\r\n```\r\n\r\n## Outputs\r\nOutputs:\r\nNormally output is like this:\r\nret time is : 0.0009617805480957031\r\narc time is : 0.0019981861114501953\r\nglobal time is : 0.002961874008178711\r\nret time is : 0.0008959770202636719\r\narc time is : 0.0019989013671875\r\nglobal time is : 0.002896547317504883\r\nret time is : 0.0009148120880126953\r\narc time is : 0.0020008087158203125\r\nglobal time is : 0.0029172897338867188\r\nret time is : 0.0008985996246337891\r\narc time is : 0.001995086669921875\r\nglobal time is : 0.002894878387451172\r\nret time is : 0.00446009635925293\r\narc time is : 0.002003192901611328\r\nglobal time is : 0.006464719772338867\r\nret time is : 0.0009562969207763672\r\narc time is : 0.0020017623901367188\r\nglobal time is : 0.0029592514038085938\r\nret time is : 0.0009098052978515625\r\narc time is : 0.002006053924560547\r\nglobal time is : 0.002917051315307617\r\nret time is : 0.0009250640869140625\r\narc time is : 0.001997709274291992\r\nglobal time is : 0.002924203872680664\r\nret time is : 0.0009291172027587891\r\narc time is : 0.001995086669921875\r\nglobal time is : 0.002925395965576172\r\nret time is : 0.0009377002716064453\r\narc time is : 0.0020194053649902344\r\nglobal time is : 0.0029582977294921875\r\nret time is : 0.0009005069732666016\r\narc time is : 0.0019958019256591797\r\nglobal time is : 0.0028977394104003906\r\nret time is : 0.0009152889251708984\r\narc time is : 0.001996755599975586\r\nglobal time is : 0.0029134750366210938\r\nret time is : 0.0009534358978271484\r\narc time is : 0.0019991397857666016\r\nglobal time is : 0.0029540061950683594\r\nret time is : 0.0009467601776123047\r\narc time is : 0.0020117759704589844\r\nglobal time is : 0.002960205078125\r\nret time is : 0.0008974075317382812\r\narc time is : 0.0019989013671875\r\nglobal time is : 0.0028977394104003906\r\nret time is : 0.0009267330169677734\r\narc time is : 0.002001523971557617\r\nglobal time is : 0.0029296875\r\n\r\n\r\nBut after some iterations and time return this:\r\n\r\nret time is : 0.0030410289764404297\r\narc time is : 0.10997724533081055 <-----\r\nglobal time is : 0.11302065849304199\r\nret time is : 0.002657651901245117\r\narc time is : 0.1075441837310791 <-----\r\nglobal time is : 0.11020350456237793\r\nret time is : 0.1104578971862793 <-----\r\narc time is : 0.0020885467529296875\r\nglobal time is : 0.1125497817993164\r\nret time is : 0.11419057846069336 <-----\r\narc time is : 0.0020301342010498047\r\nglobal time is : 0.11622214317321777\r\nret time is : 0.10733747482299805 <-----\r\narc time is : 0.0020294189453125\r\nglobal time is : 0.10936880111694336\r\nret time is : 0.1150820255279541 <-----\r\narc time is : 0.0020606517791748047\r\nglobal time is : 0.11714410781860352\r\n\r\n\r\nI try changing the clock freq to the max of A100(1410MHz) but nothing changes from the default(765MHz).\r\nIn real-time handling after 26-28 iterations this happens.\r\nIt will be great if you support fixing this. 
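As a measurement cross-check, a minimal sketch of one iteration timed with CUDA events instead of `time.time()` (device-side timers; it assumes the same `retinaface_model` / `retinaface_tensor` handles as above) could look like:\r\n\r\n```python\r\n# CUDA events time the GPU work itself, independent of host-timer jitter around synchronize().\r\nstart_evt = torch.cuda.Event(enable_timing=True)\r\nend_evt = torch.cuda.Event(enable_timing=True)\r\n\r\nwith torch.no_grad():\r\n    start_evt.record()\r\n    ret_out = retinaface_model(retinaface_tensor)\r\n    end_evt.record()\r\n\r\ntorch.cuda.synchronize()\r\nprint(f'ret time (CUDA events): {start_evt.elapsed_time(end_evt) / 1000:.6f} s')  # elapsed_time() returns milliseconds\r\n```\r\n\r\nIf the event timings stayed flat while the wall-clock numbers spiked, that would point at the host side rather than the engines themselves.\r\n\r\n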
Thanks in advance!!!", "url": "https://github.com/pytorch/TensorRT/issues/2082", "state": "closed", "labels": [ "question", "No Activity", "component: runtime", "performance" ], "created_at": "2023-07-07T05:51:35Z", "updated_at": "2023-10-16T00:02:22Z", "user": "hvildan" }, { "repo": "pytorch/serve", "number": 2446, "title": "is TS_JOB_QUEUE_SIZE a valid environment variable?", "body": "### \ud83d\udcda The doc issue\r\n\r\n[This page](https://pytorch.org/serve/configuration.html) says environment variables are equivalent to server configuration set in `config.properties`\r\nSetting `TS_JOB_QUEUE_SIZE` as an environment variable has no effect in Docker version 0.8.0\r\n\r\n```\r\nTorchserve version: 0.8.0\r\nTS Home: /home/venv/lib/python3.9/site-packages\r\nCurrent directory: /app\r\nTemp directory: /home/model-server/tmp\r\nMetrics config path: /app/config/metrics.yaml\r\nNumber of GPUs: 0\r\nNumber of CPUs: 4\r\nMax heap size: 7952 M\r\nPython executable: /home/venv/bin/python\r\nConfig file: /app/config/config.properties\r\nInference address: http://0.0.0.0:8080\r\nManagement address: http://0.0.0.0:8081\r\nMetrics address: http://0.0.0.0:8082\r\nModel Store: /app/model_store\r\nInitial Models: ALL\r\nLog dir: /app/logs\r\nMetrics dir: /app/logs\r\nNetty threads: 0\r\nNetty client threads: 0\r\nDefault workers per model: 1\r\nBlacklist Regex: N/A\r\nMaximum Response Size: 6553500\r\nMaximum Request Size: 6553500\r\nLimit Maximum Image Pixels: true\r\nPrefer direct buffer: false\r\nAllowed Urls: [file://.*|http(s)?://.*]\r\nCustom python dependency for model allowed: false\r\nEnable metrics API: true\r\nMetrics mode: prometheus\r\nDisable system metrics: false\r\nWorkflow Store: /app/model_store\r\nModel config: N/A\r\n```\r\n\r\n### Suggest a potential alternative/fix\r\n\r\n_No response_", "url": "https://github.com/pytorch/serve/issues/2446", "state": "closed", "labels": [ "question", "triaged", "docker" ], "created_at": "2023-07-06T01:18:47Z", "updated_at": "2023-10-28T19:43:36Z", "user": "sreeprasannar" }, { "repo": "pytorch/torchx", "number": 737, "title": "-j vs --cpu/--gpu in ddp ", "body": "## \ud83d\udcda Documentation\r\n\r\n## Link\r\n[https://pytorch.org/torchx/latest/components/distributed.html](https://pytorch.org/torchx/latest/components/distributed.html)\r\n\r\n## What does it currently say?\r\nNot clear whether --cpu, --gpu arguments are overrided by -j arguments, although in my testing (launch then run top, etc.) it seems they are?\r\n\r\n## What should it say?\r\nBoth the docs and the --help output for dist.ddp could be more clear on this front. More generally, I am wondering if there exists a torchx equivalent of `torchrun --standalone --nnodes=1 --nproc_per_node=auto ...`.\r\n\r\n## Why?\r\nClearly I wouldn't want `--gpu=0` with `-j 1x2`, right? As such the listed defaults in docs --help are a little confusing.\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/737", "state": "open", "labels": [], "created_at": "2023-07-05T15:57:56Z", "updated_at": "2023-07-12T20:47:24Z", "comments": 1, "user": "godfrey-cw" }, { "repo": "pytorch/pytorch", "number": 104617, "title": "How to integrate the new cpp file with Pytorch geometroic?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI am using neighbour loader function in my code, which uses sample_adj_cpu function to sample neighbours. I am making some changes in this function which is present in the following file. 
\r\n\r\nFile link:\r\n[[pytorch_sparse](https://github.com/rusty1s/pytorch_sparse/tree/master)/[csrc](https://github.com/rusty1s/pytorch_sparse/tree/master/csrc)/[cpu](https://github.com/rusty1s/pytorch_sparse/tree/master/csrc/cpu)\r\n/sample_cpu.cpp](url) \r\n\r\nHow to integrate these changes in Pytorch geometric?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer", "url": "https://github.com/pytorch/pytorch/issues/104617", "state": "closed", "labels": [ "module: sparse", "triaged" ], "created_at": "2023-07-05T06:47:12Z", "updated_at": "2023-07-12T22:10:30Z", "user": "shivanisankhyan" }, { "repo": "pytorch/pytorch", "number": 104450, "title": "Numpy/scipy module works fine with Torch modules, but not TorchScript. How to torchscript a numpy/scipy module?", "body": "### \ud83d\udc1b Numpy module works fine with Torch modules, but not TorchScript.\r\n\r\n```python\r\nfrom scipy.signal import find_peaks\r\n\r\nbatch_size = 1\r\ninput_data_shape = 1000\r\ninput_shape = (batch_size, input_data_shape)\r\n\r\nreference_inputs = numpy.random.random(input_shape)\r\nreference_outputs, _ = find_peaks(reference_inputs[0, :])\r\n\r\nclass FindPeaks(torch.nn.Module):\r\n def __init__(self):\r\n super(FindPeaks, self).__init__()\r\n\r\n def forward(self, xs):\r\n xs_numpy = xs.numpy()[0, :]\r\n peaks, _ = find_peaks(xs_numpy)\r\n return torch.tensor(peaks, dtype=int)\r\n\r\ninputs = torch.tensor(reference_inputs, dtype=float)\r\ntorch_model = FindPeaks()\r\ntorch_outputs = torch_model(inputs)\r\n\r\ntorchscript_model = torch.jit.trace(torch_model, example_inputs=[inputs])\r\ntorchscript_model.save(f\"./artifacts/{torch_model.__class__.__name__}.pt\")\r\n\r\ntorchscript_outputs = torchscript_model(inputs).detach()\r\nassert isinstance(torchscript_outputs, torch.Tensor)\r\nassert torchscript_outputs.shape == reference_outputs.shape\r\nassert numpy.allclose(\r\n reference_outputs, torchscript_outputs.numpy(), rtol=1.0e-3, atol=1.0e-5\r\n)\r\n\r\nfor i in range(5):\r\n reference_inputs = numpy.random.random(input_shape)\r\n reference_outputs, _ = find_peaks(reference_inputs[0, :])\r\n\r\n inputs = torch.tensor(reference_inputs, dtype=float)\r\n\r\n torch_outputs = torch_model(inputs).detach()\r\n assert isinstance(torch_outputs, torch.Tensor)\r\n assert torch_outputs.shape == reference_outputs.shape # works fine\r\n assert numpy.allclose(\r\n reference_outputs, torch_outputs.numpy(), rtol=1.0e-3, atol=1.0e-5\r\n ) # works fine\r\n\r\n torchscript_outputs = torchscript_model(inputs).detach()\r\n assert isinstance(torchscript_outputs, torch.Tensor)\r\n assert torchscript_outputs.shape == reference_outputs.shape, \\\r\n (torchscript_outputs, reference_outputs) # not working, seems memorizing the input/output when compiling the model.\r\n assert numpy.allclose(\r\n reference_outputs, torchscript_outputs.numpy(), rtol=1.0e-3, atol=1.0e-5\r\n )\r\n```\r\n\r\n### Versions\r\n\r\n```\r\nCollecting environment information...\r\nPyTorch version: 1.12.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: macOS 13.3 (x86_64)\r\nGCC version: Could not collect\r\nClang version: 16.0.3\r\nCMake version: version 3.21.1\r\nLibc version: N/A\r\n\r\nPython version: 3.8.16 (default, Dec 7 2022, 01:39:17) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)\r\nPython platform: macOS-13.3-x86_64-i386-64bit\r\nIs CUDA available: False\r\nCUDA runtime version: No 
CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nIntel(R) Core(TM) i9-9980HK CPU @ 2.40GHz\r\n\r\nVersions of relevant libraries:\r\n[pip3] flake8==4.0.1\r\n[pip3] mypy==0.910\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.21.0\r\n[pip3] torch==1.12.1\r\n[pip3] torchaudio==0.12.1\r\n[pip3] torchvision==0.13.1\r\n[conda] Could not collect\r\n```\n\ncc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel", "url": "https://github.com/pytorch/pytorch/issues/104450", "state": "open", "labels": [ "oncall: jit" ], "created_at": "2023-06-30T00:29:43Z", "updated_at": "2023-08-02T17:55:14Z", "user": "kzhai" }, { "repo": "pytorch/tutorials", "number": 2495, "title": "[BUG] - Only one trial completes on Ax NAS", "body": "### Add Link\r\n\r\nhttps://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html\r\n\r\n### Describe the bug\r\n\r\nHi,\r\n\r\nI was able to get the tutorial notebook working, and now I am trying to implement Ax-based NAS on my own model. However, only one of the trials complete and all the others fail. I have one objective which is to maximize the val_accuracy. The training script runs fine without any problem when I run it on terminal as well. This is the error I am getting:\r\n\r\n![image](https://github.com/pytorch/tutorials/assets/66868163/0bd80fc9-4fbf-4338-9181-e15692c3205f)\r\n\r\n![image](https://github.com/pytorch/tutorials/assets/66868163/a320fc3f-4e86-4e85-9ad7-97735a4da732)\r\n\r\n--------------\r\nFull log:\r\n---------------------------------------------------------------------------\r\nFailureRateExceededError Traceback (most recent call last)\r\nCell In[10], line 1\r\n----> 1 scheduler.run_all_trials()\r\n\r\nFile [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:999](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:999), in Scheduler.run_all_trials(self, timeout_hours, idle_callback)\r\n 992 if self.options.total_trials is None:\r\n 993 # NOTE: Capping on number of trials will likely be needed as fallback\r\n 994 # for most stopping criteria, so we ensure `num_trials` is specified.\r\n 995 raise ValueError( # pragma: no cover\r\n 996 \"Please either specify `num_trials` in `SchedulerOptions` input \"\r\n 997 \"to the `Scheduler` or use `run_n_trials` instead of `run_all_trials`.\"\r\n 998 )\r\n--> 999 for _ in self.run_trials_and_yield_results(\r\n 1000 max_trials=not_none(self.options.total_trials),\r\n 1001 timeout_hours=timeout_hours,\r\n 1002 idle_callback=idle_callback,\r\n 1003 ):\r\n 1004 pass\r\n 1005 return self.summarize_final_result()\r\n\r\nFile [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:899](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:899), in Scheduler.run_trials_and_yield_results(self, max_trials, ignore_global_stopping_strategy, timeout_hours, idle_callback)\r\n 893 return\r\n 895 yield self.wait_for_completed_trials_and_report_results(\r\n 896 idle_callback, force_refit=True\r\n 897 )\r\n--> 899 yield self._complete_optimization(\r\n 900 num_preexisting_trials=n_existing, idle_callback=idle_callback\r\n 901 )\r\n 902 return\r\n\r\nFile 
[~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:1278](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:1278), in Scheduler._complete_optimization(self, num_preexisting_trials, idle_callback)\r\n 1273 res = self.wait_for_completed_trials_and_report_results(\r\n 1274 idle_callback=idle_callback, force_refit=True\r\n 1275 )\r\n 1276 # Raise an error if the failure rate exceeds tolerance at the\r\n 1277 # end of the optimization.\r\n-> 1278 self.error_if_failure_rate_exceeded(force_check=True)\r\n 1279 self._record_run_trials_status(\r\n 1280 num_preexisting_trials=num_preexisting_trials,\r\n 1281 status=RunTrialsStatus.SUCCESS,\r\n 1282 )\r\n 1283 return res\r\n\r\nFile [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:779](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:779), in Scheduler.error_if_failure_rate_exceeded(self, force_check)\r\n 771 if self._num_trials_bad_due_to_err > num_bad_in_scheduler [/](https://file+.vscode-resource.vscode-cdn.net/) 2:\r\n 772 self.logger.warn(\r\n 773 \"MetricFetchE INFO: Sweep aborted due to an exceeded error rate, \"\r\n 774 \"which was primarily caused by failure to fetch metrics. Please \"\r\n 775 \"check if anything could cause your metrics to be flakey or \"\r\n 776 \"broken.\"\r\n 777 )\r\n--> 779 raise self._get_failure_rate_exceeded_error(\r\n 780 num_bad_in_scheduler=num_bad_in_scheduler,\r\n 781 num_ran_in_scheduler=num_ran_in_scheduler,\r\n 782 )\r\n\r\nFailureRateExceededError: Failure rate exceeds the tolerated trial failure rate of 0.5 (at least 2 out of first 3 trials failed). Checks are triggered both at the end of a optimization and if at least 5 trials have failed.\r\n\r\n-----------\r\n\r\n![image](https://github.com/pytorch/tutorials/assets/66868163/b0103097-e93f-4bf0-8dac-12198a3884f3)\r\n\r\nI don't set any objective thresholds. When I run the script from the terminal, it works fine every time, and val_accuracy never becomes NaN. What might be the reason for such behavior in trials? \r\n\r\nI also have another question. Does Ax support trying differen", "url": "https://github.com/pytorch/tutorials/issues/2495", "state": "closed", "labels": [ "bug", "question", "ax" ], "created_at": "2023-06-28T23:02:31Z", "updated_at": "2023-10-30T17:00:14Z", "user": "ekurtgl" }, { "repo": "pytorch/kineto", "number": 775, "title": "Profile particular functions / lines", "body": "Hey, is there a way to profile particular functions or code lines with one profiler i.e. not to have separate `with profile as..`statements around each of them? \r\nSomething similar to the [NVIDIA nvtx markers](https://docs.nvidia.com/cuda/profiler-users-guide/).\r\n\r\nUse case:\r\nWant to profile only particular activity such as `optimizer.step()` or `loss.backward()` in a training loop, and not the entire loop.", "url": "https://github.com/pytorch/kineto/issues/775", "state": "closed", "labels": [ "question" ], "created_at": "2023-06-28T02:03:02Z", "updated_at": "2023-06-29T16:50:57Z", "user": "shradhasehgal" }, { "repo": "pytorch/kineto", "number": 774, "title": "Question about step time graph in Overview page", "body": "Hi, I am wondering what 'step' on the X axis represents in the step-time graph on the overview page. 
\r\nI set my profiling schedule with 5 steps for 'active', yet the profiling results only include time for step 0 only and not steps 0 - 4. \r\nCould you clarify what 'step' here refers to if not each of the step numbers the profiler was 'active' for?\r\n\r\n<img width=\"1343\" alt=\"Screenshot 2023-06-27 at 6 19 45 PM\" src=\"https://github.com/pytorch/kineto/assets/13078034/28f47356-9c6f-42ed-9ecc-1f6e1ed79513\">\r\n", "url": "https://github.com/pytorch/kineto/issues/774", "state": "closed", "labels": [ "question", "plugin" ], "created_at": "2023-06-28T01:22:30Z", "updated_at": "2024-04-23T15:28:39Z", "user": "shradhasehgal" }, { "repo": "pytorch/tutorials", "number": 2493, "title": "[BUG] - ax_multiobjective_nas_tutorial.ipynb fails", "body": "\r\nhttps://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html\r\n\r\n### Describe the bug\r\n\r\nHi,\r\n\r\nI am trying to get the [ax_multiobjective_nas_tutorial.ipnb tutorial](https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html) running on my local machine. I came until experiment running part without any problem, but when I start running the experiment, all the trials fail. I didn't change anything in the original notebook. This is the output:\r\n\r\n![image](https://github.com/pytorch/tutorials/assets/66868163/337308c1-308f-41b8-9f81-05dedb4cb37b)\r\n\r\nI tried running it on Google colab but got the same error.\r\n\r\n![image](https://github.com/pytorch/tutorials/assets/66868163/5a7be01b-63da-49a6-b9d0-84a6c940088c)\r\n\r\nFull log:\r\n\r\n---------------------------------------------------------------------------\r\nFailureRateExceededError Traceback (most recent call last)\r\nCell In[11], line 1\r\n----> 1 scheduler.run_all_trials()\r\n\r\nFile [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:999](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:999), in Scheduler.run_all_trials(self, timeout_hours, idle_callback)\r\n 992 if self.options.total_trials is None:\r\n 993 # NOTE: Capping on number of trials will likely be needed as fallback\r\n 994 # for most stopping criteria, so we ensure `num_trials` is specified.\r\n 995 raise ValueError( # pragma: no cover\r\n 996 \"Please either specify `num_trials` in `SchedulerOptions` input \"\r\n 997 \"to the `Scheduler` or use `run_n_trials` instead of `run_all_trials`.\"\r\n 998 )\r\n--> 999 for _ in self.run_trials_and_yield_results(\r\n 1000 max_trials=not_none(self.options.total_trials),\r\n 1001 timeout_hours=timeout_hours,\r\n 1002 idle_callback=idle_callback,\r\n 1003 ):\r\n 1004 pass\r\n 1005 return self.summarize_final_result()\r\n\r\nFile [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:854](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:854), in Scheduler.run_trials_and_yield_results(self, max_trials, ignore_global_stopping_strategy, timeout_hours, idle_callback)\r\n 849 n_remaining_to_run = max_trials\r\n 850 while (\r\n 851 not self.should_consider_optimization_complete()[0]\r\n 852 and n_remaining_to_run > 0\r\n 853 ):\r\n--> 854 if self.should_abort_optimization():\r\n 855 yield self._abort_optimization(num_preexisting_trials=n_existing)\r\n 856 return\r\n\r\nFile 
[~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:712](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:712), in Scheduler.should_abort_optimization(self)\r\n 707 \"\"\"Checks whether this scheduler has reached some intertuption [/](https://file+.vscode-resource.vscode-cdn.net/) abort\r\n 708 criterion, such as an overall optimization timeout, tolerated failure rate, etc.\r\n 709 \"\"\"\r\n 710 # if failure rate is exceeded, raise an exception.\r\n 711 # this check should precede others to ensure it is not skipped.\r\n--> 712 self.error_if_failure_rate_exceeded()\r\n 714 # if optimization is timed out, return True, else return False\r\n 715 timed_out = (\r\n 716 self._timeout_hours is not None\r\n 717 and self._latest_optimization_start_timestamp is not None\r\n (...)\r\n 720 >= not_none(self._timeout_hours) * 60 * 60 * 1000\r\n 721 )\r\n\r\nFile [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:779](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:779), in Scheduler.error_if_failure_rate_exceeded(self, force_check)\r\n 771 if self._num_trials_bad_due_to_err > num_bad_in_scheduler [/](https://file+.vscode-resource.vscode-cdn.net/) 2:\r\n 772 self.logger.warn(\r\n 773 \"MetricFetchE INFO: Sweep aborted due to an exceeded error rate, \"\r\n 774 \"which was primarily caused by failure to fetch metrics. Please \"\r\n 775 \"check if anything could cause your metrics to be flakey or \"\r\n 776 \"broken.\"\r\n 777 )\r\n--> 779 raise self._get_failure_rate_exceeded_error(\r\n 780 num_bad_in_scheduler=num_bad_in_scheduler,\r\n 781 num_ran_in_scheduler=num_ran_in_scheduler,\r\n 782 )\r\n\r\nFailureRateExceededError: Failure rate exceeds the tolerated trial failure rate of 0.5 (at least 8 out of first 8 trials failed). Checks are triggered both at the end of a optimization and if at least 5 trials have failed.\r\n\r\n\r\nWhat do you think might be the problem here? 
Thank you.\r\n\r\nBest,\r\nEmre\r\n\r\n### Describe your environment\r\n\r\nUbuntu ", "url": "https://github.com/pytorch/tutorials/issues/2493", "state": "closed", "labels": [ "question", "ax" ], "created_at": "2023-06-27T23:09:05Z", "updated_at": "2023-06-28T17:46:51Z", "user": "ekurtgl" }, { "repo": "pytorch/TensorRT", "number": 2062, "title": "\u2753 [Question] \"When the performance of an int8 model improves compared to an fp32 model after QAT\"", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nI have a question because there is something I do not understand during the QAT.\r\n\r\ncode ref: https://pytorch.org/TensorRT/_notebooks/vgg-qat.html#4\r\n\r\nPhenomenon: The model with QAT applied and the simple TRT-converted model without QAT show higher accuracy than the fp32 model.\r\nData: 3-class dataset with approximately 210,000 images.\r\nModel architecture: ResNet18.\r\n\r\nCan the int8 converted TRT model perform better than the fp32 model?\r\n![image](https://github.com/pytorch/TensorRT/assets/54762817/95276682-3525-4697-bd8e-29d6ea5cf7e7)\r\n\r\n\r\n** Another question\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): v1.3.0\r\n - CPU Architecture: intel i9-10980\r\n - OS (e.g., Linux): ubuntu 20.04.3\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip \r\n - Build command you used (if compiling from source): \r\n - Are you using local sources or building from archives:\r\n - Python version: 3.8\r\n - CUDA version: 11.6\r\n - GPU models and configuration: \r\n - Any other relevant information:\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2062", "state": "closed", "labels": [ "question", "No Activity", "component: quantization" ], "created_at": "2023-06-27T08:20:34Z", "updated_at": "2023-10-09T00:02:22Z", "user": "JongSeok553" }, { "repo": "pytorch/data", "number": 1192, "title": "Is torchdata still being actively developed? ", "body": "No commits since June 7 (3 weeks ago). And @ejguan mentioned in https://github.com/pytorch/data/issues/1184#issuecomment-1593476769 they and @NivekT, the primary contributors, are no longer working on it. \r\n\r\nCan anyone comment on whether torchdata will continue to be developed or supported?", "url": "https://github.com/meta-pytorch/data/issues/1192", "state": "closed", "labels": [], "created_at": "2023-06-26T21:51:48Z", "updated_at": "2023-07-24T02:41:31Z", "comments": 6, "user": "lendle" }, { "repo": "pytorch/pytorch", "number": 104159, "title": "how to optimize torch.argwhere?", "body": "`t0 = time.time()\r\nxx = torch.argwhere(x) ## x.shape = (15120,150) x.device = cuda:0 and the gpu is gtx1050\r\nprint(time.time() - t0)`\r\n\r\nthe output is always near 0.15s,how can i reduce the cost time ? or there is other high efficient methods to replace argwhere? \n\ncc @albanD", "url": "https://github.com/pytorch/pytorch/issues/104159", "state": "closed", "labels": [ "module: performance", "triaged", "module: python frontend" ], "created_at": "2023-06-25T15:12:53Z", "updated_at": "2023-06-28T18:10:17Z", "user": "Soikie" }, { "repo": "pytorch/torchx", "number": 735, "title": "With Volcano, why or when to use TorchX?", "body": "## \u2753 Questions and Help\r\n\r\n### Question\r\n\r\nWe can run Pytorch DDP or elastic with just Volcano, right? 
What does TorchX offer differently from Volcano?\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/735", "state": "closed", "labels": [], "created_at": "2023-06-25T07:54:40Z", "updated_at": "2023-07-12T20:41:59Z", "comments": 2, "user": "zxcware" }, { "repo": "pytorch/tutorials", "number": 2487, "title": "[BUG] No ways provided to replicate fps on retrained models.", "body": "### Add Link\r\n\r\nhttps://pytorch.org/tutorials/intermediate/realtime_rpi.html\r\n\r\n### Describe the bug\r\n\r\nI am getting 25-30fps on my rpi4 with provided snippet.\r\nHowever, after finetuning mobilenet_v2 and applying:\r\n```\r\n# Quantize the model\r\nquantized_model = torch.quantization.quantize_dynamic(\r\n model, {torch.nn.Linear}, dtype=torch.qint8\r\n)\r\n\r\n# Convert the quantized model to TorchScript\r\nscript_model = torch.jit.script(quantized_model)\r\n```\r\nI am only getting 2.5fps.\r\nThe tutorial suggests:\r\n\r\n```\r\nYou can create your own model or fine tune an existing one. If you fine tune on one of the models from [torchvision.models.quantized](https://pytorch.org/vision/stable/models.html#quantized-models) most of the work to fuse and quantize has already been done for you so you can directly deploy with good performance on a Raspberry Pi.\r\n```\r\nBut provides no guidance on how to do it.\r\nMy attempts to do so failed:\r\n```\r\ntorch.backends.quantized.engine = 'qnnpack'\r\nmodel = models.quantization.mobilenet_v2(pretrained=True, quantize=True) # INT\r\n\r\nnum_classes = 3\r\nmodel.classifier[1] = torch.nn.Linear(model.last_channel, num_classes)\r\n``` \r\nwould result in \r\n```\r\n---------------------------------------------------------------------------\r\n\r\nRuntimeError Traceback (most recent call last)\r\n\r\n[<ipython-input-48-ddcd2d77aac5>](https://localhost:8080/#) in <cell line: 24>()\r\n 39 \r\n 40 # Forward pass\r\n---> 41 outputs = model(inputs)\r\n 42 loss = criterion(outputs, labels)\r\n 43 \r\n\r\n6 frames\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/linear.py](https://localhost:8080/#) in forward(self, input)\r\n 112 \r\n 113 def forward(self, input: Tensor) -> Tensor:\r\n--> 114 return F.linear(input, self.weight, self.bias)\r\n 115 \r\n 116 def extra_repr(self) -> str:\r\n\r\nRuntimeError: mat1 and mat2 must have the same dtype\r\n```\r\nMultiple attempts to create custom Linear layer that supports int8 dtype also failed.\r\n\r\n### Describe your environment\r\n\r\nnot relevant\n\ncc @datumbox @nairbv @fmassa @NicolasHug @YosuaMichael", "url": "https://github.com/pytorch/tutorials/issues/2487", "state": "open", "labels": [ "bug", "module: vision" ], "created_at": "2023-06-24T12:04:23Z", "updated_at": "2023-06-26T20:29:24Z", "comments": 2, "user": "Huxwell" }, { "repo": "pytorch/TensorRT", "number": 2044, "title": "\u2753 [Question] How can I install the latest version of python API? Torch and Tensorrt's CUDA dependencies conflict with each other.", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\n<!-- -->\r\nI have already create a python=3.9 env, when I use the command 'pip install torch-tensorrt', I find that the torch version that the latest torch-tensorrt needs is 2.0.1 and the tensorrt version it needs is 8.6.1, but these two packages need different cuda versions(which one is cu11 and another is cu12). When I run a simple model(input) example python code, torch can't resolve the environment. 
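For what it's worth, a quick sketch for checking which CUDA build each wheel actually reports (assuming both packages import in the environment) is:\r\n\r\n```python\r\nimport torch\r\nimport tensorrt\r\n\r\n# Confirms the cu11 / cu12 split described above: torch reports the CUDA version it was\r\n# built against, and the tensorrt wheel reports its own version string.\r\nprint(\"torch:\", torch.__version__, \"| built for CUDA\", torch.version.cuda)\r\nprint(\"tensorrt:\", tensorrt.__version__)\r\nprint(\"cuda available:\", torch.cuda.is_available())\r\n```\r\n\r\n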
\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.0.1\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): pip install torch-tensorrt\r\n - Are you using local sources or building from archives: archives\r\n - Python version: 3.9\r\n - tensorrt version: 8.6.1\r\n\r\n## Additional context\r\nexample python code:\r\n```\r\nimport torch\r\n#import torch_tensorrt #No tensorrt and torch_tensorrt installed, this code will run successfully.\r\nconv=torch.nn.Conv2d(3,32,3,1,0,bias=False)\r\ninput=torch.randn(1,3,224,224)\r\nconv.cuda()\r\ninput.cuda()\r\nprint(conv(input).shape)\r\n```\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2044", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-06-21T17:12:54Z", "updated_at": "2023-10-16T00:02:24Z", "user": "1585231086" }, { "repo": "pytorch/pytorch", "number": 103962, "title": "How to unwrap after auto_wrap in FSDP?", "body": "I am currently fine-tuning a LLM (LLaMA) and would like to retrieve the gradients of each weight (parameter) after every gradient update. However, I notice that weights are (auto) wrapped into stuff like \u201c_fsdp_wrapped_module._flat_param\u201d during training. I need to map these wrapped weights to the original LLaMA architecture such as \u201cself_attn.v_proj\u201d. Any code examples?\r\n\r\nI guess \u201csummon_full_params()\u201d might be the function that I look for, but I am not sure if that is correct. I also have difficulty using this function. Thanks a lot for any help!\n\ncc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu", "url": "https://github.com/pytorch/pytorch/issues/103962", "state": "open", "labels": [ "oncall: distributed", "triaged", "module: fsdp" ], "created_at": "2023-06-21T11:27:10Z", "updated_at": "2023-10-27T15:16:22Z", "user": "ZN1010" }, { "repo": "pytorch/pytorch", "number": 103958, "title": "How to modify gradients of an FSDP model?", "body": "### \ud83d\udcda The doc issue\r\n\r\nI've initially posted the question on [forum](https://discuss.pytorch.org/t/modify-gradients-of-an-fsdp-model/182159) 7 days ago, but crossposting here as well for better visibility since I couldn't get any answers there. \r\n\r\nHi everyone,\r\nI have an FSDP model which has zeros in some of the `torch.nn.Linear.weight` parameters. During the training I would like to keep those parameters fixed to zeros, and to zero-out their gradients during backward as well. The specific use-case is: I am loading a pruned model and I want to fine-tune it with FSDP while keeping the pruning mask fixed. \r\n\r\nTo achieve this I need to do two things: \r\n1) multiply parameters with the mask before the forward pass (so that all pruned weights remain pruned), \r\n2) multiply gradients of pruned parameters after the backward pass (so that gradients of pruned weights are zeros)\r\n \r\nIn the standard DDP training I would achieve this by:\r\n1) registering forward pre-hook on `torch.nn.Linear` modules and multiplying weights with the mask before each forward pass,\r\n2) registering a hook on the parameter `torch.nn.Linear.weight` and multiplying its gradient with the mask. 
\r\n\r\nFor example:\r\n```python\r\ndef keep_param_pruned(mask, module, input):\r\n with torch.no_grad():\r\n module.weight.data.mul_(mask.to(module.weight.device))\r\n\r\ndef keep_grad_pruned(mask, grad):\r\n return grad.mul_(mask.to(grad.device))\r\n\r\nfor n, m in model.named_modules():\r\n if isinstance(m, torch.nn.Linear):\r\n mask = m.weight > threshold\r\n m.register_forward_pre_hook(partial(keep_param_pruned, mask))\r\n m.weight.register_hook(partial(keep_grad_pruned, mask))\r\n```\r\n\r\nHowever, I am struggling to modify this idea to work with FSDP. Any suggestions/ideas on what I am doing wrong or if there is a simpler way to achieve this without playing with hooks?\r\n\r\n### Suggest a potential alternative/fix\r\n\r\n_No response_\r\n\r\ncc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu", "url": "https://github.com/pytorch/pytorch/issues/103958", "state": "closed", "labels": [ "oncall: distributed", "module: fsdp" ], "created_at": "2023-06-21T09:33:32Z", "updated_at": "2025-04-03T23:45:25Z", "user": "eldarkurtic" }, { "repo": "pytorch/TensorRT", "number": 2028, "title": "\u2753 [Question] Torch-TensorRT 1.3.0 uses cuDNN 8.6.0 instead of 8.5.0", "body": "## \u2753 Question\r\n\r\nHi, I am using torch-tensorRT 1.3.0, it seems it is linked to cuDNN 8.6.0 instead of 8.5.0 as described in the release note? Please find my environment setup below\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.13.1 with cu117\r\n - OS (e.g., Linux): Linux (ubuntu 20.04)\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117\r\n - Python version:3.8\r\n - CUDA version: 11.7\r\n - TensorRT: 8.5.3.1\r\n - torch-tensorrt: 1.3.0\r\n\r\nI got the warning: tensorrt is linked to cuDNN 8.6.0 but cuDNN 8.5.0 is loaded --- when i print torch.backend.cudnn.version() it says 8500, so I assume if torch-tensorrt is linked with cuDNN as it is described in the release note, there should not be such warning?\r\nCould you please let me know if there is something I'm doing wrong? Thank you!\r\n", "url": "https://github.com/pytorch/TensorRT/issues/2028", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-06-20T16:00:55Z", "updated_at": "2023-09-30T00:02:07Z", "user": "akaimody123" }, { "repo": "pytorch/data", "number": 1190, "title": "Dataloader2 with FullSyncIterDataPipe throws error during initilization", "body": "### \ud83d\udc1b Describe the bug\n\nHi, we found some strange during using Dataloader2. Here's some details about the issue.\r\n\r\n- We are a long run training job with 8 AWS P4 nodes. It's using HuggingFace trainer.\r\n- In HuggingFace training, it will call evaluation every `traininig_args.eval_steps` training steps.\r\n- I overrided the HF trainer to use Dataloader2 with training, evaluation and test dataset loading. At the same time, on the dataset part, I'm using `IterableDataPipe` with `ShardingFilterIterDataPipe`\r\n- The issue that listed the log happens **randomly**. And most time it happens after the job runs for a long time (e.g. 20+ hours)\r\n\r\nCan you help provide some context on what could be the root cause and how to fix this? 
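Returning to the two FSDP questions above (pytorch/pytorch#103962 and #103958): one way to keep a pruning mask fixed that avoids parameter and gradient hooks entirely is to make the mask part of the module's forward with `torch.nn.utils.parametrize`, registered *before* wrapping with FSDP. Since the effective weight is `weight * mask`, the gradient of every masked entry is zero by construction, so no backward hook is needed. This is a sketch under the assumption that parametrizations compose cleanly with the chosen FSDP wrapping policy (not verified for every configuration); `model` and `threshold` are placeholders:

```python
import torch
import torch.nn as nn
from torch.nn.utils import parametrize
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

class FixedMask(nn.Module):
    """Re-applies a frozen pruning mask to the weight on every forward pass."""
    def __init__(self, mask: torch.Tensor):
        super().__init__()
        self.register_buffer("mask", mask)

    def forward(self, weight: torch.Tensor) -> torch.Tensor:
        return weight * self.mask

threshold = 0.0  # placeholder: whatever criterion defined the original pruning mask

def freeze_pruning(model: nn.Module) -> nn.Module:
    for module in model.modules():
        if isinstance(module, nn.Linear):
            mask = (module.weight.detach().abs() > threshold).to(module.weight.dtype)
            parametrize.register_parametrization(module, "weight", FixedMask(mask))
    return model

# Register the parametrizations first, then wrap. FSDP flattens the underlying
# "original" parameter as usual; the masking happens inside each Linear's forward.
# fsdp_model = FSDP(freeze_pruning(model), use_orig_params=True)
```

For the related question about inspecting per-parameter gradients under their original names (#103962), recent FSDP releases also provide `FSDP.summon_full_params(model, with_grads=True)`, which temporarily materializes the unflattened parameters (and, with that flag, their gradients) so tensors such as `self_attn.v_proj.weight` are visible again, though the wrapper may still prepend module-name prefixes.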
Thanks!\r\n\r\nLog:\r\n```\r\n\r\n\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | File \"/opt/conda/lib/python3.9/site-packages/transformers/trainer.py\", line 1633, in train\r\n-- | -- | --\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | return inner_training_loop(\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | File \"/opt/conda/lib/python3.9/site-packages/transformers/trainer.py\", line 1979, in _inner_training_loop\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | File \"/opt/conda/lib/python3.9/site-packages/transformers/trainer.py\", line 2236, in _maybe_log_save_evaluate\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | File \"/opt/conda/lib/python3.9/site-packages/transformers/trainer.py\", line 2932, in evaluate\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | output = eval_loop(\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | File \"/workspace/mfive/mfive/trainer.py\", line 236, in evaluation_loop\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | for step, inputs in enumerate(dataloader):\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | File \"/opt/conda/lib/python3.9/site-packages/torchdata/dataloader2/dataloader2.py\", line 46, in __next__\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | next_val = next(self.dataloader._datapipe_iter) # type: ignore[arg-type]\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | File \"/opt/conda/lib/python3.9/site-packages/torch/utils/data/datapipes/_hook_iterator.py\", line 173, in wrap_generator\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | response = gen.send(None)\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | File \"/opt/conda/lib/python3.9/site-packages/torchdata/datapipes/iter/util/distributed.py\", line 178, in __iter__\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | self._process_group = dist.new_group(backend=\"gloo\")\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | File \"/opt/conda/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py\", line 3520, in new_group\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | pg = _new_process_group_helper(\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | File \"/opt/conda/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py\", line 1009, in _new_process_group_helper\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | backend_class = ProcessGroupGloo(backend_prefix_store, group_rank, group_size, timeout=timeout)\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | RuntimeError: [../third_party/gloo/gloo/transport/tcp/pair.cc:176] bind: Address already in use\r\n\u00a0 | 2023-06-08T08:51:15.973-07:00 | This exception is thrown by __iter__ of FullSyncIterDataPipe(datapipe=CollatorIterDataPipe, timeout=1800)\r\n\r\n```\r\n\n\n### Versions\n\n```\r\nVersions of relevant libraries:\r\n[pip3] flake8==6.0.0\r\n[pip3] mypy==0.991\r\n[pip3] mypy-boto3-batch==1.26.103\r\n[pip3] mypy-boto3-ec2==1.26.136\r\n[pip3] mypy-boto3-iam==1.26.97\r\n[pip3] mypy-boto3-s3==1.26.127\r\n[pip3] mypy-boto3-sagemaker==1.26.141\r\n[pip3] mypy-extensions==1.0.0\r\n[pip3] numpy==1.24.3\r\n[pip3] torch==2.0.1\r\n[pip3] torch-tb-profiler==0.4.1\r\n[pip3] torchdata==0.6.1\r\n[pip3] torchmetrics==0.11.4\r\n[pip3] torchsnapshot-nightly==2023.3.15\r\n[pip3] torchvision==0.15.2\r\n[pip3] torchx-nightly==2023.5.25\r\n[pip3] triton==2.0.0\r\n[conda] numpy 1.24.3 pypi_0 pypi\r\n[conda] torch 2.0.1 pypi_0 pypi\r\n[conda] torch-tb-profiler 0.4.1 pypi_0 pypi\r\n[conda] torchdata 0.6.1 
pypi_0 pypi\r\n[conda] torchmetrics 0.11.4 pypi_0 pypi\r\n[conda] torchsnapshot-nightly 2023.3.15 pypi_0 pypi\r\n[conda] torchvision 0.15.2 pypi_0 pypi\r\n[conda] torchx-nightly 2023.5.25 pypi_0 pypi\r\n[conda] triton 2.0.0 pypi_0 pypi\r\n```", "url": "https://github.com/meta-pytorch/data/issues/1190", "state": "open", "labels": [], "created_at": "2023-06-19T18:25:36Z", "updated_at": "2023-06-22T17:30:46Z", "comments": 3, "user": "chenxingyu-cs" }, { "repo": "pytorch/text", "number": 2183, "title": "ImportError: cannot import name 'Field' from 'torchtext.data' ", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\nI'm using pytorch2.0.0, the version of torchtext is 0.15.2, when I import \"Field\" and \"BucketIterator\" in the code(`from torchtext.data import Field, BucketIterator`), I got an error from this sentence: `ImportError: cannot import name 'Field' from ' torchtext.data' (D:\\ML_Pytorch\\venv\\lib\\site-packages\\torchtext\\data\\__init__.py)`\r\n\r\nMay I ask where did the `Field `go? ? If `Field `disappears, is there any other similar functionality that can be imported?", "url": "https://github.com/pytorch/text/issues/2183", "state": "open", "labels": [], "created_at": "2023-06-19T11:28:42Z", "updated_at": "2023-08-20T06:14:30Z", "comments": 2, "user": "MrMoe830" }, { "repo": "pytorch/tutorials", "number": 2478, "title": "TransformerEncoder is not causal", "body": "### Add Link\n\nhttps://pytorch.org/tutorials/beginner/transformer_tutorial.html\r\n\r\n![image](https://github.com/pytorch/tutorials/assets/11831785/285e0fed-1f34-419d-935b-029d43414c37)\r\n\r\nfor language modeling\uff0c src_mask should be mask future words\r\n\n\n### Describe the bug\n\nis there anything wrong?\n\n### Describe your environment\n\n colab\n\ncc @pytorch/team-text-core @Nayef211 @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/2478", "state": "closed", "labels": [ "bug", "module: torchtext", "medium", "docathon-h2-2023" ], "created_at": "2023-06-18T15:26:46Z", "updated_at": "2023-11-10T22:31:04Z", "comments": 10, "user": "bigheary" }, { "repo": "pytorch/text", "number": 2182, "title": "Explicit dependend on portalocker?", "body": "Shouldn't torch/text add an explicit dependency on portalocker now? Without it, I get:\r\n```\r\n= 979 failed, 204 passed, 12 skipped, 1 deselected, 6 warnings in 495.47s (0:08:15) =\r\n```\r\nthat's >80% failed tests, and probably does not represent a functional torchtext?\r\n\r\n_Originally posted by @h-vetinari in https://github.com/pytorch/text/issues/2056#issuecomment-1593761158_\r\n ", "url": "https://github.com/pytorch/text/issues/2182", "state": "open", "labels": [], "created_at": "2023-06-15T21:45:32Z", "updated_at": "2023-06-15T21:45:32Z", "comments": 0, "user": "h-vetinari" }, { "repo": "pytorch/kineto", "number": 770, "title": "On demand profiling example / code changes", "body": "Hi, is there an example for how we can enable on demand profiling with kineto? \r\nThe [libkineto README](https://github.com/pytorch/kineto/tree/main/libkineto) mentions that we can send a 'signal' or 'trigger' on demand profiling, but I am unclear on how we can do so from outside the PyTorch script. \r\n\r\nWould highly appreciate if somebody could provide an example or point me to the relevant APIs / source files. 
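Regarding the `Field` import error above (pytorch/text#2183): `Field` and `BucketIterator` were removed in the torchtext API rework, so there is nothing left to import in 0.15.2. A rough sketch of the replacement pattern using the current building blocks (`get_tokenizer`, `build_vocab_from_iterator`, and a `DataLoader` collate function); the two-sentence corpus is illustrative only:

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

corpus = ["a tiny example sentence", "another example"]   # stand-in for a real dataset
tokenizer = get_tokenizer("basic_english")

vocab = build_vocab_from_iterator(
    (tokenizer(text) for text in corpus), specials=["<unk>", "<pad>"]
)
vocab.set_default_index(vocab["<unk>"])

def collate(batch):
    # Numericalize and pad each mini-batch: the role Field/BucketIterator used to play.
    ids = [torch.tensor(vocab(tokenizer(text)), dtype=torch.long) for text in batch]
    return pad_sequence(ids, batch_first=True, padding_value=vocab["<pad>"])

loader = DataLoader(corpus, batch_size=2, collate_fn=collate)
print(next(iter(loader)).shape)
```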
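On the transformer-tutorial causality question above (pytorch/tutorials#2478): a language-modeling setup does need a subsequent (causal) mask passed to the encoder, otherwise each position can attend to future tokens. A minimal illustration with stock `nn.TransformerEncoder`; the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

seq_len, batch, d_model = 10, 2, 32
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4)
encoder = nn.TransformerEncoder(layer, num_layers=2)

# Additive mask: 0 on and below the diagonal, -inf strictly above it,
# so position i can only attend to positions <= i.
causal_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
# (recent releases also expose nn.Transformer.generate_square_subsequent_mask(seq_len))

src = torch.randn(seq_len, batch, d_model)     # (S, N, E) layout, batch_first=False
out = encoder(src, mask=causal_mask)
print(out.shape)  # torch.Size([10, 2, 32])
```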
Thank you!!", "url": "https://github.com/pytorch/kineto/issues/770", "state": "closed", "labels": [ "question" ], "created_at": "2023-06-15T04:12:22Z", "updated_at": "2024-04-23T15:27:23Z", "user": "shradhasehgal" }, { "repo": "pytorch/xla", "number": 5188, "title": "Slow Device To Host Transfers", "body": "## \u2753 Questions and Help\r\n\r\nRecently I tried ResNet-50 on TPUs using this repo and TensorFlow / Keras. The performance difference between the two was about 15% (2844.4 img/s per TPU vs 3283.52 img/s) in favor of TensorFlow / Keras. These results were with logging every _300_ iterations. When I removed the logging, the TensorFlow / Keras performance stayed the same while this repo caught up within a few percent (3193.6 img/s). I think this is somewhat expected, as in a previous issue and in the troubleshooting guide, these transfers are generally seen as bad for performance. However, TensorFlow / Keras didn't have a change in their performance, so I did some digging, and it seems they use a separate thread and [device-specific outfeed queue](https://github.com/tensorflow/estimator/blob/7d846da87ed70f9a6c21a33a1c7178697844d9c0/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py#LL450C19-L450C19) that lets them asynchronously transfer data (like the loss) to the host and display a progress bar and other metrics without any hit to TPU performance. Is there a reason they're able to do that and PyTorch XLA cannot?\r\n\r\nMore details:\r\n- [PyTorch XLA script](https://github.com/pytorch/xla/blob/master/test/test_train_mp_imagenet.py) (taken from this repo's tests)\r\n- [TensorFlow / Keras script](https://github.com/tensorflow/tpu/blob/master/models/experimental/resnet50_keras/resnet50.py) (taken from tensorflow/tpu)\r\n- Performance statistics were taken in the exact same way across both scripts by measuring time right after each step (includes data loading time)\r\n- For torch_xla, `add_step_closure` was tried with `run_async=False` and `run_async=True`.\r\n- Both were run on the exact same v4-8 TPU VM with the latest version of torch_xla and tensorflow.\r\n", "url": "https://github.com/pytorch/xla/issues/5188", "state": "closed", "labels": [ "question", "runtime" ], "created_at": "2023-06-14T23:01:59Z", "updated_at": "2025-04-30T12:53:54Z", "user": "MikeynJerry" }, { "repo": "pytorch/data", "number": 1184, "title": "Roadmap for mixed chain of multithread and multiprocessing pipelines?", "body": "### \ud83d\ude80 The feature\r\n\r\n[pypeln](https://cgarciae.github.io/pypeln/#mixed-pipelines) has a nice feature to chain pipelines which may run on different kind of workers including process, thread or asyncio.\r\n```python\r\ndata = (\r\n range(10)\r\n | pl.process.map(slow_add1, workers=3, maxsize=4)\r\n | pl.thread.filter(slow_gt3, workers=2)\r\n | pl.sync.map(lambda x: print x)\r\n | list\r\n)\r\n```\r\n![image](https://github.com/pytorch/data/assets/11533479/5ebed02e-148e-4990-9186-b16b47a6aec5)\r\n\r\nI remembered that in the first proposal of pytorch/data, it claims to support something alike. 
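For readers unfamiliar with the step-closure mechanism referenced in the TPU device-to-host report above (pytorch/xla#5188), here is a self-contained sketch of that pattern. It does not by itself explain the throughput gap described there; it only shows how logging can be deferred to the end of a step instead of forcing a transfer inside it. A working `torch_xla` installation and XLA device are assumed, and the toy model/loop are illustrative:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(8, 1).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def report(step, loss):
    # Executed once the step's graph has run; reads the loss without stalling
    # the next step on an eager device-to-host transfer.
    print(f"step {step}: loss={loss.item():.4f}")

for step in range(10):
    x = torch.randn(16, 8, device=device)
    loss = model(x).pow(2).mean()
    loss.backward()
    xm.optimizer_step(opt)
    opt.zero_grad()
    xm.add_step_closure(report, args=(step, loss), run_async=True)
    xm.mark_step()
```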
I'd like to ask if it's still planed and the concrete roadmap.\r\n\r\n### Motivation, pitch\r\n\r\nInitial proposed\r\n\r\n### Alternatives\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/1184", "state": "open", "labels": [], "created_at": "2023-06-14T07:12:36Z", "updated_at": "2023-06-15T17:32:46Z", "comments": 2, "user": "npuichigo" }, { "repo": "pytorch/serve", "number": 2412, "title": "How to identify \"full\" torchserve instances on Google Kubernetes Engine", "body": "We're currently trying to deploy torchserve on scale on Kubernetes. We have highly fluctuating requests, basically every 5 minutes some requests come in with nothing in-between, and sometimes there'll be huge spikes. Therefore we want small pods that scale aggressively as soon as load comes in.\r\n\r\nHere comes the issues: based on what metric can we scale and is there a way to identify pods that are at their limit?\r\n\r\nFor scaling we currently just use cpu usage, `queueLength` would be ideal. For that we probably have to wait on #2101, right?\r\n\r\nOnce scaling has happened, k8s has no way of knowing which pods can actually serve requests (one request can take up to 10 seconds, so a full queue will stay full for a while). Again, readiness probe on `queueLength` would be ideal. `queueTime` will only tell us that we should have scaled x seconds ago.\r\n\r\nWe've come up with the solution of using the `readinessProbe` to send a dummy request to the handler to check whether it gets denied immediately. But that can't be it, right? Surely, this problem can't be so unique that there is no better solution.\r\n\r\n\r\nI apologize in advance if this is not the right place to ask this question, I couldn't find anything better.", "url": "https://github.com/pytorch/serve/issues/2412", "state": "open", "labels": [ "triaged", "kubernetes" ], "created_at": "2023-06-13T20:06:20Z", "updated_at": "2023-06-26T17:16:00Z", "user": "tsteffek" }, { "repo": "pytorch/pytorch", "number": 103506, "title": "How to add testing capabilities for third party devices", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe current community test cases are all cpu and cuda based, there is no ability to look after third party devices, for example many test cases use the @onlycuda decorator, any suggestions for improvements for the privateuse1 device\uff1f\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/103506", "state": "closed", "labels": [ "triaged", "module: third_party", "module: testing" ], "created_at": "2023-06-13T12:37:13Z", "updated_at": "2023-06-26T17:07:54Z", "user": "Bin1024" }, { "repo": "pytorch/data", "number": 1181, "title": "Does Collator need to exist? ", "body": "### \ud83d\udcda The doc issue\r\n\r\nDocs for [Collator](https://pytorch.org/data/0.6/generated/torchdata.datapipes.iter.Collator.html#torchdata.datapipes.iter.Collator) leave a lot of questions. \r\n\r\n> Collates samples from DataPipe to Tensor(s) by a custom collate function\r\nWhat does collate mean in this context? What is the collate function applied to? In the torch Dataloader docs, it's clear that collate_fn is meant to be applied to a batch of data, but that's not explained here at all. Looking at the implementation I think the input datapipe is supposed to be batched here too, but that's not clear. \r\n\r\nWhat's the difference between this and Mapper? 
Sort of seems like the only difference is that the output of `collate_fn` is supposed to be tensors? Or collections of Tensors? I have used it with a function that returns a list of ints though, so there doesn't seem to be anything enforcing that the output is Tensors.\r\n\r\n\r\n### Suggest a potential alternative/fix\r\n\r\nGet rid of Collator if it doesn't add anything over Mapper, it's confusing\r\n\r\nIf keeping it:\r\n\r\n* If it's basically Mapper with a default mapping function that converts things to tensors, don't allow specifying the function. \r\n* Or explain why this is different than mapper.\r\n* State that input should be batched\r\n* Document the `conversion` argument\r\n", "url": "https://github.com/meta-pytorch/data/issues/1181", "state": "open", "labels": [], "created_at": "2023-06-12T15:02:52Z", "updated_at": "2023-07-18T00:38:02Z", "comments": 1, "user": "lendle" }, { "repo": "pytorch/tutorials", "number": 2453, "title": "\ud83d\udca1 [REQUEST] - Add ABI=1 compilation instruction to README", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nUnder certain usage circumstances, PyTorch needs to have C++11 ABI enabled. Currently there's no docs in README for introducing how to get it enabled.\r\nLink https://github.com/pytorch/pytorch/pull/95177 to enable this request.\r\n\r\n\n\n### Existing tutorials on this topic\n\nhttps://github.com/pytorch/pytorch\n\n### Additional context\n\nWe aim to complete the document as part of PyTorch Docathon 2023. cc @jgong5 @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ZailiWang @ZhaoqiongZ @leslie-fang-intel @Xia-Weiwen @sekahler2 @CaoE @zhuhaozhe @Valentine233 @CaoE", "url": "https://github.com/pytorch/tutorials/issues/2453", "state": "closed", "labels": [], "created_at": "2023-06-09T07:53:48Z", "updated_at": "2023-06-15T07:13:34Z", "comments": 1, "user": "jingxu10" }, { "repo": "pytorch/tutorials", "number": 2435, "title": "How can we contribute with videos", "body": "How can we contribute videos to GitHub in PyTorch? 
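On the Collator question above (pytorch/data#1181): functionally, `collate` behaves like `map` applied to already-batched elements, with `torch.utils.data.default_collate` as the default function, which is also why a custom `collate_fn` returning plain Python lists is not rejected. A small comparison, assuming a torchdata version where both datapipes are available:

```python
import torch
from torchdata.datapipes.iter import IterableWrapper

batched = IterableWrapper(range(8)).batch(4)          # [[0, 1, 2, 3], [4, 5, 6, 7]]

# Collator with no argument applies torch.utils.data.default_collate to each batch.
via_collate = list(batched.collate())

# Roughly the same thing expressed with Mapper and an explicit function.
via_map = list(batched.map(torch.utils.data.default_collate))

print(via_collate)  # [tensor([0, 1, 2, 3]), tensor([4, 5, 6, 7])]
print(via_map)
```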
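For the C++11 ABI documentation request above (pytorch/tutorials#2453), a small check that often comes up in that context; it only reports how the installed wheel was built, it does not change the ABI:

```python
import torch

# True  -> C++ extensions / libtorch consumers should build with -D_GLIBCXX_USE_CXX11_ABI=1
# False -> the wheel uses the pre-C++11 ABI (the default for the official pip wheels at the time)
print(torch.compiled_with_cxx11_abi())
```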
The video will likely be long and is a link enough to be contributed or should I send with a link", "url": "https://github.com/pytorch/tutorials/issues/2435", "state": "closed", "labels": [ "question" ], "created_at": "2023-06-06T09:09:59Z", "updated_at": "2023-06-12T16:19:56Z", "user": "Killpit" }, { "repo": "pytorch/pytorch", "number": 102966, "title": "how to workaround the error \"don't have an op for vulkan_prepack::create_linear_context\" ?", "body": "### \ud83d\udc1b Describe the bug\n\nI have a modified resnet-50 network, which I want to run on android using vulkan backend.\r\n\r\nThe custom build of pytorch with USE_VULKAN=1 works fine, but I got the error message \"We don't have an op for vulkan_prepack::create_linear_context but it isn't a special case.\" during \"optimize_for_mobile\" API invocation.\r\n\r\nWhat's the problem here, and how to deal with it?\r\n(I tried on both release 1.13 and release v2.0.1 tags, but got the same error message above).\r\n\r\n\r\n```\r\ngit clone -b release/1.13 --recursive https://github.com/pytorch/pytorch\r\ncd pytorch\r\ngit submodule sync\r\ngit submodule update --init --recursive\r\n\r\nexport CMAKE_PREFIX_PATH=${CONDA_PREFIX:-\"$(dirname $(which conda))/../\"}\r\npython setup.py build --cmake-only\r\nccmake build # or cmake-gui build\r\n\r\nBUILD_LITE_INTERPRETER=0 USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 python setup.py develop\r\n\r\nBUILD_LITE_INTERPRETER=0 ANDROID_ABI=arm64-v8a USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 bash ./scripts/build_android.sh\r\n\r\nBUILD_LITE_INTERPRETER=0 ANDROID_ABI=arm64-v8a USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 bash ./scripts/build_pytorch_android.sh\r\n```\r\n\r\n```\r\n\r\n>>> import torch\r\n>>> import os\r\n>>> \r\n>>> from torch.utils.mobile_optimizer import optimize_for_mobile\r\n>>> \r\n>>> #file_dir = '.'\r\n>>> file_dir = '../pytorch-script/'\r\n>>> model = torch.jit.load(file_dir + '/modified-resnet50-image.pt')\r\n>>> model.eval()\r\nRecursiveScriptModule(original_name=ImageModel)\r\n>>> script_model = torch.jit.script(model)\r\n>>> script_model_vulkan = optimize_for_mobile(script_model, backend='vulkan')\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/mnt/DataExt/devroot/src/pytorch/torch/utils/mobile_optimizer.py\", line 67, in optimize_for_mobile\r\n optimized_cpp_module = torch._C._jit_pass_vulkan_optimize_for_mobile(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: 0 INTERNAL ASSERT FAILED at \"/mnt/DataExt/devroot/src/pytorch/torch/csrc/jit/ir/alias_analysis.cpp\":615, please report a bug to PyTorch. We don't have an op for vulkan_prepack::create_linear_context but it isn't a special case. 
Argument types: Tensor, Tensor, \r\n\r\nCandidates:\r\n>>> exit()\r\n\r\n```\n\n### Versions\n\nCollecting environment information...\r\nPyTorch version: 2.0.0a0+gite9ebda2\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.6 LTS (x86_64)\r\nGCC version: (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0\r\nClang version: Could not collect\r\nCMake version: version 3.22.1\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.11.3 (main, Apr 19 2023, 23:54:32) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.31\r\nIs CUDA available: True\r\nCUDA runtime version: 12.1.105\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA RTX A4000\r\nNvidia driver version: 530.30.02\r\ncuDNN version: Probably one of the following:\r\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8\r\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8\r\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8\r\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8\r\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8\r\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8\r\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nAddress sizes: 45 bits physical, 48 bits virtual\r\nCPU(s): 16\r\nOn-line CPU(s) list: 0-15\r\nThread(s) per core: 1\r\nCore(s) per socket: 16\r\nSocket(s): 1\r\nNUMA node(s): 1\r\nVendor ID: GenuineIntel\r\nCPU family: 6\r\nModel: 85\r\nModel name: Intel(R) Xeon(R) Gold 5218N CPU @ 2.30GHz\r\nStepping: 7\r\nCPU MHz: 2294.609\r\nBogoMIPS: 4589.21\r\nHypervisor vendor: VMware\r\nVirtualization type: full\r\nL1d cache: 512 KiB\r\nL1i cache: 512 KiB\r\nL2 cache: 16 MiB\r\nL3 cache: 22 MiB\r\nNUMA node0 CPU(s): 0-15\r\nVulnerability Itlb multihit: KVM: Mitigation: VMX unsupported\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\r\nVulnerability Retbleed: Mitigation", "url": "https://github.com/pytorch/pytorch/issues/102966", "state": "open", "labels": [ "module: build", "triaged", "module: vulkan", "ciflow/periodic" ], "created_at": "2023-06-05T09:53:28Z", "updated_at": "2023-09-12T00:19:52Z", "user": "ldfandian" }, { "repo": "pytorch/pytorch", "number": 102939, "title": "Not sure what is wrong, ", "body": "### \ud83d\udc1b Describe the bug\n\nIt was working the last time I ran it, I ran an update and now i'm getting this when trying to train a lora\r\n\r\n===================================BUG REPORT===================================\r\nWelcome to bitsandbytes. 
For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues\r\nFor effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link\r\n================================================================================\r\nCUDA SETUP: Loading binary C:\\Users\\newpc_53bcer\\Documents\\Lora\\kohya_ss\\venv\\lib\\site-packages\\bitsandbytes\\libbitsandbytes_cuda116.dll...\r\nuse 8-bit AdamW optimizer | {}\r\nrunning training / \u5b66\u7fd2\u958b\u59cb\r\n num train images * repeats / \u5b66\u7fd2\u753b\u50cf\u306e\u6570\u00d7\u7e70\u308a\u8fd4\u3057\u56de\u6570: 4700\r\n num reg images / \u6b63\u5247\u5316\u753b\u50cf\u306e\u6570: 0\r\n num batches per epoch / 1epoch\u306e\u30d0\u30c3\u30c1\u6570: 2350\r\n num epochs / epoch\u6570: 1\r\n batch size per device / \u30d0\u30c3\u30c1\u30b5\u30a4\u30ba: 2\r\n total train batch size (with parallel & distributed & accumulation) / \u7dcf\u30d0\u30c3\u30c1\u30b5\u30a4\u30ba\uff08\u4e26\u5217\u5b66\u7fd2\u3001\u52fe\u914d\u5408\u8a08\u542b\u3080\uff09: 2\r\n gradient ccumulation steps / \u52fe\u914d\u3092\u5408\u8a08\u3059\u308b\u30b9\u30c6\u30c3\u30d7\u6570 = 1\r\n total optimization steps / \u5b66\u7fd2\u30b9\u30c6\u30c3\u30d7\u6570: 2350\r\nsteps: 0%| | 0/2350 [00:00<?, ?it/s]\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 C:\\Users\\newpc_53bcer\\Documents\\Lora\\kohya_ss\\train_db.py:477 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 474 \u2502 args = parser.parse_args() \u2502\r\n\u2502 475 \u2502 args = train_util.read_config_from_file(args, parser) \u2502\r\n\u2502 476 \u2502 \u2502\r\n\u2502 \u2771 477 \u2502 train(args) \u2502\r\n\u2502 478 \u2502\r\n\u2502 \u2502\r\n\u2502 C:\\Users\\newpc_53bcer\\Documents\\Lora\\kohya_ss\\train_db.py:245 in train \u2502\r\n\u2502 \u2502\r\n\u2502 242 \u2502 ) \u2502\r\n\u2502 243 \u2502 \u2502\r\n\u2502 244 \u2502 if accelerator.is_main_process: \u2502\r\n\u2502 \u2771 245 \u2502 \u2502 accelerator.init_trackers(\"dreambooth\" if args.log_tracker_name is None else arg \u2502\r\n\u2502 246 \u2502 \u2502\r\n\u2502 247 \u2502 loss_list = [] \u2502\r\n\u2502 248 \u2502 loss_total = 0.0 \u2502\r\n\u2502 \u2502\r\n\u2502 C:\\Users\\newpc_53bcer\\Documents\\Lora\\kohya_ss\\venv\\lib\\site-packages\\accelerate\\accelerator.py:5 \u2502\r\n\u2502 48 in _inner \u2502\r\n\u2502 \u2502\r\n\u2502 545 \u2502 \u2502 \u2502 \u2502 ) \u2502\r\n\u2502 546 \u2502 \u2502 \u2502\r\n\u2502 547 \u2502 \u2502 def _inner(*args, **kwargs): \u2502\r\n\u2502 \u2771 548 \u2502 \u2502 \u2502 return PartialState().on_main_process(function)(*args, **kwargs) \u2502\r\n\u2502 549 \u2502 \u2502 \u2502\r\n\u2502 550 \u2502 \u2502 return _inner \u2502\r\n\u2502 551 \u2502\r\n\u2502 \u2502\r\n\u2502 C:\\Users\\newpc_53bcer\\Documents\\Lora\\kohya_ss\\venv\\lib\\site-packages\\accelerate\\accelerator.py:2 \u2502\r\n\u2502 031 in init_trackers \u2502\r\n\u2502 \u2502\r\n\u2502 2028 \u2502 \u2502 \u2502 \u2502 if getattr(tracker_init, \"requires_logging_directory\"): \u2502\r\n\u2502 2029 \u2502 \u2502 \u2502 \u2502 
\u2502 # We can skip this check since it was done in `__init__` \u2502\r\n\u2502 2030 \u2502 \u2502 \u2502 \u2502 \u2502 self.trackers.append( \u2502\r\n\u2502 \u2771 2031 \u2502 \u2502 \u2502 \u2502", "url": "https://github.com/pytorch/pytorch/issues/102939", "state": "closed", "labels": [], "created_at": "2023-06-04T23:13:41Z", "updated_at": "2023-06-05T15:28:14Z", "user": "NeVeREire" }, { "repo": "pytorch/data", "number": 1177, "title": "what is the right way to serialize DataLoader2 so that pipeline with shuffle can resume from the right place?", "body": "### \ud83d\udc1b Describe the bug\n\nI tried all these versions, the only version that worked was the last one, but it's too hacky. Is there a better way? \r\n\r\n```py\r\n dp = IterableWrapper(list(range(20)))\r\n dp = dp.shuffle()\r\n items = []\r\n rs = InProcessReadingService()\r\n dl = DataLoader2(dp, reading_service=rs)\r\n iter1 = iter(dl)\r\n for _ in range(4):\r\n next(iter1)\r\n\r\n # 16 elements left in dl\r\n state = dl.state_dict()\r\n dl2 = DataLoader2.from_state(state, reading_service=rs)\r\n # assert len(list(dl2)) == 20 - 4 # got 20\r\n\r\n dp2 = deserialize_datapipe(serialize_datapipe(dl.datapipe))\r\n # assert len(list(dp2)) == 20 - 4 # got 20\r\n\r\n dp3 = deserialize_datapipe(serialize_datapipe(dl.datapipe))\r\n _simple_graph_snapshot_restoration(dp3, dp3._number_of_samples_yielded)\r\n ret3 = list(dp3)\r\n assert len(ret3) == 20 - 4\r\n # but content is not the same\r\n\r\n dl4 = DataLoader2.from_state(state, reading_service=rs)\r\n _simple_graph_snapshot_restoration(dl4.datapipe, dl.datapipe._number_of_samples_yielded)\r\n ret4 = list(dl4)\r\n assert len(ret4) == 20 - 4\r\n # but content is not the same\r\n\r\n dp5 = deserialize_datapipe(serialize_datapipe(dl.datapipe))\r\n pipes = get_all_pipes(dp5)\r\n for pipe in pipes:\r\n if isinstance(pipe, ShufflerIterDataPipe):\r\n buffer_cache = pipe._buffer[:]\r\n assert len(buffer_cache) == 20 - 4\r\n rng_state = pipe._rng.getstate()\r\n _simple_graph_snapshot_restoration(dp5, dl.datapipe._number_of_samples_yielded)\r\n dp5._buffer = buffer_cache[:]\r\n dp5._rng.setstate(rng_state)\r\n it5 = iter(dp5)\r\n ret5 = list(it5)\r\n assert len(ret5) == 20 - 4\r\n\r\n expected = list(iter1)\r\n # ret5 is the only method that worked\r\n # assert ret3 == expected\r\n # assert ret4 == expected\r\n assert ret5 == expected\r\n\r\n```\n\n### Versions\n\n```\r\nPyTorch version: 2.0.0a0+gite9ebda2\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.0\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.3 LTS (x86_64)\r\nGCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0\r\nClang version: 12.0.1 (https://github.com/conda-forge/clangdev-feedstock d44358f44aef33e9fa7c5f93e2481ee8f1a04ab6)\r\nCMake version: version 3.19.1\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)\r\nPython platform: Linux-5.4.0-64-generic-x86_64-with-glibc2.10\r\nIs CUDA available: False\r\nCUDA runtime version: 12.0.140\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: False\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy-extensions==1.0.0\r\n[pip3] mypy-protobuf==3.3.0\r\n[pip3] numpy==1.23.5\r\n[pip3] pytorch3d==0.6.2\r\n[pip3] torch==2.0.1+1684801906.cuda120.cudnn891.nccl218.ap\r\n[pip3] torch-mlir==1684442443\r\n[pip3] 
torch-scatter==2.1.0\r\n[pip3] torch-tb-profiler==0.4.1\r\n[pip3] torchdata==0.7.0.dev20230601\r\n[pip3] torchfile==0.1.0\r\n[pip3] torchvision==0.15.1a0+42759b1\r\n[conda] magma-cuda121 2.6.1 1 pytorch\r\n[conda] mkl 2020.4 h726a3e6_304 conda-forge\r\n[conda] mkl-include 2023.1.0 h84fe81f_48680 conda-forge\r\n[conda] numpy 1.23.5 py38h7042d01_0 conda-forge\r\n[conda] pytorch3d 0.6.2 pypi_0 pypi\r\n[conda] torch 2.0.1+1684801906.cuda120.cudnn891.nccl218.ap pypi_0 pypi\r\n[conda] torch-mlir 1684442443 pypi_0 pypi\r\n[conda] torch-scatter 2.1.0 pypi_0 pypi\r\n[conda] torch-tb-profiler 0.4.1 pypi_0 pypi\r\n[conda] torchfile 0.1.0 pypi_0 pypi\r\n[conda] torchvision 0.15.1a0+42759b1 pypi_0 pypi\r\n```", "url": "https://github.com/meta-pytorch/data/issues/1177", "state": "open", "labels": [], "created_at": "2023-06-02T06:52:14Z", "updated_at": "2023-06-08T17:31:18Z", "comments": 2, "user": "zhengwy888" }, { "repo": "pytorch/pytorch", "number": 102718, "title": "How to support AMD GPU on Mac", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nMy computer is running macOS, with intel9900k cpu and amd Rx6600xt gpu. \r\nCan I build to support this gpu?\r\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/102718", "state": "closed", "labels": [], "created_at": "2023-06-01T09:03:42Z", "updated_at": "2024-06-21T14:05:02Z", "user": "Aiden-Dong" }, { "repo": "pytorch/benchmark", "number": 1707, "title": "How to execute with docker?", "body": "I'm using ARG BASE_IMAGE=ghcr.io/pytorch/torchbench:latest \r\nbut I am having problems with this container.\r\nor should use ghcr.io/pytorch:pytorch-nightly or [ghcr.io/pytorch:pytorch-nightly](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch)", "url": "https://github.com/pytorch/benchmark/issues/1707", "state": "closed", "labels": [], "created_at": "2023-06-01T07:43:32Z", "updated_at": "2023-06-13T03:31:41Z", "user": "johnnynunez" }, { "repo": "pytorch/data", "number": 1175, "title": "Mux with MPRS causes operations after sharding_round_robin_dispatcher to run on the same worker", "body": "### \ud83d\udcda The doc issue\n\nThis doesn't seem to be mentioned in the docs, but if you have two datapipes that use `sharding_round_robin_dispatcher` and then `mux` them together:\r\n1. Any steps between `sharding_round_robin_dispatcher` and `mux` will take place on the same worker process.\r\n2. Only the steps after the `mux` will take place on separate workers.\r\n\r\nFor example, with the below graph, the `Mapper` nodes in between the `ShardingRoundRobinDispatcher` nodes and `Multiplexer` run on the same worker process. 
The `Mapper` node after `Multiplexer` will run across multiple processes as they're fed data in a round-robin fashion.\r\n![image](https://github.com/pytorch/data/assets/3038603/2b59dbbf-0068-4253-bac7-1a1b27962eaf)\r\n\r\nMy incorrect expectation was that the dispatching process would distribute data to worker processes immediately after `sharding_round_robin_dispatch` as usual, and then everything after `mux` would take place on either one or multiple worker processes.\r\n\n\n### Suggest a potential alternative/fix\n\nThe documentation for `Multiplexer`, `ShardingRoundRobinDispatcher`, and/or `MultiProcessingReadingService` should be updated to clarify what the intended behavior is here.", "url": "https://github.com/meta-pytorch/data/issues/1175", "state": "open", "labels": [], "created_at": "2023-05-30T20:36:43Z", "updated_at": "2023-05-31T07:48:21Z", "comments": 3, "user": "JohnHBrock" }, { "repo": "pytorch/data", "number": 1174, "title": "Support for proper Distributed & Multiprocessing Sharding", "body": "### \ud83d\ude80 The feature\r\n\r\nIn MPI-based training, each process is independent from each other. Each training process might want to speed up dataloading using multiprocessing (MP). This requires data sharding to take place on two levels:\r\n\r\nA. On a distributed level, usually resulting in big(ger) shards.\r\nB. On a MP level later on, further splitting those big shards among worker processes.\r\n\r\nWhile (A.) might potentially shard on a coarser, logical scale (e.g. on years or months if working with climatological data), (B.) might potentially shard directly on already loaded data (e.g. on indices of the previous shards).\r\n\r\nRight now, combining distributed & MP sharding in torchdata faces two hurdles that need addressing:\r\n\r\n1. Due to optional check in , there can only be a single `sharding_pipe()`. This check however does not take into account if a sharding pipe only operates on a specific sharding group / priority. This issue is already tracked by https://github.com/pytorch/data/issues/1082. A simple fix for this is to drop the check all together.\r\n2. torchdata assumes a single sharding (and distribution) model: Namely that distributed & MP shards are on the same logical level and that those are distributed in a round-robin fashion to worker processes. This is enforced in https://github.com/pytorch/data/blame/main/torchdata/dataloader2/utils/worker.py#L82 which prevents more general sharding strategies.\r\n\r\nOverall, these two hurdles need addressing via monkey patching at the moment to enable more general sharding strategies (see motivation for an use case and example of such a strategy). https://github.com/sehoffmann/atmodata/blob/6a7c2974a5de1354a7156d427bf53899fc6c0177/atmodata/patching.py shows what patches need to be done.\r\nSpecifically:\r\n- The check in `apply_sharding()` needs to be removed\r\n- `process_init_fn()` should call `apply_sharding()` on the whole pipe, not only on non-dispatching branches.\r\n- `pipe.repeat(n_workers).sharding_round_robin_dispatch()` needs to be used as a workaround to distribute the same shard to all workers. For this, an additional pipe should be introduced (just `dispatch()`).\r\n\r\nInstead of having to monkey-patch, torchdata should be less restrictive wrt. sharding and distribution strategies.\r\n\r\n### Motivation, pitch\r\n\r\nI'm working with climatological timeseries data on the terabyte scale. 
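Since the graph image attached to the round-robin dispatch report above (pytorch/data#1175) is not visible in text form, here is a sketch of the topology it describes, built from the documented datapipes. It is only meant to make the reported placement concrete: the `double` step sits between `sharding_round_robin_dispatch` and `mux`, which the report says runs on the dispatching process, while `plus_one` after `mux` is distributed across workers. The ranges and functions are illustrative:

```python
from torchdata.datapipes.iter import IterableWrapper
from torchdata.dataloader2 import DataLoader2, MultiProcessingReadingService
from torch.utils.data.datapipes.iter.sharding import SHARDING_PRIORITIES

def double(x):    # between dispatch and mux -> reportedly runs on the dispatching process
    return x * 2

def plus_one(x):  # after mux -> distributed round-robin to the worker processes
    return x + 1

def branch(start):
    return (
        IterableWrapper(range(start, start + 10))
        .sharding_round_robin_dispatch(SHARDING_PRIORITIES.MULTIPROCESSING)
        .map(double)
    )

pipe = branch(0).mux(branch(100)).map(plus_one)

dl = DataLoader2(pipe, reading_service=MultiProcessingReadingService(num_workers=2))
print(list(dl))
```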
The sharding strategy and MP strategy that, in my humble opinion, makes the most sense for this use case looks like this:\r\n\r\n1. Shard (distributed) across the time-dimension on a logical level. Single shards could e.g. represent a single month, be contained in a single file, and be multiple gigabytes in size. These shards are pre loaded by the main process via network and in parallel.\r\n2. The **same** shard is distributed to each worker process via shared memory (to reduce memory overhead). E.g. each worker process sees the same shard/month. Now this \"super-shard\" is sharded further among worker processes by accessing only a subset of the indices. The time-resolution could e.g. be 1h.\r\n3. Batches from individual workers are aggregated by the main thread again.\r\n\r\nOverall, this pipelines roughly looks like this:\r\n\r\n```\r\n# Main Thread - Pre-loading\r\nmonths = IterableWrapper([\"1979-Jan\", \"1979-Feb\", ..., \"2020-Dec\"])\r\npipe = months.shuffle().sharding_filter(DISTRIBUTED)\r\npipe = pipe.load_data().prefetch()\r\npipe = pipe.repeat(n_workers).round_robin_dispatch()\r\n\r\n# Worker Process\r\npipe = pipe.unroll_indices() # -> yields (idx, data) tuples where data is the whole shard and idx are akin to enumerate()\r\npipe = pipe.shuffle().sharding_filter(MULTIPROCESSING)\r\npipe = pipe.do_work_on_sample()\r\npipe = pipe.batch()\r\n\r\n# Main Thread - Post-process\r\npipe = pipe.non_replicable() # non-replicable No-Op pipeline to force transfer to main thread\r\npipe = pipe.post_process()\r\n```\r\n\r\n#### Why can't individual worker processes operate independently on the same shards as in (1.), i.e. months?\r\nShards can be fairly big in size. If every worker would operate on independent shards then memory consumption might explode. Furthermore, worker processes might compete for shared network IO bandwidth. Also, depending on the shard size, there are potentially not that many shards in the dataset. This would then imposes a maximum on the number of GPUs for training.\r\n\r\n#### Why can't you reduce the shard size then? E.g. weeks instead of months\r\nWe are cropping timeseries from those shards. We thus always have some data waste at the end (or start) of each shard from which we can't crop. Reducing the shard size would increase the amount of data we would need to throw away. Furthermore, loading a few big shards via network is much more efficient than loading many small shards, and we want to utilize our network interface as much as possible for maximum throughput.\r\n\r\n#### Why can't you shard directly on index level and then distribut in a round-robin fashion?\r\nThis would be horrendously slow.\r\n\r\nOverall, the difficulties with this kind ", "url": "https://github.com/meta-pytorch/data/issues/1174", "state": "open", "labels": [], "created_at": "2023-05-30T16:33:59Z", "updated_at": "2023-05-30T16:40:35Z", "comments": 0, "user": "sehoffmann" }, { "repo": "pytorch/tutorials", "number": 2355, "title": "\ud83d\udca1 [REQUEST] - Write a tutorial about how to leverage AMX with PyTorch on the 4th Gen of Xeon", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nThe 4th Generation Intel\u00ae Xeon\u00ae Scalable Processor platform is an unique, scalable platform optimized for different workloads acceleration on AI. 
The new built-in AI acceleration engine, Intel\u00ae Advanced Matrix Extensions (AMX) is able to accelerate a variety of AI Inference and Training workloads (NLP, recommendation systems, image recognition\u2026) with BF16 and INT8 datatype.\r\n\r\nPyTorch has enabled AMX support for computation intensive operators, e.g. Conv2d, ConvTranspose2d, Linear, MatMul, bmm with `torch.bfloat16` datatype and int8 on the quantization backend. It is better to write a tutorial to tell users how to leverage AMX on PyTorch.\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\nWe aim to complete the document as part of PyTorch Docathon 2023. cc @jgong5 @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ZailiWang @ZhaoqiongZ @leslie-fang-intel @Xia-Weiwen @sekahler2 @CaoE @zhuhaozhe @Valentine233 @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen @caoe", "url": "https://github.com/pytorch/tutorials/issues/2355", "state": "closed", "labels": [ "docathon-h1-2023", "advanced", "intel" ], "created_at": "2023-05-30T03:02:23Z", "updated_at": "2023-11-02T19:30:05Z", "user": "mingfeima" }, { "repo": "pytorch/android-demo-app", "number": 322, "title": "I have a Whisper-based model. How can I convert it to fairseq.dict format ?", "body": "model https://huggingface.co/openai/whisper-large-v2", "url": "https://github.com/pytorch/android-demo-app/issues/322", "state": "open", "labels": [], "created_at": "2023-05-29T08:52:30Z", "updated_at": "2023-05-29T09:00:13Z", "user": "Roland-Du" }, { "repo": "pytorch/tutorials", "number": 2352, "title": "\ud83d\udca1 [REQUEST] - Port TorchRL `Pendulum` tutorial from pytorch.org/rl to pytorch.org/tutorials", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nFor historical reasons, TorchRL privately hosts a bunch of tutorials.\r\nWe'd like to bring the most significant ones to pytorch tutorials for more visibility.\r\n\r\nHere is the [tutorial](https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/pendulum.py).\r\n\r\nEnvironments (or simulators) are a core part of many RL algorithms. The OpenAI Gym API has had a great success in the past years and paved the way for RL researchers to quickly test ideas with an easy-to-use tool.\r\nAs a PyTorch-first library, torchrl aims at being (1) oblivious to the simulator (gym or other), (2) rely on pytorch for anything we can in the simulation process, (3) a good integration within the library and (4) a coverage of many different types of environments (simulators, real-life hardware, model-based, RLHF etc). For these reasons, TorchRL propose its own class of environments. We have a dedicated tutorial that covers their design and usage: you can help us port it where it belongs!\r\n\r\nSteps:\r\n\r\n1. Port the tutorial from the RL repo to the tutorials repo.\r\n2. Fix any formatting issues or typos.\r\n3. Make sure the tutorial follows the tutorial template ([template_tutorial.py](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py))\r\n4. 
Preserve the original author\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\nThe tutorial should not require extra dependencies beyond those already present in requirements.txt\n\ncc @nairbv @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ZailiWang @ZhaoqiongZ @leslie-fang-intel @Xia-Weiwen @sekahler2 @CaoE @zhuhaozhe @Valentine233", "url": "https://github.com/pytorch/tutorials/issues/2352", "state": "closed", "labels": [ "medium", "docathon-h2-2023" ], "created_at": "2023-05-26T19:50:31Z", "updated_at": "2023-11-09T20:47:06Z", "comments": 4, "user": "vmoens" }, { "repo": "pytorch/tutorials", "number": 2351, "title": "\ud83d\udca1 [REQUEST] - Port TorchRL \"Coding a DDPG loss\" from pytorch.org/rl to pytorch.org/tutorials", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nFor historical reasons, TorchRL privately hosts a bunch of tutorials.\r\nWe'd like to bring the most significant ones to pytorch tutorials for more visibility.\r\n\r\nHere is the [tutorial](https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/coding_ddpg.py).\r\nTorchRL splits down what is commonly referred to as Agents in other frameworks into various pieces that echo what can be found in other domains: data collection, datasets, transforms and losses. A dedicated class named LossModule covers this last functionality. We have a tutorial that instructs users on how to build and use such classes, you can help us port it to pytorch tutorials!\r\n\r\nSteps:\r\n\r\n1. Port the tutorial from the RL repo to the tutorials repo.\r\n2. Fix any formatting issues or typos.\r\n3. Make sure the tutorial follows the tutorial template ([template_tutorial.py](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py))\r\n4. Preserve the original author\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\nThe tutorial should not require extra dependencies beyond those already present in requirements.txt\n\ncc @nairbv", "url": "https://github.com/pytorch/tutorials/issues/2351", "state": "closed", "labels": [ "docathon-h1-2023", "medium" ], "created_at": "2023-05-26T19:45:04Z", "updated_at": "2023-06-13T16:15:45Z", "comments": 2, "user": "vmoens" }, { "repo": "pytorch/tutorials", "number": 2350, "title": "~PyTorch Docathon H1 2023~", "body": "# \ud83c\udf89 It's a wrap! \ud83c\udf89\r\n\r\nSee our [leaderboard](https://github.com/pytorch/tutorials/blob/main/docathon-leaderboard.md) and [blog post](https://pytorch.org/blog/docathon-h1-2023-wrap-up/). Thank you to everyone who contributed and congrats to the winners! \r\n\r\nWe have a large backlog of issues that we want to address and it's a great opportunity for you to start contributing to PyTorch. We have limited this docathon to the [pytorch/tutorials](https://github.com/pytorch/tutorials) and [pytorch/examples](https://github.com/pytorch/examples) repositories, so please work on the issues from these two repositories.\r\n\r\n# Date and location \r\n**WHEN:** The docathon starts on May 31st 10 AM PST. Please do not work on tasks until then. 
We will continue accepting new submissions until 5 PM PST on June 11th.\r\n**WHERE:** Virtual\r\n**WHAT:** Issues with the **docathon-h1-2023** label - will be posted on May 31.\r\n\r\nWatch our intro video to learn more details about the event.\r\n\r\n[![Watch the docathon intro](https://github-production-user-asset-6210df.s3.amazonaws.com/5317992/242342554-2a0d5489-0f16-4db0-b3c7-67a9ada9abe6.png)](https://youtu.be/qNAZtYowAM0)\r\n\r\n# Can everyone participate?\r\n\r\nWe encourage everyone to consider participating in the docathon but there are a few things we expect from the participants:\r\n\r\n- You must have a GitHub account and know how to use Git and GitHub, how to submit or rebase your PR on the latest main branch, how to fork or clone the repo, how to view errors in the CI and troubleshoot. We reserve the right to reject incorrectly submitted PRs.\r\n- You must be familiar with Python, the basics of Machine Learning, and have at least a basic knowledge of PyTorch. Familiarity with Sphinx, sphinx-gallery, and reStructuredText is a plus.\r\n\r\nBefore you start contributing make sure to read [Linux Foundation Code of Conduct](https://events.linuxfoundation.org/about/code-of-conduct/).\r\n\r\n# What contributions are we looking for?\r\n\r\nAll issues for this docathon are tagged with the **docathon-h1-2023** label. Please note that contributions that address other issues won't be counted. We are primarily looking for the following contributions: \r\n\r\n**NOTE:** Please avoid working on issues with **intel**, **amd**, and **nvidia** labels which are reserved for our partners.\r\n\r\n- Bug fixes in the [pytorch/tutorials](https://github.com/pytorch/tutorials) repo tagged with the docathon-h1-2023 label - see [the list](https://github.com/pytorch/tutorials/issues?q=is%3Aopen+is%3Aissue+label%3Adocathon-h1-2023).\r\n- New examples in the [pytorch/examples](https://github.com/pytorch/examples) repo tagged with the docathon-h1-2023 label - see [the issue](https://github.com/pytorch/examples/issues?q=is%3Aopen+is%3Aissue+label%3Adocathon-h1-2023). \r\n\r\n**NOTE:** Due to the large number of RSVPs, the tasks are provided on a first come first serve basis \u2014 please don't hoard the tasks!\r\n\r\n# Difficulty Levels\r\n\r\nThe issues have three levels of difficulty: **easy**, **medium**, and **advanced**. If this is your first time contributing to PyTorch, we recommend that you start with an issue that is tagged as **easy**.\r\n\r\n# How to contribute to tutorials?\r\n\r\n1. Read [pytorch/tutorials/CONTRIBUTING.md](https://github.com/pytorch/tutorials/blob/main/CONTRIBUTING.md) for general guidelines on how the submission process works and overall style and voice. \r\n2. Pick an issue that is labeled as **docathon-h1-2023**. \r\n3. In the issue, add a comment with the text /assigntome. If the issue is already assigned, please find another issue to work on. We ask that you assign one issue at a time - we want to give everyone a fair chance to participate. When you are done with one issue and get it approved, you can assign another one to yourself and start working on it.\r\n4. If you are submitting a new tutorial, use [this template](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py).\r\n5. Fork or clone the PyTorch repository to your computer. For simple fixes, like incorrect URLs, you could use the GitHub UI as well.\r\n6. Create a branch and work on the fix.\r\n7. Test your fix by running the single tutorial locally. 
Don't run the whole build as it takes hours and requires a GPU. You can run one tutorial as a script python3 <tutorial-name.py> or GALLERY_PATTERN=\"neural_style_transfer_tutorial.py\" make html\r\n8. After you fix all the issues, you are ready to submit your PR.\r\n\r\n# Submit Your PR\r\n\r\n1. Submit your PR referencing the issue you've picked. For example:\r\n\r\n<img width=\"1058\" alt=\"s_pytorch_pr_example\" src=\"https://github.com/pytorch/tutorials/assets/5317992/f838571a-83d0-4908-94b6-3f7e3b200825\">\r\n \r\n3. If you have not yet, sign the Contributor License Agreement (CLA) - prompted as a check in the PR. We can't accept any PRs without a signed CLA.\r\n4. Watch for any CI errors and fix as needed - all checks must pass successfully. \r\n5. There are two ways to check the resulting HTML. For simple fixes and .rst files, you can check the ", "url": "https://github.com/pytorch/tutorials/issues/2350", "state": "closed", "labels": [ "docathon-h1-2023" ], "created_at": "2023-05-26T19:09:32Z", "updated_at": "2023-06-20T18:59:49Z", "comments": 14, "user": "svekars" }, { "repo": "pytorch/tutorials", "number": 2349, "title": "\ud83d\udca1 [REQUEST] - Port TorchRL `Recurrent DQN` tutorial from pytorch.org/rl to pytorch.org/tutorials", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\r\n\r\nFor historical reasons, TorchRL privately hosts a bunch of tutorials.\r\nWe'd like to bring the most significant ones to pytorch tutorials for more visibility.\r\n\r\nHere is the [tutorial](https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/dqn_with_rnn.py).\r\nIn RL, we often add a RNN to a model to account for past observations when executing a policy. This of it as this: if your policy just sees a single image when playing a computer game, it will have little context about what is really happening there. If you keep a memory of past events, your performance will drastically improve. \r\nThis is useful not only in the context of Partially Observable MDPs but more broadly than that.\r\n\r\nStoring recurrent values can be tricky, and torchrl brings its own solution to this problem. This tutorial explains this.\r\n\r\nSteps:\r\n1. Port the tutorial from the RL repo to the tutorials repo.\r\n2. Fix any formatting issues or typos.\r\n3. Make sure the tutorial follows the tutorial template ([template_tutorial.py](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py))\r\n4. 
Preserve the original author\r\n\r\n### Existing tutorials on this topic\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\nThe tutorial should not require extra dependencies beyond those already present in requirements.txt.\n\ncc @nairbv @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/2349", "state": "closed", "labels": [ "medium", "docathon-h2-2023" ], "created_at": "2023-05-26T16:27:51Z", "updated_at": "2023-11-08T16:40:10Z", "comments": 4, "user": "vmoens" }, { "repo": "pytorch/tutorials", "number": 2347, "title": "\ud83d\udca1 [REQUEST] - Tutorial on extending TorchX", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\r\n\r\nCreate a better tutorial showing how to extend torchx.\r\n\r\n### Existing tutorials on this topic\r\n\r\nhttps://pytorch.org/torchx/latest/custom_components.html\r\n\r\n### Additional context\r\n\r\n_No response_\n\ncc @msaroufim @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/2347", "state": "open", "labels": [ "advanced", "module: torchx", "docathon-h2-2023" ], "created_at": "2023-05-25T22:32:28Z", "updated_at": "2023-11-19T17:51:58Z", "comments": 12, "user": "sekyondaMeta" }, { "repo": "pytorch/tutorials", "number": 2346, "title": "\ud83d\udca1 [REQUEST] - How to use TorchServe on Vertex", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nCreate a tutorial on how to use TorchServe on Vertex AI\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @msaroufim @agunapal @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/2346", "state": "closed", "labels": [ "torchserve", "advanced", "docathon-h2-2023" ], "created_at": "2023-05-25T19:54:42Z", "updated_at": "2023-11-15T00:29:15Z", "user": "sekyondaMeta" }, { "repo": "pytorch/tutorials", "number": 2345, "title": "\ud83d\udca1 [REQUEST] - How to use TorchServe on AWS SageMaker", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nCreate a tutorial on how to use TorchServe on AWS SageMaker\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @msaroufim @agunapal @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/2345", "state": "open", "labels": [ "torchserve", "advanced", "docathon-h2-2023" ], "created_at": "2023-05-25T19:53:36Z", "updated_at": "2023-11-09T23:01:20Z", "user": "sekyondaMeta" }, { "repo": "pytorch/tutorials", "number": 2341, "title": "\ud83d\udca1 [REQUEST] - How to use TorchServe Large Model Inference: walk through an example", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\r\n\r\nCreate a new tutorial showing a walk through example of TorchServe Large Model Inference\r\n\r\n### Additional context\r\n\r\nYou can find some content to use here:\r\nhttps://github.com/pytorch/serve/blob/master/docs/large_model_inference.md\r\nhttps://github.com/pytorch/serve/tree/master/examples/large_models/Huggingface_pippy\r\n\r\n\r\ncc @msaroufim @agunapal @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/2341", "state": "open", "labels": [ "torchserve", "advanced", "docathon-h2-2023" ], "created_at": "2023-05-24T20:39:18Z", "updated_at": "2023-11-01T16:48:43Z", "user": "sekyondaMeta" }, { "repo": "pytorch/tutorials", 
"number": 2340, "title": "\ud83d\udca1 [REQUEST] - How to use TorchServe: Walk through an example", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nWe could use an updated tutorial/walk through example on how to use TorchServe. The closest thing we have is the TorchServe Getting Started page located [here](https://github.com/pytorch/serve/blob/master/docs/getting_started.md).\r\n\n\n### Existing tutorials on this topic\n\nTorchServe Getting started: https://github.com/pytorch/serve/blob/master/docs/getting_started.md\n\n### Additional context\n\n_No response_\n\ncc @msaroufim @agunapal @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/2340", "state": "open", "labels": [ "torchserve", "advanced", "docathon-h2-2023" ], "created_at": "2023-05-24T20:20:52Z", "updated_at": "2023-11-06T20:14:07Z", "user": "sekyondaMeta" }, { "repo": "pytorch/xla", "number": 5063, "title": "How can I use the flash attention in pytorch/xla GPU mode?", "body": "## \u2753 Questions and Help\r\nHello, [Flash Attention](https://arxiv.org/abs/2205.14135) is a method to produce tiled and fused kernels such that the tiled parameters can fit onto the device SRAM.\r\n\r\nMay I ask to what degree this technique has been applied to pytorch/XLA?\r\n\r\nAnd How do I use the `flash attention` library in Pytorch/XLA GPU mode?\r\n\r\nAnd How do I use the similar third_party custom operators libraries?\r\n\r\nThanks.\r\n\r\nResources\r\n\r\nTriton [example implementation](https://github.com/openai/triton/blob/main/python/tutorials/06-fused-attention.py)\r\nhttps://github.com/HazyResearch/flash-attention\r\nhttps://github.com/lucidrains/flash-attention-jax", "url": "https://github.com/pytorch/xla/issues/5063", "state": "closed", "labels": [ "question" ], "created_at": "2023-05-24T08:42:40Z", "updated_at": "2025-04-30T13:04:03Z", "user": "wbmc" }, { "repo": "pytorch/tutorials", "number": 2336, "title": "\ud83d\udca1 [REQUEST] - Write a Tutorial for PyTorch 2.0 Export Quantization Frontend (Quantizer and Annotation API)", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nIn PyTorch 2.0, we have a new quantization path that is built on top of the graph captured by torchdynamo.export, see an example flow here: https://github.com/pytorch/pytorch/blob/main/test/quantization/pt2e/test_quantize_pt2e.py#L907, it requires backend developers to write a quantizer, we have an existing quantizer object defined for QNNPack/XNNPack here: https://github.com/pytorch/pytorch/blob/main/torch/ao/quantization/_pt2e/quantizer/qnnpack_quantizer.py#L176.\r\n\r\nThe API that quantizer is interfacing with is called Annotation API, and we just finished design and implementation (WIP as of 05/22, but should be done this week) of this API, and would like to have a tutorial that walks through how to annotate nodes using this API.\r\n\r\nDesign Doc for Annotation API: https://docs.google.com/document/d/1tjIsL7-uVgm_1bv_kUK7iovP6G1D5zcbzwEcmYEG2Js/edit# please ping @jerryzh168 for access.\r\n\r\nGeneral Design Doc for the quantization path in pytorch 2.0: https://docs.google.com/document/d/1_jjXrdaPbkmy7Fzmo35-r1GnNKL7anYoAnqozjyY-XI/edit#\r\n\r\n\r\nWhat should the tutorial contain:\r\n1. overall introduction for pytorch 2.0 export flow, quantizer and annotation API\r\n2. 
how to annotate common operator patterns (https://docs.google.com/document/d/1tjIsL7-uVgm_1bv_kUK7iovP6G1D5zcbzwEcmYEG2Js/edit#heading=h.it9h4gjr7m9g), maybe use add as an example instead since bias is not properly handled in the example\r\n3. how to annotate sharing qparams operators, e.g. cat or add with two inputs sharing quantization parameters\r\n4. how to annotate fixed qparams operators, e.g. sigmoid (https://github.com/pytorch/pytorch/blob/main/torch/ao/quantization/backend_config/_common_operator_config_utils.py#L74)\r\n5. how to annotate bias for linear (DerivedQuantizationSpec)\r\n6. put everything together and play around with a toy model and check the output quantized model (after convert_pt2e)\r\n\n\n### Existing tutorials on this topic\n\nThe most relevant tutorial that we have written (by @andrewor14 ) is this:\r\n* https://pytorch.org/tutorials/prototype/backend_config_tutorial.html?highlight=fx%20graph%20mode%20quantization\n\n### Additional context\n\n_No response_\n\ncc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ZailiWang @ZhaoqiongZ @leslie-fang-intel @Xia-Weiwen @sekahler2 @CaoE @zhuhaozhe @Valentine233", "url": "https://github.com/pytorch/tutorials/issues/2336", "state": "closed", "labels": [ "docathon-h1-2023", "advanced", "intel" ], "created_at": "2023-05-22T23:14:04Z", "updated_at": "2023-06-09T23:16:37Z", "comments": 2, "user": "jerryzh168" }, { "repo": "pytorch/xla", "number": 5043, "title": "graceful shutdown on TPU, the proper way to handle SIGINT / SIGTERM in TPU code (using PJRT runtime)?", "body": "## \u2753 Questions and Help\r\n\r\nHi,\r\n\r\nI would like to run some cleanup code (writing a final checkpoint, flushing a logger, etc) to run in the process that has `xm.is_master_ordinal() == True`. I am using the pjrt backend. 
I attempted this:\r\n\r\n```python\r\nif xm.is_master_ordinal():\r\n signal.signal(signal.SIGINT, my_handler)\r\n```\r\n\r\nor to register it for all processes but have the `xm.is_master_ordinal()` test inside the handler.\r\n\r\nUnfortunately, I see the error that a signal handler cannot be registered except on the main thread.\r\n\r\nIs there a recommended way to accomplish graceful shutdown of a training run on TPU?\r\n\r\n```\r\n File \"aiayn/train.py\", line 325, in main\r\n xmp.spawn(_mp_fn, args=(resume_ckpt, hps_overrides), nprocs=None)\r\n File \"/home/henry/miniconda3/envs/aiayn/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 386, in spawn\r\n return pjrt.spawn(fn, nprocs, start_method, args)\r\n File \"/home/henry/miniconda3/envs/aiayn/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py\", line 365, in spawn\r\n _run_multiprocess(spawn_fn, start_method=start_method)\r\n File \"/home/henry/miniconda3/envs/aiayn/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py\", line 92, in wrapper\r\n return fn(*args, **kwargs)\r\n File \"/home/henry/miniconda3/envs/aiayn/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py\", line 322, in _run_multiprocess\r\n replica_results = list(\r\n File \"/home/henry/miniconda3/envs/aiayn/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py\", line 323, in <genexpr>\r\n itertools.chain.from_iterable(\r\n File \"/home/henry/miniconda3/envs/aiayn/lib/python3.8/concurrent/futures/process.py\", line 484, in _chain_from_iterable_of_lists\r\n for element in iterable:\r\n File \"/home/henry/miniconda3/envs/aiayn/lib/python3.8/concurrent/futures/_base.py\", line 619, in result_iterator\r\n yield fs.pop().result()\r\n File \"/home/henry/miniconda3/envs/aiayn/lib/python3.8/concurrent/futures/_base.py\", line 444, in result\r\n return self.__get_result()\r\n File \"/home/henry/miniconda3/envs/aiayn/lib/python3.8/concurrent/futures/_base.py\", line 389, in __get_result\r\n raise self._exception\r\nValueError: signal only works in main thread\r\n```\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/5043", "state": "open", "labels": [ "question", "needs reproduction" ], "created_at": "2023-05-22T19:18:43Z", "updated_at": "2025-04-30T13:13:59Z", "user": "hrbigelow" }, { "repo": "pytorch/xla", "number": 5039, "title": "nightly version/ kaggle tpu", "body": "## \u2753 Questions and Help\r\nHi I installed pytorch xla nightly on kaggle notebook tpu, it was working fine but a week ago it keeps giving this error\r\n[FileNotFoundError: [Errno 2] No such file or directory: 'gsutil']\r\n\r\n\r\n![Opera Snapshot_2023-05-21_120122_www kaggle com](https://github.com/pytorch/xla/assets/81977280/0d704aae-9378-425f-a859-d8a9a898856c)\r\n", "url": "https://github.com/pytorch/xla/issues/5039", "state": "open", "labels": [ "question" ], "created_at": "2023-05-21T09:31:40Z", "updated_at": "2025-04-30T13:17:50Z", "user": "dina-fahim103" }, { "repo": "pytorch/examples", "number": 1153, "title": "Just get a low accuracy of 75.8 with resnet50 on ImageNet", "body": "I train resnet50 on ImageNet with GPUs=8, batchsize=256, learning-rate=0.1, epochs=90, and momentum=0.90.\r\nThe attained top1 accuracy is 75.80, lower than the reported 76.15. The gap is not marginal on the large-scale ImageNet.\r\nWhy does the difference exist? 
", "url": "https://github.com/pytorch/examples/issues/1153", "state": "open", "labels": [], "created_at": "2023-05-19T22:45:33Z", "updated_at": "2023-12-12T04:19:09Z", "comments": 2, "user": "mountain111" }, { "repo": "pytorch/tutorials", "number": 2326, "title": "TorchVision Instance Segmentation Finetuning Tutorial - No module named 'torch._six'", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nThe torch._six module was deprecated and removed from PyTorch starting from version 1.7.0. The code is not working because of that. How can I adjust it to make it work?\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/tutorials/issues/2326", "state": "closed", "labels": [], "created_at": "2023-05-19T14:41:15Z", "updated_at": "2023-08-04T12:00:23Z", "comments": 3, "user": "weronikawiera" }, { "repo": "pytorch/pytorch", "number": 101860, "title": "How to add/save parameters (metadata) to pytorch model", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nWhen I working on pytorch model, its difficult for me to keep variables required to run the model.\r\nIf I can add metadata to my model, I am not required to save parameters separately.\r\n\r\nSo any one knows, how to add metadata to pytorch model? \n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/101860", "state": "closed", "labels": [], "created_at": "2023-05-19T07:20:06Z", "updated_at": "2023-05-20T05:03:08Z", "user": "naseemap47" }, { "repo": "pytorch/xla", "number": 5034, "title": "How to recover from 'Exception in device=TPU:0' sickness without terminating session?", "body": "\r\n\r\n\r\nI ran all cells in the [mnist-training.ipynb](https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/mnist-training.ipynb) colab successfully. However, during execution of the last cell:\r\n\r\n```python\r\ndef _mp_fn(rank, flags):\r\n global FLAGS\r\n FLAGS = flags\r\n torch.set_default_tensor_type('torch.FloatTensor')\r\n accuracy, data, pred, target = train_mnist()\r\n if rank == 0:\r\n # Retrieve tensors that are on TPU core 0 and plot.\r\n plot_results(data.cpu(), pred.cpu(), target.cpu())\r\n\r\nxmp.spawn(_mp_fn, args=(FLAGS,), nprocs=FLAGS['num_cores'],\r\n start_method='fork')\r\n```\r\n\r\nI interrupted execution before it was finished. On trying to restart that cell, I see the following exception. On further experimentation, the only way to recover from this situation is through:\r\n\r\n Runtime -> Manage Sessions -> Terminate Current Session\r\n\r\nand then restart the whole thing.\r\n\r\nThe 'Restart runtime' option does not work, nor does the 'Disconnect and Delete Runtime option'\r\n\r\nWould anyone know of a faster way to recover from this sick state without completely restarting from scratch? 
I've seen several issues posted about this but haven't seen a resolution.\r\n\r\n```\r\nException in device=TPU:0: INTERNAL: From /job:tpu_worker/replica:0/task:0:\r\n2 root error(s) found.\r\n (0) INTERNAL: stream did not block host until done; was already in an error state\r\n\t [[{{node XRTExecute}}]]\r\n\t [[XRTExecute_G12]]\r\n (1) INTERNAL: stream did not block host until done; was already in an error state\r\n\t [[{{node XRTExecute}}]]\r\n0 successful operations.\r\n0 derived errors ignored.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 334, in _mp_start_fn\r\n _start_fn(index, pf_cfg, fn, args)\r\n File \"/usr/local/lib/python3.10/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 328, in _start_fn\r\n fn(gindex, *args)\r\n File \"<ipython-input-5-8e919fc51ff8>\", line 6, in _mp_fn\r\n accuracy, data, pred, target = train_mnist()\r\n File \"<ipython-input-4-0bb5e5cb92ef>\", line 130, in train_mnist\r\n train_loop_fn(para_loader.per_device_loader(device))\r\n File \"<ipython-input-4-0bb5e5cb92ef>\", line 106, in train_loop_fn\r\n xm.get_ordinal(), x, loss.item(), tracker.rate(),\r\nRuntimeError: INTERNAL: From /job:tpu_worker/replica:0/task:0:\r\n2 root error(s) found.\r\n (0) INTERNAL: stream did not block host until done; was already in an error state\r\n\t [[{{node XRTExecute}}]]\r\n\t [[XRTExecute_G12]]\r\n (1) INTERNAL: stream did not block host until done; was already in an error state\r\n\t [[{{node XRTExecute}}]]\r\n0 successful operations.\r\n0 derived errors ignored.\r\n---------------------------------------------------------------------------\r\nProcessExitedException Traceback (most recent call last)\r\n[<ipython-input-5-8e919fc51ff8>](https://localhost:8080/#) in <cell line: 11>()\r\n 9 plot_results(data.cpu(), pred.cpu(), target.cpu())\r\n 10 \r\n---> 11 xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=FLAGS['num_cores'],\r\n 12 start_method='fork')\r\n\r\n2 frames\r\n[/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py](https://localhost:8080/#) in join(self, timeout)\r\n 147 )\r\n 148 else:\r\n--> 149 raise ProcessExitedException(\r\n 150 \"process %d terminated with exit code %d\" %\r\n 151 (error_index, exitcode),\r\n\r\nProcessExitedException: process 0 terminated with exit code 17\r\n```", "url": "https://github.com/pytorch/xla/issues/5034", "state": "closed", "labels": [], "created_at": "2023-05-19T01:32:17Z", "updated_at": "2023-05-19T19:52:59Z", "user": "hrbigelow" }, { "repo": "pytorch/examples", "number": 1151, "title": "How to run rpc/pipeline /main.py on two physical machines?", "body": "I want to run the Resnet on two different machines , how to run the main.py\r\nWhen i change the code by add the follow \r\n`# on rank 0\r\ndist.init_process_group(\r\n backend = \"gloo\",\r\n init_method = 'tcp://172.16.8.196:8864',\r\n rank = 0,\r\n world_size = 2\r\n)\r\n\r\n# on rank 1\r\ndist.init_process_group(\r\n backend = \"gloo\",\r\n init_method = 'tcp://172.16.8.196:8864',\r\n rank = 1,\r\n world_size = 2\r\n)`\r\nIn machine 1/2, the command is python main.py\r\nThen an error occurs, RuntimeError: Socket Timeout.\r\nHow to fix it ? 
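A minimal sketch of the usual two-machine rendezvous pattern behind the question above: run the *same* script on both hosts and pass each host its own rank, instead of hard-coding both `init_process_group` calls in one file. The IP/port are taken from the report as placeholders; the actual `rpc/pipeline` example additionally initializes RPC on top of this process group, which is not shown here.

```python
# Sketch only (not the launcher from pytorch/examples): the same file is started
# on both machines with a different --rank. The init address must point to the
# rank-0 host and the port must be reachable from the other machine; a
# "Socket Timeout" commonly means one side never reached this call or the port
# is blocked by a firewall.
import argparse
import torch.distributed as dist

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--rank", type=int, required=True)          # 0 on machine A, 1 on machine B
    parser.add_argument("--world-size", type=int, default=2)
    parser.add_argument("--init-method", default="tcp://172.16.8.196:8864")  # placeholder from the report
    args = parser.parse_args()

    # Both processes must reach this call within the rendezvous timeout.
    dist.init_process_group(
        backend="gloo",
        init_method=args.init_method,
        rank=args.rank,
        world_size=args.world_size,
    )
    print(f"rank {dist.get_rank()} of {dist.get_world_size()} initialized")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Usage would then be `python main.py --rank 0` on the first machine and `python main.py --rank 1` on the second, both started within the timeout window.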
", "url": "https://github.com/pytorch/examples/issues/1151", "state": "open", "labels": [], "created_at": "2023-05-18T10:54:52Z", "updated_at": "2023-05-18T10:54:52Z", "user": "Unknown-Body" }, { "repo": "pytorch/examples", "number": 1150, "title": "input and output", "body": "I really want to know how to make the format of dataset.I have 30-demension variables as input and 0-1class as output .how can I put it into the SAC model?", "url": "https://github.com/pytorch/examples/issues/1150", "state": "open", "labels": [], "created_at": "2023-05-18T10:18:59Z", "updated_at": "2023-05-18T10:18:59Z", "comments": 0, "user": "luzi560" }, { "repo": "pytorch/xla", "number": 5022, "title": "torch.distributed.reduce vs torch_xla.core.xla_model.all_reduce", "body": "## \u2753 Questions and Help\r\nI am a bit confused here. Can we use torch_xla.core.xla_model.all_reduce in place of torch.distributed.reduce? If, yes\r\nIn torch.distributed.reduce we need a rank destination, how to change that if we use torch_xla.core.xla_model.all_reduce?", "url": "https://github.com/pytorch/xla/issues/5022", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2023-05-17T13:26:02Z", "updated_at": "2025-05-05T12:42:24Z", "user": "RishabhPandit-00" }, { "repo": "pytorch/PiPPy", "number": 801, "title": "How to run the gpt2 example on a single node with four GPU?", "body": "I am trying to reproduce the [gpt2 example](https://github.com/pytorch/PiPPy/tree/main/examples/hf/gpt2) in a single node without slurm for some performance metrics, but the code only provides slurm scripts. How should I modify the code to implement this example in a single node?", "url": "https://github.com/pytorch/PiPPy/issues/801", "state": "open", "labels": [], "created_at": "2023-05-16T11:49:37Z", "updated_at": "2023-05-16T11:49:37Z", "user": "lsder" }, { "repo": "pytorch/TensorRT", "number": 1920, "title": "how to convert itensor to pytorch tensor in torch-tensorrt fx mode?", "body": "Hi:\r\nI'm trying to create engine with custom plugin using torch-tensorrt fx. How do I convert ITensor to torch tensor?", "url": "https://github.com/pytorch/TensorRT/issues/1920", "state": "closed", "labels": [ "No Activity" ], "created_at": "2023-05-15T11:52:46Z", "updated_at": "2023-11-24T00:02:13Z", "user": "shuyuan-wang" }, { "repo": "pytorch/pytorch", "number": 101246, "title": "Tool for identifying where in eager model an operation is nondeterministic", "body": "### \ud83d\udc1b Describe the bug\n\nLet's say you have a model code and when you run it twice you get bitwise different results. Where did it diverge? We can use TorchFunctionMode/TorchDispatchMode to localize where the first divergence occurred.\n\n### Versions\n\nmaster\n\ncc @mruberry @kurtamohler", "url": "https://github.com/pytorch/pytorch/issues/101246", "state": "open", "labels": [ "triaged", "module: determinism" ], "created_at": "2023-05-12T02:50:04Z", "updated_at": "2023-05-12T14:21:45Z", "user": "ezyang" }, { "repo": "pytorch/TensorRT", "number": 1912, "title": "\u2753 [Question] How to correctly convert model by using torch-tensorrt", "body": "## \u2753 Question\r\n\r\nHi, I am trying to convert resnet_rmac_fpn model which is used for image retrieval. I am unable to convert it to tensorrt model by using torch-tensorrt. According to debug information, some of the operators are not supported by Torch-TensorRT.\r\n\r\nHowever, if I export the model into onnx and then convert it by using `trtexec ` command, the conversion works. 
Therefore, I was wondering if there are any possible ways for making this conversion possible? Here is the error prompt : \r\n\r\n```\r\nINFO: [Torch-TensorRT] - Method requested cannot be compiled end to end by Torch-TensorRT.TorchScript.\r\nUnsupported operators listed below:\r\n - profiler::_record_function_exit._RecordFunction(__torch__.torch.classes.profiler._RecordFunction _0) -> ()\r\n - aten::linalg_vector_norm(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor\r\n - prim::PythonOp(...) -> ...\r\n - profiler::_record_function_enter_new(str name, str? args=None) -> __torch__.torch.classes.profiler._RecordFunction\r\n\r\nDEBUG: [Torch-TensorRT] - Unsupported operator: aten::linalg_vector_norm(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor\r\n/usr/local/lib/python3.8/dist-packages/torch/functional.py(1519): norm\r\n/usr/local/lib/python3.8/dist-packages/torch/_tensor.py(647): norm\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py(4665): normalize\r\n/codebase/Deep_Image_Retrieval/dirtorch/nets/rmac_resnet.py(8): l2_normalize\r\n/codebase/Deep_Image_Retrieval/dirtorch/nets/rmac_resnet.py(68): forward\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1520): _slow_forward\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1533): _call_impl\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py(169): forward\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1520): _slow_forward\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1533): _call_impl\r\n/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(1056): trace_module\r\n/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(794): trace\r\nbenchmark.py(71): create_torchtrt_model\r\nbenchmark.py(110): benchmark_torchtrt_model\r\nbenchmark.py(132): <module>\r\n\r\nDEBUG: [Torch-TensorRT] - Unsupported operator: prim::PythonOp(...) 
-> ...\r\n/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py(506): apply\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/scatter_gather.py(27): scatter_map\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/scatter_gather.py(31): scatter_map\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/scatter_gather.py(44): scatter\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/scatter_gather.py(52): scatter_kwargs\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py(178): scatter\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py(161): forward\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1520): _slow_forward\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1533): _call_impl\r\n/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(1056): trace_module\r\n/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(794): trace\r\nbenchmark.py(71): create_torchtrt_model\r\nb\r\nenchmark.py(110): benchmark_torchtrt_model\r\nbenchmark.py(132): <module>\r\n\r\nDEBUG: [Torch-TensorRT] - Unsupported operator: profiler::_record_function_exit._RecordFunction(__torch__.torch.classes.profiler._RecordFunction _0) -> ()\r\n/usr/local/lib/python3.8/dist-packages/torch/_ops.py(316): __call__\r\n/usr/local/lib/python3.8/dist-packages/torch/autograd/profiler.py(507): __exit__\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py(169): forward\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1520): _slow_forward\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1533): _call_impl\r\n/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(1056): trace_module\r\n/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(794): trace\r\nbenchmark.py(71): create_torchtrt_model\r\nbenchmark.py(110): benchmark_torchtrt_model\r\nbenchmark.py(132): <module>\r\n\r\nDEBUG: [Torch-TensorRT] - Unsupported operator: profiler::_record_function_enter_new(str name, str? args=None) -> __torch__.torch.classes.profiler._RecordFunction\r\n/usr/local/lib/python3.8/dist-packages/torch/_ops.py(504): __call__\r\n/usr/local/lib/python3.8/dist-packages/torch/autograd/profiler.py(492): __enter__\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py(151): forward\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1520): _slow_forward\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1533): _call_impl\r\n/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(1056): trace_module\r\n/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.p", "url": "https://github.com/pytorch/TensorRT/issues/1912", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-05-11T18:40:58Z", "updated_at": "2023-08-21T00:02:10Z", "user": "HtutLynn" }, { "repo": "pytorch/TensorRT", "number": 1898, "title": "\u2753 [Question] is there any example on how to convert T5 model that compatible with huggingace's generate function?", "body": "## \u2753 Question\r\n\r\nis there any example on how to convert T5 model that is compatible with huggingface's generate function? 
and able to handle dynamic shapes ?.\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1898", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-05-09T18:51:06Z", "updated_at": "2023-08-20T00:02:15Z", "user": "dathudeptrai" }, { "repo": "pytorch/xla", "number": 4994, "title": "Different Graph generations", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\nThis code snippet is extracted from the AdamW optimizer. This optimizer for different ranges of learning rate and weight decay generates different graphs. This is causing unexpected compilations during the running of the application. The fix is also mentioned in the section. However, such scenarios can occur anywhere and we need a generic mechanism to make sure the same graph is generated. \r\n\r\n## To Reproduce\r\n```\r\nimport torch\r\nimport torch, random, os\r\nimport numpy as np\r\nimport torch_xla.core.xla_model as xm\r\n\r\nos.environ[\"NEURON_FRAMEWORK_DEBUG\"] = \"1\"\r\nos.environ[\"XLA_IR_DEBUG\"] = \"1\"\r\nos.environ[\"XLA_HLO_DEBUG\"]=\"1\"\r\nos.environ['XLA_USE_BF16']=\"1\"\r\nos.environ['XLA_NO_SPECIAL_SCALARS']=\"1\"\r\n\r\ndef func1():\r\n param = torch.FloatTensor([0.001]).to(xm.xla_device())\r\n lr = 2.9999999999999997e-06\r\n weight_decay = 0.01\r\n param.mul_(1 - lr * weight_decay)\r\n print(param)\r\n\r\ndef func2():\r\n param = torch.FloatTensor([0.001]).to(xm.xla_device())\r\n lr = 4.6874999999999995e-08\r\n weight_decay = 0.01\r\n param.mul_(1 - lr * weight_decay)\r\n print(param)\r\n\r\ndef func3():\r\n param = torch.FloatTensor([0.001]).to(xm.xla_device())\r\n lr = 2.9999999999999997e-06\r\n weight_decay = 0.01\r\n param.sub_(param * lr * weight_decay)\r\n print(param)\r\n\r\ndef func4():\r\n param1 = torch.FloatTensor([0.001]).to(xm.xla_device())\r\n lr1 = 4.6874999999999995e-08\r\n weight_decay1 = 0.01\r\n param1.sub_(param1 * lr1 * weight_decay1)\r\n print(param1)\r\n\r\nfunc1()\r\nfunc2()\r\nfunc3()\r\nfunc4()\r\n```\r\n\r\n<!--\r\nIt is really important for the team to have a quick repro, which requires no setup work.\r\n\r\nThe quicker is the repro to be run, the higher the chances the bug will be addressed sooner.\r\n\r\nThe best way to create quick repros is to create a Colab based on the following template:\r\n\r\n```\r\nhttps://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#using-debug_runpy-to-collect-debug-information\r\n\r\nThings to avoid in repros is the need to download datasets which require setting up keys or other login information, like Kaggle downloads for example.\r\n\r\nAnother example are Colab which mount user's Google Drive storages.\r\n\r\nUsing a fake data generator could be a solution, in case the dataset cannot be easily downloaded without setting up credentials:\r\n\r\nhttps://github.com/pytorch/xla/blob/784b4d4f21751a54be0029a95f47d3896561c2a9/test/test_train_mp_mnist.py#L65\r\n\r\n-->\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1.\r\n2.\r\n3.\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well. 
Or better use the Colab template: https://github.com/pytorch/xla/blob/master/contrib/colab/issue-report.ipynb -->\r\n\r\n## Expected behavior\r\nfunc1 gives the graph:\r\n```\r\nHloModule SyncTensorsGraph.6, entry_computation_layout={(bf16[],bf16[1]{0})->(bf16[1]{0})}\r\n\r\nENTRY %SyncTensorsGraph.6 (p0: bf16[], p1: bf16[1]) -> (bf16[1]) {\r\n %p1 = bf16[1]{0} parameter(1), frontend_attributes={neff_input_name=\"input1\"}, metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n %p0 = bf16[] parameter(0), frontend_attributes={neff_input_name=\"input0\"}, metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n %broadcast = bf16[1]{0} broadcast(bf16[] %p0), dimensions={}, metadata={op_type=\"aten__mul\" op_name=\"aten__mul\"}\r\n %multiply = bf16[1]{0} multiply(bf16[1]{0} %p1, bf16[1]{0} %broadcast), metadata={op_type=\"aten__mul\" op_name=\"aten__mul\"}\r\n ROOT %tuple = (bf16[1]{0}) tuple(bf16[1]{0} %multiply), frontend_attributes={neff_output_names=\"output0\"}\r\n}\r\n\r\n```\r\n\r\nfunc2 gives a different graph:\r\n```\r\nHloModule SyncTensorsGraph.6, entry_computation_layout={(bf16[1]{0})->(bf16[1]{0})}\r\n\r\nENTRY %SyncTensorsGraph.6 (p0: bf16[1]) -> (bf16[1]) {\r\n %p0 = bf16[1]{0} parameter(0), frontend_attributes={neff_input_name=\"input0\"}, metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n %constant = bf16[] constant(1), metadata={op_type=\"prim__Constant\" op_name=\"prim__Constant\"}\r\n %broadcast = bf16[1]{0} broadcast(bf16[] %constant), dimensions={}, metadata={op_type=\"aten__mul\" op_name=\"aten__mul\"}\r\n %multiply = bf16[1]{0} multiply(bf16[1]{0} %p0, bf16[1]{0} %broadcast), metadata={op_type=\"aten__mul\" op_name=\"aten__mul\"}\r\n ROOT %tuple = (bf16[1]{0}) tuple(bf16[1]{0} %multiply), frontend_attributes={neff_output_names=\"output0\"}\r\n}\r\n```\r\n\r\nfunc3 and func4 give the same graphs:\r\n```\r\nHloModule SyncTensorsGraph.14, entry_computation_layout={(bf16[],bf16[],bf16[1]{0})->(bf16[1]{0})}\r\n\r\nENTRY %SyncTensorsGraph.14 (p0: bf16[], p1: bf16[], p2: bf16[1]) -> (bf16[1]) {\r\n %p2 = bf16[1]{0} parameter(2), frontend_attributes={neff_input_name=\"input2\"}, metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n %p1 = bf16[] parameter(1), frontend_attributes={neff_input_name=\"input1\"}, metadata={op_type=\"xla__device_data\" op_name=\"xla__device_data\"}\r\n %broadcast.1 = bf16[1]{0} broadcast(bf16[] %p1), dimensions={}, metadata={op_ty", "url": "https://github.com/pytorch/xla/issues/4994", "state": "closed", "labels": [ "question", "lowering" ], "created_at": "2023-05-09T07:18:12Z", "updated_at": "2025-05-05T12:57:35Z", "user": "amithrm" }, { "repo": "pytorch/pytorch", "number": 100859, "title": "how to calculate the macs after prune?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI use torch.nn.utils.prune as prune to prune the model, then I use torchprofile.profile_macs() to calculate the macs of Pruned_model, but I find the macs will be increase before prune.remove() to make the pruning permanent. it is normal because additional calculate wil be weight * mask.\r\nbut after I use prune.remove() to make the pruning permanent, the macs calculated by torchprofile.profile_macs() still same as the model befor prune.I use torch.nn.utils.prune as prune to prune the model, then I use torchprofile.profile_macs() to calculate the macs of Pruned_model, but I find the macs will be increase before prune.remove() to make the pruning permanent. 
it is normal because additional calculate wil be weight * mask.\r\nbut after I use prune.remove() to make the pruning permanent, the macs calculated by torchprofile.profile_macs() still same as the model befor prune.\r\n![2023-05-08_17-01-34](https://user-images.githubusercontent.com/38120691/236770558-5905bf8d-6c72-4de6-afef-26c373273e37.jpg)\r\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel", "url": "https://github.com/pytorch/pytorch/issues/100859", "state": "closed", "labels": [ "oncall: quantization", "triaged" ], "created_at": "2023-05-08T08:06:34Z", "updated_at": "2023-10-05T23:32:18Z", "user": "machengjie321" }, { "repo": "pytorch/tutorials", "number": 2313, "title": "how to calculate the macs after prune\uff1f", "body": "### \ud83d\ude80 Descirbe the improvement or the new tutorial\n\nI use torch.nn.utils.prune as prune to prune the model, then I use torchprofile.profile_macs() to calculate the macs of Pruned_model, but I find the macs will be increase before prune.remove() to make the pruning permanent. it is normal because additional calculate wil be weight * mask.\r\nbut after I use prune.remove() to make the pruning permanent, the macs calculated by torchprofile.profile_macs() still same as the model befor prune.\r\n![2023-05-08_17-01-34](https://user-images.githubusercontent.com/38120691/236769666-ba881425-5328-40b9-8968-df427cd7bbb0.jpg)\r\n\r\n\n\n### Existing tutorials on this topic\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/tutorials/issues/2313", "state": "open", "labels": [ "question" ], "created_at": "2023-05-08T08:02:31Z", "updated_at": "2023-05-26T20:02:13Z", "user": "machengjie321" }, { "repo": "pytorch/pytorch", "number": 100827, "title": "How to install standalone torch dynamo with pytorch1.x", "body": "### \ud83d\udc1b Describe the bug\n\nFor many reasons, the environment is not compatible with pytorch2.0. For example, Megatron-LM compiles its transformer operators written in C++, which confine it to the limit of torch 1.x c++ extension, otherwise many compile errors. For another example, DeepSpeed implements their distributed trainer whose components depends on triton 1 but not triton 2 to build. \r\n\r\n\n\n### Error logs\n\n_No response_\n\n### Minified repro\n\n_No response_\n\n### Versions\n\nTherefore, could you be so kind to guide me how to install torchdynamo independently without having a torch2.0?\r\n\r\nOr, are there other ways for compilation in torch1.0? I heard of torch.jit, but someone told me that it could not speed up training. \r\n\r\nI would appreciate if there is any methods that work to speedup torch 1.x 's code with regard to fast Large Language Model training. 
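For context on the question above: before the 2.0 merge, TorchDynamo lived in the standalone `pytorch/torchdynamo` repository and was installed from that repo against specific 1.12/1.13-era builds, so whether it installs cleanly against the exact torch 1.x version a given Megatron-LM/DeepSpeed stack pins is an assumption to verify, not a guarantee. A usage sketch in the style of that standalone package:

```python
# Sketch only: assumes the pre-2.0 standalone TorchDynamo package
# (github.com/pytorch/torchdynamo) is installed and compatible with the local
# torch 1.x build -- it was only tested against particular 1.12/1.13-era
# versions, so treat this as illustrative rather than guaranteed to work.
import torch
import torchdynamo

def my_compiler(gm: torch.fx.GraphModule, example_inputs):
    # Trivial "backend": just run the captured FX graph eagerly.
    # A real setup would plug in an actual compiler backend here.
    return gm.forward

@torchdynamo.optimize(my_compiler)
def train_step(model, optimizer, batch):
    optimizer.zero_grad()
    loss = model(batch).sum()
    loss.backward()
    optimizer.step()
    return loss

model = torch.nn.Linear(16, 16)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
batch = torch.randn(4, 16)
print(train_step(model, optimizer, batch).item())
```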
\n\ncc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305", "url": "https://github.com/pytorch/pytorch/issues/100827", "state": "closed", "labels": [ "dependency issue", "oncall: pt2" ], "created_at": "2023-05-07T09:55:43Z", "updated_at": "2023-05-07T21:50:41Z", "user": "2catycm" }, { "repo": "pytorch/pytorch", "number": 100800, "title": "[cpu inductor] where is silently incorrect when SIMD code is generated.", "body": "### \ud83d\udc1b Describe the bug\n\n```python\r\nimport torch\r\n\r\ninput_tensor = torch.ones(3, 3)\r\n\r\ndef f(x):\r\n return torch.where(torch.ones_like(x).to(torch.bool), torch.zeros_like(x), torch.ones_like(x)* 2)\r\n\r\nres1 = f(input_tensor)\r\nprint(res1)\r\n\r\njit_func = torch.compile(f)\r\nres2 = jit_func(input_tensor)\r\nprint(res2)\r\n\r\n```\r\n\r\nOutput\r\n```\r\ntensor([[0., 0., 0.],\r\n [0., 0., 0.],\r\n [0., 0., 0.]])\r\ntensor([[2., 2., 2.],\r\n [2., 2., 2.],\r\n [2., 2., 0.]])\r\n```\r\n\r\nReason:\r\nImplementation of where relies on `blendv` where MSB of the mask element should be 0 for first element of the packed vector to be copied.\r\nhttps://github.com/pytorch/pytorch/blob/8d56b0a5b57cf3e82402556ceb5c7080c0f9d5b6/torch/_inductor/codegen/cpp.py#L572-L573\r\n\r\nblendv: https://www.intel.com/content/www/us/en/docs/cpp-compiler/developer-guide-reference/2021-8/mm256-blendv-ps.html\r\n\r\nFound in https://github.com/pytorch/pytorch/pull/100799#issuecomment-1537136218\r\n\n\n### Versions\n\nmaster\n\ncc @soumith @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire", "url": "https://github.com/pytorch/pytorch/issues/100800", "state": "closed", "labels": [ "triaged", "module: inductor" ], "created_at": "2023-05-06T13:03:01Z", "updated_at": "2023-05-10T02:16:14Z", "user": "kshitij12345" }, { "repo": "pytorch/TensorRT", "number": 1889, "title": "Multi-GPU: optimize for cuda:1 but model also gets pushed on cuda:0, why???", "body": "## \u2753 Question\r\n\r\nI have two GPUs in my system. When optimize my model for the cuda:1 device the model gets somehow ALSO loaded onto the cuda:0 device (probably because that's the default device?). This happends during the optimization process which is called with:\r\n`optModel = torch_tensorrt::torchscript::compile(model, compile_settings);`\r\nWith `nvidia-smi` I can clearly see that the optimization is performed on cuda:1 (as expected) as I explicitly tell to do so. Shortly before the optimization is finished the model is also loaded on cuda:0?\r\nHow can I stop the loading on cuda:0?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - Libtorch Version: 1.10.2+cu113\r\n - CPU Architecture:\r\n - OS (e.g., Linux): Linux\r\n - CUDA version: 11.3\r\n - GPU models and configuration: both GPUs are Nvidia RTX A4000\r\n - TensorRT: 8.4.0.6\r\n - Torch-TensorRT: torch-tensorrt for libtorch-1.10.2", "url": "https://github.com/pytorch/TensorRT/issues/1889", "state": "closed", "labels": [ "question" ], "created_at": "2023-05-05T11:43:50Z", "updated_at": "2023-07-06T15:04:44Z", "user": "bjaeger1" }, { "repo": "pytorch/data", "number": 1149, "title": "[RFC] Performance Profiling Tools", "body": "### \ud83d\ude80 The feature\n\n1. Store usage statistics in `Prefetcher` \r\n - By tracking statistics within `Prefetcher`, we can reasonably determine whether upstream processes or downstream processes are faster. 
For example, the emptiness of the buffer queue may imply consumers are faster than producers. Users can insert this into various points in the pipeline to examine various behaviors. A common pattern we expect is to examine whether the pipeline is IO bound or compute bound.\r\n - [ ] #1141\r\n\r\n2. `DataLoader2` main process\r\n - `torch` profilers (e.g. `torch.profiler.profile`) currently work with `DataLoader2`, however, it only tracks functions and DataPipes that are executed within the main process. Nonetheless, we should validate that the information is helpful if most of the computations take place within the main process (e.g. using `InProcessReadingService` or dispatching process.\r\n - After 1 is completed, we can add APIs to `DataLoader2` to fetch the relevant statistics from `Prefetcher`'s buffer, such as the one that exists at the end of the main loop. It should allow users to examine whether the model is consuming faster than the preparation of samples.\r\n - [ ] PR pending\r\n - [ ] Tutorial pending \r\n\r\n3. `DataLoader2` worker process profiling\r\n - Two main options under considerations are:\r\n 1. Attaching the profiler to worker process in order to get worker level metrics/trace. This will allow us to use existing profilers without re-implementing their features.\r\n 2. `MultiprocessingReadingService` can provide methods to retrieve and aggregate metrics from certain DataPipes (mainly `Prefetcher`)\r\n\r\n4. Integration with other tools (e.g. tracers)\r\n - We will likely want main and worker processes' to be visible within tracers (e.g. useful when integrated with TorchTNT).\n\n### Motivation, pitch\n\nThis set of tools and features aim to answer the questions:\r\n1. Is my model training bottlenecked by data loading?\r\n2. If so, which part of the pipeline? IO? Compute?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\nComments and suggestions are welcomed.", "url": "https://github.com/meta-pytorch/data/issues/1149", "state": "open", "labels": [ "topic: new feature" ], "created_at": "2023-05-03T22:01:19Z", "updated_at": "2023-05-30T11:27:53Z", "comments": 3, "user": "NivekT" }, { "repo": "pytorch/TensorRT", "number": 1882, "title": "\u2753 [Question] Request for a model which is supported by Torch-TRT(FX)", "body": "## \u2753 Question\r\n\r\nI'm trying to evaluate the Torch-TensorRT tool, using FX backend for running models in the C++ library.\r\nMy goal is to convert models which are not fully supported by TRT, and accelerate them by running some of the sub-graphs on TRT(as explained by this notebook- https://github.com/pytorch/TensorRT/blob/main/examples/fx/fx2trt_example_next.py)\r\n\r\nThe steps I have already completed-\r\n- I converted a small model which is fully supported by TRT, and I received a single sub-graph as expected.\r\n The model runs successfully using the python lib, and the C++ lib also.\r\n- I have already tried to convert the Resnet50 model, and did it successfully. But this is a fully TRT supported model.\r\n- I converted a small model with a TRT unsupported operator, so the model was divided to 3 sub-graphs.\r\n The model runs successfully using the python Torch-TensorRT lib, and the C++ lib also.\r\n The problem is that the Torch-TensorRT model's latency is twice bigger than the original Torch model's latency.\r\n\r\nI thought that maybe there is overhead because of the passes between sub-graphs on Torch and TRT back and forth. 
So I want to take a larger model, which is not fully supported by TensorRT, but still supported by Torch-TensorRT(by dividing the graph to TRT and Torch sub-grpahs, using the FX backend).\r\nI tried some models, but I can't find such a model.\r\nIs there a model that you tested the tool with?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.0\r\n - CPU Architecture: x86-64\r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): -\r\n - Are you using local sources or building from archives: Torch-TensorRT which has been built from sources\r\n - Python version: 3.8.10\r\n - CUDA version: 11.8\r\n - GPU models and configuration: NVIDIA T1000\r\n - Any other relevant information: -\r\n\r\n@OronG13", "url": "https://github.com/pytorch/TensorRT/issues/1882", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-05-03T13:53:40Z", "updated_at": "2023-11-17T00:02:12Z", "user": "DanielLevi6" }, { "repo": "pytorch/kineto", "number": 756, "title": "urgent!!! profiler: Profiler is not initialized: skipping step() invocation", "body": "I got the warning, when using torch profiler to profiling, the steps are merged into one:\r\n[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation\r\n[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation\r\n[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation\r\n[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation\r\n[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation\r\n[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation\r\n\r\nimage\r\nimage\r\n\r\nVersions\r\nCollecting environment information...\r\nPyTorch version: 2.0.0a0+1767026\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.5 LTS (x86_64)\r\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.24.1\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)\r\nPython platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.29\r\nIs CUDA available: True\r\nCUDA runtime version: 12.1.66\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration:\r\nGPU 0: NVIDIA A100-SXM4-80GB\r\nGPU 1: NVIDIA A100-SXM4-80GB\r\nGPU 2: NVIDIA A100-SXM4-80GB\r\nGPU 3: NVIDIA A100-SXM4-80GB\r\nGPU 4: NVIDIA A100-SXM4-80GB\r\nGPU 5: NVIDIA A100-SXM4-80GB\r\nGPU 6: NVIDIA A100-SXM4-80GB\r\nGPU 7: NVIDIA A100-SXM4-80GB\r\n\r\nNvidia driver version: 470.103.01\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.1\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nAddress sizes: 46 bits physical, 57 bits virtual\r\nCPU(s): 
128\r\nOn-line CPU(s) list: 0-127\r\nThread(s) per core: 2\r\nCore(s) per socket: 32\r\nSocket(s): 2\r\nNUMA node(s): 2\r\nVendor ID: GenuineIntel\r\nCPU family: 6\r\nModel: 106\r\nModel name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz\r\nStepping: 6\r\nCPU MHz: 799.871\r\nCPU max MHz: 3500.0000\r\nCPU min MHz: 800.0000\r\nBogoMIPS: 5800.00\r\nVirtualization: VT-x\r\nL1d cache: 3 MiB\r\nL1i cache: 2 MiB\r\nL2 cache: 80 MiB\r\nL3 cache: 96 MiB\r\nNUMA node0 CPU(s): 0-31,64-95\r\nNUMA node1 CPU(s): 32-63,96-127\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\r\nVulnerability Spectre v1: Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities\r\n\r\nVersions of relevant libraries:\r\n[pip3] efficientnet-pytorch==0.7.1\r\n[pip3] numpy==1.22.2\r\n[pip3] pytorch-lightning==1.9.2\r\n[pip3] pytorch-quantization==2.1.2\r\n[pip3] torch==2.0.0a0+1767026\r\n[pip3] torch-accl==0.3.0\r\n[pip3] torch-tb-profiler==0.4.1\r\n[pip3] torch-tensorrt==1.4.0.dev0\r\n[pip3] torchmetrics==0.6.0\r\n[pip3] torchtext==0.13.0a0+fae8e8c\r\n[pip3] torchvision==0.15.0a0\r\n[pip3] triton==2.0.0\r\n[conda] Could not collect\r\n\r\nimage\r\n\r\nimport argparse\r\nimport nvtx\r\nfrom typing import Tuple\r\nfrom tqdm import tqdm\r\n\r\nimport torch\r\nfrom torch import nn, optim\r\nfrom torch.distributed import Backend\r\nfrom torch.nn.parallel.distributed import DistributedDataParallel\r\nfrom torch.utils.data import DataLoader, DistributedSampler\r\nfrom torchvision import datasets, transforms\r\n\r\n\r\ndef create_data_loaders(rank: int,\r\n ", "url": "https://github.com/pytorch/kineto/issues/756", "state": "closed", "labels": [ "question" ], "created_at": "2023-05-01T23:35:54Z", "updated_at": "2024-04-23T15:28:55Z", "user": "Johnsonms" }, { "repo": "pytorch/TensorRT", "number": 1872, "title": "\u2753 [Question] How do you ....? ", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nHow to compile torch-tensorrt for NVIDIA Jetson TX2 (jetpack4.6)\r\n## What you have already tried\r\nHi, @kneatco \r\nI have the same issue when I downlograded numpy version from 1.19.5 to 1.19.4.\r\n\r\nI did following steps.\r\n\r\n1. 
Downloading docker image for TX2 (jetpack=4.6)\r\n```\r\n# Pull the image\r\ndocker pull nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3\r\n# Run the image\r\nsudo docker run -it --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3\r\n```\r\n3. Installing bazel in the container\r\n```\r\n# Download torch-tensorrt repo\r\ngit clone -b v1.1.0 https://github.com/pytorch/TensorRT.git\r\n# Install bazel\r\nexport BAZEL_VERSION=$(cat <PATH_TO_TORCHTRT_ROOT>/.bazelversion)\r\nmkdir bazel\r\ncd bazel\r\ncurl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip\r\nunzip bazel-$BAZEL_VERSION-dist.zip\r\nbash ./compile.sh\r\ncp output/bazel /usr/local/bin/\r\n```\r\n5. Modifiying WORKSPACE as follows:\r\n```\r\nworkspace(name = \"Torch-TensorRT\")\r\n\r\nload(\"@bazel_tools//tools/build_defs/repo:http.bzl\", \"http_archive\")\r\nload(\"@bazel_tools//tools/build_defs/repo:git.bzl\", \"git_repository\")\r\n\r\nhttp_archive(\r\n name = \"rules_python\",\r\n sha256 = \"778197e26c5fbeb07ac2a2c5ae405b30f6cb7ad1f5510ea6fdac03bded96cc6f\",\r\n url = \"https://github.com/bazelbuild/rules_python/releases/download/0.2.0/rules_python-0.2.0.tar.gz\",\r\n)\r\n\r\nload(\"@rules_python//python:pip.bzl\", \"pip_install\")\r\n\r\nhttp_archive(\r\n name = \"rules_pkg\",\r\n sha256 = \"038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d\",\r\n urls = [\r\n \"https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz\",\r\n \"https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz\",\r\n ],\r\n)\r\n\r\nload(\"@rules_pkg//:deps.bzl\", \"rules_pkg_dependencies\")\r\n\r\nrules_pkg_dependencies()\r\n\r\ngit_repository(\r\n name = \"googletest\",\r\n commit = \"703bd9caab50b139428cea1aaff9974ebee5742e\",\r\n remote = \"https://github.com/google/googletest\",\r\n shallow_since = \"1570114335 -0400\",\r\n)\r\n# External dependency for torch_tensorrt if you already have precompiled binaries.\r\nlocal_repository(\r\n name = \"torch_tensorrt\",\r\n path = \"/opt/conda/lib/python3.8/site-packages/torch_tensorrt\"\r\n)\r\n\r\n# CUDA should be installed on the system locally\r\nnew_local_repository(\r\n name = \"cuda\",\r\n build_file = \"@//third_party/cuda:BUILD\",\r\n path = \"/usr/local/cuda-10.2/\",\r\n)\r\n\r\nnew_local_repository(\r\n name = \"cublas\",\r\n build_file = \"@//third_party/cublas:BUILD\",\r\n path = \"/usr\",\r\n)\r\n#############################################################################################################\r\n# Tarballs and fetched dependencies (default - use in cases when building from precompiled bin and tarballs)\r\n#############################################################################################################\r\n\r\n#http_archive(\r\n# name = \"libtorch\",\r\n# build_file = \"@//third_party/libtorch:BUILD\",\r\n# sha256 = \"8d9e829ce9478db4f35bdb7943308cf02e8a2f58cf9bb10f742462c1d57bf287\",\r\n# strip_prefix = \"libtorch\",\r\n# urls = [\"https://download.pytorch.org/libtorch/cu113/libtorch-cxx11-abi-shared-with-deps-1.11.0%2Bcu113.zip\"],\r\n#)\r\n#http_archive(\r\n# name = \"libtorch_pre_cxx11_abi\",\r\n# build_file = \"@//third_party/libtorch:BUILD\",\r\n# sha256 = \"90159ecce3ff451f3ef3f657493b6c7c96759c3b74bbd70c1695f2ea2f81e1ad\",\r\n# strip_prefix = \"libtorch\",\r\n# urls = [\"https://download.pytorch.org/libtorch/cu113/libtorch-shared-with-deps-1.11.0%2Bcu113.zip\"],\r\n#)\r\n\r\n# Download these 
tarballs manually from the NVIDIA website\r\n# Either place them in the distdir directory in third_party and use the --distdir flag\r\n# or modify the urls to \"file:///<PATH TO TARBALL>/<TARBALL NAME>.tar.gz\r\n\r\n#http_archive(\r\n# name = \"cudnn\",\r\n# build_file = \"@//third_party/cudnn/archive:BUILD\",\r\n# sha256 = \"0e5d2df890b9967efa6619da421310d97323565a79f05a1a8cb9b7165baad0d7\",\r\n# strip_prefix = \"cuda\",\r\n# urls = [\r\n# \"https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.2.4/11.4_20210831/cudnn-11.4-linux-x64-v8.2.4.15.tgz\",\r\n# ],\r\n#)\r\n\r\n#http_archive(\r\n# name = \"tensorrt\",\r\n# build_file = \"@//third_party/tensorrt/archive:BUILD\",\r\n# sha256 = \"826180eaaecdf9a7e76116855b9f1f3400ea9b06e66b06a3f6a0747ba6f863ad\",\r\n# strip_prefix = \"TensorRT-8.2.4.2\",\r\n# urls = [\r\n# \"https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/8.2.4/tars/tensorrt-8.2.4.2.linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz\",\r\n# ],\r\n#)\r\n####################################################################################\r\n# Locally installed dependencies (use in cases of custom dependencies or aarch64)\r\n####################################################################################\r\n\r\n# NOTE: In the case you are using just the pre-cxx11-abi path or just the cxx11 abi path\r\n# with ", "url": "https://github.com/pytorch/TensorRT/issues/1872", "state": "closed", "labels": [ "question" ], "created_at": "2023-05-01T13:53:19Z", "updated_at": "2023-05-19T18:30:16Z", "user": "godhj93" }, { "repo": "pytorch/TensorRT", "number": 1871, "title": "\u2753 [Question] torch.fx.proxy.TraceError: Proxy object cannot be iterated", "body": "## \u2753 Question\r\n\r\n\r\nI'm trying to convert an nn.Module of ASLfeat(Pytorch) to a runtime Torch-TensorRT model(for C++)\r\nThe steps I followed are the same as written in- https://github.com/pytorch/TensorRT/blob/main/examples/fx/fx2trt_example_next.py\r\n\r\nBut for some reason, the tracing step fails every time.\r\nThe error message is-\r\n`torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors`\r\n\r\nThe line which cause it is- `n_samples, n_channel, *_ = dense_feat_map.shape`\r\n\r\nThe same is happening when I'm trying to use the FasterRCNN model from torchvision.\r\nThe same error occurs because of a loop running on the input list in the model.\r\n\r\nIs there a workaround for these cases?\r\nASLfeat and FasterRCNN are complicated models which I can't convert directly to TensorRT, so Torch-TensorRT could be a very useful option for me.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior(for FasterRCNN):\r\n\r\n1. `from torchvision.models.detection.faster_rcnn import fasterrcnn_resnet50_fpn,FasterRCNN_ResNet50_FPN_Weights`\r\n2. `model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT).cuda().eval()`\r\n3. `inputs = [torch.rand((1, 300, 400), device=\"cuda\"), torch.rand((1, 500, 400), device=\"cuda\")]`\r\n4. 
`traced = acc_tracer.trace(model, inputs)`\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\nThe result-\r\n`Traceback (most recent call last):`\r\n` File \"/home/Projects/Torch-TensorRT/python/trainingPath/step_7_fasterRCNNInference-X.py\", line 44, in <module>`\r\n` traced = acc_tracer.trace(model, inputs)`\r\n` File \"/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py\",` `line 667, in trace`\r\n` traced = rewriter_base_trace(mod, ast_rewriter_allow_list, leaf_module_list)`\r\n` File \"/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py\",` `line 585, in rewriter_base_trace`\r\n` rewritten_graph, rewritten_mod = AccRewritingTracer().trace(`\r\n` File \"/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py\",` `line 309, in trace`\r\n` return super().trace(rewritten, concrete_args), rewritten`\r\n` File \"/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch/fx/_symbolic_trace.py\", line 778, in trace`\r\n` (self.create_arg(fn(*args)),),`\r\n` File \"/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torchvision/models/detection/generalized_rcnn.py\",` `line 75, in forward`\r\n` for img in images:`\r\n` File \"/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch/fx/proxy.py\", line 385, in __iter__`\r\n` return self.tracer.iter(self)`\r\n` File \"/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch/fx/proxy.py\", line 285, in iter`\r\n` raise TraceError('Proxy object cannot be iterated. This can be '`\r\n`torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors`\r\n\r\n## What you have already tried\r\nI also tried to use torch.jit.trace which traces the ASLfeat model successfully, but I need to trace the model using fx(because I need to convert it to a runtime model)\r\n\r\n## Expected behavior\r\n\r\nReceive a traced model, prepared for the torch-tensorrt usage.\r\n<!-- A clear and concise description of what you expected to happen. 
-->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n\r\n- PyTorch Version (e.g., 1.0): 2.0\r\n- CPU Architecture: x86-64\r\n- OS (e.g., Linux): Ubuntu 20.04\r\n- How you installed PyTorch (conda, pip, libtorch, source): pip\r\n- Build command you used (if compiling from source): -\r\n- Are you using local sources or building from archives: Torch-TensorRT which has been built from sources\r\n- Python version: 3.8.10\r\n- CUDA version: 11.8\r\n- GPU models and configuration: NVIDIA T1000\r\n- Any other relevant information: -\r\n\r\n@OronG13", "url": "https://github.com/pytorch/TensorRT/issues/1871", "state": "closed", "labels": [ "question", "No Activity", "component: fx" ], "created_at": "2023-05-01T11:24:21Z", "updated_at": "2023-08-21T00:02:11Z", "user": "DanielLevi6" }, { "repo": "pytorch/hub", "number": 328, "title": "Need help on how to contribute ", "body": "Hello everyone.\r\nI wanted to add the SimpleNet architecture from 2016, which outperforms VGG nets, ResNet-18, ResNet-34 and the like while being a plain CNN with 5M to 9M parameters, to the PyTorch Hub. \r\nI read the docs but I'm a bit confused. Could you kindly help me get this sorted out?\r\n\r\nHere are my issues:\r\n1. Where exactly should I put the hubconf.py in my repository? My repository ([Link](https://github.com/Coderx7/SimpleNet_Pytorch/tree/master)) is organized like this:\r\n-cifar10\r\n-imagenet\r\n--simplenet.py\r\n--readme.md \r\n\r\nShould I add the hubconf.py at the root of the repository, or next to the model inside the imagenet directory for this to work?\r\n\r\n2. And to be clear, hubconf.py will only contain the functions for instantiating each model variant, right? \r\nThat is, one entry for simplenetv1_5m1, another for simplenetv1_5m2, and so on, right? And I should only import these from my simplenet.py, right? \r\n\r\n3. And then I fork this repo and create a new .md file, right?\r\nHow should I name that file? Should I use my real name for the owner part or my GitHub handle, i.e. coderx7_SimpleNet_Pytorch_title?\r\nWhat should I write for the title here? Can I leave it out? How many characters are allowed? \r\nIs the repo name case sensitive?\r\n\r\nOnce I have created all of that, I make a pull request and that's it? \r\n\r\n", "url": "https://github.com/pytorch/hub/issues/328", "state": "closed", "labels": [], "created_at": "2023-04-29T14:52:26Z", "updated_at": "2023-05-03T09:56:37Z", "user": "Coderx7" }, { "repo": "pytorch/pytorch", "number": 100293, "title": "How to get nn.MultiheadAttention mid layer output", "body": "### \ud83d\udcda The doc issue\r\n\r\nHello, I have a question about MultiheadAttention (MA for short). It is not about the [doc explanation](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html?highlight=multiheadattention#torch.nn.MultiheadAttention), but about using this module. I want to plot a heatmap (CAM) for my transformer-based neural network. In this process, I need to get the MA mid-layer output, especially the dot-product results for the query-key pairs. How can I get it? If I can't get it, I have to calculate the output dot product to estimate the result for the self-attention layers. But this estimation may cause some errors. 
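For reference, what I am after is the per-head attention matrix. When the module is called directly, `forward` can already return it, so the kind of access I would like looks roughly like this (a minimal sketch using a standalone `nn.MultiheadAttention`, not one buried inside `nn.TransformerEncoderLayer`):

```python
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=768, num_heads=12, batch_first=True)
x = torch.randn(2, 16, 768)  # (batch, seq, embed)

# forward can hand back softmax(QK^T / sqrt(d)) alongside the attention output
attn_out, attn_weights = mha(x, x, x, need_weights=True, average_attn_weights=False)
print(attn_weights.shape)  # torch.Size([2, 12, 16, 16]) -> (batch, heads, query, key)
```

As far as I can tell, the transformer layers call `self_attn` internally with `need_weights=False`, so the weights are discarded before a forward hook on the module output could ever see them.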
So do you have any idea to get the mid-layer results\uff1f\r\nI want to use `register_forward_hook`, but this module architecture output really makes me confused cause it doesn't show me the component layer that I need.\r\n```\r\n>>> print(self_attn)\r\nMultiheadAttention(\r\n (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)\r\n)\r\n```\r\n\r\n\r\nSo can you help me? Thank you very much!\r\n\r\n### Suggest a potential alternative/fix\r\n\r\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/100293", "state": "closed", "labels": [], "created_at": "2023-04-28T23:30:29Z", "updated_at": "2023-04-30T05:51:23Z", "user": "Lucky-Light-Sun" }, { "repo": "pytorch/pytorch", "number": 100181, "title": "[Dynamo] How to better handle customized list/dict ", "body": "### \ud83d\udc1b Describe the bug\n\nThis is a pattern I found from Meta internal user case:\r\n```\r\nimport torch\r\nimport logging\r\nimport torch._dynamo\r\nfrom typing import Any, List, Optional\r\n\r\ntorch._logging.set_logs(dynamo=logging.DEBUG)\r\n\r\nclass _non_none_list(list):\r\n def append(self, obj: Any):\r\n if obj is not None:\r\n super().append(obj)\r\n\r\n def extend(self, lst: Optional[List[Any]]):\r\n if lst is not None:\r\n super().extend(x for x in lst if x is not None)\r\n\r\ndef fn(x):\r\n a = _non_none_list()\r\n a.append(x)\r\n a.append(x + 1)\r\n return torch.cat(a, dim=1)\r\n\r\nx = torch.rand(2, 2)\r\nprint(fn(x))\r\nopt_fn = torch.compile(backend=\"eager\")(fn)\r\nprint(opt_fn(x))\r\n```\r\n\r\nThere are three major graph breaks:\r\n* ```Unsupported: call_function UserDefinedClassVariable() [] {}``` when calling ```_non_none_list()```.\r\n* ```Unsupported: non-function or method super: <method 'append' of 'list' objects>``` when calling ```super().append(obj)```\r\n* ```Unsupported: call_function args: UserDefinedObjectVariable(_non_none_list) ConstantVariable(int)``` when calling ```torch.cat(a, dim=1)```.\r\n\r\nHowever, if I switch the customized list to python builtin list, there is no graph break. I'd like to know what is dynamo's story to handle customized list/dict. \n\n### Versions\n\nN/A\n\ncc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire", "url": "https://github.com/pytorch/pytorch/issues/100181", "state": "closed", "labels": [ "triaged", "oncall: pt2", "module: dynamo" ], "created_at": "2023-04-27T16:26:39Z", "updated_at": "2023-05-03T04:25:40Z", "user": "yanboliang" }, { "repo": "pytorch/TensorRT", "number": 1861, "title": "\u2753 [Question] Binding index warnings while using fx backend", "body": "## \u2753 Question\r\nI want to convert a torch model(from python-nn.Module) to a runtime model(in C++), using the torch.fx capabilities. 
That will allow me to accelerate a model that isn't fully supported by TensorRT.\r\n\r\nThe model I'm using is-\r\n`class Model(nn.Module):`\r\n` def __init__(self):`\r\n` super().__init__()`\r\n` self.linear = nn.Linear(10, 10)`\r\n` self.relu = nn.ReLU()`\r\n\r\n` def forward(self, input_0):`\r\n` input_0 = self.linear(input_0)`\r\n` input_0 = self.relu(input_0)`\r\n` input_0 = torch.linalg.norm(input_0, ord=2, dim=1)`\r\n` output_0 = self.relu(input_0)`\r\n` return output_0`\r\n\r\nThe compile command I'm using right after that is-\r\n`model = Model().cuda().eval()`\r\n`trt_fx_module_f = torch_tensorrt.fx.compile(`\r\n` model, input=[torch.randn(1, 10, device=\"cuda\")], lower_precision=\"fp32\", min_acc_module_size=1, explicit_batch_dimension=True, use_experimental_fx_rt=True`\r\n`)`\r\n\r\n(I wrote it according to the response I received in torch's forums- https://discuss.pytorch.org/t/using-torchtrt-fx-backend-on-c/170639/6)\r\n\r\nBut after I'm trying to use that, so I see warnings about the binding indexes of the model. For example, here is one of the warnings-\r\n`WARNING: [Torch-TensorRT] - ICudaEngine::getProfileDimensions: bindingIndex 0 is not in profile 2. Using bindingIndex = 4 instead.`\r\n\r\nOn the other hand, when I'm using the flow as written in this notebook(on the same model)-\r\nhttps://github.com/pytorch/TensorRT/blob/main/examples/fx/fx2trt_example_next.py\r\nso the warnings are being vanished.\r\n\r\nCan you tell me what are the actual differences between these two flows?\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 2.0\r\n - CPU Architecture: x86-64\r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): -\r\n - Are you using local sources or building from archives: Torch-TensorRT which has been built from sources\r\n - Python version: 3.8.10\r\n - CUDA version: 11.8\r\n - GPU models and configuration: NVIDIA T1000\r\n - Any other relevant information: -\r\n\r\n@OronG13", "url": "https://github.com/pytorch/TensorRT/issues/1861", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-04-27T13:31:59Z", "updated_at": "2023-08-10T00:02:37Z", "user": "DanielLevi6" }, { "repo": "pytorch/TensorRT", "number": 1860, "title": "\u2753 [Question] Runtimes for timm + TensorRT", "body": "## \u2753 Question\r\n\r\nI created a script to compare inference runtimes with `torch`, `torch.compile` and `torch_tensorrt.compile` for any timm model, input shape and dtype and some runtimes are worse using TensorRT, why ? \r\n\r\n## What you have already tried\r\n\r\nI used [latest NVIDIA pytorch container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch)(`nvcr.io/nvidia/pytorch:23.04-py3`, released today) on a g5.2xlarge AWS instance (A10g GPU). You can find the script (`benchmark.py`) at the end of this issue and the command used to run it below : \r\n```bash\r\ndocker run --gpus all --rm --volume $DIR:/app nvcr.io/nvidia/pytorch:23.04-py3 /bin/bash -c \"pip install --pre timm && python /app/benchmark.py\"\r\n```\r\n\r\nwith `$DIR` the path to the directory where I saved the script. 
Here are a few results : \r\n\r\n| model | dtype | shape | torch | torch.compile | torch_tensorrt.compile |\r\n|--------|--------|--------|--------|--------|--------|\r\n| resnet50 | float32 | (16, 3, 224, 224) | 16.0ms | 11.4ms | **7.6ms** |\r\n| resnet50 | float16 | (16, 3, 224, 224) | 9.0ms | 6.3ms | **3.6ms** |\r\n| convnext_large | float32 | (16, 3, 224, 224) | 70.5ms | **56.7ms** | 145.9ms |\r\n| convnext_large | float16 | (16, 3, 224, 224) | 35.4ms | **28.3ms** | 64.8ms |\r\n| vit_base_patch16_224 | float32 | (16, 3, 224, 224) | 28.6ms | **28.2ms** | 30.5ms |\r\n| vit_large_patch14_clip_336 | float32 | (16, 3, 336, 336) | 288.1ms | 284.2ms | 310.2ms |\r\n| vit_large_patch14_clip_336 | float16 | (16, 3, 336, 336) | 129.1ms | 127.5ms | error\u00b0 |\r\n\r\n(error\u00b0 : `Expected input tensors to have type Half, found type float`, maybe some forcing on Layernorm layers is applied and I should enable mixed precision somehow ?)\r\n\r\nEverything goes well for the resnet50 model but for the convnext_large and vit models the `torch_tensorrt.compile` option get lower throughput and even fail in one case. And of course these models are the ones I am interested in \ud83d\ude05 \r\n\r\nSeveral questions : \r\n- Do you see any issue with the script I provided or how I ran it ? \r\n- How can I minimize the runtimes for the convnext_large and vit_large_patch14_clip_336 models ? Would using ONNX + TensorRT provide different results ? Is it related to how these models are implemented in timm ? \r\n\r\nI can provide more details if needed (*e.g.* stack track),\r\nThanks for your help and support,\r\nSimon\r\n\r\n____\r\n\r\n```python\r\nfrom time import time\r\nimport timm\r\nimport torch\r\nimport torch_tensorrt\r\n\r\n\r\ndef benchmark(model, inputs, compile_torch=False, compile_tensorrt=False, n_warmups=5, n_runs=100):\r\n \"\"\"\r\n 1. Optionally compile the model\r\n 2. Warmup phase (n_warmups) \r\n 3. Benchmark phase (n_runs)\r\n \"\"\"\r\n\r\n assert not (compile_torch and compile_tensorrt), \"Cannot compile both torch and tensorrt\"\r\n\r\n # 1. Compile\r\n if compile_tensorrt:\r\n model = torch_tensorrt.compile(model,\r\n inputs=[torch_tensorrt.Input(inputs.shape, dtype=inputs.dtype)],\r\n enabled_precisions={inputs.dtype})\r\n\r\n if compile_torch:\r\n model = torch.compile(model)\r\n\r\n # 2. Warmup\r\n for _ in range(n_warmups):\r\n with torch.no_grad():\r\n model(inputs)\r\n torch.cuda.synchronize()\r\n\r\n # 3. 
Benchmark\r\n runtimes = []\r\n for _ in range(n_runs):\r\n with torch.no_grad():\r\n start = time()\r\n model(inputs)\r\n torch.cuda.synchronize()\r\n runtimes.append(time() - start)\r\n \r\n # Print result\r\n print('*' * 80)\r\n print(f\"Average: {1000*sum(runtimes)/n_runs:.2f}ms\")\r\n print('*' * 80)\r\n\r\n\r\nif __name__ == '__main__':\r\n\r\n # To run this script using the latest pytorch docker image, save it into a directory (DIR) and run:\r\n # docker run --gpus all --rm --volume $DIR:/app nvcr.io/nvidia/pytorch:23.04-py3 /bin/bash -c \"pip install --pre timm && python /app/benchmark.py\"\r\n\r\n # Parameters\r\n model_name = 'resnet50'\r\n shape = (16, 3, 224, 224)\r\n dtype = torch.float32\r\n\r\n # Prepare model and inputs\r\n model = timm.create_model(model_name)\r\n model.eval().cuda().type(dtype)\r\n inputs = torch.randn(*shape).type(dtype).cuda()\r\n\r\n benchmark(model, inputs)\r\n benchmark(model, inputs, compile_torch=True)\r\n benchmark(model, inputs, compile_tensorrt=True)\r\n```", "url": "https://github.com/pytorch/TensorRT/issues/1860", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-04-26T15:19:14Z", "updated_at": "2024-10-04T15:58:16Z", "user": "SimJeg" }, { "repo": "pytorch/TensorRT", "number": 1858, "title": "\u2753 [Question] Why was this Repo renamed to TensorRT ?", "body": "Thank you all for the great work on Torch-TensorRT. \r\nIt's been a pleasure to see it evolve since the days of TRTorch.\r\n\r\nThis repo went through multiple names but I think the current one is extremely confusing, if I clone both this repo and the original TensorRT repo I now have two TensorRT folders. \r\n\r\nThis is extremely confusing and at times infuriating. Would it be possible to know more about what prompted this naming choice ? \r\n\r\nWouldn't it be clearer to use Torch-TensorRT ? \r\n\r\nPS: it seems [other people are confused and are posting issues for TensorRT here](https://github.com/pytorch/TensorRT/issues/1703) \r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1858", "state": "closed", "labels": [ "question" ], "created_at": "2023-04-25T12:03:06Z", "updated_at": "2023-05-02T10:08:41Z", "user": "MatthieuToulemont" }, { "repo": "pytorch/data", "number": 1140, "title": "Shuffle batches across workers", "body": "### \ud83d\ude80 The feature\n\nI have a Dataloader with n workers. My understanding is that each worker constructs a full batch independently, which is then served by the dataloader. My samples are large, so I cannot increase the shuffle buffer size in each worker. Is there a way to perform the batching and shuffling only in the main process? \n\n### Motivation, pitch\n\nThis would improve shuffling for a fixed memory usage.\n\n### Alternatives\n\nI tried having an inner dataloader with n workers that produces samples, and an outer dataloader that shuffles and batches them. 
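Concretely, the arrangement I was trying looks roughly like this (a sketch only, not my exact code; `SampleSource` stands in for my real dataset):

```python
import random
import torch
from torch.utils.data import DataLoader, IterableDataset

class SampleSource(IterableDataset):
    """Placeholder for the real dataset of large samples."""
    def __iter__(self):
        # NOTE: with num_workers > 0 each worker gets a replica of this dataset,
        # so the real version would also need to shard work using
        # torch.utils.data.get_worker_info() to avoid duplicated samples.
        for _ in range(10_000):
            yield torch.randn(3, 224, 224)

class BufferedShuffle(IterableDataset):
    """Consume the inner loader in the main process and re-emit its samples
    through a single small shuffle buffer."""
    def __init__(self, source, buffer_size=256):
        self.source, self.buffer_size = source, buffer_size
    def __iter__(self):
        buf = []
        for x in self.source:
            buf.append(x)
            if len(buf) >= self.buffer_size:
                yield buf.pop(random.randrange(len(buf)))
        while buf:
            yield buf.pop(random.randrange(len(buf)))

inner = DataLoader(SampleSource(), batch_size=None, num_workers=4)        # workers produce samples
outer = DataLoader(BufferedShuffle(inner), batch_size=32, num_workers=0)  # main process shuffles + batches
```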
I couldn't get the inner dataloader to use n workers.\n\n### Additional context\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/1140", "state": "closed", "labels": [], "created_at": "2023-04-24T15:53:08Z", "updated_at": "2023-04-28T02:49:08Z", "comments": 2, "user": "platers" }, { "repo": "pytorch/text", "number": 2159, "title": "How to use Field, RawField with torchtext 0.15.0 without needing a lower version", "body": "## \ud83d\udc1b Bug\r\n\r\n**Describe the bug** A clear and concise description of what the bug is.\r\n\r\n\r\n\r\n\r\n\r\n- PyTorch Version (e.g., 1.0): 1.12\r\n- OS (e.g., Linux):\r\n- How you installed PyTorch (`conda`, `pip`, source):\r\n- Build command you used (if compiling from source):\r\n- Python version: 3.8\r\n- CUDA/cuDNN version: 10.2\r\n- GPU models and configuration:\r\n- Any other relevant information:\r\n\r\nMy environment is PyTorch 1.12+, but I want to use torchtext 0.12+, and torchtext 0.12+ has removed Field. How can I use Field with torchtext 0.12+? When I pip install a torchtext version below 0.12, it installs PyTorch by default and overrides the version I installed before. I want to use PyTorch 1.12+ together with torchtext's Field and RawField. How can I achieve this?\r\n", "url": "https://github.com/pytorch/text/issues/2159", "state": "open", "labels": [], "created_at": "2023-04-22T03:17:29Z", "updated_at": "2023-04-23T07:51:49Z", "user": "cqray1990" }, { "repo": "pytorch/serve", "number": 2253, "title": "Troubled me too, how to solve this problem in TorchServe 0.7.1", "body": " Just to let you know that I have the same kind of issue on Windows Server 2019, with TorchServe 0.7.1. \r\n\r\nFrom Anaconda Prompt (run as administrator), I run `torchserve --start ...`, and everything goes fine including the inference test on the served model. I stop the `torchserve --start ...` command with CTRL+C. \r\n\r\nI guess the SIGINT is not caught by `torchserve.exe` on Windows to delete the `.model_server.pid` from `%APP_DATA%\\Local\\Temp\\1\\`, so I have to delete it manually before running the next `torchserve --start ...` command.\r\n\r\n_Originally posted by @khelkun in https://github.com/pytorch/serve/issues/1866#issuecomment-1425916308_\r\n ", "url": "https://github.com/pytorch/serve/issues/2253", "state": "closed", "labels": [ "triaged", "windows" ], "created_at": "2023-04-21T04:09:42Z", "updated_at": "2023-10-28T19:39:28Z", "user": "Z863058" }, { "repo": "pytorch/TensorRT", "number": 1845, "title": "\u2753 [Question] Can I use TensorRT 8.5.3.1 and torch 1.10.1 with torch_TensorRT? ", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\nI found that when I pip install the torch_tensorrt build corresponding to TensorRT 8.5.3.1, torch must be 1.13. Can I use TensorRT 8.5.3.1 and torch 1.10.1 with torch_TensorRT? \r\n\r\nAnd if I use the C++ torch_tensorrt, can I avoid this situation?\r\n\r\n<!-- A clear and concise description of what you have already done. 
-->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.10.1\r\n - CPU Architecture: x86\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Python version: 3.9.16\r\n - CUDA version: 11.6\r\n - GPU models and configuration: rtx3090", "url": "https://github.com/pytorch/TensorRT/issues/1845", "state": "closed", "labels": [ "question" ], "created_at": "2023-04-20T12:34:27Z", "updated_at": "2023-04-23T08:05:33Z", "user": "Yoh-Z" }, { "repo": "pytorch/TensorRT", "number": 1844, "title": "\u2753 [Question] Internal Error-given invalid tensor name", "body": "## \u2753 Question\r\n\r\nI want to convert a torch model(from python) to a runtime model(in C++), using the torch.fx capabilities. That will allow me to accelerate a model that isn't fully supported by TensorRT.\r\nI understand that this flow is experimental, so I used the examples which are given in this repository.\r\n\r\nBy using this example-\r\nhttps://github.com/pytorch/TensorRT/blob/main/examples/fx/fx2trt_example_next.py\r\n\r\nI got some internal errors while running this code part(and also while running inference after that, but the error messages are identical as before, so I guess it's related.)-\r\n`trt_mod = TRTModule(`\r\n` name=\"my_module\",`\r\n` serialized_engine=engine_str,`\r\n` input_binding_names=r.input_names,`\r\n` output_binding_names=r.output_names,`\r\n` target_device=Device(f\"cuda:{torch.cuda.current_device()}\"),`\r\n` )`\r\n\r\nThe error messages are-\r\n`ERROR: [Torch-TensorRT] - 3: [engine.cpp::getProfileObliviousBindingIndex::1386] Error Code 3: Internal Error (getTensorShape given invalid tensor name: input_0)`\r\n`ERROR: [Torch-TensorRT] - 3: [engine.cpp::getProfileObliviousBindingIndex::1386] Error Code 3: Internal Error (getTensorDataType given invalid tensor name: input_0)`\r\n`ERROR: [Torch-TensorRT] - 3: [engine.cpp::getProfileObliviousBindingIndex::1386] Error Code 3: Internal Error (getTensorShape given invalid tensor name: output_0)`\r\n`ERROR: [Torch-TensorRT] - 3: [engine.cpp::getProfileObliviousBindingIndex::1386] Error Code 3: Internal Error (getTensorDataType given invalid tensor name: output_0)\r\n`\r\nWhat can cause these errors?\r\nI tried to find other way to define the model inputs and outputs(which will maybe affect the input and output names in some way, as hinted from the error messages), but I don't see other way in the examples.\r\n\r\n## What you have already tried\r\n\r\nI have already tried the notebook I linked before, and on other flow I got in the torch forum-\r\nhttps://discuss.pytorch.org/t/using-torchtrt-fx-backend-on-c/170639/6\r\n\r\nThe code for this flow is-\r\n`model_fx = model_fx.cuda()`\r\n`inputs_fx = [i.cuda() for i in inputs_fx]`\r\n`trt_fx_module_f16 = torch_tensorrt.compile(`\r\n ` model_fx,`\r\n ` ir=\"fx\",`\r\n ` inputs=inputs_fx,`\r\n ` enabled_precisions={torch.float16},`\r\n ` use_experimental_fx_rt=True,`\r\n ` explicit_batch_dimension=True`\r\n`)`\r\n`torch.save(trt_fx_module_f16, \"trt.pt\")`\r\n`reload_trt_mod = torch.load(\"trt.pt\")`\r\n`scripted_fx_module = torch.jit.trace(trt_fx_module_f16, example_inputs=inputs_fx)`\r\n`scripted_fx_module.save(\"/tmp/scripted_fx_module.ts\")`\r\n`scripted_fx_module = torch.jit.load(\"/tmp/scripted_fx_module.ts\") #This can also be loaded in C++`\r\n\r\nThe error is the same, while running the torch.compile method, using the \"use_fx_experimental_rt=True\" 
flag\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.13.1\r\n - CPU Architecture: x86-64\r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): -\r\n - Are you using local sources or building from archives: I used the pre-built version of Torch-TensorRT 1.3.0 release\r\n - Python version: 3.8.10\r\n - CUDA version: 11.8\r\n - GPU models and configuration: NVIDIA T1000\r\n - Any other relevant information: -\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1844", "state": "closed", "labels": [ "question" ], "created_at": "2023-04-20T10:35:35Z", "updated_at": "2023-04-27T16:10:46Z", "user": "DanielLevi6" }, { "repo": "pytorch/serve", "number": 2242, "title": "How to send a json body to Torchserve", "body": "I'd like to do a post request to torch serve with application/json as its content-type, instead of a file. data could be `{\"text\": \"hi\"}`. Is that possible?\r\n\r\nIn the docs it is shown how you can send binary file data\r\n\r\n```\r\nimport requests\r\n\r\nres = requests.post(\"http://localhost:8080/predictions/squeezenet1_1\", files={'data': open('docs/images/dogs-before.jpg', 'rb'), 'data': open('docs/images/kitten_small.jpg', 'rb')})\r\n```\r\n\r\nCan't get anything like this to work:\r\n\r\n```\r\nimport requests\r\nimport io\r\n\r\nstr = \"oi321op4\"\r\n\r\nraw_data = io.BytesIO(str.encode())\r\nfiles = {\"data\": raw_data}\r\nres = requests.post(\"myurl\", files=files)\r\nres.json()\r\n```", "url": "https://github.com/pytorch/serve/issues/2242", "state": "closed", "labels": [], "created_at": "2023-04-19T16:17:51Z", "updated_at": "2023-04-19T22:47:55Z", "user": "nihiluis" }, { "repo": "pytorch/TensorRT", "number": 1835, "title": "\u2753 [Question] Is torch-tensorrt compiled code device agnostic? ", "body": "Thanks for this wonderful repo! \r\n\r\nIs the torch-tensorrt compiled code runnable on any (Nvidia) device or should it be compiled on the target device? I know that the usual tensorrt programs (compiled from onnx) need to be compiled on the target device. I would expect the same from torch-tensorrt. However, the docs on [deployment](https://pytorch.org/TensorRT/tutorials/runtime.html#runtime) do not specify this and rather make me believe that the compiled code is device agnostic. \r\n", "url": "https://github.com/pytorch/TensorRT/issues/1835", "state": "closed", "labels": [ "question" ], "created_at": "2023-04-18T16:44:56Z", "updated_at": "2023-04-18T17:05:52Z", "user": "FabianSchuetze" }, { "repo": "pytorch/serve", "number": 2236, "title": "How to get image name", "body": "I use curl http://localhost:8080/predictions/resnet-18 -T kitten_small.jpg\r\n\r\nI want to get the image name like kitten_small.jpg but the data in the handler is only image", "url": "https://github.com/pytorch/serve/issues/2236", "state": "closed", "labels": [], "created_at": "2023-04-17T08:31:55Z", "updated_at": "2023-10-28T19:39:20Z", "user": "zzh1230" }, { "repo": "pytorch/data", "number": 1132, "title": "torchdata.datapipes.map.Shuffler should return a MapDataPipe", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nHello. I am working on mixing two speech datasets, both of them are indexable datasets. 
Using MapDataPipe, shuffle one of the speech datasets, and zip them together with one zipper:\r\n\r\n```python\r\nimport torchdata.datapipes as dp\r\n\r\ndp1 = dp.map.SequenceWrapper([0, 1, 2, 3, 4, 5]) # speech 1\r\ndp2 = dp.map.SequenceWrapper(['a', 'b', 'c']) # speech 2\r\ndp1 = dp.map.Shuffler(dp1) # shuffle one\r\ndpz = dp.map.Zipper(dp1, dp2)\r\n\r\nprint(list(dpz))\r\nprint(list(dpz))\r\nprint(list(dpz))\r\nprint()\r\n```\r\nHowever, the shuffler returns one IterDataPipe... So the code above raises an Error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/mnt/home/quancs/projects/NBSS_pmt/testxxx.py\", line 6, in <module>\r\n dpz = dp.map.Zipper(dp1, dp2)\r\n File \"/mnt/home/quancs/miniconda3/envs/torch2/lib/python3.10/site-packages/torch/utils/data/datapipes/map/combining.py\", line 80, in __init__\r\n raise TypeError(\"Expected all inputs to be `MapDataPipe`\")\r\nTypeError: Expected all inputs to be `MapDataPipe`\r\n```\r\n\r\nI tried to use IterDataPipe, but I don't know how to sample it in ddp situation (in one epoch, each sample is sampled only once).\r\n\r\n### Versions\r\n\r\nCollecting environment information...\r\nPyTorch version: 2.0.0\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.8\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: CentOS Linux 7 (Core) (x86_64)\r\nGCC version: (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)\r\nClang version: Could not collect\r\nCMake version: version 2.8.12.2\r\nLibc version: glibc-2.17\r\n\r\nPython version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.17\r\nIs CUDA available: True\r\nCUDA runtime version: 11.3.109\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: \r\nGPU 0: NVIDIA A100-SXM4-40GB\r\nGPU 1: NVIDIA A100-SXM4-40GB\r\nGPU 2: NVIDIA A100-SXM4-40GB\r\nGPU 3: NVIDIA A100-SXM4-40GB\r\nGPU 4: NVIDIA A100-SXM4-40GB\r\nGPU 5: NVIDIA A100-SXM4-40GB\r\nGPU 6: NVIDIA A100-SXM4-40GB\r\nGPU 7: NVIDIA A100-SXM4-40GB\r\n\r\nNvidia driver version: 530.30.02\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 128\r\nOn-line CPU(s) list: 0-127\r\nThread(s) per core: 1\r\nCore(s) per socket: 64\r\nSocket(s): 2\r\nNUMA node(s): 8\r\nVendor ID: AuthenticAMD\r\nCPU family: 23\r\nModel: 49\r\nModel name: AMD EPYC 7742 64-Core Processor\r\nStepping: 0\r\nCPU MHz: 1500.000\r\nCPU max MHz: 2250.0000\r\nCPU min MHz: 1500.0000\r\nBogoMIPS: 4491.63\r\nVirtualization: AMD-V\r\nL1d cache: 32K\r\nL1i cache: 32K\r\nL2 cache: 512K\r\nL3 cache: 16384K\r\nNUMA node0 CPU(s): 0-15\r\nNUMA node1 CPU(s): 16-31\r\nNUMA node2 CPU(s): 32-47\r\nNUMA node3 CPU(s): 48-63\r\nNUMA node4 CPU(s): 64-79\r\nNUMA node5 CPU(s): 80-95\r\nNUMA node6 CPU(s): 96-111\r\nNUMA node7 CPU(s): 112-127\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx 
smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy-extensions==1.0.0\r\n[pip3] numpy==1.23.5\r\n[pip3] pytorch-lightning==2.0.1.post0\r\n[pip3] torch==2.0.0\r\n[pip3] torchaudio==2.0.0\r\n[pip3] torchdata==0.6.0\r\n[pip3] torcheval==0.0.6\r\n[pip3] torchmetrics==0.11.4\r\n[pip3] torchtnt==0.0.7\r\n[pip3] torchvision==0.15.0\r\n[pip3] triton==2.0.0\r\n[conda] blas 1.0 mkl \r\n[conda] ffmpeg 4.3 hf484d3e_0 pytorch\r\n[conda] mkl 2021.4.0 h06a4308_640 \r\n[conda] mkl-service 2.4.0 py310h7f8727e_0 \r\n[conda] mkl_fft 1.3.1 py310hd6ae3a3_0 \r\n[conda] mkl_random 1.2.2 py310h00e6091_0 \r\n[conda] numpy 1.23.5 py310hd5efca6_0 \r\n[conda] numpy-base 1.23.5 py310h8e6c178_0 \r\n[conda] pytorch 2.0.0 py3", "url": "https://github.com/meta-pytorch/data/issues/1132", "state": "closed", "labels": [], "created_at": "2023-04-17T02:29:55Z", "updated_at": "2023-04-18T14:56:05Z", "comments": 7, "user": "quancs" }, { "repo": "pytorch/data", "number": 1131, "title": "What does it mean for a DataPipe to be 'replicable'? ", "body": "### \ud83d\udcda The doc issue\n\nIn the [ReadingService docs](https://pytorch.org/data/main/reading_service.html?highlight=replicable) the different sharding options and that one applies to replicable and one to non-replicable datapipes, but it's not really explained what that means.\r\n\r\nIndirectly related, I'm also confused by the names `ShardingRoundRobinDispatcher` and `ShardingFilter`. The docs for `ShardingFilter` say \r\n\r\n> each instance of the DataPipe (on different workers) will have every n-th element of the original DataPipe, where n equals to the number of instances.\r\n\r\nIs that not essentially the definition of round robin distribution? How is that different than what the the DataPipes downstream of a `ShardingRoundRobinDispatcher` on different workers receive?\n\n### Suggest a potential alternative/fix\n\nClarify more the difference between `ShardingRoundRobinDispatcher` and `ShardingFilter` and explain what 'replicable' means in that context. \r\n\r\nPossibly consider renaming `ShardingRoundRobinDispatcher` and `ShardingFilter`, if the answers to my questions above are 'yes' to something more meaningful. ", "url": "https://github.com/meta-pytorch/data/issues/1131", "state": "open", "labels": [], "created_at": "2023-04-15T03:27:12Z", "updated_at": "2023-05-27T21:47:09Z", "comments": 4, "user": "lendle" }, { "repo": "pytorch/TensorRT", "number": 1824, "title": "\u2753 [Question] Pytorch 2.0 Compatability?", "body": "## \u2753 Question\r\n\r\nThanks for this repo. Is TensorRT compatible with pytorch 2.0? I see that the latest release targets pytorch 1.13. Is there some way I can use TensorRT with pytorch 2.0? 
\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1824", "state": "closed", "labels": [ "question" ], "created_at": "2023-04-14T17:38:33Z", "updated_at": "2023-04-22T21:21:34Z", "user": "FabianSchuetze" }, { "repo": "pytorch/pytorch", "number": 99143, "title": "No documentation to show how to implement aten::view for custom backend", "body": "### \ud83d\udcda The doc issue\n\nThe original code is:\r\n\r\n```py\r\n x = torch.empty([1024], device='privateuseone:0')\r\n y = x.view([2, -1]) # raise error by missing aten::view\r\n```\r\nThen I get following errors:\r\n```txt\r\nNotImplementedError: Could not run 'aten::view' with arguments from the 'PrivateUse1' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::view' is only available for ..\r\n```\r\n\r\nAccording to some interface declaration in Pytorch source code, the extension looks like this:\r\n```cpp\r\nstatic at::Tensor __view(c10::DispatchKeySet ks, const at::Tensor & self, c10::SymIntArrayRef size) {\r\n return at::_ops::view::redispatch(ks, self, size);\r\n}\r\nTORCH_LIBRARY_IMPL(aten, Antares, m) {\r\n m.impl(\"view\", __view);\r\n}\r\n```\r\n\r\nHowever, it results in infinite recursive call of this function and ends with stack overflow.\r\nI don't think `x.view([2, -1])` really requires user to define its implementation. If this definition is a must, what documentation can I refer to get it passed correctly?\n\n### Suggest a potential alternative/fix\n\nAn document example of how to implement custom `aten::view`, or any simpler solutions to solve the reshape problem above.\n\ncc @malfet @zou3519 @svekars @carljparker", "url": "https://github.com/pytorch/pytorch/issues/99143", "state": "open", "labels": [ "module: cpp-extensions", "module: docs", "triaged" ], "created_at": "2023-04-14T11:36:09Z", "updated_at": "2024-04-16T16:18:30Z", "user": "ghostplant" }, { "repo": "pytorch/examples", "number": 1136, "title": "examples/imagenet/main.py Multiple Gpus use for training", "body": "By setting up multiple Gpus for use, the model and data are automatically loaded to these Gpus for training. What is the difference between this way and single-node multi-GPU distributed training?\r\n", "url": "https://github.com/pytorch/examples/issues/1136", "state": "open", "labels": [], "created_at": "2023-04-13T12:05:39Z", "updated_at": "2023-04-30T01:18:17Z", "comments": 1, "user": "Ansor-ZJJ" }, { "repo": "pytorch/tutorials", "number": 2284, "title": "[BUG] - module 'torch' has no attribute '_six'", "body": "### Add Link\n\nhttps://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html\n\n### Describe the bug\n\nWhen I try to run the data loader section, it keeps returning this error of torch not having the attribute _six. I made sure that my dataroot is right and the files are there but it just doesn't seem to fix the problem. 
\n\n### Describe your environment\n\nMac and PyTorch version 2.0.0", "url": "https://github.com/pytorch/tutorials/issues/2284", "state": "closed", "labels": [ "question" ], "created_at": "2023-04-13T04:46:34Z", "updated_at": "2024-11-20T14:19:23Z", "user": "vanilladucky" }, { "repo": "pytorch/android-demo-app", "number": 311, "title": "What is MemoryFormat.CHANNELS_LAST?", "body": "And What is BitmaptoFloat32Tensor?\r\n\r\nThx.", "url": "https://github.com/pytorch/android-demo-app/issues/311", "state": "open", "labels": [], "created_at": "2023-04-12T02:03:49Z", "updated_at": "2023-04-12T02:03:49Z", "user": "NeighborhoodCoding" }, { "repo": "pytorch/examples", "number": 1131, "title": "New examples requested", "body": "Hi everyone, @svekars and I are looking to increase the number of new contributions to pytorch/examples, this might be especially interesting to you if you've never contributed to an open source project before.\r\n\r\nAt a high level, we're looking for new interesting models.\r\n\r\nSo here's what you need to do\r\n1. Check out our contributing guide: https://github.com/pytorch/examples/blob/main/CONTRIBUTING.md\r\n2. Pick a model idea - I've listed a few below, comment on this task so others know you're working on it\r\n3. Implement your model from scratch using PyTorch, no external dependencies will be allowed to keep the examples as educational as possible\r\n\r\nYour implementation needs to include\r\n1. A folder with your code which needs to define\r\n 1. Your model architecture\r\n 2. Training code\r\n 4. Evaluation code \r\n 5. An argparser \r\n3. Make sure your script runs in CI so it doesn't break in the future by adding it to `run_python_examples.sh`\r\n4. README describing any usage instructions\r\n\r\nAs an example this recent contribution by @sudomaze is a good one to follow https://github.com/pytorch/examples/pull/1003/files\r\n\r\nHere are some model ideas\r\n\r\n## Model ideas\r\n\r\n\r\n* [ ] Controlnet - Guided diffusion\r\n* [ ] NERF \r\n* [x] Graph Neural Network @JoseLuisC99 \r\n* [ ] Diffusion Model, stable diffusion or any variant of the architecture you like\r\n* [x] Vision Transformer\r\n* [ ] Video model\r\n* [ ] Toolformer\r\n* [ ] Differentiable physics\r\n* [ ] Flownet\r\n* [ ] Dreamfusion or any 3d model\r\n* [ ] Language Translation\r\n* [ ] Swin transformer\r\n\r\nBut I'm quite open to anything we don't have that's cool\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/1131", "state": "closed", "labels": [ "good first issue" ], "created_at": "2023-04-10T19:49:49Z", "updated_at": "2025-07-05T19:17:22Z", "comments": 58, "user": "msaroufim" }, { "repo": "pytorch/serve", "number": 2224, "title": "How to prevent torchserve unloading my models in case of inactivity?", "body": "### \ud83d\udcda The doc issue\n\nAccording to my experience, even though I wasn't able to find it in documentation, torchserve unloads a model after some time of inactivity. 
After the inference api for that model is invoked, it will load it again in memory, and thus increasing total inference time.\r\nCan I control that behavior and set appropriate inactivity time?\r\nOr can I just disable that option at all, and have all my models always loaded in memory?\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2224", "state": "open", "labels": [ "triaged", "sagemaker" ], "created_at": "2023-04-10T12:32:26Z", "updated_at": "2023-05-08T21:51:39Z", "user": "petrovicu" }, { "repo": "pytorch/text", "number": 2145, "title": "Loading vectors into a GPU", "body": "## \ud83d\ude80 Feature\r\n\r\nIs there any way for loading vectors based on device with torchtext.vocab.Vectors class?\r\n", "url": "https://github.com/pytorch/text/issues/2145", "state": "closed", "labels": [], "created_at": "2023-04-06T15:38:38Z", "updated_at": "2023-04-14T18:04:46Z", "comments": 4, "user": "saeeddhqan" }, { "repo": "pytorch/functorch", "number": 1123, "title": "Can I call torch.utils.data.WeightedRandomSampler inside vmap?", "body": "Dear Experts,\r\n\r\nI am trying to accelerate a series of weighted sampling (i.e., transition using a stochastic matrix) using vmap.\r\nBasically, I am trying to accelerate the code from here: https://discuss.pytorch.org/t/best-way-to-implement-series-of-weighted-random-sampling-for-transition-w-stochastic-matrix/176713 using vmap instead of a for loop, by calling torch.utils.data.WeightedRandomSamper() inside vmap (the link is my question asking for any alternative way for acceleration in the general forum).\r\nHowever, I get an error and I am not sure if this is possible.\r\n\r\nBelow is my code:\r\n\r\n```\r\nimport torch\r\nfrom torch import nn\r\nfrom functorch import vmap\r\n\r\nN = 10\r\nM = 20\r\nL = 5\r\n\r\nP = torch.rand([N, M])\r\nx = torch.randint(0, N, [L])\r\nP_new = torch.stack([P[x[i]] for i in range(L)])\r\n\r\nf = lambda p: torch.tensor(list(torch.utils.data.WeightedRandomSampler(p, 1))[0])\r\ny = vmap(f, randomness='different')(P_new)\r\n\r\nprint(y)\r\n```\r\n\r\nIdeally, I want to sample L elements, each using distribution P[x[i]] for i = range(L).\r\nBelow is the error I get:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"xxx/test.py\", line 17, in <module>\r\n y = vmap(f, randomness='different')(P_new)\r\n File \"xxx/functorch/_src/vmap.py\", line 361, in wrapped\r\n return _flat_vmap(\r\n File \"xxx/functorch/_src/vmap.py\", line 487, in _flat_vmap\r\n batched_outputs = func(*batched_inputs, **kwargs)\r\n File xxx/test.py\", line 16, in <lambda>\r\n f = lambda p: torch.tensor(list(torch.utils.data.WeightedRandomSampler(p, 1))[0])\r\n File \"xxxx/site-packages/torch/utils/data/sampler.py\", line 203, in __iter__\r\n yield from iter(rand_tensor.tolist())\r\nRuntimeError: Cannot access data pointer of Tensor that doesn't have storage\r\n```\r\n\r\nI wonder if something like this is fundamentally impossible, or is there a way around my error.\r\n\r\nAny help would be highly appreciated!\r\nThank you", "url": "https://github.com/pytorch/functorch/issues/1123", "state": "closed", "labels": [], "created_at": "2023-04-04T23:47:08Z", "updated_at": "2023-04-04T23:55:33Z", "comments": 1, "user": "kwmaeng91" }, { "repo": "pytorch/text", "number": 2139, "title": "torchtext.vocab.Vectors(..).__getitem__ does not work", "body": "## \u2753 Questions and Help\r\n\r\n\r\nI loaded a model:\r\n```python\r\nvects = torchtext.vocab.Vectors('text5-emb.txt')\r\n```\r\nAnd when I want to 
know whether a vocab is in the dataset or not, I run this:\r\n```python\r\nif \"the\" in vects:\r\n```\r\nand the code stops here. I waited for a long time but it does not do anything.\r\nThen, I loaded the model and set the unk_init to `lambda x: False`\r\nNow, I can use `vects['the']` to know whether the vocab exists or not.\r\n\r\nBut why does not __getitem__ work?\r\n", "url": "https://github.com/pytorch/text/issues/2139", "state": "closed", "labels": [], "created_at": "2023-04-03T17:54:45Z", "updated_at": "2023-04-04T13:52:53Z", "comments": 0, "user": "saeeddhqan" }, { "repo": "pytorch/xla", "number": 4837, "title": "How to run XLA compilation thru MLIR", "body": "## \u2753 Questions and Help\r\nHi,\r\nIs there a way to switch pytorch->XLA to compilation through MLIR chain? (StableHLO/MHLO/LMHLO etc.) Or will it appear only after switch to openxla/xla repository? (I see such pull requests in the list, but according to OpenXLA community meeting slides, these repositories should have the same contents).\r\nSo far, using all found env.options, I managed to get dumps only of HLO (non-MLIR) IR and I guess this is a non-MLIR path used by default.\r\n\r\nThank you.", "url": "https://github.com/pytorch/xla/issues/4837", "state": "closed", "labels": [], "created_at": "2023-03-30T13:16:28Z", "updated_at": "2023-05-22T19:32:41Z", "user": "MUR-83" }, { "repo": "pytorch/xla", "number": 4831, "title": "Increasing rendezvous timeout patience?", "body": "## \u2753 Questions and Help\r\n\r\nHi, this might be a basic question but how do I increase the timeout of `xm.rendezvous()`? I'm training a large model and due to the system we're training on saving can take >5 minutes which results in timeout errors such as\r\n\r\n`2023-03-29 13:52:59 172.16.96.171 [1] RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'torch_xla.core.xla_model.save': Connection reset by peer (14)`\r\n\r\nSorry if I missed this in the documentation. I might have misinterpreted this error but it seems like a basic rendezvous timeout? Thanks!", "url": "https://github.com/pytorch/xla/issues/4831", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2023-03-29T18:38:42Z", "updated_at": "2025-05-05T13:20:41Z", "user": "bram-w" }, { "repo": "pytorch/tutorials", "number": 2273, "title": "[BUG] - Chatbot Tutorial - Unterminated string starting at: line 1 column 91 (char 90)", "body": "### Add Link\n\nhttps://pytorch.org/tutorials/beginner/chatbot_tutorial.html#chatbot-tutorial\n\n### Describe the bug\n\nI downloaded the zip and extracted it. 
\r\n\r\nNow I got this error: \r\n\r\n```\r\nProcessing corpus into lines and conversations...\r\n---------------------------------------------------------------------------\r\nJSONDecodeError Traceback (most recent call last)\r\n[<ipython-input-14-0fd208236945>](https://localhost:8080/#) in <module>\r\n 11 # Load lines and conversations\r\n 12 print(\"\\nProcessing corpus into lines and conversations...\")\r\n---> 13 lines, conversations = loadLinesAndConversations(os.path.join(corpus, \"utterances.jsonl\"))\r\n 14 \r\n 15 # Write new csv file\r\n\r\n3 frames\r\n[/usr/lib/python3.9/json/decoder.py](https://localhost:8080/#) in raw_decode(self, s, idx)\r\n 351 \"\"\"\r\n 352 try:\r\n--> 353 obj, end = self.scan_once(s, idx)\r\n 354 except StopIteration as err:\r\n 355 raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\n\r\nJSONDecodeError: Unterminated string starting at: line 1 column 91 (char 90)\r\n\r\n```\r\n\r\nOn this line:\r\n`lines, conversations = loadLinesAndConversations(os.path.join(corpus, \"utterances.jsonl\"))`\r\n\n\n### Describe your environment\n\nI just clicked on the Collab Notebook button and ran it", "url": "https://github.com/pytorch/tutorials/issues/2273", "state": "open", "labels": [ "bug", "question" ], "created_at": "2023-03-28T21:29:07Z", "updated_at": "2024-11-09T02:31:22Z", "user": "levalencia" }, { "repo": "pytorch/audio", "number": 3206, "title": "How to train a wav2vec 2.0 pretrain model from scratch ?", "body": "### \ud83d\ude80 The feature\n\nThere is an example for hubert training [here](https://github.com/pytorch/audio/tree/main/examples/self_supervised_learning), but has no example aboult wav2vec 2.0.\n\n### Motivation, pitch\n\nI'm woking on ssl with/without a pretrained model to continue train the pretrained model like wav2vec 2.0 on other dataset,\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/audio/issues/3206", "state": "closed", "labels": [ "triaged" ], "created_at": "2023-03-27T13:26:38Z", "updated_at": "2023-04-23T09:57:51Z", "user": "kobenaxie" }, { "repo": "pytorch/pytorch", "number": 97654, "title": "where is the engine_layer_visualize.py,isn't removed?", "body": "### \ud83d\udc1b Describe the bug\n\nwhere is the engine_layer_visualize.py,isn't removed?\n\n### Versions\n\nwhere is the engine_layer_visualize.py,isn't removed?", "url": "https://github.com/pytorch/pytorch/issues/97654", "state": "closed", "labels": [], "created_at": "2023-03-27T08:45:04Z", "updated_at": "2023-03-27T18:20:59Z", "user": "cqray1990" }, { "repo": "pytorch/data", "number": 1110, "title": "`scan` support", "body": "### \ud83d\ude80 The feature\n\nHow does one create an `IterDataPipe` with [`scan`/`fold`](http://learnyouahaskell.com/higher-order-functions) semantics? \n\n### Motivation, pitch\n\nNecessary for pipelines that require some kind of state, eg. label encoding for an unknown number of labels.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/1110", "state": "open", "labels": [ "good first issue", "help wanted" ], "created_at": "2023-03-24T18:24:33Z", "updated_at": "2023-03-24T22:19:53Z", "comments": 3, "user": "samuela" }, { "repo": "pytorch/android-demo-app", "number": 305, "title": "I am doing object detection, app is working fine with android studio emulator. but when run on device it is showing interface as expected, all other buttons working . 
but when detection is pressed nothing happens. What might be the issue?", "body": "", "url": "https://github.com/pytorch/android-demo-app/issues/305", "state": "open", "labels": [], "created_at": "2023-03-24T12:07:25Z", "updated_at": "2023-05-05T15:17:16Z", "user": "som1233" }, { "repo": "pytorch/examples", "number": 1128, "title": "Question about the difference between at::Tensor and torch::Tensor in PyTorch c++", "body": "I think the documentation of the PyTorch C++ library is not quite complete.\r\nI noticed that some code in the cppdocs uses torch::Tensor, especially in the \"Tensor Basics\" and \"Tensor Creation API\" pages, but I can't find \"torch::Tensor\" in the \"Library API\" section, only \"at::Tensor\".\r\nI want to know whether there is any difference between them, and where I can find more complete documentation about the PyTorch C++ API.\r\n", "url": "https://github.com/pytorch/examples/issues/1128", "state": "closed", "labels": [], "created_at": "2023-03-23T06:59:46Z", "updated_at": "2023-03-25T01:56:58Z", "comments": 1, "user": "Ningreka" }, { "repo": "pytorch/pytorch", "number": 97364, "title": "Confused as to where a script is.", "body": "According to pytorch/torch/_C/__init__.pyi.in there's supposed to be a torch/aten script, but I can't find it. Has this been phased out, and if so, is it only in an older version of PyTorch? Without it, one of the programs I downloaded, called Colossalai, completely stops working: it tries to call aten.upsample_nearest2d_backward.vec and can't. According to ChatGPT, the last version of PyTorch it saw, PyTorch 1.9.0, has aten in it, but both versions that you can download from the Get Started page on the PyTorch website don't have it. Any recommendations would be great, thanks.\r\n", "url": "https://github.com/pytorch/pytorch/issues/97364", "state": "closed", "labels": [], "created_at": "2023-03-22T18:04:44Z", "updated_at": "2023-03-24T17:03:18Z", "user": "Shikamaru5" }, { "repo": "pytorch/TensorRT", "number": 1758, "title": "\u2753 [Question] The compilation process does not display errors, but the program does not continue...", "body": "![image](https://user-images.githubusercontent.com/91169172/226786255-d829be12-65d1-46aa-9e02-a2de67a9662a.png)\r\n![image](https://user-images.githubusercontent.com/91169172/226786304-2ed096a2-6ee2-4901-86f7-b8664d9a2090.png)\r\n![image](https://user-images.githubusercontent.com/91169172/226786332-ae430913-d9bb-4d5f-850b-8dac572076a5.png)\r\nWith ResNet it works fine, but with my model it compiles and then doesn't output the result. I don't know if there is a problem with the Input specification.", "url": "https://github.com/pytorch/TensorRT/issues/1758", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-03-22T02:30:28Z", "updated_at": "2023-07-02T00:02:37Z", "user": "AllesOderNicht" }, { "repo": "pytorch/text", "number": 2125, "title": "How to install torchtext for cmake c++?", "body": "", "url": "https://github.com/pytorch/text/issues/2125", "state": "open", "labels": [], "created_at": "2023-03-21T19:07:38Z", "updated_at": "2023-06-06T22:01:16Z", "user": "Toocic" }, { "repo": "pytorch/data", "number": 1104, "title": "Add documentation about custom Shuffle and Sharding DataPipe", "body": "### \ud83d\udcda The doc issue\n\nTorchData has a few special graph functions to handle Shuffle and Sharding DataPipes, but we never document what is expected of those graph functions, which leads users to implement custom shuffle and sharding by diving into our code base. 
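To make the gap concrete, this is the kind of contract users currently end up reverse-engineering from the `ShardingFilter` source: a custom `IterDataPipe` that wants to participate in sharding exposes an `apply_sharding` hook that the graph functions call with the number of replicas and the replica id. The sketch below reflects my reading of the code, not any documented API (which is exactly the problem):

```python
from torchdata.datapipes.iter import IterDataPipe

class MyShardedDataPipe(IterDataPipe):
    """Toy datapipe that shards itself when the graph functions ask it to."""

    def __init__(self, source_datapipe):
        self.source_datapipe = source_datapipe
        self.num_of_instances = 1
        self.instance_id = 0

    # The graph traversal looks for a method like this (mirroring ShardingFilter)
    # and calls it with the total number of replicas and this replica's id.
    def apply_sharding(self, num_of_instances, instance_id, sharding_group=None):
        self.num_of_instances = num_of_instances
        self.instance_id = instance_id

    def __iter__(self):
        for i, item in enumerate(self.source_datapipe):
            if i % self.num_of_instances == self.instance_id:
                yield item
```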
\r\n\r\nWe should add clear document about the expected methods attached to Shuffle or Sharding DataPipe.\r\n\r\nThis problem has been discussed in the #1081 as well, but I want to track documentation issue separately\r\n\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/1104", "state": "open", "labels": [], "created_at": "2023-03-21T18:14:29Z", "updated_at": "2023-03-21T21:47:32Z", "comments": 0, "user": "ejguan" }, { "repo": "pytorch/vision", "number": 7438, "title": "Feedback on Video APIs", "body": "### Feedback request\r\n\r\nWith torchaudio's recent success in getting a clean FFMPEG build with a full support for FFMPEG 5 and 6 (something we can't replicate in torchvision easily yet), we are thinking of adopting their API and joining efforts to have a better support for video reading. \r\n\r\nWith that in mind, we were hoping to gather a some feedback from TV users who rely on video reader (or would like to use it but find it hard to do so):\r\n\r\n1. What are your main pain points with our current API?\r\n2. What do you wish was supported? \r\n3. What are the most important features of a video IO for you? \r\n\r\nWe can't promise we'll support everything (of course), but we'd love to gather as much feedback as possible and get as much of it incorporated as possible. \r\n", "url": "https://github.com/pytorch/vision/issues/7438", "state": "open", "labels": [ "question", "needs discussion", "module: io", "module: video" ], "created_at": "2023-03-21T13:20:36Z", "updated_at": "2024-05-20T14:50:59Z", "user": "bjuncek" }, { "repo": "pytorch/kineto", "number": 743, "title": "Questions about ROCm profiler", "body": "Hi @mwootton @aaronenyeshi ,\r\n\r\nI found some interesting results for the models running on NVIDIA A100 and AMD MI210 GPUs. For example, I tested model resnext50_32x4d in [TorchBench](https://github.com/pytorch/benchmark). resnext50_32x4d obtains about 4.89X speedup on MI210. However, when I use PyTorch Profiler to profile models on MI210, the profile trace is strange. The total execution time of resnext50_32x4d is about 32ms on A100 and 7ms on MI210. But in the profile traces, the execution time is about 117ms on A100 and 106ms on MI210. I tested PyTorch 1.13.1 with CUDA 11.7 and ROCm 5.2. And the profile traces have been attached. Do you have any ideas?\r\n\r\nAnother question is that what do the GPU kernels do before the `Profiler Step` in ROCm profiling trace? 
These kernels take about 45s, but no Python calling context is shown in the trace view.\r\n\r\n[resnext50.zip](https://github.com/pytorch/kineto/files/11021559/resnext50.zip)\r\n", "url": "https://github.com/pytorch/kineto/issues/743", "state": "closed", "labels": [ "question" ], "created_at": "2023-03-20T18:11:16Z", "updated_at": "2023-10-24T17:39:57Z", "user": "FindHao" }, { "repo": "pytorch/TensorRT", "number": 1749, "title": "How to import after compilation", "body": "![image](https://user-images.githubusercontent.com/91169172/226314599-ba8a8424-fe2c-421a-827b-2d63e3502057.png)\r\nIt shows that I don't have this package when I import torch_tensorrt. \r\n", "url": "https://github.com/pytorch/TensorRT/issues/1749", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-03-20T10:35:37Z", "updated_at": "2023-06-29T00:02:42Z", "user": "AllesOderNicht" }, { "repo": "pytorch/rl", "number": 977, "title": "[Feature Request] How to implement algorithms with multiple optimisation phases like PPG?", "body": "## Motivation\r\n\r\nI'm trying to implement the [PPG](https://proceedings.mlr.press/v139/cobbe21a) and [DNA](https://arxiv.org/pdf/2206.10027.pdf) algorithms with torchrl, and both algorithms have more than one optimisation phase in a single training loop. However, it seems the [Trainer class](https://pytorch.org/rl/reference/trainers.html) doesn't support multiple loss modules or optimisers.\r\n\r\n\r\n## Solution\r\n\r\nI wish there were example code showing how to implement the aforementioned algorithms, or alternatively, good guidance on how to customise the Trainer.\r\n\r\n\r\n## Checklist\r\n\r\n- [x] I have checked that there is no similar issue in the repo \r\n", "url": "https://github.com/pytorch/rl/issues/977", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-03-20T04:57:59Z", "updated_at": "2023-03-21T06:44:16Z", "user": "levilovearch" }, { "repo": "pytorch/pytorch", "number": 97026, "title": "How to get list of all valid devices?", "body": "### \ud83d\udcda The doc issue\n\n`torch.testing.get_all_device_types()`\r\nyields all valid devices on the current machine. However, unlike `torch._tensor_classes`, `torch.testing.get_all_dtypes()`, and `import typing; typing.get_args(torch.types.Device)`, there doesn't seem to be a comprehensive list of all valid device types, which only gets listed when I force an error\r\n\r\n```\r\ntorch.device('asdasjdfas')\r\nRuntimeError: Expected one of cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, privateuseone device type at start of device string: asdasjdfas\r\n```\n\n### Suggest a potential alternative/fix\n\n```\r\ntorch._device_names = cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, privateuseone \r\n\r\n```\n\ncc @svekars @carljparker", "url": "https://github.com/pytorch/pytorch/issues/97026", "state": "open", "labels": [ "module: docs", "triaged" ], "created_at": "2023-03-17T15:37:00Z", "updated_at": "2023-03-20T23:49:13Z", "user": "dsm-72" }, { "repo": "pytorch/kineto", "number": 742, "title": "How can I get detailed aten::op name like add.Tensor/abs.out?", "body": "I wonder if I could trace detailed op names like add.Tensor, add.Scalar, sin.out, abs.out.\r\nCurrently the profiler only gives me add/sin/abs, etc.\r\n\r\nIs there a method to acquire the detailed dispatched op names?", "url": "https://github.com/pytorch/kineto/issues/742", "state": "closed", "labels": [ "question" ], "created_at": 
"2023-03-17T05:25:42Z", "updated_at": "2024-04-23T15:31:20Z", "user": "Hurray0" }, { "repo": "pytorch/cppdocs", "number": 16, "title": "How to set up pytorch for c++ (with g++) via commandline not cmake", "body": "I have a Lapop with a nvidia graphicscard and I'm trying to use pytorch for cuda with g++. But i couldn't find any good information about dependecies e.g and my compiler always trohws errors, I'm currently using this command I found on the internet: \"g++ -std=c++14 main.cpp -I ${TORCH_DIR}/include/torch/csrc/api/include/ -I ${TORCH_DIR}/include/ -L ${TORCH_DIR}/lib/ -L /usr/local/cuda/lib64 -L /usr/local/cuda/nvvm/lib64 -ltorch -lc10 -lc10_cuda -lnvrtc -lcudart_static -ldl -lrt -pthread -o out\", but it just says: \"torch/torch.h: file not found\"", "url": "https://github.com/pytorch/cppdocs/issues/16", "state": "closed", "labels": [], "created_at": "2023-03-13T21:15:08Z", "updated_at": "2023-03-18T22:36:04Z", "user": "usr577" }, { "repo": "pytorch/pytorch", "number": 96655, "title": "What is the state of support for AD of BatchNormalization and DropOut layers?", "body": "I have come to this issue from this post.\r\n\r\nhttps://pytorch.org/functorch/stable/notebooks/per_sample_grads.html\r\n\r\n## Background\r\n\r\nWhat I am doing requires per-sample gradient (in fact, I migrated from TF, so I do not have much experience with pytorch, but I have a sufficient understanding of NN training).\r\nWhen reading the post, I could not figure out whether functorch's `vmap` supports AD function of BN and DropOut layers.\r\n\r\nIn my understanding, these layers are relatively popular.\r\nBecause they are not pure functions (different behaviors in training and testing modes), and also not pure (e.g., BN layer accumulates the average across the different forward pass in training mode), which makes me wonder:\r\n\r\n## My questions\r\n1. Does functorch's `vmap` support AD function of BN and DropOut layers?\r\n2. If yes, how does it do that?\r\n\r\nI tried searching for issues with BatchNormalization or DropOut keywords, but the results were fragmented and I still do not know what is the current state now.\r\nOpacus says that only `EmbeddingBag` is not supported (https://github.com/pytorch/opacus/blob/5aa378ea98df9caf8ca1987ee4d636219267d17e/opacus/grad_sample/functorch.py#L22).\r\nCould anyone tell me the answer?\r\n\r\nIf possible, updating the docs to clarify the supports for these layers would be great.\r\n\r\nThank you very much.\n\ncc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99", "url": "https://github.com/pytorch/pytorch/issues/96655", "state": "closed", "labels": [ "triaged", "module: functorch" ], "created_at": "2023-03-10T01:47:20Z", "updated_at": "2023-03-15T16:02:05Z", "user": "tranvansang" }, { "repo": "pytorch/TensorRT", "number": 1730, "title": "\u2753 [Question] Does torch-tensorrt support seq2seq models?", "body": "## \u2753 Question\r\nDoes torch-tensorrt support seq2seq models? Are there any documentation/examples?\r\n\r\n\r\n## What you have already tried\r\n\r\nPreviously, when I tried to use TensorRT, I need to convert the original torch seq2seq model to 2 onnx files, then convert them separately to TensorRT using trtexec. Not sure if this has changed with torch-tensorrt. 
\r\n\r\nThanks!\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1730", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-03-08T00:51:31Z", "updated_at": "2023-06-19T00:02:34Z", "user": "brevity2021" }, { "repo": "pytorch/TensorRT", "number": 1727, "title": "complie model failed", "body": "## compile model failed with torchtrt-fp32 opt\r\n\r\n#### ERROR INFO\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - Tensor DataType is determined at build time for tensors not marked as input or output.\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - 4: [graphShapeAnalyzer.cpp::analyzeShapes::1285] Error Code 4: Miscellaneous (IShuffleLayer (Unnamed Layer* 84) [Shuffle]: reshape changes volume. Reshaping [1,512,1,(+ (CEIL_DIV (+ (# 3 (SHAPE input_0)) -4) 4) 1)] to [1,512,0].)\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - 2: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )\r\nterminate called after throwing an instance of 'torch_tensorrt::Error'\r\n what(): [Error thrown at core/conversion/conversionctx/ConversionCtx.cpp:147] Building serialized network failed in TensorRT\r\n\r\nAborted\r\n\r\n\r\ncode: \r\n\r\n std::string min_input_shape = \"1 3 32 32\";\r\n std::string opt_input_shape = \"1 3 32 512\";\r\n std::string max_input_shape = \"1 3 32 1024\";\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.11.0):\r\n - CPU Architecture: x86_64\r\n - OS (Linux):\r\n - How you installed PyTorch (`libtorch`):\r\n - Python version:3.8\r\n - CUDA version:11.3\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1727", "state": "closed", "labels": [ "question", "component: conversion", "No Activity" ], "created_at": "2023-03-07T04:09:45Z", "updated_at": "2023-06-18T00:02:24Z", "user": "f291400" }, { "repo": "pytorch/audio", "number": 3153, "title": "Google colab notebook pointing to PyTorch 1.13.1", "body": "### \ud83d\udcda The doc issue\n\nWhen I open https://pytorch.org/audio/main/tutorials/audio_data_augmentation_tutorial.html in google colab and try running the notebook, I see that the PyTorch version is 1.13.1\r\n![Screenshot 2023-03-06 at 6 17 58 PM](https://user-images.githubusercontent.com/16617092/223302468-664694ef-ddf3-4b67-953e-91fb75a94677.png)\r\n\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/audio/issues/3153", "state": "closed", "labels": [ "question", "triaged" ], "created_at": "2023-03-07T02:19:06Z", "updated_at": "2023-03-07T15:41:03Z", "user": "agunapal" }, { "repo": "pytorch/tutorials", "number": 2230, "title": "model.train(False) affects gradient tracking?", "body": "In this tutorial here it says in the comment that \"# We don't need gradients on to do reporting\". From what I understand the train flag only affects layers such as dropout and batch-normalization. 
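A quick way to check the autograd side empirically (a minimal sketch with an arbitrary small module, not code from the tutorial):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.randn(2, 4)

model.train(False)               # eval mode: changes dropout/batch-norm behaviour only
out_eval = model(x)
print(out_eval.requires_grad)    # True -> autograd still tracks operations on the parameters

with torch.no_grad():            # this context is what actually disables gradient tracking
    out_nograd = model(x)
print(out_nograd.requires_grad)  # False
```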
Does it also affect gradient calculations, or is this comment wrong?\r\n\r\nhttps://github.com/pytorch/tutorials/blob/6bd30cf214bf541a1c5d35cc45d10a381f57af1b/beginner_source/introyt/trainingyt.py#L293\n\ncc @suraj813", "url": "https://github.com/pytorch/tutorials/issues/2230", "state": "closed", "labels": [ "question", "intro", "docathon-h1-2023", "easy" ], "created_at": "2023-03-02T12:09:13Z", "updated_at": "2023-06-01T01:19:02Z", "user": "MaverickMeerkat" }, { "repo": "pytorch/serve", "number": 2162, "title": "How to run torchserver without log printing\uff1f", "body": "How to run torchserver without log printing\uff1fI didn't see the relevant command line. Could someone tell me, thank you!", "url": "https://github.com/pytorch/serve/issues/2162", "state": "closed", "labels": [ "triaged", "support" ], "created_at": "2023-02-28T02:22:02Z", "updated_at": "2023-03-09T20:06:37Z", "user": "mqy9787" }, { "repo": "pytorch/examples", "number": 1121, "title": "About fast_neural_style", "body": "How many rounds did you train in the fast neural style transfer experiment? I operate according to your steps, but the effect of the model I trained is not as good as the model you provided, and why is the model file I trained less than the file you provided by 3kb? I would like to know the reason and look forward to your reply!", "url": "https://github.com/pytorch/examples/issues/1121", "state": "closed", "labels": [ "help wanted" ], "created_at": "2023-02-27T15:53:38Z", "updated_at": "2023-08-17T09:26:17Z", "comments": 2, "user": "TOUBH" }, { "repo": "pytorch/functorch", "number": 1113, "title": "How to get the jacobian matrix in GCNs?", "body": "Hi, I'm trying to use `jacrev` to get the jacobians in graph convolution networks, but it seems like I've called the function incorrectly. 
\r\n\r\n```python\r\nimport torch.nn.functional as F\r\nimport functorch\r\nimport torch_geometric\r\nfrom torch_geometric.data import Data\r\n\r\nclass GCN(torch.nn.Module):\r\n def __init__(self, input_dim, hidden_dim, output_dim):\r\n super().__init__()\r\n torch.manual_seed(12345)\r\n \r\n self.conv1 = torch_geometric.nn.GCNConv(input_dim, hidden_dim, aggr='add')\r\n self.conv2 = torch_geometric.nn.GCNConv(hidden_dim, output_dim, aggr='add')\r\n \r\n def forward(self, x, edge_index):\r\n x = self.conv1(x, edge_index)\r\n x = x.relu()\r\n x = F.dropout(x, p=0.5, training=self.training)\r\n x = self.conv2(x, edge_index)\r\n return x\r\n\r\nadj_matrix = torch.ones(3,3)\r\nedge_index = adj_matrix .nonzero().t().contiguous()\r\n\r\ngcn = GCN(input_dim=5, hidden_dim=64, output_dim=5)\r\n\r\nN = (128,3, 5) \r\n\r\nx =torch.randn(N, requires_grad=True) # batch_size:128, node_num:10 , node_feature: 5 \r\n\r\ngraph = Data(x=x, edge_index=edge_index)\r\n\r\ngcn_out = gcn(graph.x, graph.edge_index)\r\n\r\n```\r\nThen I try to compute the jacobians of the input data `x` based on the tutorial, \r\n\r\n```python\r\njacobian = functorch.vmap(functorch.jacrev(gcn))(graph.x, graph.edge_index)\r\n```\r\n\r\nand get the following error message: \r\n\r\n```python\r\nValueError: vmap: Expected all tensors to have the same size in the mapped dimension, got sizes [128, 2] for the mapped dimension\r\n```", "url": "https://github.com/pytorch/functorch/issues/1113", "state": "open", "labels": [], "created_at": "2023-02-27T13:23:50Z", "updated_at": "2023-02-27T13:24:15Z", "user": "pcheng2" }, { "repo": "pytorch/functorch", "number": 1112, "title": "Error about using a grad transform with in-place operation is inconsistent with and without DDP", "body": "Hi,\r\n\r\nI was using `torch.func` in pytorch 2.0 to compute the Hessian-vector product of a neural network.\r\n\r\nI first used `torch.func.functional_call` to define a functional version of the neural network model, and then proceeded to use `torch.func.jvp` and `torch.func.grad` to compute the hvp.\r\n\r\nThe above works when I was using one gpu without parallel processing. However, when I wrapped the model with Distributed Data Parallel (DDP), it gave the following error:\r\n\r\n`*** RuntimeError: During a grad (vjp, jvp, grad, etc) transform, the function provided attempted to call in-place operation (aten::copy_) that would mutate a captured Tensor. This is not supported; please rewrite the function being transformed to explicitly accept the mutated Tensor(s) as inputs.`\r\n\r\nI am confused about this error, because if there were indeed such in-place operations (which I couldn't find in my model.forward() code), I'd expect this error to occur regardless of DDP. 
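For reference, the single-GPU pattern described above looks roughly like this (a minimal sketch with a toy model and squared-error loss; the real model, loss, and data are of course different):

```python
import torch
import torch.nn as nn
from torch.func import functional_call, grad, jvp

model = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 1))
x, y = torch.randn(32, 8), torch.randn(32, 1)

# functional version of the model: parameters are passed in explicitly
params = {k: v.detach() for k, v in model.named_parameters()}

def loss_fn(p):
    pred = functional_call(model, p, (x,))
    return ((pred - y) ** 2).mean()

# Hessian-vector product via forward-over-reverse: jvp of the gradient function
v = {k: torch.randn_like(p) for k, p in params.items()}
_, hvp = jvp(grad(loss_fn), (params,), (v,))
```

Without DDP this pattern runs fine for me; the error above only appears once the model is wrapped in DistributedDataParallel.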
Given the inconsistent behaviour, can I still trust the hvp result when I wasn't using DDP?\r\n\r\nMy torch version: is `2.0.0.dev20230119+cu117`\r\n\r\n", "url": "https://github.com/pytorch/functorch/issues/1112", "state": "open", "labels": [], "created_at": "2023-02-24T23:09:30Z", "updated_at": "2023-03-14T13:56:55Z", "comments": 1, "user": "XuchanBao" }, { "repo": "pytorch/text", "number": 2072, "title": "how to build torchtext in cpp wiht cmake?", "body": " HI, guys, I want to use torchtext with liborch in cpp like cmake build torchvision in cpp, but I has try ,but meet some error in windows system,I don't know why some dependency subdirectory is empty, how to build it then include with cpp ?\r\nthanks\r\n\r\n````\r\n-- Building for: Visual Studio 17 2022\r\n-- Selecting Windows SDK version 10.0.20348.0 to target Windows 10.0.22621.\r\n-- The C compiler identification is MSVC 19.34.31942.0\r\n-- The CXX compiler identification is MSVC 19.34.31942.0\r\n-- Detecting C compiler ABI info\r\n-- Detecting C compiler ABI info - done\r\n-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Professional/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe - skipped\r\n-- Detecting C compile features\r\n-- Detecting C compile features - done\r\n-- Detecting CXX compiler ABI info\r\n-- Detecting CXX compiler ABI info - done\r\n-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Professional/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe - skipped\r\n-- Detecting CXX compile features\r\n-- Detecting CXX compile features - done\r\nCMake Error at third_party/CMakeLists.txt:8 (add_subdirectory):\r\n The source directory\r\n\r\n C:/Apps/text-main/third_party/re2\r\n\r\n does not contain a CMakeLists.txt file.\r\n\r\n\r\nCMake Error at third_party/CMakeLists.txt:9 (add_subdirectory):\r\n The source directory\r\n\r\n C:/Apps/text-main/third_party/double-conversion\r\n\r\n does not contain a CMakeLists.txt file.\r\n\r\n\r\nCMake Error at third_party/CMakeLists.txt:10 (add_subdirectory):\r\n The source directory\r\n\r\n C:/Apps/text-main/third_party/sentencepiece\r\n\r\n does not contain a CMakeLists.txt file.\r\n\r\n\r\nCMake Error at third_party/CMakeLists.txt:11 (add_subdirectory):\r\n The source directory\r\n\r\n C:/Apps/text-main/third_party/utf8proc\r\n\r\n does not contain a CMakeLists.txt file.\r\n\r\n\r\n-- Configuring incomplete, errors occurred!\r\nPS C:\\Apps\\text-main\\build>\r\n\r\n````\r\n", "url": "https://github.com/pytorch/text/issues/2072", "state": "closed", "labels": [], "created_at": "2023-02-22T15:33:37Z", "updated_at": "2023-02-23T02:53:05Z", "user": "mullerhai" }, { "repo": "pytorch/xla", "number": 4666, "title": "Got error when build xla from source", "body": "Hi! I am trying to build xla wheel by following the setup guide here: https://github.com/pytorch/xla/blob/master/CONTRIBUTING.md\r\n\r\nI skipped building torch by `pip install torch==1.13.0` into virtualenv, and then run `env BUILD_CPP_TESTS=0 python setup.py bdist_wheel` under pytorch/xla. 
I got the following error:\r\n\r\n```bash\r\nERROR: /home/ubuntu/pytorch/xla/third_party/tensorflow/tensorflow/compiler/xla/xla_client/BUILD:42:20: Linking tensorflow/compiler/xla/xla_client/libxla_computation_client.so failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc @bazel-out/k8-opt/bin/tensorflow/compiler/xla/xla_client/libxla_computation_client.so-2.params\r\nbazel-out/k8-opt/bin/tensorflow/core/profiler/convert/_objs/xplane_to_tools_data/xplane_to_tools_data.pic.o:xplane_to_tools_data.cc:function tensorflow::profiler::ConvertMultiXSpacesToToolData(tensorflow::profiler::SessionSnapshot const&, std::basic_string_view<char, std::char_traits<char> >, absl::lts_20220623::flat_hash_map<std::string, std::variant<int, std::string>, absl::lts_20220623::container_internal::StringHash, absl::lts_20220623::container_internal::StringEq, std::allocator<std::pair<std::string const, std::variant<int, std::string> > > > const&): error: undefined reference to 'tensorflow::profiler::ConvertHloProtoToToolData(tensorflow::profiler::SessionSnapshot const&, std::basic_string_view<char, std::char_traits<char> >, absl::lts_20220623::flat_hash_map<std::string, std::variant<int, std::string>, absl::lts_20220623::container_internal::StringHash, absl::lts_20220623::container_internal::StringEq, std::allocator<std::pair<std::string const, std::variant<int, std::string> > > > const&)'\r\ncollect2: error: ld returned 1 exit status\r\nTarget //tensorflow/compiler/xla/xla_client:libxla_computation_client.so failed to build\r\nUse --verbose_failures to see the command lines of failed build steps.\r\nINFO: Elapsed time: 1657.036s, Critical Path: 351.63s\r\nINFO: 9274 processes: 746 internal, 8528 local.\r\nFAILED: Build did NOT complete successfully\r\nFailed to build external libraries: ['/home/ubuntu/pytorch/xla/build_torch_xla_libs.sh', '-O', '-D_GLIBCXX_USE_CXX11_ABI=0', 'bdist_wheel']\r\n```", "url": "https://github.com/pytorch/xla/issues/4666", "state": "closed", "labels": [ "question", "build" ], "created_at": "2023-02-21T19:56:52Z", "updated_at": "2025-05-06T13:32:43Z", "user": "aws-bowencc" }, { "repo": "pytorch/xla", "number": 4662, "title": "CUDA momery\uff1ahow can i control xla reserved in total by PyTorch with GPU", "body": "## \u2753 Questions and Help\r\nI see xla will reserve almost all memory on GPU\uff0cbut when i run code both with xla and cuda\uff0c it will be error of `torch.cuda.OutOfMemoryError`\u3002\r\n\r\n```python\r\n File \"/workspace/volume/hqp-nas/xla/mmdetection/mmdet/models/backbones/resnet.py\", line 298, in forward\r\n out = _inner_forward(x)\r\n File \"/workspace/volume/hqp-nas/xla/mmdetection/mmdet/models/backbones/resnet.py\", line 275, in _inner_forward\r\n out = self.conv2(out)\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.8/site-packages/mmcv/ops/modulated_deform_conv.py\", line 338, in forward\r\n output = modulated_deform_conv2d(x, offset, mask, weight1, bias1,\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.8/site-packages/mmcv/ops/modulated_deform_conv.py\", line 142, in forward\r\n ext_module.modulated_deform_conv_forward(\r\ntorch.cuda.OutOfMemoryError: CUDA out of memory. 
Tried to allocate 52.00 MiB (GPU 0; 79.20 GiB total capacity; 752.52 MiB already allocated; 27.25 MiB free; 886.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n```\r\n\r\nWe can see there is 80GB in the single card of GPU-A100\u3002but CUDA of Pytorch only 886.00 MiB reserved\uff0cand xla does reserve almost all memory on GPU\u3002if i need cuda to exec operators that xla is not supported\uff0cit need more memory\u3002\r\n\r\n```markdown\r\nTue Feb 21 12:39:45 2023 \r\n+-----------------------------------------------------------+\r\n| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|===============================+======================+======================|\r\n| 0 NVIDIA A100-SXM... Off | 00000000:16:00.0 Off | 0 |\r\n| N/A 33C P0 88W / 400W | 81073MiB / 81920MiB | 0% Default |\r\n| | | Disabled |\r\n+-------------------------------+----------------------+----------------------+\r\n \r\n+-----------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=============================================================================|\r\n+-----------------------------------------------------------------------------+\r\n```\r\n\r\nIf i can contorl the size of XLA reserved\uff0c it will be nice\u3002any answer will be helpful\u3002\r\n", "url": "https://github.com/pytorch/xla/issues/4662", "state": "closed", "labels": [ "question", "xla:gpu" ], "created_at": "2023-02-21T12:43:09Z", "updated_at": "2025-05-07T12:13:34Z", "user": "qipengh" }, { "repo": "pytorch/kineto", "number": 727, "title": "How to trace torch cuda time in C++ using kineto\uff1f", "body": "**The problem**\r\nHi, I am using the pytorch profile to trace the gpu performance of models, and it works well in python. \r\nFor example: \r\n\r\n```\r\nimport torch\r\nfrom torch.autograd.profiler import profile, record_function\r\n\r\nwith profile(record_shapes=True, use_cuda=True, use_kineto=True, with_stack=False) as prof:\r\n with record_function(\"model_inference\"):\r\n a = torch.randn(128, 128, device=torch.device('cuda:0'))\r\n b = torch.randn(128, 128, device=torch.device('cuda:0'))\r\n c = a + b\r\n\r\nprint(prof.key_averages().table(sort_by=\"cuda_time_total\", row_limit=50))\r\n```\r\nNow, I want to implement the above code in C++ and get each operator's cuda (kernel) time. But I found very few relevant examples. 
So I implemented a C++ program against the python interface.\r\n\r\n```\r\n#include <torch/csrc/autograd/profiler_kineto.h>\r\n...\r\n...\r\nconst std::set<torch::autograd::profiler::ActivityType> activities(\r\n {torch::autograd::profiler::ActivityType::CPU, torch::autograd::profiler::ActivityType::CUDA});\r\n\r\ntorch::autograd::profiler::prepareProfiler(\r\n torch::autograd::profiler::ProfilerConfig(\r\n torch::autograd::profiler::ProfilerState::KINETO, false, false), activities);\r\n\r\ntorch::autograd::profiler::enableProfiler(\r\n torch::autograd::profiler::ProfilerConfig(\r\n torch::autograd::profiler::ProfilerState::KINETO, false, false), activities);\r\n\r\nauto a = torch::rand({128, 128}, {at::kCUDA});\r\nauto b = torch::rand({128, 128}, {at::kCUDA});\r\nauto c = a + b;\r\n\r\nauto profiler_results_ptr = torch::autograd::profiler::disableProfiler();\r\nconst auto& kineto_events = profiler_results_ptr->events();\r\n\r\nfor (const auto e : kineto_events) {\r\n std::cout << e.name() << \" \" << e.cudaElapsedUs() << \" \" << e.durationUs()<<std::endl;\r\n}\r\n```\r\nBut the printed cuda time is all equal to -1 like:\r\n```\r\naten::empty -1 847\r\naten::uniform_ -1 3005641\r\naten::rand -1 3006600\r\naten::empty -1 21\r\naten::uniform_ -1 53\r\naten::rand -1 82\r\naten::add -1 156\r\ncudaStreamIsCapturing -1 8\r\n_ZN2at6native90_GLOBAL__N__66_tmpxft_000055e0_00000000_13_DistributionUniform_compute_86_cpp1_ii_f2fea07d43distribution_elementwise_grid_stride_kernelIfLi4EZNS0_9templates4cuda21uniform_and_transformIffLm4EPNS_17CUDAGeneratorImplEZZZNS4_14uniform_kernelIS7_EEvRNS_18TensorIteratorBaseEddT_ENKUlvE_clEvENKUlvE2_clEvEUlfE_EEvSA_T2_T3_EUlP24curandStatePhilox4_32_10E0_ZNS1_27distribution_nullary_kernelIffLi4ES7_SJ_SE_EEvSA_SF_RKSG_T4_EUlifE_EEviNS_15PhiloxCudaStateET1_SF_ -1 2\r\ncudaLaunchKernel -1 3005499\r\ncudaStreamIsCapturing -1 4\r\n_ZN2at6native90_GLOBAL__N__66_tmpxft_000055e0_00000000_13_DistributionUniform_compute_86_cpp1_ii_f2fea07d43distribution_elementwise_grid_stride_kernelIfLi4EZNS0_9templates4cuda21uniform_and_transformIffLm4EPNS_17CUDAGeneratorImplEZZZNS4_14uniform_kernelIS7_EEvRNS_18TensorIteratorBaseEddT_ENKUlvE_clEvENKUlvE2_clEvEUlfE_EEvSA_T2_T3_EUlP24curandStatePhilox4_32_10E0_ZNS1_27distribution_nullary_kernelIffLi4ES7_SJ_SE_EEvSA_SF_RKSG_T4_EUlifE_EEviNS_15PhiloxCudaStateET1_SF_ -1 1\r\ncudaLaunchKernel -1 14\r\nvoid at::native::vectorized_elementwise_kernel<4, at::native::AddFunctor<float>, at::detail::Array<char*, 3> >(int, at::native::AddFunctor<float>, at::detail::Array<char*, 3>) -1 1\r\ncudaLaunchKernel -1 16\r\n```\r\nI carefully compared the differences between the above two programs (python and C++) but did not find the cause of the problem. 
I also tried other parameter combinations and couldn't get the real cuda time.\r\n\r\n**Expected behavior** \r\nIt can output cuda time of each operator in C++ program like python.\r\n\r\n**Environment version**\r\nOS: CentOS release 7.5 (Final)\r\nnvidia driver version: 460.32.03\r\nCUDA version: 11.2\r\nPyTorch version: 1.9.0+cu111\r\nPython version: 3.6.5\r\nGPU: A10", "url": "https://github.com/pytorch/kineto/issues/727", "state": "closed", "labels": [ "question" ], "created_at": "2023-02-21T11:37:24Z", "updated_at": "2023-10-10T15:07:23Z", "user": "TianShaoqing" }, { "repo": "pytorch/kineto", "number": 726, "title": "How to remove log output similar to \u201cActivityProfilerController.cpp:294] Completed Stage: Warm Up\u201d", "body": "## What I encounter\r\nwhen I use torch.profie to profie a large model, I found my log file have many lines like:\r\n```\r\nSTAGE:2023-02-21 15:15:48 101902:101902 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101903:101903 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101898:101898 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101899:101899 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101903:101903 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101902:101902 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101898:101898 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101899:101899 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101903:101903 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101902:101902 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101898:101898 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101899:101899 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101903:101903 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101902:101902 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101898:101898 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101903:101903 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101899:101899 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101902:101902 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101903:101903 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101902:101902 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101903:101903 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101898:101898 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101899:101899 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101902:101902 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101903:101903 ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101902:101902 
ActivityProfilerController.cpp:300] Completed Stage: Collection\r\nSTAGE:2023-02-21 15:15:48 101903:101903 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101902:101902 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\nSTAGE:2023-02-21 15:15:48 101898:101898 ActivityProfilerController.cpp:294] Completed Stage: Warm Up\r\n```\r\n## What I expect\r\n\r\nIs there any way to ignore or turn off the output of these useless logs? I tried to set the environment variable `KINETO_LOG_LEVEL` equal to 99, but it didn't work. thanks you all.\r\n\r\n```python\r\nimport os\r\nos.environ.update({'KINETO_LOG_LEVEL' : '99'})\r\n```\r\n\r\n## Version and platform\r\nCentOS-7 Linux\r\ntorch 1.13.1+cu117 <pip>\r\ntorch-tb-profiler 0.4.1 <pip>\r\ntorchaudio 0.13.1+cu117 <pip>\r\ntorchvision 0.14.1+cu117 <pip>", "url": "https://github.com/pytorch/kineto/issues/726", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-02-21T07:46:59Z", "updated_at": "2023-06-22T02:37:44Z", "user": "SolenoidWGT" }, { "repo": "pytorch/data", "number": 1033, "title": "Accessing DataPipe state with MultiProcessingReadingService", "body": "Hi TorchData team,\r\n\r\nI'm wondering how to access the state of the datapipe in the multi-processing context with DataLoader2 + MultiProcessingReadingService. When using no reading service, we can simply access the graph using `dataloader.datapipe`, then I can easily access the state of my datapipe using the code shown below.\r\n\r\nHowever, in the multi processing case, the datapipe graph is replaced with QueueWrapper instances, and I cannot find any way to communicate with the workers to get access to the state of the data pipe (and I get the error that my StatefulIterator cannot be found on the datapipe). 
If I access `dl2._datapipe_before_reading_service_adapt` I do get the initial state only which makes sense since there is no state sync between the main and worker processes.\r\n\r\nAs far as I understand, this will also be a blocker for state capturing for proper DataLoader checkpointing when the MultiProcessingReadingService is being used.\r\n\r\nPotentially, could we add a `getstate` communication primitive in `communication.messages` in order to capture the state (via getstate) of a datapipe in a worker process?\r\nWe're also open to using `sharding_round_robin_dispatch` in order to keep more information in the main process but I'm a bit confused on how to use it, if you have some sample code for me for the following case?\r\n\r\nRunning against today's master (commit a3b34a00e7d2b6694ea0d5e21fcc084080a3abae):\r\n\r\n```python\r\nimport torchdata.datapipes as dp\r\nfrom torch.utils.data.graph_settings import get_all_graph_pipes, traverse_dps\r\nfrom torchdata.dataloader2 import DataLoader2, MultiProcessingReadingService\r\n\r\n\r\nclass StatefulIterator(dp.iter.IterDataPipe):\r\n def __init__(self, datapipe):\r\n self.datapipe = datapipe\r\n self.custom_index = 0\r\n\r\n def __iter__(self):\r\n self.custom_index = 0\r\n for item in self.datapipe:\r\n self.custom_index += 1\r\n yield item\r\n self.custom_index = 0\r\n\r\n\r\ndef get_datapipe():\r\n initial_data = dp.iter.IterableWrapper([1, 2, 3, 4])\r\n stateful_data = StatefulIterator(initial_data)\r\n sharded_data = stateful_data.sharding_filter()\r\n return sharded_data\r\n\r\n\r\ndef get_datapipe_state(datapipe):\r\n graph = traverse_dps(datapipe)\r\n all_pipes = get_all_graph_pipes(graph)\r\n for pipe in all_pipes:\r\n if hasattr(pipe, \"custom_index\"):\r\n return pipe.custom_index\r\n\r\n raise ValueError(\"This datapipe does not contain a StatefulIterator.\")\r\n\r\n\r\ndef main_no_multiprocessing():\r\n dp = get_datapipe()\r\n dl2 = DataLoader2(dp)\r\n for item in dl2:\r\n print(\"Custom index\", get_datapipe_state(dl2.datapipe))\r\n print(\"Item\", item)\r\n\r\n\r\ndef main_multiprocessing():\r\n dp = get_datapipe()\r\n dl2 = DataLoader2(dp, reading_service=MultiProcessingReadingService(num_workers=4))\r\n for item in dl2:\r\n print(\"Custom index\", get_datapipe_state(dl2.datapipe))\r\n print(\"Item\", item)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main_no_multiprocessing()\r\n main_multiprocessing()\r\n```\r\n\r\ncc: @ejguan @VitalyFedyunin @NivekT ", "url": "https://github.com/meta-pytorch/data/issues/1033", "state": "closed", "labels": [], "created_at": "2023-02-20T15:01:44Z", "updated_at": "2025-08-25T06:43:11Z", "comments": 9, "user": "jhoareau" }, { "repo": "pytorch/benchmark", "number": 1420, "title": "How to enable jit with nvfuser testing", "body": "I want to benchmark models in torchbenchmark with jit and nvfuser. I want to also dump the graph fused.\r\nI tried following command, but nothing is printed.\r\nPYTORCH_JIT_LOG_LEVEL=\">>graph_fuser\" python3 ../../run.py resnet50 -d cuda -m jit -t train", "url": "https://github.com/pytorch/benchmark/issues/1420", "state": "closed", "labels": [], "created_at": "2023-02-20T14:26:50Z", "updated_at": "2023-03-07T16:20:49Z", "user": "fxing-GitHub" }, { "repo": "pytorch/TensorRT", "number": 1680, "title": "\u2753 [Question]just import acc_tracer speed up my code", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. 
-->\r\nI use torch_tensortt compile a trt model,when I use the model to inference video ,I found when I add a line `import torch_tensorrt.fx.tracer.acc_tracer.acc_tracer as acc_tracer` , the code ran faster.\r\nTime spent on the original code:\r\n![1](https://user-images.githubusercontent.com/38580985/219603277-c5f2904a-7d8b-425a-8595-968c7157d58f.JPG)\r\nTime spent on the code with add `import torch_tensorrt.fx.tracer.acc_tracer.acc_tracer as acc_tracer`:\r\n![2](https://user-images.githubusercontent.com/38580985/219603543-deb21971-c1c1-4a63-bbdb-58d4c651b20c.JPG)\r\nI don't know why this happened.\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):1.13.0\r\n - CPU Architecture:\r\n - OS (e.g., Linux):Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:3.10\r\n - CUDA version:11.7\r\n - GPU models and configuration:A4000\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1680", "state": "closed", "labels": [ "question", "No Activity", "component: fx" ], "created_at": "2023-02-17T09:19:19Z", "updated_at": "2023-05-29T00:02:22Z", "user": "T0L0ve" }, { "repo": "pytorch/functorch", "number": 1111, "title": "Use functional models inside usual nn.Module", "body": "Hi, Thanks for the adding functional features to Pytorch. I want to use a `nn.Module` converted into a functional form inside a usual stateful `nn.Module`. However, the code below does not correctly register the parameters for the functional module. Is there a way to do this currently? \r\n\r\n\r\n\r\n```python \r\nimport torch\r\nimport optree\r\nimport torch.nn as nn\r\nfrom functorch import make_functional\r\n\r\nx = torch.randn(4, 10)\r\nclass TinyModel(torch.nn.Module):\r\n\r\n def __init__(self):\r\n super(TinyModel, self).__init__()\r\n self.func_l,self.params_l=make_functional(nn.Linear(10,10))\r\n for i,ele in enumerate(self.params_l):\r\n self.register_parameter(str(i),ele)\r\n def forward(self,inputs):\r\n return self.func_l(self.params_l,inputs)\r\n \r\nmodel = TinyModel()\r\nfunc, params = make_functional(model)\r\n```\r\n\r\nThis is useful for me as I want to use functional operations over an inner `nn.Module` (such as vmap, jvp, vip) inside the forward pass of an outer `nn.Module`. The idea is to be able to have a lifted version of vjp, jvp, etc, similar to Flax (https://flax.readthedocs.io/en/latest/api_reference/_autosummary/flax.linen.vjp.html).", "url": "https://github.com/pytorch/functorch/issues/1111", "state": "open", "labels": [], "created_at": "2023-02-15T08:14:22Z", "updated_at": "2023-02-18T09:57:48Z", "comments": 1, "user": "subho406" }, { "repo": "pytorch/vision", "number": 7250, "title": "Add more docs about how to build a wheel of vision with the all features of video", "body": "### \ud83d\ude80 The feature\r\n\r\nNo docs to show how to build a wheel with the all features of video including the video_reader(gpu decoder).\r\n\r\n### Motivation, pitch\r\n\r\nI want to use GPU to accelerate the speed of video decoding.\r\nAnd i find that you support the gpu video decoder.\r\nThere are some questions below\uff1a\r\n1. 
from https://github.com/pytorch/vision#video-backend, I know that i need ffmpeg or pyav to enable the video feature. However, both of them do not support GPU originally. So what do i need if i want to use GPU video decoder.\r\n2. No detail docs to show how to build a wheel of vision with GPU video decoder.\r\n3. After gpu decoding\uff0cwhere is the tensor, system memory or gpu memory?\r\n4. What's the data flow of your video processing and inference\uff1f\r\n```\r\n1. Decoding in the gpu memory\r\n2. Downloading to the system memory.\r\n3. Uploading to the gpu memory for inference.\r\n4. Downloading to the system memory.\r\n5. Uploading to gpu memory for encoding.(Maybe it does not exist)\r\n```\r\nor\r\n```\r\n1. Decoding in the gpu memory\r\n2. Inference in the gpu memory directly.\r\n3. Encoding in the gpu memory(Maybe it does not exist)\r\n```\r\n5.Is there any way for video to work with this pipeline\u2014\u20141.decoded by gpu and keep it in the gpu memory. 2.Inference with tensor in gpu memory directly without downloading to the system memory and uploading to gpu memory for inference again.\r\n\r\n\r\nI think you should add these to docs.\r\n\r\n### Alternatives\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_", "url": "https://github.com/pytorch/vision/issues/7250", "state": "open", "labels": [ "module: documentation", "module: video" ], "created_at": "2023-02-15T01:53:44Z", "updated_at": "2023-02-15T08:07:13Z", "user": "wqh17101" }, { "repo": "pytorch/tutorials", "number": 2205, "title": "During downsampling, bicubic interpolation produces WORSE results than ffmpeg. How can I fix this issue?", "body": "I have used \"`bicubic`\" interpolation with `(antialias=True)`. I checked the output downsampled image and found that It crates some artifacts on the image. See the image **[here](https://drive.google.com/file/d/1x1knhzyGpyqfkEjqi8tCxD4Ka_lhpfE5/view?usp=sharing)**,\r\n\r\nHere is my code for downsampling:\r\n\r\n```\r\nfrom torch.nn.functional import interpolate\r\nimg = Image.open(\"image_location\")\r\n#4x down-sampled\r\nds_img = interpolate(transforms.ToTensor()(img).unsqueeze(0),scale_factor=.25,mode = \"bicubic\", antialias=True) \r\ndown_img=transforms.ToPILImage()(ds_img.squeeze().cpu())\r\n```\r\n\r\nThank you\r\n", "url": "https://github.com/pytorch/tutorials/issues/2205", "state": "closed", "labels": [ "question" ], "created_at": "2023-02-14T17:38:57Z", "updated_at": "2023-02-16T14:01:04Z", "user": "tahsirmunna" }, { "repo": "pytorch/pytorch", "number": 94704, "title": "`where` triggers INTERNAL ASSERT FAILED when `out` is a long tensor due to mixed types", "body": "### \ud83d\udc1b Describe the bug\n\n`where` triggers INTERNAL ASSERT FAILED when `out` is a long tensor due to mixed types\r\n\r\n```py\r\nimport torch\r\n\r\na = torch.ones(3, 4)\r\nb = torch.zeros(3, 4)\r\nc = torch.where(a > 0, a, b, out=torch.zeros(3, 4, dtype=torch.long))\r\n# RuntimeError: !needs_dynamic_casting<func_t>::check(iter) INTERNAL ASSERT FAILED \r\n# at \"/opt/conda/conda-bld/pytorch_1672906354936/work/aten/src/ATen/native/cpu/Loops.h\":308, \r\n# please report a bug to PyTorch. 
\r\n```\n\n### Versions\n\n```\r\nPyTorch version: 2.0.0.dev20230105\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.7\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.1 LTS (x86_64)\r\nGCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0\r\nClang version: Could not collect\r\nCMake version: version 3.22.1\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: 11.7.99\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration:\r\nGPU 0: NVIDIA GeForce RTX 3090\r\nGPU 1: NVIDIA GeForce RTX 3090\r\nGPU 2: NVIDIA GeForce RTX 3090\r\n\r\nNvidia driver version: 515.86.01\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.23.5\r\n[pip3] torch==2.0.0.dev20230105\r\n[pip3] torchaudio==2.0.0.dev20230105\r\n[pip3] torchvision==0.15.0.dev20230105\r\n[conda] blas 1.0 mkl\r\n[conda] mkl 2021.4.0 h06a4308_640\r\n[conda] mkl-service 2.4.0 py39h7f8727e_0\r\n[conda] mkl_fft 1.3.1 py39hd3c417c_0\r\n[conda] mkl_random 1.2.2 py39h51133e4_0\r\n[conda] numpy 1.23.5 py39h14f4228_0\r\n[conda] numpy-base 1.23.5 py39h31eccc5_0\r\n[conda] pytorch 2.0.0.dev20230105 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly\r\n[conda] pytorch-cuda 11.7 h67b0de4_2 pytorch-nightly\r\n[conda] pytorch-mutex 1.0 cuda pytorch-nightly\r\n[conda] torchaudio 2.0.0.dev20230105 py39_cu117 pytorch-nightly\r\n[conda] torchtriton 2.0.0+0d7e753227 py39 pytorch-nightly\r\n[conda] torchvision 0.15.0.dev20230105 py39_cu117 pytorch-nightly\r\n```\n\ncc @nairbv @mruberry", "url": "https://github.com/pytorch/pytorch/issues/94704", "state": "open", "labels": [ "module: error checking", "triaged", "module: type promotion" ], "created_at": "2023-02-12T16:32:30Z", "updated_at": "2023-02-27T18:15:01Z", "user": "cafffeeee" }, { "repo": "pytorch/pytorch", "number": 94699, "title": "How to correct TypeError: zip argument #1 must support iteration training in multiple GPU", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI am doing a creating custom pytorch layer and model training using `Trainer API` function on top of `Hugging face` model.\r\n\r\nWhen I run on `single GPU`, it trains fine. 
But when I train it on `multiple GPU` it throws me error.\r\n\r\n`TypeError: zip argument #1 must support iteration training in multiple GPU`\r\n\r\nData Creation Code:\r\n\r\n```\r\ntrain_ex ={'texts':[x[0] for x in train_set],'tag_names':[x[1] for x in train_set]}\r\ntrain_data = tokenize_and_align_labels(train_ex,label2id)\r\n_=train_data.pop('offset_mapping')\r\n\r\nclass MyDataset(torch.utils.data.Dataset):\r\n def __init__(self, examples):\r\n self.encodings = examples \r\n self.labels = examples['labels']\r\n def __getitem__(self, idx):\r\n item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}\r\n item[\"labels\"] = torch.tensor([self.labels[idx]])\r\n return item def __len__(self):\r\n return len(self.labels)\r\ntrain_data=MyDataset(train_data)\r\n```\r\n\r\n**Training Code**\r\n\r\n bert_model = BertForTokenClassification.from_pretrained( model_checkpoint,id2label=id2label,label2id=label2id)\r\n bert_model.config.output_hidden_states=True\r\n\r\n\r\n class BERT_CUSTOM(nn.Module):\r\n \r\n \r\n def __init__(self, bert_model,id2label,num_labels):\r\n \r\n \r\n \r\n super(BERT_CUSTOM, self).__init__()\r\n self.bert = bert_model\r\n self.config=self.bert.config\r\n self.dropout = nn.Dropout(0.25)\r\n self.classifier = nn.Linear(768, num_labels)\r\n self.crf = CRF(num_labels, batch_first = True)\r\n \r\n \r\n def forward(self, input_ids, attention_mask, labels=None, token_type_ids=None):\r\n \r\n outputs = self.bert(input_ids, attention_mask=attention_mask)\r\n sequence_output = torch.stack((outputs[1][-1], outputs[1][-2], outputs[1][-3], outputs[1][-4])).mean(dim=0)\r\n sequence_output = self.dropout(sequence_output)\r\n emission = self.classifier(sequence_output) # [32,256,21] logits\r\n \r\n if labels is not None:\r\n \r\n labels=labels.reshape(attention_mask.size()[0],attention_mask.size()[1])\r\n loss = -self.crf(log_soft(emission, 2), labels, mask=attention_mask.type(torch.uint8), reduction='mean')\r\n prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))\r\n return [loss, prediction]\r\n \r\n else:\r\n \r\n prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))\r\n prediction=[id2label[k] for k in prediction]\r\n return prediction\r\n\r\n\r\n**Training API**\r\n\r\n model = BERT_CUSTOM(bert_model, id2label,num_labels=len(label2id))\r\n model.to(device)\r\n \r\n args = TrainingArguments(\r\n \"model\",\r\n save_strategy=\"epoch\",\r\n learning_rate=2e-5,\r\n num_train_epochs=2,\r\n weight_decay=0.01,\r\n per_device_train_batch_size=32,\r\n fp16=True\r\n \r\n )\r\n \r\n trainer = Trainer(\r\n model=model,\r\n args=args,\r\n train_dataset=train_data,\r\n tokenizer=tokenizer)\r\n \r\n trainer.train()\r\n\r\n### Versions\r\n\r\n'1.7.1+cu110'\r\n\r\n\r\n**Error**\r\n\r\n\r\n\r\nHere is the complete traceback:\r\n\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"spanbert_model_check.py\", line 263, in <module>\r\n trainer.train()\r\n File \"/opt/conda/lib/python3.7/site-packages/transformers/trainer.py\", line 1531, in train\r\n ignore_keys_for_eval=ignore_keys_for_eval,\r\n File \"/opt/conda/lib/python3.7/site-packages/transformers/trainer.py\", line 1775, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/opt/conda/lib/python3.7/site-packages/transformers/trainer.py\", line 2523, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/opt/conda/lib/python3.7/site-packages/transformers/trainer.py\", line 2555, in compute_loss\r\n outputs = model(**inputs)\r\n 
File \"/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 162, in forward\r\n return self.gather(outputs, self.output_device)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 174, in gather\r\n return gather(outputs, output_device, dim=self.dim)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py\", line 68, in gather\r\n res = gather_map(outputs)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py\", line 63, in gather_map\r\n return type(out)(map(gather_map, zip(*outputs)))\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py\", line 63, in gather_map\r\n return type(out)(map(gather_", "url": "https://github.com/pytorch/pytorch/issues/94699", "state": "closed", "labels": [], "created_at": "2023-02-12T13:37:47Z", "updated_at": "2023-05-12T11:27:46Z", "user": "pratikchhapolika" }, { "repo": "pytorch/data", "number": 1005, "title": "\"torchdata=0.4.1=py38\" and Conda runtime error \"glibc 2.29\" not found. ", "body": "### \ud83d\udc1b Describe the bug\n\nI installed \"torchdata=0.4.1=py38\" in a Conda environment. \r\nWhen I run the code, there is an error, \"glibc 2.29\" not found.\r\n\r\nOur cluster run on Centos 8.5 and only has upto \"glibc 2.28\" . \r\n\r\nIs \"torchdata 0.4.1\" compatible with \"glibc 2.28\"?\r\nIs there a conda build that support gilbc 2.28?\r\nOr, is there a workaround to make \"torchdata 0.4.1\" work with clusters having only \"glibc 2.28\" . \n\n### Versions\n\ntorchdata=0.4.1\r\npy38\r\nglibc 2.28\r\nconda 23.1.0", "url": "https://github.com/meta-pytorch/data/issues/1005", "state": "closed", "labels": [], "created_at": "2023-02-11T03:26:10Z", "updated_at": "2023-02-13T14:29:56Z", "comments": 1, "user": "mahm1846" }, { "repo": "pytorch/examples", "number": 1112, "title": "word_language_model with torch.nn.modules.transformer", "body": "The `torch.nn.modules.transformer` documentation says the `word_language_model` example in this repo is an example of its use. But it seems to instead DIY a transformer and uses that instead. Is this intentional? I would offer my help to write it for `torch.nn.modules.transformer` but I'm here to learn how to use it.", "url": "https://github.com/pytorch/examples/issues/1112", "state": "open", "labels": [ "good first issue", "nlp", "docs" ], "created_at": "2023-02-09T19:31:43Z", "updated_at": "2023-02-21T04:05:42Z", "comments": 2, "user": "olafx" }, { "repo": "pytorch/examples", "number": 1111, "title": "\ud83d\ude80 Feature request / I want to contribute an algorithm", "body": "<!--\r\nThank you for suggesting an idea to improve pytorch/examples\r\n\r\nPlease fill in as much of the template below as you're able.\r\n-->\r\n\r\n## Is your feature request related to a problem? Please describe.\r\n<!-- Please describe the problem you are trying to solve. 
-->\r\n\r\nCurrently, PyTorch/examples does not have an implementation of the forward forward algorithm.[forward forward algorithm.](https://arxiv.org/abs/2212.13345) This algorithm is a new learning procedure for neural networks and has promising approach to training neural networks, it is also becoming popular, because it's written by father of deep learning aka Geoffrey Hinton, its inclusion in PyTorch/examples would make it more accessible to a wider community of researchers practitioners, and I would like to contribute in it\u2764\ufe0f, I've Implemented This algorithm in my local notebook in pure pytorch\u2764\ufe0f.I am new so please let me know How can I contribute this algorithm in this repo.\r\n\r\nThanks,\r\nVivek\r\n\r\n## Describe the solution\r\n\r\nThe solution is to implement/add the forward forward algorithm in PyTorch/examples. This would include writing the code for the algorithm, as well as any docs or tutorial addition to the existing codebase.\r\n\r\n## Describe alternatives solution\r\n<!-- Please describe alternative solutions or features you have considered. -->\r\n[https://keras.io/examples/vision/forwardforward/](https://keras.io/examples/vision/forwardforward/) \r\n", "url": "https://github.com/pytorch/examples/issues/1111", "state": "closed", "labels": [], "created_at": "2023-02-09T15:13:40Z", "updated_at": "2023-02-26T23:47:19Z", "comments": 3, "user": "viveks-codes" }, { "repo": "pytorch/TensorRT", "number": 1653, "title": "\u2753 [Question] Partitioning for unsupported operations", "body": "## \u2753 Question\r\n\r\nAs far as I understand Torch-TensorRT performs a partitioning step when unsupported operations are encountered. Then, graph uses generated TensorRT engine(s) for supported partition(s) and falls back to TorchScript JIT anywhere else. I can observe this behavior from generated graphs in general. However, I receive errors with specific blocks in which I couldn't understand why such blocks are problematic. \r\n\r\nFor instance, for the following (example) block:\r\n\r\n```python\r\n\"\"\"block(for+cond)\"\"\"\r\nretval=[]\r\nfor slice in x: # x: torch.Tensor\r\n if slice.sum() > 0: # any cond. dep. 
on tensor/slice\r\n retval.append(slice + 100)\r\n else:\r\n retval.append(slice + 50)\r\n\"\"\"block(for+cond)\"\"\"\r\n```\r\n\r\nI receive a `RuntimeError: [Error thrown at core/partitioning/shape_analysis.cpp:167] Expected ivalues_maps.count(input) to be true but got false` on `torch_tensorrt.compile(...)`:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/burak/test.py\", line 36, in <module>\r\n net_trt = torch_tensorrt.compile(net, **net_specs)\r\n File \"/home/burak/miniconda3/envs/convert/lib/python3.10/site-packages/torch_tensorrt/_compile.py\", line 125, in compile\r\n return torch_tensorrt.ts.compile(\r\n File \"/home/burak/miniconda3/envs/convert/lib/python3.10/site-packages/torch_tensorrt/ts/_compiler.py\", line 136, in compile\r\n compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))\r\nRuntimeError: [Error thrown at core/partitioning/shape_analysis.cpp:167] Expected ivalues_maps.count(input) to be true but got false\r\nCould not find torch::jit::Value* slice.1 produced from %slice.1 : Tensor = aten::select(%158, %6, %19) # /home/burak/test.py:20:8 in lowering graph for mini graph input.\r\n```\r\n\r\n## What you have already tried\r\n\r\nI have tried this behavior with the following example script:\r\n\r\n```python\r\nimport torch\r\nimport torch_tensorrt\r\ntorch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Info)\r\n\r\nclass Net(torch.nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n\r\n self.conv0 = torch.nn.Conv2d(3, 8, kernel_size=3)\r\n self.relu = torch.nn.ReLU(inplace=True)\r\n self.conv1 = torch.nn.Conv2d(8, 16, kernel_size=3)\r\n\r\n def forward(self, x):\r\n x = self.conv0(x)\r\n x = self.relu(x)\r\n x = self.conv1(x)\r\n\r\n \"\"\"block(for+cond)\"\"\"\r\n retval=[]\r\n for slice in x:\r\n if slice.sum() > 0: # any cond. dep. on tensor/slice\r\n retval.append(slice + 100)\r\n else:\r\n retval.append(slice + 50)\r\n \"\"\"block(for+cond)\"\"\"\r\n\r\n return retval\r\n\r\nnet = Net().eval().cuda()\r\n\r\nnet_specs = {\r\n 'inputs': [torch_tensorrt.Input(shape=[1, 3, 224, 224], dtype=torch.float32)],\r\n 'enabled_precisions': {torch.float32, torch.half},\r\n}\r\n\r\nnet_trt = torch_tensorrt.compile(net, **net_specs)\r\nprint(net_trt.graph)\r\n```\r\n\r\nI receive the following RuntimeError (full output, info log-level):\r\n\r\n```\r\nINFO: [Torch-TensorRT] - ir was set to default, using TorchScript as ir\r\nINFO: [Torch-TensorRT] - Module was provided as a torch.nn.Module, trying to script the module with torch.jit.script. 
In the event of a failure please preconvert your module to TorchScript\r\nINFO: [Torch-TensorRT] - Lowered Graph: graph(%x.1 : Tensor):\r\n %self.conv0.weight.1 : Float(8, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=<Tensor>]()\r\n %self.conv0.bias.1 : Float(8, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value= 0.1437 0.0745 0.1127 0.1185 0.1406 0.1445 -0.0802 0.0562 [ CUDAFloatType{8} ]]()\r\n %self.conv1.weight.1 : Float(16, 8, 3, 3, strides=[72, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=<Tensor>]()\r\n %self.conv1.bias.1 : Float(16, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=<Tensor>]()\r\n %9 : int = prim::Constant[value=1]()\r\n %8 : NoneType = prim::Constant()\r\n %7 : bool = prim::Constant[value=1]() # /home/burak/test.py:20:8\r\n %6 : int = prim::Constant[value=0]() # /home/burak/test.py:21:29\r\n %5 : int = prim::Constant[value=100]() # /home/burak/test.py:22:38\r\n %4 : int = prim::Constant[value=50]() # /home/burak/test.py:24:38\r\n %3 : int[] = prim::Constant[value=[1, 1]]()\r\n %2 : int[] = prim::Constant[value=[0, 0]]()\r\n %153 : bool = prim::Constant[value=0]()\r\n %154 : int[] = prim::Constant[value=[0, 0]]()\r\n %155 : Tensor = aten::_convolution(%x.1, %self.conv0.weight.1, %self.conv0.bias.1, %3, %2, %3, %153, %154, %9, %153, %153, %153, %153)\r\n %17 : Tensor[] = prim::ListConstruct()\r\n %137 : Tensor = aten::relu(%155) # /home/burak/miniconda3/envs/convert/lib/python3.10/site-packages/torch/nn/functional.py:1455:17\r\n %156 : bool = prim::Constant[value=0]()\r\n %157 : int[] = prim::Constant[value=[0, 0]]()\r\n %158 : Tensor = aten::_convolution(%137, %self.conv1.weight.1, %self.conv1.bias.1, %3, %2, %3, %156, %157, %9, %156, %156, %156, %156)\r\n %144 : int = aten::len(%15", "url": "https://github.com/pytorch/TensorRT/issues/1653", "state": "closed", "labels": [ "question", "No Activity", "component: partitioning" ], "created_at": "2023-02-08T07:39:17Z", "updated_at": "2023-06-10T00:02:25Z", "user": "kunkcu" }, { "repo": "pytorch/TensorRT", "number": 1651, "title": "\u2753 [Question] Unknown type name '__torch__.torch.classes.tensorrt.Engine'", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\n**My c++ code:**\r\ntorch::Device device(torch::kCUDA);\r\n torch::jit::script::Module module = torch::jit::load(\"lenet_trt.ts\");\r\n module.to(device);\r\n vector<jit::IValue> inputs;\r\n inputs.emplace_back(torch::ones({1,1,32,32}).to(device));\r\n at::Tensor output = module.forward(inputs).toTensor();\r\n cout << output << endl;\r\n\r\n**After running the code, the error occurs:**\r\nterminate called after throwing an instance of 'torch::jit::ErrorReport'\r\n what(): \r\nUnknown type name '__torch__.torch.classes.tensorrt.Engine':\r\n File \"code/__torch__/___torch_mangle_18.py\", line 4\r\n __parameters__ = []\r\n __buffers__ = []\r\n __torch______torch_mangle_18_LeNet_trt_engine_ : __torch__.torch.classes.tensorrt.Engine\r\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n def forward(self_1: __torch__.___torch_mangle_18.LeNet_trt,\r\n input_0: Tensor) -> Tensor:\r\n\r\nSignal: SIGABRT (Aborted)\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n\r\n - CPU Architecture: x86 x64\r\n - OS: Ubuntu22.04\r\n - CUDA version: 11.7\r\n - libtorch Version 1.13.1:\r\n - TensorRT: 8.5.3.1\r\n - torch_tensorrt: 1.3.0\r\n\r\n## Additional 
context\r\n\r\nI compiled a pytorch model using the torchtrtc command. This model can be loaded successfully with python code, but fails with c++ code. Can someone help me solve this issue?\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1651", "state": "closed", "labels": [ "question", "component: api [C++]", "component: runtime" ], "created_at": "2023-02-07T05:49:57Z", "updated_at": "2023-02-08T02:08:12Z", "user": "chensuo2048" }, { "repo": "pytorch/rl", "number": 897, "title": "[Feature Request] Tutorial on how to build the simplest agent", "body": "## Motivation\r\n\r\nHey,\r\n\r\nthe [DDPG tutorial](https://pytorch.org/rl/tutorials/coding_ddpg.html) has me pooping my pants. I want to suggest an example of creating a simple DDPG or similar agent that just acts and observes and gets the job done for a researcher looking to implement and RL algorithm on their own environment that has nothing to do with the usual benchmarking environments., i.e. just applying RL to their specific field.\r\n\r\nThis [video](https://www.youtube.com/watch?v=cIKMhZoykEE) advertises being able to use components without having to use the rest of the library, and I want to believe it, but when I look at the [docs page](https://pytorch.org/rl/reference/index.html) I see a lot of components that I don't know how to use and when I look into the docs of the specific components I find that they take arguments that are an interface to something that I have no idea what it is and has an abstract name. Not to sound ignorant, but I feel like I have to know the entire framework just to use one part of it, which is againts the core idea, as I understand it.\r\n\r\n## Solution\r\n\r\nLike, I have my own environment that's completely numpy and doesn't have anything to do with Gym or anything else, and I wan't to have the following workflow:\r\n\r\n```\r\nclass MyAgent:\r\n def __init__(self, **kwargs):\r\n # torchrl code goes here\r\n # how to init networks\r\n # how to init a replay buffer, a simple one\r\n # init objectives like DDPGloss\r\n \r\n def act(self, state):\r\n # how to produce an action with the actor network or more likely actor module\r\n # how to add noise\r\n \r\n def observe(self, s, action, new_s, reward):\r\n # how to put a transition into the replay buffer\r\n # how to update the neural networks\r\n # so how to sample from the RB, how to use the objectives, how to backpropagate, how to soft update \r\n\r\nenv = MyEnv() # isn't made with torchrl\r\nagent = MyAgent() # class made with torchrl\r\n\r\ns = env.reset() # init state\r\n\r\nfor t in range(T):\r\n action = agent.act(s)\r\n new_s, reward = env.step() # could be converted to output tensordicts\r\n agent.observe(s, action, new_s, reward) # observe transition and update the model\r\n```\r\n\r\nJust the \"RL for dummies\" toy example. For those of us who don't need transforms and parallelization just yet; we can get into that once we've got the basics working. Like, I found the component's I need - [soft update](https://pytorch.org/rl/reference/generated/torchrl.objectives.SoftUpdate.html#torchrl.objectives.SoftUpdate), [ddpg loss](https://pytorch.org/rl/reference/generated/torchrl.objectives.DDPGLoss.html#torchrl.objectives.DDPGLoss)... 
I just don't know how to put them together without the monstrosity of the code that is [DDPG tutorial](https://pytorch.org/rl/tutorials/coding_ddpg.html).\r\n\r\n## Alternatives\r\n\r\n/\r\n\r\n## Additional context\r\n\r\n/\r\n\r\n## Checklist\r\n\r\n- [ x] I have checked that there is no similar issue in the repo (**required**)\r\nI've found this [issue](https://github.com/pytorch/rl/issues/90) that hits the spot but I don't know if it amounted to anything, and my issue is leaning towards providing an example of this low level functionality. \r\n\r\nThis [issue](https://github.com/pytorch/rl/issues/861) is also pretty good but I'd aim for even simpler and especially for the environment to not need to be torchrl.\r\n\r\n\r\n## Conclusion\r\n\r\nThose were my two cents. I hope I've hit the target with them. If there's something like this already available and I just haven't found it yet, please do let me know. \r\n", "url": "https://github.com/pytorch/rl/issues/897", "state": "open", "labels": [ "enhancement" ], "created_at": "2023-02-06T22:42:47Z", "updated_at": "2023-02-07T10:01:42Z", "user": "viktor-ktorvi" }, { "repo": "pytorch/tutorials", "number": 2196, "title": "nestedtensor.py on Colab building from master ", "body": "When running the [nested tensors tutorial](https://pytorch.org/tutorials/prototype/nestedtensor.html) on [google colab](https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/db9e0933e73063322e250e5d0cec413d/nestedtensor.ipynb) builds from master instead of main. The master branch version is non-functional, main branch version appears to work correctly:\r\n- nested tensors created with `torch.nested_tensor` instead of `torch.nested.nested_tensor` \r\n- the `mha_netsed` function handle batch size inference incorrectly.\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/tutorials/issues/2196", "state": "open", "labels": [ "question", "2.0" ], "created_at": "2023-02-06T22:13:41Z", "updated_at": "2023-02-07T21:32:28Z", "user": "alex-rakowski" }, { "repo": "pytorch/data", "number": 986, "title": "Disable cron job running on forked repo", "body": "### \ud83d\udc1b Describe the bug\n\n\r\nMy forked repo of torchdata have been running the cron job to validate nightly binaries.\r\n\r\nSee workflow https://github.com/ejguan/data/actions/runs/4097726223\r\n\r\n@atalman Is this expected? Can we disable it by doing something like:\r\nhttps://github.com/pytorch/data/blob/01fc76200354501b057bb439b43a1f05f609dd0a/.github/workflows/nightly_release.yml#L11\n\n### Versions\n\nmain", "url": "https://github.com/meta-pytorch/data/issues/986", "state": "open", "labels": [ "Better Engineering" ], "created_at": "2023-02-06T16:29:00Z", "updated_at": "2023-04-11T16:48:19Z", "comments": 0, "user": "ejguan" }, { "repo": "pytorch/TensorRT", "number": 1650, "title": "error when bazel compile torch_tensorrt on win10", "body": "## \u2753 Question\r\nwhen command \" bazel build //:libtorchtrt --compilation_mode opt\", the error comes\r\n\r\nERROR: C:/users/zhang/downloads/tensorrt-main/core/runtime/BUILD:13:11: Compiling core/runtime/TRTEngineProfiler.cpp failed: (Exit 2): cl.exe failed: error executing command (from target //core/runtime:runtime) D:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.28.29333\\bin\\HostX64\\x64\\cl.exe /nologo /DCOMPILER_MSVC /DNOMINMAX /D_WIN3 2_WINNT=0x0601 /D_CRT_SECURE_NO_DEPRECATE ... 
(remaining 48 arguments skipped)\r\n\r\n## What you have already tried\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):1.13\r\n - CPU Architecture: AMD5600x\r\n - OS (e.g., Linux):win10\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:11.7\r\n - GPU models and configuration:2060\r\n - Any other relevant information: visual studio2019\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1650", "state": "closed", "labels": [ "question", "No Activity", "channel: windows" ], "created_at": "2023-02-06T07:38:28Z", "updated_at": "2023-06-08T00:02:27Z", "user": "zhanghuqiang" }, { "repo": "pytorch/text", "number": 2047, "title": "Update `CONTRIBUTING.md` w/ instruction on how to install `torchdata` from source", "body": "See https://github.com/pytorch/text/issues/2045", "url": "https://github.com/pytorch/text/issues/2047", "state": "closed", "labels": [], "created_at": "2023-02-03T18:20:17Z", "updated_at": "2023-02-06T00:37:39Z", "user": "joecummings" }, { "repo": "pytorch/vision", "number": 7168, "title": "Current way to use torchvision.prototype.transforms", "body": "### \ud83d\udcda The doc issue\r\n\r\nI tried to run the [end-to-end example in this recent blog post](https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/#an-end-to-end-example):\r\n\r\n```python\r\nimport PIL\r\nfrom torchvision import io, utils\r\nfrom torchvision.prototype import features, transforms as T\r\nfrom torchvision.prototype.transforms import functional as F\r\n# Defining and wrapping input to appropriate Tensor Subclasses\r\npath = \"COCO_val2014_000000418825.jpg\"\r\nimg = features.Image(io.read_image(path), color_space=features.ColorSpace.RGB)\r\n# img = PIL.Image.open(path)\r\nbboxes = features.BoundingBox(\r\n [[2, 0, 206, 253], [396, 92, 479, 241], [328, 253, 417, 332],\r\n [148, 68, 256, 182], [93, 158, 170, 260], [432, 0, 438, 26],\r\n [422, 0, 480, 25], [419, 39, 424, 52], [448, 37, 456, 62],\r\n [435, 43, 437, 50], [461, 36, 469, 63], [461, 75, 469, 94],\r\n [469, 36, 480, 64], [440, 37, 446, 56], [398, 233, 480, 304],\r\n [452, 39, 463, 63], [424, 38, 429, 50]],\r\n format=features.BoundingBoxFormat.XYXY,\r\n spatial_size=F.get_spatial_size(img),\r\n)\r\nlabels = features.Label([59, 58, 50, 64, 76, 74, 74, 74, 74, 74, 74, 74, 74, 74, 50, 74, 74])\r\n# Defining and applying Transforms V2\r\ntrans = T.Compose(\r\n [\r\n T.ColorJitter(contrast=0.5),\r\n T.RandomRotation(30),\r\n T.CenterCrop(480),\r\n ]\r\n)\r\nimg, bboxes, labels = trans(img, bboxes, labels)\r\n# Visualizing results\r\nviz = utils.draw_bounding_boxes(F.to_image_tensor(img), boxes=bboxes)\r\nF.to_pil_image(viz).show()\r\n```\r\n\r\nbut found that `torchvision.prototype.features` is now gone. What's the current way to run this? I attempted to simply pass the images, bboxes and labels with the following types: `torchvision.prototype.datasets.utils._encoded.EncodedImage`, `torchvision.prototype.datapoints._bounding_box.BoundingBox`, `torchvision.prototype.datapoints._label.Label`. 
However this didn't seem to apply the transforms as everything remained the same shape.\r\n\r\n**edit:** I've found that `features` seems to be renamed to `datapoints`. I tried applying this, but `EncodedImage` in a coco `sample['image']` seems to be 1D and `prototype.transforms` requires 2D images. What's the proper way to get this as 2D so I can apply transforms? Is there a decode method I'm missing?\r\n\r\n\r\n\r\n### Suggest a potential alternative/fix\r\n\r\n_No response_\n\ncc @vfdev-5 @bjuncek @pmeier", "url": "https://github.com/pytorch/vision/issues/7168", "state": "closed", "labels": [ "question", "module: transforms", "prototype" ], "created_at": "2023-02-02T15:47:41Z", "updated_at": "2023-02-02T21:11:35Z", "user": "austinmw" }, { "repo": "pytorch/TensorRT", "number": 1645, "title": "\u2753 [Question] How to use Torch-TensorRT with multi-headed (multiple output) networks", "body": "## \u2753 Question\r\n\r\nI am having trouble using Torch-TensorRT with multi-headed networks. `torch_tensorrt.compile(...)` works fine and I can successfully use the resulting `ScriptModule` for execution. However, when I try to save and re-load the module I receive a RuntimeError on `torch.jit.load(...)`:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/burak/test.py\", line 33, in <module>\r\n net_trt = torch.jit.load('net_trt.ts')\r\n File \"/home/burak/miniconda3/envs/convert/lib/python3.9/site-packages/torch/jit/_serialization.py\", line 162, in load\r\n cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)\r\nRuntimeError: [Error thrown at core/runtime/TRTEngine.cpp:132] Expected (binding_name == engine_binded_name) to be true but got false\r\nCould not find a TensorRT engine binding for output named output_0\r\n```\r\n\r\n## What you have already tried\r\n\r\nI have tried this behavior with a very simple multi-headed network:\r\n\r\n```python\r\nimport torch\r\nimport torch_tensorrt\r\n\r\nclass Net(torch.nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n\r\n self.conv0 = torch.nn.Conv2d(3, 8, kernel_size=3)\r\n self.relu = torch.nn.ReLU(inplace=True)\r\n self.conv1b1 = torch.nn.Conv2d(8, 16, kernel_size=3)\r\n self.conv1b2 = torch.nn.Conv2d(8, 32, kernel_size=3)\r\n\r\n def forward(self, x):\r\n x = self.conv0(x)\r\n x = self.relu(x)\r\n output1 = self.conv1b1(x)\r\n output2 = self.conv1b2(x)\r\n\r\n return output1, output2\r\n\r\nnet = Net().eval().cuda()\r\n```\r\nThen, I have compiled this network for TensorRT as usual:\r\n\r\n```python\r\nnet_specs = {\r\n 'inputs': [torch_tensorrt.Input(shape=[1, 3, 224, 224], dtype=torch.float32)],\r\n 'enabled_precisions': {torch.float32, torch.half},\r\n}\r\n\r\nnet_trt = torch_tensorrt.compile(net, **net_specs)\r\n```\r\n\r\nNo problem so far. `net_trt` works just fine. 
However, when I try to save and re-load it:\r\n\r\n```python\r\ntorch.jit.save(net_trt, 'net_trt.ts')\r\nnet_trt = torch.jit.load('net_trt.ts')\r\n```\r\n\r\nI receive the following RuntimeError:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/burak/test.py\", line 33, in <module>\r\n net_trt = torch.jit.load('net_trt.ts')\r\n File \"/home/burak/miniconda3/envs/convert/lib/python3.9/site-packages/torch/jit/_serialization.py\", line 162, in load\r\n cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)\r\nRuntimeError: [Error thrown at core/runtime/TRTEngine.cpp:132] Expected (binding_name == engine_binded_name) to be true but got false\r\nCould not find a TensorRT engine binding for output named output_0\r\n```\r\n\r\nI have only encountered this with multi-headed networks. Everything seems to work fine with other type of networks.\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - Torch-TensorRT Version (e.g. 1.0.0): 1.3.0\r\n - PyTorch Version (e.g., 1.0): 1.13.1\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.9.15\r\n - CUDA version: 11.7\r\n - GPU models and configuration: NVIDIA GeForce RTX 3070 (Laptop)\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1645", "state": "closed", "labels": [ "question", "bug: triaged [verified]" ], "created_at": "2023-02-02T13:05:35Z", "updated_at": "2023-02-03T19:58:34Z", "user": "kunkcu" }, { "repo": "pytorch/pytorch", "number": 93347, "title": "when I want to use a new backend, how to deal with the op with 'device' argument? ", "body": "### \ud83d\udc1b Describe the bug\n\nHi\r\nI saw the generated code in python_torch_functionsEverything.cpp line 4763\uff0c there are so many tricks for the op with 'device' argument, such as init CUDA device, `torch::utils::maybe_initialize_cuda(options);`\r\n```\r\nstatic PyObject * THPVariable_arange(PyObject* self_, PyObject* args, PyObject* kwargs)\r\n{\r\n HANDLE_TH_ERRORS\r\n static PythonArgParser parser({\r\n \"arange(Scalar end, *, Tensor out=None, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? requires_grad=False)\",\r\n \"arange(Scalar start, Scalar end, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? requires_grad=False)\",\r\n \"arange(Scalar start, Scalar end, Scalar step=1, *, Tensor out=None, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? requires_grad=False)\",\r\n }, /*traceable=*/true);\r\n\r\n ParsedArgs<9> parsed_args;\r\n auto _r = parser.parse(nullptr, args, kwargs, parsed_args);\r\n if(_r.has_torch_function()) {\r\n return handle_torch_function(_r, nullptr, args, kwargs, THPVariableFunctionsModule, \"torch\");\r\n }\r\n switch (_r.idx) {\r\n case 0: {\r\n if (_r.isNone(1)) {\r\n // aten::arange(Scalar end, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? 
pin_memory=None) -> Tensor\r\n const auto options = TensorOptions()\r\n .dtype(_r.scalartypeOptional(2))\r\n .device(_r.deviceWithDefault(4, torch::tensors::get_default_device()))\r\n .layout(_r.layoutOptional(3))\r\n .requires_grad(_r.toBool(6))\r\n .pinned_memory(_r.toBool(5));\r\n torch::utils::maybe_initialize_cuda(options);\r\n```\r\n\r\nwhen I want to use a new backend which also need to init like CUDA, so I want to add some code to make my backend running fine, It is that ok ?\r\nthanks.\r\n\n\n### Versions\n\nnew backend\r\npython:3.7.5\r\npytorch: 2.0.0\r\nCUDA: None", "url": "https://github.com/pytorch/pytorch/issues/93347", "state": "open", "labels": [ "triaged", "module: backend" ], "created_at": "2023-01-31T09:34:00Z", "updated_at": "2023-02-06T14:44:50Z", "user": "heidongxianhua" }, { "repo": "pytorch/TensorRT", "number": 1638, "title": "Error when running Resnet50-CPP.ipynb, ./torchtrt_runtime_example: symbol lookup error: ./torchtrt_runtime_example: undefined symbol: _ZN2at4_ops11randint_low4callEllN3c108ArrayRefIlEENS2_8optionalINS2_10ScalarTypeEEENS5_INS2_6LayoutEEENS5_INS2_6DeviceEEENS5_IbEE", "body": "I was following this notebook on nvcr.io/nvidia/pytorch:22.12-py3, container make runs fine but this step fails. My host has nvidia-470, 11.4. I have tried multiple times but the same error.", "url": "https://github.com/pytorch/TensorRT/issues/1638", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-01-31T06:09:55Z", "updated_at": "2023-05-13T00:02:12Z", "user": "akshayantony12" }, { "repo": "pytorch/TensorRT", "number": 1631, "title": "Is NN inheritance possible?", "body": "So I have an application that I am enhancing; I really want to use the TensorRT backend as it's benchmarks are just brilliant; however, I cannot see a way one would go about using inheritance: eg ``class MyFancyEncoder(tensor_compiled_resnet_passing_torch.nn)``\r\n\r\nExample here: https://pastebin.com/nTdTfFnZ\r\nVersus here: https://pastebin.com/L0YyKg9H\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1631", "state": "closed", "labels": [ "question" ], "created_at": "2023-01-30T23:43:51Z", "updated_at": "2023-01-31T21:14:03Z", "user": "manbehindthemadness" }, { "repo": "pytorch/data", "number": 965, "title": "Correct way to shuffle, batch and shard WebDataset", "body": "### \ud83d\udcda The doc issue\n\nHi, the [docs on the WebDataset decoder](https://pytorch.org/data/main/generated/torchdata.datapipes.iter.WebDataset.html) give the following example:\r\n\r\n```python\r\n>>> from torchdata.datapipes.iter import FileLister, FileOpener\r\n>>>\r\n>>> def decode(item):\r\n>>> key, value = item\r\n>>> if key.endswith(\".txt\"):\r\n>>> return key, value.read().decode(\"utf-8\")\r\n>>> if key.endswith(\".bin\"):\r\n>>> return key, value.read().decode(\"utf-8\")\r\n>>>\r\n>>> datapipe1 = FileLister(\"test/_fakedata\", \"wds*.tar\")\r\n>>> datapipe2 = FileOpener(datapipe1, mode=\"b\")\r\n>>> dataset = datapipe2.load_from_tar().map(decode).webdataset()\r\n>>> for obj in dataset:\r\n>>> print(obj)\r\n```\r\n\r\nHowever this doesn't include demonstrating the proper location for shuffling, sharding and batching the dataset.\r\n\n\n### Suggest a potential alternative/fix\n\nCould you please let me know where to place `.shuffle`, `.batch` and `.sharding_filter` in this pipeline?", "url": "https://github.com/meta-pytorch/data/issues/965", "state": "closed", "labels": [ "documentation", "good first issue" ], "created_at": "2023-01-25T16:25:40Z", "updated_at": 
"2023-01-25T17:51:57Z", "comments": 4, "user": "austinmw" }, { "repo": "pytorch/cpuinfo", "number": 131, "title": "How to cross-compile pytorch-cpuinfo?", "body": "Hi! \r\n\r\nFirst a bit of context. I'm trying to build onnxruntime for raspberry pi using cross-compilation ([instructions here](https://onnxruntime.ai/docs/build/inferencing.html#cross-compiling-on-linux)). The onnxruntime package depends on pytorch-cpuinfo and fetches and builds it as part of the build process.\r\n\r\nI'm using this command.\r\n\r\n```shell\r\nVERBOSE=1 ./build.sh --config Release --build_shared_lib --arm --update --build --path_to_protoc_exe /build/bin/protoc\r\n```\r\n\r\nThis triggers the following error:\r\n\r\n```shell\r\n[...]\r\n[ 66%] Building C object _deps/pytorch_cpuinfo-build/CMakeFiles/cpuinfo.dir/src/x86/init.c.o\r\ncd /build/onnxruntime/build/Linux/Release/_deps/pytorch_cpuinfo-build && /usr/bin/arm-linux-gnueabihf-gcc -DCPUINFO_LOG_LEVEL=2 -DEIGEN_MPL2_ONLY -DORT_ENABLE_STREAM -D_GNU_SOURCE=1 -I/build/onnxruntime/build/Linux/Release/_deps/pytorch_cpuinfo-src/src -I/build/onnxruntime/build/Linux/Release/_deps/pytorch_cpuinfo-src/include -I/build/onnxruntime/build/Linux/Release/_deps/pytorch_cpuinfo-src/deps/clog/include -ffunction-sections -fdata-sections -Wno-error=attributes -O3 -DNDEBUG -fPIC -std=c99 -MD -MT _deps/pytorch_cpuinfo-build/CMakeFiles/cpuinfo.dir/src/x86/init.c.o -MF CMakeFiles/cpuinfo.dir/src/x86/init.c.o.d -o CMakeFiles/cpuinfo.dir/src/x86/init.c.o -c /build/onnxruntime/build/Linux/Release/_deps/pytorch_cpuinfo-src/src/x86/init.c\r\nIn file included from /build/onnxruntime/build/Linux/Release/_deps/pytorch_cpuinfo-src/src/x86/init.c:5:\r\n/build/onnxruntime/build/Linux/Release/_deps/pytorch_cpuinfo-src/src/x86/cpuid.h:5:11: fatal error: cpuid.h: No such file or directory\r\n 5 | #include <cpuid.h>\r\n | ^~~~~~~~~\r\ncompilation terminated.\r\ngmake[2]: *** [_deps/pytorch_cpuinfo-build/CMakeFiles/cpuinfo.dir/build.make:118: _deps/pytorch_cpuinfo-build/CMakeFiles/cpuinfo.dir/src/x86/init.c.o] Error 1\r\ngmake[2]: Leaving directory '/build/onnxruntime/build/Linux/Release'\r\ngmake[1]: *** [CMakeFiles/Makefile2:5506: _deps/pytorch_cpuinfo-build/CMakeFiles/cpuinfo.dir/all] Error 2\r\ngmake[1]: Leaving directory '/build/onnxruntime/build/Linux/Release'\r\n[...]\r\n```\r\n\r\nMy take on this is that pytorch-cpuinfo errorneously tries to compile for x86 (the host for the cross-compile). \r\n\r\nLooking at the CMakeList.txt in this project, I think the culprit is that it always assumes that the host architecture is also the target architecture: https://github.com/pytorch/cpuinfo/blob/3dc310302210c1891ffcfb12ae67b11a3ad3a150/CMakeLists.txt#L59\r\n\r\nWould love to hear if I'm doing something wrong. Or if I can submit a PR for this to allow to override the target architecture from the environment variables. ", "url": "https://github.com/pytorch/cpuinfo/issues/131", "state": "closed", "labels": [], "created_at": "2023-01-24T09:25:31Z", "updated_at": "2023-01-27T13:02:24Z", "user": "pietermarsman" }, { "repo": "pytorch/data", "number": 959, "title": "Tables in Documentation not rendering properly", "body": "### \ud83d\udcda The doc issue\r\n\r\nCompared to the last release, the tables in the documentation of the `main` branch is rendering differently. I do not recall any intentional changes to the format of our documentation or generation. 
We should have a look at this before the next release.\r\n\r\nBefore (0.5.1):\r\n\r\n<img width=\"886\" alt=\"Screenshot 2023-01-23 at 3 14 36 PM\" src=\"https://user-images.githubusercontent.com/4935152/214140520-f2a78f1b-84c1-4b02-a1a9-d43c15019340.png\">\r\n\r\n\r\nCurrent (main):\r\n\r\n<img width=\"947\" alt=\"Screenshot 2023-01-23 at 3 14 45 PM\" src=\"https://user-images.githubusercontent.com/4935152/214140549-8917d8fe-8483-4bb6-85d9-4cb0b9162cf7.png\">\r\n", "url": "https://github.com/meta-pytorch/data/issues/959", "state": "closed", "labels": [ "documentation", "good first issue" ], "created_at": "2023-01-23T20:14:52Z", "updated_at": "2023-02-02T14:38:24Z", "comments": 8, "user": "NivekT" }, { "repo": "pytorch/pytorch", "number": 93516, "title": "[Question] How to debug \"munmap_chunk(): invalid pointer\" when compiling to triton?", "body": "I'm trying to use torchdynamo to compile a function to triton.\r\nMy logs indicate that the function optimizes without issue,\r\n\r\nbut when running the function on a given input, I just get \"munmap_chunk(): invalid pointer\" w/o a stack trace / any useful debugging information.\r\n\r\nI'm wondering how to go about debugging such an error.\r\nAre any developers familiar with what this indicates?\n\ncc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh", "url": "https://github.com/pytorch/pytorch/issues/93516", "state": "closed", "labels": [ "oncall: pt2" ], "created_at": "2023-01-22T03:36:25Z", "updated_at": "2023-02-01T14:19:30Z", "user": "vedantroy" }, { "repo": "pytorch/TensorRT", "number": 1600, "title": "\u2753 [Question] How do you compile for Jetson 5.0? ", "body": "## \u2753 Question\r\n\r\nHi, as there seems to be no prebuilt python binary, just wanted to know if there is any way to install this package on jetson 5.0?\r\n\r\n## What you have already tried\r\n\r\nI tried normal installation for jetson 4.6 which fails, I aslo tried this https://forums.developer.nvidia.com/t/installing-building-torch-tensorrt-for-jetpack-5-0-1-dp-l4t-ml-r34-1-1-py3/220565/6 which gives me this error:\r\n```\r\nuser@ubuntu:/mnt/Data/home/ParentCode/TensorRT$ bazel build //:libtorchtrt --platforms //toolchains:jetpack_5.0\r\nStarting local Bazel server and connecting to it...\r\nINFO: Analyzed target //:libtorchtrt (71 packages loaded, 9773 targets configured).\r\nINFO: Found 1 target...\r\nERROR: /mnt/Data/home/ParentCode/TensorRT/cpp/lib/BUILD:5:10: Linking cpp/lib/libtorchtrt_plugins.so failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc @bazel-out/aarch64-fastbuild/bin/cpp/lib/libtorchtrt_plugins.so-2.params\r\n\r\nUse --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging\r\n/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-fastbuild/bin/_solib_aarch64/_U@libtorch_S_S_Ctorch___Ulib/libtorch.so while searching for torch\r\n/usr/bin/ld.gold: error: cannot find -ltorch\r\n/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-fastbuild/bin/_solib_aarch64/_U@libtorch_S_S_Ctorch___Ulib/libtorch_cuda.so while searching for torch_cuda\r\n/usr/bin/ld.gold: error: cannot find -ltorch_cuda\r\n/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-fastbuild/bin/_solib_aarch64/_U@libtorch_S_S_Ctorch___Ulib/libtorch_cpu.so while searching for torch_cpu\r\n/usr/bin/ld.gold: error: cannot find -ltorch_cpu\r\n/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-fastbuild/bin/_solib_aarch64/_U@libtorch_S_S_Ctorch___Ulib/libtorch_global_deps.so 
while searching for torch_global_deps\r\n/usr/bin/ld.gold: error: cannot find -ltorch_global_deps\r\n/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-fastbuild/bin/_solib_aarch64/_U@libtorch_S_S_Cc10_Ucuda___Ulib/libc10_cuda.so while searching for c10_cuda\r\n/usr/bin/ld.gold: error: cannot find -lc10_cuda\r\n/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-fastbuild/bin/_solib_aarch64/_U@libtorch_S_S_Cc10___Ulib/libc10.so while searching for c10\r\n/usr/bin/ld.gold: error: cannot find -lc10\r\ncollect2: error: ld returned 1 exit status\r\nTarget //:libtorchtrt failed to build\r\nUse --verbose_failures to see the command lines of failed build steps.\r\nINFO: Elapsed time: 19.738s, Critical Path: 6.94s\r\nINFO: 12 processes: 10 internal, 2 linux-sandbox.\r\nFAILED: Build did NOT complete successfully\r\n```\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version: 1.13.0a0+d0d6b1f2.nv22.10\r\n - CPU Architecture: aarch64\r\n - OS (e.g., Linux): Linux, NVidia's version of ubuntu 20.04 for jetson\r\n - Python version: Python 3.8.10\r\n - CUDA version: \r\nnvcc: NVIDIA (R) Cuda compiler driver\r\nCopyright (c) 2005-2022 NVIDIA Corporation\r\nBuilt on Wed_May__4_00:02:26_PDT_2022\r\nCuda compilation tools, release 11.4, V11.4.239\r\nBuild cuda_11.4.r11.4/compiler.31294910_0\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1600", "state": "closed", "labels": [ "question", "No Activity", "channel: linux-jetpack" ], "created_at": "2023-01-20T17:08:42Z", "updated_at": "2023-09-15T11:33:27Z", "user": "arnaghizadeh" }, { "repo": "pytorch/xla", "number": 4482, "title": "How to save checkpoints in XLA", "body": "Hello\r\n\r\nI have training scripts running on CPUs and GPUs without error.\r\nI am trying to make the scripts compatible with TPUs.\r\n\r\nI was using the following lines to save checkpoints\r\n\r\n```\r\ntorch.save(checkpoint, path_checkpoints_file )\r\n```\r\n\r\nand following lines to load checkpoints\r\n\r\n```\r\ncheckpoint = torch.load(path_checkpoints_file, map_location=torch.device('cpu'))\r\n \r\nlastEpoch = checkpoint['lastEpoch']\r\nactiveChunk = checkpoint['activeChunk']\r\nchunk_count = checkpoint['chunk_count']\r\n\r\nmodel.load_state_dict(checkpoint['model_state_dict']) \r\nmodel.to(device)\r\n\r\noptimizer.load_state_dict(checkpoint['optimizer_state_dict'])\r\nlr_scheduler.load_state_dict(checkpoint['lr_scheduler_state_dict'])\r\n```\r\n\r\nFor TPUs I replaced the saving operation with\r\n\r\n```\r\nxm.save(checkpoint, path_checkpoints_file )\r\n```\r\n\r\nand the loading part with\r\n\r\n```\r\ncheckpoint = xser.load( path_checkpoints_file )\r\n\r\nlastEpoch = checkpoint['lastEpoch']\r\nactiveChunk = checkpoint['activeChunk']\r\nchunk_count = checkpoint['chunk_count']\r\n\r\nmodel.load_state_dict(checkpoint['model_state_dict']) \r\nmodel.to(device)\r\n\r\noptimizer.load_state_dict(checkpoint['optimizer_state_dict'])\r\nlr_scheduler.load_state_dict(checkpoint['lr_scheduler_state_dict'])\r\n```\r\n\r\nBut during training, the loss remains almost constant.\r\nDo we have a template to save and load checkpoints having models, optimizers and learning schedulers?\r\n\r\nbest regards\r\n\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/4482", "state": "open", "labels": [], "created_at": "2023-01-19T21:50:05Z", "updated_at": "2023-02-15T22:58:13Z", "user": "mfatih7" }, { "repo": "pytorch/functorch", "number": 1106, "title": "Vmap and backward 
hook problem", "body": "I try to get the gradient of the intermedia layer of model, so I use the backwards hook with functroch.grad to get the gradient of each image. When I used for loop to iterate each image, I successfully obtained 5000 gradients (dataset size). However, when I use vmap to do the same thing, I only get 40 gradients (40 batches in 1 epoch). Is there any way to solve it, or I have to use for loop? ", "url": "https://github.com/pytorch/functorch/issues/1106", "state": "open", "labels": [], "created_at": "2023-01-19T21:25:02Z", "updated_at": "2023-01-23T05:08:49Z", "comments": 1, "user": "pmzzs" }, { "repo": "pytorch/tutorials", "number": 2175, "title": "OSError: Missing: valgrind, callgrind_control, callgrind_annotate", "body": "The error occurs on below step:\r\n8. Collecting instruction counts with Callgrind:\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"benchmark.py\", line 805, in <module>\r\n stats_v0 = t0.collect_callgrind()\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/utils/benchmark/utils/timer.py\", line 486, in collect_callgrind\r\n result = valgrind_timer_interface.wrapper_singleton().collect_callgrind(\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.py\", line 526, in collect_callgrind\r\n self._validate()\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.py\", line 512, in _validate\r\n raise OSError(\"Missing: \" + \", \".join(missing_cmds))\r\nOSError: Missing: valgrind, callgrind_control, callgrind_annotate", "url": "https://github.com/pytorch/tutorials/issues/2175", "state": "open", "labels": [ "question" ], "created_at": "2023-01-19T16:00:52Z", "updated_at": "2023-01-24T10:47:08Z", "user": "ghost" }, { "repo": "pytorch/data", "number": 949, "title": "`torchdata` not available through `pytorch-nightly` conda channel", "body": "### \ud83d\udc1b Describe the bug\n\nThe nightly version of torchdata does not seem available through the corresponding conda channel.\r\n\r\n**Command:**\r\n```\r\n$ conda install torchdata -c pytorch-nightly --override-channels\r\n```\r\n\r\n**Result:**\r\n```\r\nCollecting package metadata (current_repodata.json): done\r\nSolving environment: failed with initial frozen solve. Retrying with flexible solve.\r\nCollecting package metadata (repodata.json): done\r\nSolving environment: failed with initial frozen solve. 
Retrying with flexible solve.\r\n\r\nPackagesNotFoundError: The following packages are not available from current channels:\r\n\r\n - torchdata\r\n\r\nCurrent channels:\r\n\r\n - https://conda.anaconda.org/pytorch-nightly/osx-arm64\r\n - https://conda.anaconda.org/pytorch-nightly/noarch\r\n\r\nTo search for alternate channels that may provide the conda package you're\r\nlooking for, navigate to\r\n\r\n https://anaconda.org\r\n\r\nand use the search bar at the top of the page.\r\n```\r\n\r\n\n\n### Versions\n\n```\r\nPyTorch version: N/A\r\nIs debug build: N/A\r\nCUDA used to build PyTorch: N/A\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: macOS 13.1 (arm64)\r\nGCC version: Could not collect\r\nClang version: Could not collect\r\nCMake version: version 3.22.4\r\nLibc version: N/A\r\n\r\nPython version: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:25:29) [Clang 14.0.6 ] (64-bit runtime)\r\nPython platform: macOS-13.1-arm64-arm-64bit\r\nIs CUDA available: N/A\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.23.5\r\n[conda] numpy 1.23.5 py310h5d7c261_0 conda-forge\r\n```", "url": "https://github.com/meta-pytorch/data/issues/949", "state": "closed", "labels": [ "high priority" ], "created_at": "2023-01-18T15:49:29Z", "updated_at": "2023-01-18T17:11:32Z", "comments": 4, "user": "PierreGtch" }, { "repo": "pytorch/tutorials", "number": 2173, "title": "Area calculation in TorchVision Object Detection Finetuning Tutorial", "body": "I see that at https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html, \r\n\r\n` area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])` but shouldn't it be something like `area = ((boxes[:, 3] - boxes[:, 1]) + 1) * ((boxes[:, 2] - boxes[:, 0]) + 1) ` for calculating areas? If I am wrong, can someone explain me why is it so. Thanks in advance !\n\ncc @datumbox @nairbv", "url": "https://github.com/pytorch/tutorials/issues/2173", "state": "closed", "labels": [ "module: vision" ], "created_at": "2023-01-18T08:55:55Z", "updated_at": "2023-02-15T16:55:24Z", "comments": 1, "user": "Himanshunitrr" }, { "repo": "pytorch/torchx", "number": 684, "title": "Docker workspace: How to specify \"latest\" (nightly) base image?", "body": "## \u2753 Questions and Help\r\n\r\nFor my docker workspace (e.g. scheduler == \"local_docker\" or \"aws_batch\"), I'd like to use a base image that is published on a nightly cadence. So I have this `Dockerfile.torchx`\r\n\r\n```\r\n# Dockerfile.torchx\r\nARGS IMAGE\r\nARGS WORKSPACE\r\n\r\nFROM $IMAGE\r\n\r\nWORKDIR /workspace/mfive\r\nCOPY $WORKSPACE .\r\n\r\n# installs my workspace (has setup.py)\r\nRUN pip install -e .[dev]\r\n```\r\n\r\nIn my `.torchxconfig` I've specified the latest default image as:\r\n\r\n```\r\n# .torchxconfig\r\n\r\n[dist.ddp]\r\nimage = registry.gitlab.aws.dev/mfive/mfive-nightly:latest\r\n```\r\n\r\nThe nightly build tags the nightly image as `latest` in addition to the `YYYY.MM.DD` (e.g. 
`mfive-nightly:2023.01.15`) but because the [`DockerWorkspace`'s build argument has `pull=False`](https://github.com/pytorch/torchx/blob/main/torchx/workspace/docker_workspace.py#L126) this won't work since `latest` will be cached.\r\n\r\nIs there a better way for me to specify a \"use-the-latest\" base image from the repo policy when building a docker workspace?\r\n\r\ncc) @d4l3k ", "url": "https://github.com/meta-pytorch/torchx/issues/684", "state": "closed", "labels": [], "created_at": "2023-01-17T22:03:19Z", "updated_at": "2023-03-17T22:06:25Z", "comments": 3, "user": "kiukchung" }, { "repo": "pytorch/PiPPy", "number": 723, "title": "How to reduce memory costs when running on CPU", "body": "I running HF_inference.py on my CPU and it works well! It can successfully applying pipeline parallelism on CPU. However, when I applying pipeline parallelism, I found that each rank will load the whole model and it seems not necessary since each rank only performs a part of the model. There must be some ways can figure out this issue and I would love to solve this issue. It would be great if developers of TAU can give me some advice, we can discuss more about it if you have any idea. Thanks!", "url": "https://github.com/pytorch/PiPPy/issues/723", "state": "closed", "labels": [], "created_at": "2023-01-17T07:54:31Z", "updated_at": "2025-06-10T02:40:11Z", "user": "jiqing-feng" }, { "repo": "pytorch/pytorch", "number": 92202, "title": "Generator: when I want to use a new backend, how to create a Generator with the new backend?", "body": "### \ud83d\udc1b Describe the bug\n\n I want to add a new backend, so I add my backend by referring to this tutorial. https://github.com/bdhirsh/pytorch_open_registration_example\r\n\r\nBut how to create a Generator with my new backend ?\r\nI see the code related to 'Generator' is in the file, https://github.com/pytorch/pytorch/blob/master/torch/csrc/Generator.cpp\r\n\r\nstatic PyObject* THPGenerator_pynew(\r\n PyTypeObject* type,\r\n PyObject* args,\r\n PyObject* kwargs) {\r\n HANDLE_TH_ERRORS\r\n static torch::PythonArgParser parser({\"Generator(Device device=None)\"});\r\n torch::ParsedArgs<1> parsed_args;\r\n auto r = parser.parse(args, kwargs, parsed_args);\r\n auto device = r.deviceWithDefault(0, at::Device(at::kCPU));\r\n\r\n THPGeneratorPtr self((THPGenerator*)type->tp_alloc(type, 0));\r\n if (device.type() == at::kCPU) {\r\n self->cdata = make_generator<CPUGeneratorImpl>();\r\n }\r\n#ifdef USE_CUDA\r\n else if (device.type() == at::kCUDA) {\r\n self->cdata = make_generator<CUDAGeneratorImpl>(device.index());\r\n }\r\n#elif USE_MPS\r\n else if (device.type() == at::kMPS) {\r\n self->cdata = make_generator<MPSGeneratorImpl>();\r\n }\r\n#endif\r\n else {\r\n AT_ERROR(\r\n \"Device type \",\r\n c10::DeviceTypeName(device.type()),\r\n \" is not supported for torch.Generator() api.\");\r\n }\r\n return (PyObject*)self.release();\r\n END_HANDLE_TH_ERRORS\r\n}\r\n\r\n\r\nSo how to create a Generator with my new backend named \"privateuseone\" ? 
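One possible direction, sketched directly against the switch quoted above (an assumption about a workable local patch, not an officially supported path), is to add a branch for the PrivateUse1 device type that builds a generator from your backend's `c10::GeneratorImpl` subclass; `MyBackendGeneratorImpl` below is a hypothetical placeholder for that class:

```cpp
// Hypothetical extra branch inside THPGenerator_pynew (torch/csrc/Generator.cpp).
// MyBackendGeneratorImpl is assumed to be your c10::GeneratorImpl subclass that
// implements set_current_seed / current_seed / seed / clone_impl and friends.
  else if (device.type() == at::kPrivateUse1) {
    self->cdata = at::make_generator<MyBackendGeneratorImpl>(device.index());
  }
```

Later PyTorch releases added a registration hook so out-of-tree backends can supply a PrivateUse1 generator without touching this file, so it is worth checking the current ATen headers before maintaining a fork of this switch.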
\n\n### Versions\n\nnew backend \r\npython:3.7.5 \r\npytorch: 2.0.0\r\nCUDA: None", "url": "https://github.com/pytorch/pytorch/issues/92202", "state": "closed", "labels": [ "triaged", "module: backend" ], "created_at": "2023-01-14T08:11:04Z", "updated_at": "2023-10-28T15:02:10Z", "user": "heidongxianhua" }, { "repo": "pytorch/TensorRT", "number": 1592, "title": "\u2753 [Question] How should recompilation in Torch Dynamo + `fx2trt` be handled?", "body": "## \u2753 Question\r\n\r\nGiven that Torch Dynamo compiles models lazily, how should benchmarking/usage of Torch Dynamo models, especially in cases where the inputs have a dynamic batch dimension, be handled?\r\n\r\n## What you have already tried\r\n\r\nBased on compiling and running inference using `fx2trt` with Torch Dynamo on the [BERT base-uncased model](https://huggingface.co/bert-base-uncased), and other similar networks, it seems that Torch Dynamo recompiles the model for each different batch-size input provided. Additionally, once the object has encountered a particular batch size, it does not recompile the model from scratch upon seeing another input of the same shape. It seems that Dynamo may be caching statically-shaped model compilations and dynamically selecting among these at inference time. The code used to generate the dynamo model and outputs is:\r\n```python\r\ndynamo_model = torchdynamo.optimize(\"fx2trt\")(model)\r\noutput = dynamo_model(input)\r\n```\r\nWhile Dynamo does have a flag which allows users to specify dynamic shape prior to compilation (`torchdynamo.config.dynamic_shapes=True`), for BERT this seems to break compilation with `fx2trt`.\r\n\r\nRecompilation of the model for each new batch size makes inference challenging for benchmarking and general usage tasks, as each time the model encounters an input of new shape, it would take much longer to complete the inference task than otherwise.\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): 1.14.0.dev20221114+cu116\r\n - Torch-TRT Version: dc570e47\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Python version: 3.8\r\n - CUDA version: 11.6\r\n\r\n## Additional context\r\nDynamo provides many benefits when used in conjunction with fx2trt, as it enables accelerated inference even when the graph might not normally be traceable due to control flow constraints. 
It would be beneficial to understand the dynamic batch/recompilation issue so Dynamo can be more effectively integrated into benchmarking for Torch-TRT.\r\n\r\nQuestion #1569 could be relevant to this issue as it also relates to Dynamic Batch + FX.", "url": "https://github.com/pytorch/TensorRT/issues/1592", "state": "closed", "labels": [ "question", "No Activity", "component: fx" ], "created_at": "2023-01-14T02:18:53Z", "updated_at": "2023-05-09T00:02:14Z", "user": "gs-olive" }, { "repo": "pytorch/functorch", "number": 1101, "title": "How to get only the last few layers' gradident?", "body": "```\r\nfrom functorch import make_functional_with_buffers, vmap, grad\r\nfmodel, params, buffers = make_functional_with_buffers(net,disable_autograd_tracking=True)\r\n\r\ndef compute_loss_stateless_model (params, buffers, sample, target):\r\n batch = sample.unsqueeze(0)\r\n targets = target.unsqueeze(0)\r\n\r\n predictions = fmodel(params, buffers, batch) \r\n loss = criterion(predictions, targets)\r\n return loss\r\n\r\nft_compute_grad = grad(compute_loss_stateless_model)\r\ngradinet = ft_compute_grad(params, buffers, train_poi_set[0][0].cuda(), torch.tensor(train_poi_set[0][1]).cuda())\r\n```\r\nThis will return the gradient of the whole model. However, I only want the second last layers' gradient, like:\r\n```\r\ngradinet = ft_compute_grad(params, buffers, train_poi_set[0][0].cuda(), torch.tensor(train_poi_set[0][1]).cuda())[-2]\r\n```\r\nAlthough this method can also obtain the required gradient, it will cause a lot of unnecessary overhead. Is there any way to close the 'require_grad' of all previous layers? Thanks for your answer!", "url": "https://github.com/pytorch/functorch/issues/1101", "state": "open", "labels": [], "created_at": "2023-01-13T21:48:42Z", "updated_at": "2024-04-05T03:02:41Z", "user": "pmzzs" }, { "repo": "pytorch/functorch", "number": 1099, "title": "Will pmap be supported in functorh\uff1f", "body": "Greetings, I am very grateful that vmap is supported in functorch. Is there any plan to include support for pmap in the future? Thank you. 
Additionally, what are the ways that I can contribute to the development of this project?", "url": "https://github.com/pytorch/functorch/issues/1099", "state": "open", "labels": [], "created_at": "2023-01-11T17:32:48Z", "updated_at": "2024-06-05T16:32:36Z", "comments": 2, "user": "shixun404" }, { "repo": "pytorch/TensorRT", "number": 1585, "title": "\u2753 [Question] How can I make deserialized targets compatible with Torch-TensorRT ABI?", "body": "## \u2753 Question\r\n\r\nWhen I load my compiled model:\r\n\r\n`model = torch.jit.load('model.ts')\r\n`\r\n\r\n**I keep getting the error:** \r\n\r\n`RuntimeError: [Error thrown at core/runtime/TRTEngine.cpp:250] Expected serialized_info.size() == SERIALIZATION_LEN to be true but got false\r\nProgram to be deserialized targets an incompatible Torch-TensorRT ABI`\r\n\r\n## What you have already tried\r\n\r\nIt works when I run the model inside the official [Nvidia PyTorch Release 22.12](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-22-12.html#rel-22-12) docker image:\r\n\r\n```\r\nmodel = torch.jit.load('model.ts')\r\nmodel.eval()\r\n```\r\n\r\n\r\nHowever, I want to run the model in my normal environment for debugging purposes.\r\n\r\nI've installed torch-tensorrt with: pip install torch-tensorrt==1.3.0 --find-links https://github.com/pytorch/TensorRT/releases/expanded_assets/v1.3.0\r\nAnd I've compiled my model using docker: nvcr.io/nvidia/pytorch:22.12-py3\r\n\r\nThe [1.3.0 Release](https://github.com/pytorch/TensorRT/releases/tag/v1.3.0) says that it's based on TensorRT 8.5, and the docker image: [TensorRT\u2122 8.5.1](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-22-12.html#rel-22-12). I've also tried the 22.09 image that specifies NVIDIA TensorRT\u2122 [8.5.0.12](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-22-09.html#rel-22-09), but I'm still getting the same error.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version: 1.13.0\r\n - CPU Architecture: AMD Rome\r\n - OS: Ubuntu 22.04 LTS\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\npip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 --extra-index-url https://download.pytorch.org/whl/cu117 https://download.pytorch.org/whl/cu113\r\npip install nvidia-pyindex\r\npip install nvidia-tensorrt\r\npip install torch-tensorrt==1.3.0 --find-links https://github.com/pytorch/TensorRT/releases/expanded_assets/v1.3.0\r\nimport torch_tensorrt\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.9\r\n - CUDA version: cu117\r\n - GPU models and configuration: Nvidia A6000 (Ampere)\r\n - Any other relevant information:\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1585", "state": "closed", "labels": [ "question", "No Activity", "component: runtime" ], "created_at": "2023-01-11T12:44:58Z", "updated_at": "2023-05-04T00:02:16Z", "user": "emilwallner" }, { "repo": "pytorch/kineto", "number": 713, "title": "How to get the CPU utilization by Pytorch Profiler?", "body": "According to the code of gpu_metrics_parser.py of torch-tb-profiler, I understand that the gpu utilization is actually the sum of event times of type EventTypes.KERNEL over a period of time / total time. So, is CPU utilization the sum of event times of type EventTypes.OPERATOR over a period of time / total time? 
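For reference, a small sketch of that calculation done directly with `torch.profiler` (an approximation assumed here for illustration, not the definition used by the TensorBoard plugin): sum the self CPU time of the aggregated operator events over a window and divide by the wall-clock time of that window.

```python
import time
import torch
from torch.profiler import profile, ProfilerActivity

model, x = torch.nn.Linear(1024, 1024), torch.randn(64, 1024)

start = time.perf_counter()
with profile(activities=[ProfilerActivity.CPU]) as prof:
    for _ in range(100):
        model(x)
wall_us = (time.perf_counter() - start) * 1e6

# Self CPU time (microseconds) summed over all aggregated operator events.
busy_us = sum(evt.self_cpu_time_total for evt in prof.key_averages())
print(f"approx CPU 'utilization': {busy_us / wall_us:.2%}")
```

Because operators can run on multiple intra-op threads, the summed self time can exceed the wall time, so without also dividing by the number of threads or cores the ratio can legitimately go above 100%, which may be one reason a naive version of this calculation looks off.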
\r\nIt seems that the result of this calculation is not normal.", "url": "https://github.com/pytorch/kineto/issues/713", "state": "closed", "labels": [], "created_at": "2023-01-11T09:46:20Z", "updated_at": "2023-02-15T03:53:40Z", "user": "young-chao" }, { "repo": "pytorch/TensorRT", "number": 1580, "title": "I am deleting some layers of Resnet152 (for example del resnet152.fc and del resnet152.layer4) and saving it locally in order to get a 1024-dimensional output. Later, when I import this saved model, it complains about the missing layer4. What might be the reason? Does it still try to access the original model? How can I get 1024-dimensional feature vectors for a given image using Resnet152?", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1580", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2023-01-09T15:22:23Z", "updated_at": "2023-04-22T00:02:19Z", "user": "pradeep10kumar" }, { "repo": "pytorch/TensorRT", "number": 1579, "title": "When I delete some layers from Resnet152 for example", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1579", "state": "closed", "labels": [ "question" ], "created_at": "2023-01-09T15:17:50Z", "updated_at": "2023-01-09T15:18:31Z", "user": "pradeep10kumar" }, { "repo": "pytorch/TensorRT", "number": 1578, "title": "\u2753 [Question] Failed to compile EfficientNet: Error: Segmentation fault (core dumped)", "body": "I followed the step in the demo notebook `[EfficientNet-example.ipynb](https://github.com/pytorch/TensorRT/blob/main/notebooks/EfficientNet-example.ipynb)`\r\n\r\nWhen I try to compile EfficientNet, an error occurred: `Segmentation fault (core dumped)` \r\n\r\nI have located the error is caused by \r\n`\r\ntrt_model_fp32 = torch_tensorrt.compile(model, inputs = [torch_tensorrt.Input((1, 3, 512, 512), dtype=torch.float32)],\r\n enabled_precisions = torch.float32, # Run with FP32\r\n workspace_size = 1 << 22\r\n)\r\n`\r\n\r\nFull code\r\n\r\n```\r\nimport os\r\nimport numpy as np\r\nfrom PIL import Image\r\nfrom torchvision import transforms\r\nimport sys\r\nimport timm\r\nimport torch.nn as nn\r\nimport torch\r\nimport io\r\nimport torch.backends.cudnn as cudnn\r\nimport torch_tensorrt\r\n\r\nSIZE = 512\r\ncudnn.benchmark = True\r\n\r\npreprocess_transform = transforms.Compose([\r\n transforms.ToTensor(),\r\n transforms.Resize((SIZE, SIZE)),\r\n transforms.Normalize(\r\n mean=[0.485, 0.456, 0.406],\r\n std=[0.229, 0.224, 0.225],\r\n )])\r\n\r\n\r\ndef preprocess(byteImage):\r\n image = Image.open(io.BytesIO(byteImage))\r\n image = Image.fromarray(np.array(image)[:,:,:3])\r\n return preprocess_transform(image).unsqueeze(0)\r\n\r\n\r\nclass CustomModel(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.model = timm.create_model('tf_efficientnet_b0_ns', pretrained=False)\r\n self.n_features = self.model.classifier.in_features\r\n self.model.classifier = nn.Identity()\r\n self.fc = nn.Linear(self.n_features, 1)\r\n\r\n def feature(self, image):\r\n feature = self.model(image)\r\n return feature\r\n \r\n def forward(self, image):\r\n feature = self.feature(image)\r\n output = self.fc(feature)\r\n return output\r\n\r\ndef predict(tensorImage):\r\n tensorImage = tensorImage.to('cuda')\r\n with torch.no_grad:\r\n pred = trt_model_fp32(tensorImage)\r\n torch.cuda.synchronize()\r\n return pred.cpu().detach().numpy()\r\n\r\nmodel = CustomModel().to('cuda')\r\nstate_dict = torch.load(my_weight_path, map_location='cuda')\r\nmodel.load_state_dict(state_dict['model'])\r\nmodel.eval()\r\ntrt_model_fp32 = torch_tensorrt.compile(model, inputs = [torch_tensorrt.Input((1, 3, 512, 512), dtype=torch.float32)],\r\n enabled_precisions = torch.float32, # Run with FP32\r\n workspace_size = 1 << 22\r\n)\r\ninput_data = torch.randn(1,3,512,512).to('cuda')\r\npred = predict(input_data)\r\nprint(pred.shape)\r\nprint(pred)\r\n```\r\n\r\nPlease help\u2757", "url": "https://github.com/pytorch/TensorRT/issues/1578", "state": "closed", "labels": [ "question" ], "created_at": "2023-01-09T14:44:25Z", "updated_at": "2023-02-28T23:40:20Z", "user": "Tonyboy999" }, { "repo": "pytorch/serve", "number": 2057, "title": "what is my_tc ?", "body": "### \ud83d\udc1b Describe the bug\n\ntorchserve --start --model-store model_store --models my_tc=BERTTokenClassification.mar --ncs\r\ncurl -X POST http://127.0.0.1:8080/predictions/my_tc -T Token_classification_artifacts/sample_text_captum_input.txt\r\n\n\n### Error logs\n\n2023-01-05T15:51:41,260 [INFO ] W-9001-my_tc_1.0-stdout MODEL_LOG - model_name: my_tc, batchSize: 1\r\n2023-01-05T15:51:41,628 [INFO ] 
W-9003-my_tc_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9003\r\n2023-01-05T15:51:41,633 [INFO ] W-9003-my_tc_1.0-stdout MODEL_LOG - Successfully loaded /data//python3.9/site-packages/ts/configs/metrics.yaml.\r\n\r\nSo **model_name** is my_tc ? not BERTTokenClassification\r\n\n\n### Installation instructions\n\nconda install -c pytorch torchserve torch-model-archiver torch-workflow-archiver\n\n### Model Packaing\n\ntorch-model-archiver --model-name BERTTokenClassification --version 1.0 --serialized-file Transformer_model/pytorch_model.bin --handler ./Transformer_handler_generalized.py --extra-files \"Transformer_model/config.json,./setup_config.json,./Token_classification_artifacts/index_to_name.json\"\r\n\r\nmodel_name is what ?\r\n\n\n### config.properties\n\n_No response_\n\n### Versions\n\n------------------------------------------------------------------------------------------\r\nEnvironment headers\r\n------------------------------------------------------------------------------------------\r\nTorchserve branch: \r\n\r\ntorchserve==0.7.0b20221212\r\ntorch-model-archiver==0.7.0b20221212\r\n\r\nPython version: 3.9 (64-bit runtime)\r\nPython executable: /data/python\r\n\r\nVersions of relevant python libraries:\r\ncaptum==0.6.0\r\nfuture==0.18.2\r\nnumpy==1.24.1\r\nnvgpu==0.9.0\r\npsutil==5.9.4\r\nrequests==2.28.1\r\nsentence-transformers==2.2.2\r\nsentencepiece==0.1.97\r\ntorch==1.13.1+cu116\r\ntorch-model-archiver==0.7.0b20221212\r\ntorch-workflow-archiver==0.2.6b20221212\r\ntorchaudio==0.13.1+cu116\r\ntorchserve==0.7.0b20221212\r\ntorchvision==0.14.1+cu116\r\ntransformers==4.26.0.dev0\r\nwheel==0.37.1\r\ntorch==1.13.1+cu116\r\n**Warning: torchtext not present ..\r\ntorchvision==0.14.1+cu116\r\ntorchaudio==0.13.1+cu116\r\n\r\nJava Version:\r\n\r\n\r\nOS: CentOS Linux release 7.9.2009 (Core)\r\nGCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)\r\nClang version: N/A\r\nCMake version: version 2.8.12.2\r\n\r\nIs CUDA available: Yes\r\nCUDA runtime version: 11.6.124\r\nGPU models and configuration: \r\nGPU 0: Tesla T4\r\nGPU 1: Tesla T4\r\nGPU 2: Tesla T4\r\nNvidia driver version: 510.108.03\r\ncuDNN version: Probably one of the following:\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8\n\n### Repro instructions\n\njust want to know what is my_tc\r\n\n\n### Possible Solution\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/2057", "state": "open", "labels": [ "triaged" ], "created_at": "2023-01-05T07:57:46Z", "updated_at": "2023-01-05T15:00:14Z", "user": "ucas010" }, { "repo": "pytorch/functorch", "number": 1094, "title": "batching over model parameters", "body": "I have a use-case for `functorch`. I would like to check possible iterations of model parameters in a very efficient way (I want to eliminate the loop). 
Here's an example code for a simplified case I got it working:\r\n\r\n```python\r\nlinear = torch.nn.Linear(10,2)\r\ndefault_weight = linear.weight.data\r\nsample_input = torch.rand(3,10)\r\nsample_add = torch.rand_like(default_weight)\r\ndef interpolate_weights(alpha):\r\n with torch.no_grad():\r\n res_weight = torch.nn.Parameter(default_weight + alpha*sample_add)\r\n linear.weight = res_weight\r\n return linear(sample_input)\r\n```\r\n\r\nnow I could do `for alpha in np.np.linspace(0.0, 1.0, 100)` but I want to vectorise this loop since my code is prohibitively slow. Is functorch here applicable? Executing:\r\n\r\n```python\r\nalphas = torch.linspace(0.0, 1.0, 100)\r\nvmap(interpolate_weights)(alphas)\r\n```\r\n\r\nworks, but how to do something similar for a simple resnet does not work. I've tried using `load_state_dict` but that's not working:\r\n\r\n```python\r\nfrom torchvision import models\r\nmodel_resnet = models.resnet18(pretrained=True)\r\n\r\nnamed_params = list(model_resnet.named_parameters())\r\nnamed_params_data = [(n,p.data.clone()) for (n,p) in named_params]\r\n\r\nsample_data = torch.rand(10,3,224,244)\r\n\r\ndef test_resnet(new_params):\r\n def interpolate(alpha):\r\n with torch.no_grad():\r\n p_dict = {name:(old + alpha*new_params[i]) for i,(name, old) in enumerate(named_params_data)}\r\n model_resnet.load_state_dict(p_dict, strict=False)\r\n out = model_resnet(sample_data)\r\n return out\r\n return interpolate\r\n\r\nrand_tensor = [torch.rand_like(p) for n,p in named_params_data]\r\n\r\nto_vamp_resnet = test_thing(rand_tensor)\r\nvmap(to_vamp_resnet)(alphas)\r\n```\r\n\r\nresults in:\r\n\r\n`\r\nWhile copying the parameter named \"fc.bias\", whose dimensions in the model are torch.Size([1000]) and whose dimensions in the checkpoint are torch.Size([1000]), an exception occurred : ('vmap: inplace arithmetic(self, *extra_args) is not possible because there exists a Tensor `other` in extra_args that has more elements than `self`. This happened due to `other` being vmapped over but `self` not being vmapped over in a vmap. Please try to use out-of-place operators instead of inplace arithmetic. If said operator is being called inside the PyTorch framework, please file a bug report instead.',).\r\n`", "url": "https://github.com/pytorch/functorch/issues/1094", "state": "open", "labels": [], "created_at": "2023-01-04T17:59:59Z", "updated_at": "2023-01-04T21:42:36Z", "comments": 2, "user": "LeanderK" }, { "repo": "pytorch/TensorRT", "number": 1570, "title": "\u2753 [Question] When I use fx2trt, can an unsupported op fallback to pytorch like the TorchScript compiler?", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\nWhen I use fx2trt, can an unsupported op fallback to pytorch like the TorchScript compiler?", "url": "https://github.com/pytorch/TensorRT/issues/1570", "state": "closed", "labels": [ "question" ], "created_at": "2023-01-03T05:26:00Z", "updated_at": "2023-01-06T22:22:12Z", "user": "chenzhengda" }, { "repo": "pytorch/TensorRT", "number": 1569, "title": "\u2753 [Question] How do you use dynamic shape when using fx as ir and the model is not fully lowerable", "body": "## \u2753 Question\r\n\r\nI have a pytorch model that contains a Pixel Shuffle operation (which is not fully supported) and I would like to convert it to TensorRT, while being able to specify a dynamic shape as input. 
The \"ts\" path does not work as there is an issue, the \"fx\" path has problems too and I am not able to use a splitted model with dynamic shapes.\r\n\r\n## What you have already tried\r\n\r\n* The conversion using TorchScript as \"ir\" is not working (see Issue #1568)\r\n* The conversion using `torch_tensorrt.fx.compile` succeeds when I use a static shape, however there is no way of specifying a dynamic shape\r\n* Using a manual approach (that is by manually tracing with `acc_tracer`, then constructing the `TRTInterpreter` and finally the `TRTModule`) fails as there is a non supported operation (a pixel shuffle layer) (Maybe I should open an Issue for this too?)\r\n* Using the manual approach with a `TRTSplitter` is maybe the way to go but I don't know how to specify the dynamic shape constraints in this situation.\r\n\r\nThe \"manual\" approach that I mentioned is the one specified in [examples/fx/fx2trt_example.py](https://github.com/pytorch/TensorRT/blob/master/examples/fx/fx2trt_example.py) and in the docs. \r\n\r\nHere is the code as I have it now. Please note that the branch with the splitter is executed and the result is errors when I execute the trt model with different shapes. If `do_split` is set to `False` the conversion fails as `nn.PixelShuffle` is not supported.\r\n\r\n```python\r\nimport tensorrt as trt\r\nimport torch.fx\r\nimport torch.nn as nn\r\n\r\nimport torch_tensorrt.fx.tracer.acc_tracer.acc_tracer as acc_tracer\r\nimport torchvision.models as models\r\nfrom torch_tensorrt.fx import InputTensorSpec, TRTInterpreter, TRTModule\r\nfrom torch_tensorrt.fx.utils import LowerPrecision\r\nfrom torch_tensorrt.fx.tools.trt_splitter import TRTSplitter\r\n\r\n\r\nclass MyModel(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)\r\n self.shuffle = nn.PixelShuffle(2)\r\n\r\n def forward(self, x):\r\n return self.shuffle(self.conv(x))\r\n\r\n\r\ntorch.set_grad_enabled(False)\r\n\r\n# inputs\r\ninputs = [torch.rand(1, 3, 224, 224).cuda()]\r\n\r\n\r\nfactory_kwargs = {\"dtype\": torch.float32, \"device\": torch.device(\"cuda:0\")}\r\nmodel = MyModel().to(**factory_kwargs)\r\n\r\nmodel = model.eval()\r\n\r\nout = model(inputs[0])\r\n\r\n# sybolic trace\r\nacc_model = acc_tracer.trace(model, inputs)\r\n\r\ndo_split = True\r\n\r\nif do_split:\r\n # split\r\n splitter = TRTSplitter(acc_model, inputs)\r\n\r\n splitter.node_support_preview(dump_graph=False)\r\n\r\n split_mod = splitter()\r\n\r\n print(split_mod.graph)\r\n\r\n def get_submod_inputs(mod, submod, inputs):\r\n acc_inputs = None\r\n\r\n def get_input(self, inputs):\r\n nonlocal acc_inputs\r\n acc_inputs = inputs\r\n\r\n handle = submod.register_forward_pre_hook(get_input)\r\n mod(*inputs)\r\n handle.remove()\r\n return acc_inputs\r\n\r\n for name, _ in split_mod.named_children():\r\n if \"_run_on_acc\" in name:\r\n submod = getattr(split_mod, name)\r\n # Get submodule inputs for fx2trt\r\n acc_inputs = get_submod_inputs(split_mod, submod, inputs)\r\n\r\n # fx2trt replacement\r\n interp = TRTInterpreter(\r\n submod,\r\n InputTensorSpec.from_tensors(acc_inputs),\r\n explicit_batch_dimension=True,\r\n )\r\n r = interp.run(lower_precision=LowerPrecision.FP32)\r\n trt_mod = TRTModule(*r)\r\n setattr(split_mod, name, trt_mod)\r\n\r\n trt_model = split_mod\r\n\r\nelse:\r\n # input specs\r\n input_specs = [\r\n InputTensorSpec(\r\n shape=(1, 3, -1, -1),\r\n dtype=torch.float32,\r\n device=\"cuda:0\",\r\n shape_ranges=[((1, 3, 112, 112), (1, 3, 224, 224), (1, 3, 512, 
512))],\r\n ),\r\n ]\r\n # input_specs = [\r\n # InputTensorSpec(\r\n # shape=(1, 3, 224, 224),\r\n # dtype=torch.float32,\r\n # device=\"cuda:0\",\r\n # ),\r\n # ]\r\n\r\n # TRT interpreter\r\n interp = TRTInterpreter(\r\n acc_model,\r\n input_specs,\r\n explicit_batch_dimension=True,\r\n explicit_precision=True,\r\n logger_level=trt.Logger.INFO,\r\n )\r\n\r\n interpreter_result = interp.run(\r\n max_batch_size=4, lower_precision=LowerPrecision.FP32\r\n )\r\n\r\n # TRT module\r\n trt_model = TRTModule(\r\n interpreter_result.engine,\r\n interpreter_result.input_names,\r\n interpreter_result.output_names,\r\n )\r\n\r\ntrt_out = trt_model(inputs[0])\r\n\r\n\r\ntrt_model(torch.rand(1,3, 112, 112).cuda())\r\ntrt_model(torch.rand(1,3, 150, 150).cuda())\r\ntrt_model(torch.rand(1,3, 400, 400).cuda())\r\ntrt_model(torch.rand(1,3, 512, 512).cuda())\r\n\r\nprint((trt_out - out).max())\r\n\r\n```\r\n\r\n## Environment\r\n\r\nThe official NVIDIA Pytorch Docker image version 22.12 is used.\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug", "url": "https://github.com/pytorch/TensorRT/issues/1569", "state": "closed", "labels": [ "question", "No Activity", "component: fx" ], "created_at": "2023-01-02T14:44:52Z", "updated_at": "2023-04-15T00:02:10Z", "user": "ivan94fi" }, { "repo": "pytorch/pytorch", "number": 91537, "title": "Unclear how to change compiler used by `torch.compile`", "body": "### \ud83d\udcda The doc issue\r\n\r\nIt is not clear from https://pytorch.org/tutorials//intermediate/torch_compile_tutorial.html, nor from the docs in `torch.compile`, nor even from looking through `_dynamo/config.py`, how one can change the compiler used by PyTorch.\r\n\r\nRight now I am seeing the following issue. My code:\r\n\r\n```python\r\nimport torch\r\n\r\n@torch.compile\r\ndef f(x):\r\n return 0.5 * x\r\n\r\nf(torch.tensor(1.0))\r\n```\r\n\r\n<details><summary>This produces the following error message (click to toggle):</summary>\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nCalledProcessError Traceback (most recent call last)\r\nFile ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/codecache.py:445, in CppCodeCache.load(cls, source_code)\r\n 444 try:\r\n--> 445 subprocess.check_output(cmd, stderr=subprocess.STDOUT)\r\n 446 except subprocess.CalledProcessError as e:\r\n\r\nFile /opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py:421, in check_output(timeout, *popenargs, **kwargs)\r\n 419 kwargs['input'] = empty\r\n--> 421 return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,\r\n 422 **kwargs).stdout\r\n\r\nFile /opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py:526, in run(input, capture_output, timeout, check, *popenargs, **kwargs)\r\n 525 if check and retcode:\r\n--> 526 raise CalledProcessError(retcode, process.args,\r\n 527 output=stdout, stderr=stderr)\r\n 528 return CompletedProcess(process.args, retcode, stdout, stderr)\r\n\r\nCalledProcessError: Command '['g++', '/tmp/torchinductor_mcranmer/p4/cp42uf272g2qggmogzazkui7he4vnm4ftyfi2ghvyudtmaxxi25x.cpp', '-shared', '-fPIC', '-Wall', '-std=c++17', '-Wno-unused-variable', '-I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include', '-I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/torch/csrc/api/include', '-I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/TH', 
'-I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/THC', '-I/opt/homebrew/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/include/python3.10', '-lgomp', '-march=native', '-O3', '-ffast-math', '-fno-finite-math-only', '-fopenmp', '-D', 'C10_USING_CUSTOM_GENERATED_MACROS', '-o/tmp/torchinductor_mcranmer/p4/cp42uf272g2qggmogzazkui7he4vnm4ftyfi2ghvyudtmaxxi25x.so']' returned non-zero exit status 1.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nCppCompileError Traceback (most recent call last)\r\nFile ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/output_graph.py:676, in OutputGraph.call_user_compiler(self, gm)\r\n 675 else:\r\n--> 676 compiled_fn = compiler_fn(gm, self.fake_example_inputs())\r\n 677 _step_logger()(logging.INFO, f\"done compiler function {name}\")\r\n\r\nFile ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/debug_utils.py:1032, in wrap_backend_debug.<locals>.debug_wrapper(gm, example_inputs, **kwargs)\r\n 1031 else:\r\n-> 1032 compiled_gm = compiler_fn(gm, example_inputs, **kwargs)\r\n 1034 return compiled_gm\r\n\r\nFile ~/venvs/main/lib/python3.10/site-packages/torch/__init__.py:1190, in _TorchCompileInductorWrapper.__call__(self, model_, inputs_)\r\n 1189 with self.cm:\r\n-> 1190 return self.compile_fn(model_, inputs_)\r\n\r\nFile ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:398, in compile_fx(model_, example_inputs_, inner_compile)\r\n 393 with overrides.patch_functions():\r\n 394 \r\n 395 # TODO: can add logging before/after the call to create_aot_dispatcher_function\r\n 396 # in torch._functorch/aot_autograd.py::aot_module_simplified::aot_function_simplified::new_func\r\n 397 # once torchdynamo is merged into pytorch\r\n--> 398 return aot_autograd(\r\n 399 fw_compiler=fw_compiler,\r\n 400 bw_compiler=bw_compiler,\r\n 401 decompositions=select_decomp_table(),\r\n 402 partition_fn=functools.partial(\r\n 403 min_cut_rematerialization_partition, compiler=\"inductor\"\r\n 404 ),\r\n 405 )(model_, example_inputs_)\r\n\r\nFile ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/optimizations/training.py:78, in aot_autograd.<locals>.compiler_fn(gm, example_inputs)\r\n 77 with enable_aot_logging():\r\n---> 78 cg = aot_module_simplified(gm, example_inputs, **kwargs)\r\n 79 counters[\"aot_autograd\"][\"ok\"] += 1\r\n\r\nFile ~/venvs/main/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py:2355, in aot_module_simplified(mod, args, fw_compiler, bw_compiler, partition_fn, decompositions, hasher_type, static_argnums)\r\n 2353 full_args.extend(args)\r\n-> 2355 compiled_fn = create_aot_dispatcher_function(\r\n 2356 functional_call,\r\n 2357 full_args,\r\n 2358 aot_config,\r\n 2359 )\r\n 2361 # TODO: Th", "url": "https://github.com/pytorch/pytorch/issues/91537", "state": "closed", "labels": [ "module: docs", "triaged", "oncall: pt2", "module: dynamo" ], "created_at": "2022-12-30T15:40:11Z", "updated_at": "2023-12-01T19:00:48Z", "user": "MilesCranmer" }, { "repo": "pytorch/pytorch", "number": 91498, "title": "how to Wrap normalization layers like LayerNorm in FP32 when use FSDP", "body": "in the blog https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/\r\n\r\n<img width=\"904\" alt=\"image\" src=\"https://user-images.githubusercontent.com/16861194/209910992-619704cd-0ef4-42ec-9d5c-ec7b42005b8b.png\">\r\n\r\nhow to Wrap normalization layers like LayerNorm in FP32 when use FSDP, do we have a example code?\n\ncc @mrshenli @pritamdamania87 
@zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu", "url": "https://github.com/pytorch/pytorch/issues/91498", "state": "closed", "labels": [ "oncall: distributed", "triaged", "module: fsdp" ], "created_at": "2022-12-29T06:13:34Z", "updated_at": "2023-08-04T17:17:32Z", "user": "xiaohu2015" }, { "repo": "pytorch/examples", "number": 1105, "title": "MNIST Hogwild on Apple Silicon", "body": "Any help would be appreciated! Unable to run multiprocessing with mps device\r\n\r\n## Context\r\n<!--- How has this issue affected you? What are you trying to accomplish? -->\r\n<!--- Providing context helps us come up with a solution that is most useful in the real world -->\r\n* Pytorch version: 2.0.0.dev20221220\r\n* Operating System and version: macOS 13.1\r\n\r\n## Your Environment\r\n<!--- Include as many relevant details about the environment you experienced the bug in -->\r\n* Installed using source? [yes/no]: no\r\n* Are you planning to deploy it using docker container? [yes/no]: no\r\n* Is it a CPU or GPU environment?: Trying to use GPU\r\n* Which example are you using: MNIST Hogwild\r\n* Link to code or data to repro [if any]: https://github.com/pytorch/examples/tree/main/mnist_hogwild\r\n\r\n## Expected Behavior\r\n<!--- If you're describing a bug, tell us what should happen -->\r\nAdding argument --mps should result in training with GPU\r\n\r\n## Current Behavior\r\n<!--- If describing a bug, tell us what happens instead of the expected behavior -->\r\nRuntimeerror: _share_filename_: only available on CPU\r\n```\r\nTraceback (most recent call last):\r\n File \"/Volumes/Main/pytorch/main.py\", line 87, in <module>\r\n model.share_memory() # gradients are allocated lazily, so they are not shared here\r\n File \"/Users/jeffreythomas/opt/anaconda3/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 2340, in share_memory\r\n return self._apply(lambda t: t.share_memory_())\r\n File \"/Users/jeffreythomas/opt/anaconda3/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 784, in _apply\r\n module._apply(fn)\r\n File \"/Users/jeffreythomas/opt/anaconda3/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 807, in _apply\r\n param_applied = fn(param)\r\n File \"/Users/jeffreythomas/opt/anaconda3/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 2340, in <lambda>\r\n return self._apply(lambda t: t.share_memory_())\r\n File \"/Users/jeffreythomas/opt/anaconda3/envs/pytorch/lib/python3.9/site-packages/torch/_tensor.py\", line 616, in share_memory_\r\n self._typed_storage()._share_memory_()\r\n File \"/Users/jeffreythomas/opt/anaconda3/envs/pytorch/lib/python3.9/site-packages/torch/storage.py\", line 701, in _share_memory_\r\n self._untyped_storage.share_memory_()\r\n File \"/Users/jeffreythomas/opt/anaconda3/envs/pytorch/lib/python3.9/site-packages/torch/storage.py\", line 209, in share_memory_\r\n self._share_filename_cpu_()\r\nRuntimeError: _share_filename_: only available on CPU\r\n```\r\n\r\n## Possible Solution\r\n<!--- Not obligatory, but suggest a fix/reason for the bug -->\r\n\r\n## Steps to Reproduce\r\n<!--- Provide a link to a live example, or an unambiguous set of steps to -->\r\n<!--- reproduce this bug. Include code to reproduce, if relevant -->\r\n1. Clone repo\r\n2. 
Run with --mps on Apple M1 Ultra\r\n...\r\n\r", "url": "https://github.com/pytorch/examples/issues/1105", "state": "open", "labels": [ "help wanted" ], "created_at": "2022-12-22T06:25:48Z", "updated_at": "2023-12-09T09:43:08Z", "comments": 4, "user": "jeffreykthomas" }, { "repo": "pytorch/functorch", "number": 1088, "title": "Add vmap support for PyTorch operators", "body": "We're looking for more motivated open-source developers to help build out functorch (and PyTorch, since functorch is now just a part of PyTorch). Below is a selection of good first issues.\r\n- [x] https://github.com/pytorch/pytorch/issues/91174\r\n- [x] https://github.com/pytorch/pytorch/issues/91175\r\n- [x] https://github.com/pytorch/pytorch/issues/91176\r\n- [x] https://github.com/pytorch/pytorch/issues/91177\r\n- [x] https://github.com/pytorch/pytorch/issues/91402\r\n- [x] https://github.com/pytorch/pytorch/issues/91403\r\n- [x] https://github.com/pytorch/pytorch/issues/91404\r\n- [x] https://github.com/pytorch/pytorch/issues/91415\r\n- [ ] https://github.com/pytorch/pytorch/issues/91700\r\n\r\nIn general there's a high barrier to developing PyTorch and/or functorch. We've collected topics and information over at the [PyTorch Developer Wiki](https://github.com/pytorch/pytorch/wiki/Core-Frontend-Onboarding)\r\n", "url": "https://github.com/pytorch/functorch/issues/1088", "state": "open", "labels": [ "good first issue" ], "created_at": "2022-12-20T18:51:16Z", "updated_at": "2023-04-19T23:40:06Z", "comments": 2, "user": "zou3519" }, { "repo": "pytorch/android-demo-app", "number": 287, "title": "How to convert live camera to landscape object detection with correct camera aspect ratio?", "body": "", "url": "https://github.com/pytorch/android-demo-app/issues/287", "state": "open", "labels": [], "created_at": "2022-12-19T10:01:47Z", "updated_at": "2022-12-19T10:05:06Z", "user": "pratheeshsuvarna" }, { "repo": "pytorch/vision", "number": 7043, "title": "How to generate the score for a determined region of an image using Mask R-CNN", "body": "### \ud83d\udc1b Describe the bug\n\nI want to change the RegionProposalNetwork of Mask R-CNN to generate the score for a determined region of an image using Mask R-CNN.\r\n\r\n```\r\nimport torch\r\nfrom torch import nn\r\nimport torchvision.models as models\r\nimport torchvision\r\nfrom torchvision.models.detection import MaskRCNN\r\nfrom torchvision.models.detection.anchor_utils import AnchorGenerator\r\n\r\nmodel = models.detection.maskrcnn_resnet50_fpn(pretrained=True)\r\n\r\nclass rpn_help(nn.Module):\r\n def __init__(self,) -> None:\r\n super().__init__()\r\n def forward(self,) :\r\n proposals=torch.tensor([ 78.0753, 12.7310, 165.6465, 153.7253])\r\n proposal_losses=0\r\n return proposals, proposal_losses\r\n\r\nmodel.rpn= rpn_help\r\nmodel.eval()\r\nmodel(input_tensor) # input_tensor is an image\r\n```\r\n\r\nIt takes error like this\r\n<img width=\"786\" alt=\"WeChatbb686829d6c0f06106e53c1e3feecb55\" src=\"https://user-images.githubusercontent.com/98499594/208374946-137eb9b2-6a64-4a06-8d4e-57caf1bb72b3.png\">\r\n\r\nDoes anyone know how to generate the score for a determined region of an image using Mask R-CNN\r\n?\n\n### Versions\n\npython 3.8", "url": "https://github.com/pytorch/vision/issues/7043", "state": "open", "labels": [], "created_at": "2022-12-19T08:04:50Z", "updated_at": "2022-12-19T08:04:50Z", "user": "mingqiJ" }, { "repo": "pytorch/serve", "number": 2039, "title": "how to load models at startup for docker", "body": "First, I created docker container by followed 
https://github.com/pytorch/serve/tree/master/docker#create-torchserve-docker-image, I leaves all configs default except remove `--rm` in `docker run ...` and make docker container start automatically by \r\n```docker update --restart unless-stopped mytorchserve```\r\nThen, I registered some model via Management API.\r\nNow, how to make models automated registed when my PC reboot?\r\n", "url": "https://github.com/pytorch/serve/issues/2039", "state": "closed", "labels": [], "created_at": "2022-12-17T02:55:23Z", "updated_at": "2022-12-18T01:00:29Z", "user": "hungtooc" }, { "repo": "pytorch/TensorRT", "number": 1547, "title": "\u2753 [Question] How can I load a TensorRT model generated with `trtexec`?", "body": "## \u2753 Question\r\n\r\nHow can I load into Pytorch a TensorRT model engine (.trt or .plan) generated with `trtexec` ?\r\n\r\nI have the following TensorRT model engine (generated from a ONNX file) using the `trtexec` tool provided by Nvidia\r\n\r\n```\r\ntrtexec --onnx=../2.\\ ONNX/CLIP-B32-image.onnx \\\r\n --saveEngine=../4.\\ TensorRT/CLIP-B32-image.trt \\\r\n --minShapes=input:1x3x224x224 \\\r\n --optShapes=input:1x3x224x224 \\\r\n --maxShapes=input:32x3x224x224 \\\r\n --fp16\r\n```\r\n\r\nI want to load it into Pytorch for using the Pytorch's dataloader for fast batch ineference.\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1547", "state": "closed", "labels": [ "question" ], "created_at": "2022-12-13T11:47:49Z", "updated_at": "2022-12-13T17:49:06Z", "user": "javiabellan" }, { "repo": "pytorch/tutorials", "number": 2151, "title": "Quantize weights to unisgned 8 bit", "body": "I am trying to quantize the weights of the BERT model to unsigned 8 bits. Am using the 'dynamic_quantize' function for the same.\r\n\r\n`quantized_model = torch.quantization.quantize_dynamic(\r\n model, {torch.nn.Linear}, dtype=torch.quint8\r\n)`\r\n\r\nBut it throws the error 'AssertionError: The only supported dtypes for dynamic quantized linear are qint8 and float16 got: torch.quint8'.\r\n\r\nIs there any specific reason for this not to be supported? Could I use any other method to quantize the weights to unsigned int8 bits?\r\n\r\nHere is a link to the colab sheet:\r\nhttps://colab.research.google.com/drive/14G_jdLuZD5846jUDZUdGNf1x0DMz8GKK?usp=sharing\r\n\r\nThanks!\n\ncc @jerryzh168 @z-a-f @vkuzo", "url": "https://github.com/pytorch/tutorials/issues/2151", "state": "closed", "labels": [ "question", "arch-optimization" ], "created_at": "2022-12-10T08:59:07Z", "updated_at": "2023-02-23T19:53:08Z", "user": "rohanjuneja" }, { "repo": "pytorch/pytorch", "number": 93472, "title": "torch.compile does not bring better performance and even lower than no compile, what is the possible reason?", "body": "### \ud83d\udc1b Describe the bug\n\n_No response_\n\n### Error logs\n\n_No response_\n\n### Minified repro\n\n_No response_\n\ncc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh", "url": "https://github.com/pytorch/pytorch/issues/93472", "state": "closed", "labels": [ "oncall: pt2" ], "created_at": "2022-12-07T17:00:43Z", "updated_at": "2023-02-01T17:47:28Z", "user": "chexiangying" }, { "repo": "pytorch/TensorRT", "number": 1535, "title": "[Bug] Invoke error while implementing TensorRT on pytorch", "body": "## \u2753 Question\r\n\r\nGot the error while using tensorrt on pytorch pretrained resnet model. 
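Roughly what I'm running is the following (a minimal sketch — the exact model and input size are placeholders, but the `torch_tensorrt.compile(...)` call is the one that triggers the traceback below):

```python
import torch
import torchvision
import torch_tensorrt

# Trace a pretrained ResNet and hand the TorchScript module to torch_tensorrt.compile
model = torchvision.models.resnet50(pretrained=True).eval().cuda()
traced = torch.jit.trace(model, torch.randn(1, 3, 224, 224).cuda())

trt_model_32 = torch_tensorrt.compile(
    traced,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],
    enabled_precisions={torch.float32},
)
```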
what is this error and how to solve it.\r\n\r\n## Error\r\nTraceback (most recent call last):\r\n File \"pretrained_resnet.py\", line 116, in <module>\r\n trt_model_32 = torch_tensorrt.compile(traced, inputs=[torch_tensorrt.Input(\r\n File \"/home/am/anaconda3/envs/amrith/lib/python3.8/site-packages/torch_tensorrt/_compile.py\", line 125, in compile\r\n return torch_tensorrt.ts.compile(\r\n File \"/home/am/anaconda3/envs/amrith/lib/python3.8/site-packages/torch_tensorrt/ts/_compiler.py\", line 136, in compile\r\n compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))\r\nTypeError: compile_graph(): incompatible function arguments. The following argument types are supported:\r\n 1. (arg0: torch::jit::Module, arg1: torch_tensorrt._C.ts.CompileSpec) -> torch::jit::Module\r\n\r\nInvoked with: <torch.ScriptModule object at 0x7fb5cc5c78f0>, <torch_tensorrt._C.ts.CompileSpec object at 0x7fb5cc47d8b0>\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1535", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2022-12-07T07:46:02Z", "updated_at": "2023-04-01T00:02:09Z", "user": "amrithpartha" }, { "repo": "pytorch/TensorRT", "number": 1521, "title": "\u2753 [Question] How does INT8 inference really work at runtime?", "body": "## \u2753 Question\n\nHi everyone,\n\nI can\u2019t really find an example of how int8 inference works at runtime. What I know is that, given that we are performing uniform symmetric quantisation, we calibrate the model, i.e. we find the best scale parameters for each weight tensor (channel-wise) and *activations* (that correspondto the outputs of the activation functions, if I understood correctly). After the calibration process we can quantize the model by applying these scale parameters and clipping che values that end up outside the dynamic range of the given layer. So at this point we have a new Neural Net where all the weights are int8 in the range [-127,127] and, additionally, we have some scale parameters for the *activations*.\nWhat I don\u2019t understand is how we perform inference on this new neural network, do we feed the input as float32 or directly as int8? All the computations are always in int8 or sometimes we cast from int8 to float32 and viceversa? \nIt would be nice to find a real example of e.g. a CONV2D+BIAS+ReLU layer.", "url": "https://github.com/pytorch/TensorRT/issues/1521", "state": "closed", "labels": [ "question", "component: quantization" ], "created_at": "2022-12-04T16:32:57Z", "updated_at": "2023-02-02T23:54:00Z", "user": "andreabonvini" }, { "repo": "pytorch/data", "number": 911, "title": "`DistributedReadingService` supports multi-processing reading", "body": "### \ud83d\ude80 The feature\n\n`TorchData` is a great work for better data loading! I have tried it and it gives me a nice workflow with tidy code-style.\u2764\ufe0f\r\n\r\nWhen using DDP, I work with the `DataLoader2` where `reading_service=DistributedReadingService()`. I find this service runs one worker for outputting datas per node. This means it has lower reading throughput than the legacy `DataLoader`, which utilizes multiple workers with the total worker number = `num_workers * world_size`.\r\n\r\nTherefore, is it possible to combine `DistributedReadingService` with multi-processing reading? This could be possibly done by introducing `PrototypeMultiProcessingReadingService` into `DistributedReadingService` (Just guessing. 
I'm not a pro for handling this.).\r\n\r\n\n\n### Motivation, pitch\n\nI think this feature could be a part of #427 . The detailed motivation is declared above.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/911", "state": "closed", "labels": [], "created_at": "2022-12-04T03:49:49Z", "updated_at": "2023-02-07T06:25:35Z", "comments": 9, "user": "xiaosu-zhu" }, { "repo": "pytorch/functorch", "number": 1074, "title": "vmap equivalent for tensor[indices]", "body": "Hi,\r\n\r\nIs there a way of vmapping over the selection of passing indices within a Tensor? Minimal reproducible example below,\r\n```\r\nimport torch\r\nfrom functorch import vmap\r\n\r\ndef select(x, index):\r\n print(x.shape, index.shape)\r\n return x[index]\r\n\r\nx = torch.randn(64, 1000) #64 vectors of length 1000\r\nindex=torch.arange(64) #index for each vector\r\n\r\nout = vmap(select, in_dims=(0, 0))(x, index) #vmap over the process\r\nprint(out) #should output vector of 64 \r\n```\r\nThis should take in a batches of vectors and select the corresponding index from `index` vector (which can be viewed as a batch of scalars, and is hence represented as a vector).\r\n\r\nThe error is as follows,\r\n```\r\nRuntimeError: vmap: It looks like you're calling .item() on a Tensor. We don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. If error is occurring somewhere inside PyTorch internals, please file a bug report.\r\n```\r\nI tried using `torch.select` but that requires passing the index as an `int` rather than `Tensor` so it must call `.item()` interally. Is there a workaround that already exists for this? \r\n", "url": "https://github.com/pytorch/functorch/issues/1074", "state": "closed", "labels": [], "created_at": "2022-12-03T19:20:18Z", "updated_at": "2022-12-03T19:31:37Z", "comments": 1, "user": "AlphaBetaGamma96" }, { "repo": "pytorch/examples", "number": 1101, "title": "Inconsistency b/w tutorial and the code", "body": "## \ud83d\udcda Documentation\r\n\r\nIn the [DDP Tutorial](https://pytorch.org/tutorials/beginner/ddp_series_multigpu.html), there is inconsistency between the code in the tutorial and [original code](https://github.com/pytorch/examples/blob/main/distributed/ddp-tutorial-series/multigpu.py).\r\n\r\nFor example, under Running the distributed training job section,\r\nthe Trainer object should take train_data as an argument not dataset (in the original code, it is right).\r\n\r\nThe ideal PR to fix this issue is to make the tutorial consistent with the original code.", "url": "https://github.com/pytorch/examples/issues/1101", "state": "closed", "labels": [ "help wanted", "distributed" ], "created_at": "2022-12-03T17:15:42Z", "updated_at": "2023-02-17T18:47:56Z", "comments": 4, "user": "BalajiAI" }, { "repo": "pytorch/android-demo-app", "number": 280, "title": "StreamingASR. 
How to use custom RNNT model?", "body": "Hey guys\r\nI have a my self trained RNNT model with another smp_bpe model.\r\nHow I can convert my smp_bpe.model to smp_bpe.dict for fairseq.data.Dictionary.load method?", "url": "https://github.com/pytorch/android-demo-app/issues/280", "state": "closed", "labels": [], "created_at": "2022-12-02T12:18:50Z", "updated_at": "2022-12-02T14:44:48Z", "user": "make1986" }, { "repo": "pytorch/serve", "number": 2019, "title": "Diagnosing very slow performance", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI'm trying to work out why my endpoint throughput is very slow. I wasn't sure if this is the best forum but there doesn't appear to be a specific torchserve forum on https://discuss.pytorch.org/\r\n\r\nI have simple text classifier, I've created a custom handler as the default wasn't suitable. I tested the handler by creating a harness based on https://github.com/frank-dong-ms/torchserve-performance/blob/main/test_models_windows.py - I also added custom timer metrics into my `preprocess`/`inference`/`postprocess`\r\n\r\nThe result is that most of the `handle` time is spent in `inference`, and my model performs as expected. It processes a batch of 1 text in about 40ms and a batch of 128 in 80ms - so clearly, to get good throughput, I need larger batches.\r\n\r\nThe throughput of a basic script, passing batches of 128 to the model is about 2000 examples per second. But `torchserve` only achieves 30-60 examples per second.\r\n\r\nI'm fairly sure the bottleneck is not in the handler, the model log seems to imply it's not receiving the request quick enough. I would hope that it could generate a batch of 128 in for maxBatchDelay=50 whilst the model is processing the previous batch, but in fact it only manages a handful. I've attached my model log below\r\n\r\nMy first question is what does the message `Backend received inference at: 1669930796` means - specifically is the number a timestamp and if so why is the same value repeated many times given that the size of the batches being passed to the handler is well below the batch size of 128 set in the model config\r\n\r\nSecond how do I stream data faster to the endpoint? Our use case is to make many requests in succession. I've tried client batching, and that does increase throughput slightly but it's still extremely slow.\r\n\r\nMy test code is based on an [example](https://github.com/pytorch/serve/blob/master/examples/image_classifier/near_real_time_video/request.py), and I've also tried curl with the -P option and the time command. 
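For reference, the model is registered with server-side batching enabled, roughly equivalent to the following management-API call (a sketch; the parameter names are the standard TorchServe management API ones, the `.mar` name and values mirror the config described above):

```python
import requests

# Register the model with batch_size=128 and max_batch_delay=50ms so the frontend
# aggregates requests into a batch before handing them to the handler.
resp = requests.post(
    "http://localhost:8081/models",
    params={
        "url": "text_classifier.mar",
        "batch_size": 128,
        "max_batch_delay": 50,
        "initial_workers": 1,
    },
)
print(resp.status_code, resp.text)
```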
Throughput is orders of magnitude slower than a simple script running inference in a loop.\r\n\r\n```\r\n import requests\r\n from requests_futures.sessions import FuturesSession\r\n from concurrent.futures import as_completed\r\n import json\r\n import time\r\n \r\n api = \"http://localhost:8080/predictions/text_classifier\"\r\n headers = {\"Content-type\": \"application/json\", \"Accept\": \"text/plain\"}\r\n \r\n session = FuturesSession()\r\n\r\n start_time = time.time()\r\n futures = []\r\n for text in texts:\r\n response = session.post(api, data=text)\r\n futures.append(response)\r\n\r\n for response in as_completed(futures):\r\n response = response.result().content.decode(\"utf-8\")\r\n\r\n total_time = int((time.time() - start_time)*1e3)\r\n\r\n print(\"total time in ms:\", total_time)\r\n throughput = len(texts) / total_time *1e3\r\n print(\"throughput:\", throughput)\r\n```\r\n\r\nI'm going to look at gRPC as that is probably a better match for our use case (I think), but I feel I'm doing something wrong, or there's an issue somewhere. In particular, the number of requests per second that the front end is receiving/handling appears to be way lower than I expected - the payload per request is a string of < 128 characters. \r\n\r\n### Error logs\r\n\r\nmodel_log.log looks like\r\n\r\n```\r\n2022-12-02T08:39:55,921 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Backend received inference at: 1669930795\r\n2022-12-02T08:39:55,925 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Received batch of 8 text\r\n2022-12-02T08:39:56,015 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Backend received inference at: 1669930796\r\n2022-12-02T08:39:56,016 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Received batch of 4 text\r\n2022-12-02T08:39:56,079 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Backend received inference at: 1669930796\r\n2022-12-02T08:39:56,084 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Received batch of 5 text\r\n2022-12-02T08:39:56,147 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Backend received inference at: 1669930796\r\n2022-12-02T08:39:56,149 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Received batch of 7 text\r\n2022-12-02T08:39:56,214 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Backend received inference at: 1669930796\r\n2022-12-02T08:39:56,215 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Received batch of 3 text\r\n2022-12-02T08:39:56,279 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Backend received inference at: 1669930796\r\n2022-12-02T08:39:56,281 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Received batch of 6 text\r\n2022-12-02T08:39:56,345 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Backend received inference at: 1669930796\r\n2022-12-02T08:39:56,346 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Received batch of 5 text\r\n2022-12-02T08:39:56,407 [INFO ] W-9000-text_classifier_1.0.0-stdout MODEL_LOG - Backend received infe", "url": "https://github.com/pytorch/serve/issues/2019", "state": "closed", "labels": [ "question" ], "created_at": "2022-12-01T21:59:57Z", "updated_at": "2022-12-02T22:15:06Z", "user": "david-waterworth" }, { "repo": "pytorch/TensorRT", "number": 1509, "title": "\u2753 [Question] What does `is_aten` argument do in torch_tensorrt.fx.compile() ?", "body": "## \u2753 Question\r\n\r\nThe docstring for `is_aten` argument in torch_tensorrt.fx.compile() is missing and hence the users don't know what it 
does.", "url": "https://github.com/pytorch/TensorRT/issues/1509", "state": "closed", "labels": [ "question" ], "created_at": "2022-12-01T14:38:12Z", "updated_at": "2022-12-02T12:50:56Z", "user": "1559588143" }, { "repo": "pytorch/TensorRT", "number": 1508, "title": "\u2753 [Question] How to save and load compiled model from torch-tensorrt", "body": "I am working on a Jetson Xavier NX16 and using torch-tensorrt.compile(model, \"default\", input, enable_optimization) every time I restart my program seems like it is just doing the same tedious task over and over.\r\nIs there not a way for torch-tensorrt to load the serialized engine created by torch_tensorrt.convert_method_to_trt_engine or is it only on the CXX backend API?\r\nHow would I go about saving and loading a compiled model?\r\n\r\nOkay, so I used print(dir(comp_model)) and saw that the model had a save function. I tried using it and figured out after a friendly pop up, that I should load it with torch.jit.load and it works.\r\nIs this an okay solution or are there some kind of insecurities with it?", "url": "https://github.com/pytorch/TensorRT/issues/1508", "state": "closed", "labels": [ "question" ], "created_at": "2022-12-01T11:20:08Z", "updated_at": "2022-12-16T07:46:59Z", "user": "MartinPedersenpp" }, { "repo": "pytorch/serve", "number": 2016, "title": "Missing mandatory parameter --model-store", "body": "### \ud83d\udcda The doc issue\r\n\r\nI created a config.properties file\r\n\r\n```\r\nmodel_store=\"model_store\"\r\nload_models=all\r\nmodels = {\\\r\n \"tc\": {\\\r\n \"1.0.0\": {\\\r\n \"defaultVersion\": true,\\\r\n \"marName\": \"text_classifier.mar\",\\\r\n \"minWorkers\": 1,\\\r\n \"maxWorkers\": 4,\\\r\n \"batchSize\": 1,\\\r\n \"maxBatchDelay\": 100,\\\r\n \"responseTimeout\": 120\\\r\n }\\\r\n }\\\r\n }\r\n```\r\n\r\nThe [documentation](https://github.com/pytorch/serve/blob/master/docs/configuration.md#command-line-parameters) for `torchserve` states:\r\n\r\n```\r\nCustomize TorchServe behaviour by using the following command line arguments when you call torchserve:\r\n\r\n--model-store Overrides the model_store property in config.properties file\r\n--models Overrides the load_models property in config.properties\r\n```\r\n\r\nThis wording implies to me that --model-store is optional, but running `torchserve --start` (from a folder containing config.properties) results in the error `Missing mandatory parameter --model-store`\r\n\r\nIt seems to me there should only be an error if the model-store location cannot be inferred at all, i.e. it's not passed via `--model-store` or defined in config.properties (it's not clear how `--model-store` can 'override' the value in config.properties if it's mandatory)\r\n\r\n\r\n\r\n\r\n### Suggest a potential alternative/fix\r\n\r\n_No response_", "url": "https://github.com/pytorch/serve/issues/2016", "state": "open", "labels": [ "documentation", "question" ], "created_at": "2022-12-01T01:09:00Z", "updated_at": "2022-12-02T01:42:05Z", "user": "david-waterworth" }, { "repo": "pytorch/functorch", "number": 1071, "title": "Different gradients for HyperNet training", "body": "TLDR: Is there a way to optimize model created by combine_state_for_ensemble using torch.backward()?\r\n\r\nHi, I am using combine_state_for_ensemble for HyperNet training. 
\r\n\r\n```\r\nfmodel, fparams, fbuffers = combine_state_for_ensemble([HyperMLP() for i in range(K)])\r\n[p.requires_grad_() for p in fparams];\r\nweights_and_biases = vmap(fmodel)(fparams, fbuffers, z.expand(self.K,-1,-1)) #in which it parallizes over K\r\n```\r\nAfter I create the `weights_and_biases`, I put them into right shapes `ws_and_bs` and use as parameters of another ensemble.\r\n\r\n```\r\nfmodel, fparams, fbuffers = combine_state_for_ensemble([SimpleMLP() for i in range(K)]) \r\noutputs = vmap(fmodel)(ws_and_bs, fbuffers, inputs)\r\n```\r\n\r\nThis approach generates exactly the same outputs if I use loops instead of vmap. However, (somehow) their gradients are different. \r\n\r\n```\r\nloss = compute_loss(outputs)\r\nloss.backward()\r\n```\r\nDo you have any idea why?\r\n\r\nUpdate: It seems like ws_and_bs does not holding any gradient even though it is requires_grad. \r\n\r\n**Update2: It seems like I can forward by using stateless model with my generated weights but I cannot backprop from them using loss.backward(). Is there any trick that I can use?**", "url": "https://github.com/pytorch/functorch/issues/1071", "state": "open", "labels": [], "created_at": "2022-11-30T21:37:05Z", "updated_at": "2022-12-03T13:03:44Z", "comments": 2, "user": "bkoyuncu" }, { "repo": "pytorch/functorch", "number": 1070, "title": "Applying grad elementwise to tensors of arbitrary shape", "body": "What is the easiest way to apply the grad of a function elementwise to a tensor of arbitrary shape? For example\r\n\r\n```python\r\nimport torch\r\nfrom functorch import grad, vmap\r\n\r\n# These functions can be called with tensor of any shape and will be applied elementwise\r\nsin = torch.sin\r\ncos = torch.cos\r\n\r\n# Create cos function by using grad\r\ncos_from_grad = grad(sin)\r\n\r\nx = torch.rand([4, 2])\r\n\r\n# This is fine\r\nout = sin(x)\r\nout = cos(x)\r\n\r\n# This throws error\r\n# Expected f(*args) to return a scalar Tensor, got tensor with 2 dims\r\nout = cos_from_grad(x)\r\n```\r\n\r\nNow in this specific case, where we have a tensor of shape `(4, 2)`, we can use `vmap` twice\r\n\r\n```python\r\ncos_from_grad = vmap(vmap(grad(sin)))\r\n\r\n# This now works\r\nout = cos_from_grad(x)\r\n```\r\n\r\nHowever, if I later need to call `cos_from_grad` on a tensor of shape `(4, 2, 3)` for example, then the above code will no longer work as I would need to add an extra `vmap`. Is there a way to use `grad` to create a `cos` function that is equivalent to `torch.cos` in the sense that it can be applied elementwise to tensors of arbitrary shape?\r\n\r\nThank you! ", "url": "https://github.com/pytorch/functorch/issues/1070", "state": "closed", "labels": [], "created_at": "2022-11-29T14:45:57Z", "updated_at": "2022-11-29T16:59:34Z", "comments": 4, "user": "EmilienDupont" }, { "repo": "pytorch/serve", "number": 2010, "title": "How to assign one or more specific gpus to each model when deploying multiple models at once.", "body": "How to assign one or more specific gpus to each model when deploying multiple models at once. If I have two models and three gpus, the workers of the first model I only want to deploy on gpus 0 and 1, and the workers of the second model I only want to deploy on gpus 3. 
Instead of assigning gpus to each model sequentially.", "url": "https://github.com/pytorch/serve/issues/2010", "state": "closed", "labels": [ "question", "gpu" ], "created_at": "2022-11-29T11:26:10Z", "updated_at": "2023-12-17T22:56:55Z", "user": "Git-TengSun" }, { "repo": "pytorch/vision", "number": 6985, "title": "Range compatibility for pytorch dependency", "body": "### \ud83d\ude80 The feature\n\nCurrently `torchvision` only ever supports a hard-pinned version of `torch`. f.e. `torchvision==0.13.0` requires`torch==1.12.0` and `torchvision==0.13.1` requires `torch==1.12.1`. It would be easier for users if torchvision wouldn't put exact restrictions on the `torch` version.\n\n### Motivation, pitch\n\nHello \ud83d\udc4b Thank you for your continued support of `torchvison` we use it frequently and it works great!\r\n\r\nIn the project I maintain we manage our `torch` version regularly and therefore usually upgrade quickly to a new version when it comes out. However, due to the hard-pinning of `torchvision` we are often waiting for `torchvision` to release a new version before we can use bugfixes in `torch` (or exciting new features).\r\n\r\nThis raises a few questions:\r\n\r\n* Is it important for `torchvision` to always hard-pin a version? \r\n* Are the upgrades of `torch` version in `torchvision` truly backwards incompatible?\r\n* Could `torchvision` support a range of `torch` versions? (like `torchmetrics` does)\r\n\r\nAdding a max range for the `torch` requirement would allow users to upgrade to a new version of torch automatically when it comes out.\r\n\r\n**Examples**\r\n\r\n`torchvision==0.13.0` could have depended on `torch<1.13` to include all bugfix releases of `torch==0.13.*`.\r\n`torchvision==0.13.1` could have depended on `torch<1.13` to include all bugfix releases of `torch==0.13.*`.\r\n\r\nA minimum version may also be appropriate when `torch` adds new APIs that `torchvision` wants to consume.\r\n\r\nYour thoughts there would be greatly appreciated! Thank you for your work \ud83d\ude47\u200d\u2642\ufe0f \n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/6985", "state": "closed", "labels": [ "question", "topic: binaries" ], "created_at": "2022-11-28T15:01:17Z", "updated_at": "2022-12-08T15:00:36Z", "user": "alexandervaneck" }, { "repo": "pytorch/TensorRT", "number": 1484, "title": "Building on Jetson Xavier NX16GB with Jetpack4.6 (TensorRT8.0.1) python3.9, pytorch1.13", "body": "I am trying to build the torch_tensorrt wheel on my Jetson Xavier NX16GB running Jetpack4.6 which means I run TensorRT8-0-1 with python3.9.15 and a on device compiled pytorch/torchlib 1.13.0. 
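For reference, this is how I'm sanity-checking the toolchain on the device before building (a quick sketch):

```python
import torch
import tensorrt

print(torch.__version__, torch.version.cuda)   # expecting 1.13.0 built against CUDA 10.2 (JetPack 4.6)
print(tensorrt.__version__)                    # expecting 8.0.1
print(torch.cuda.is_available(), torch.cuda.get_device_name(0))
```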
\r\nI just can't seem to get it to compile succesfully.\r\n\r\nI have tried both v1.1.0 until I realized that it was not really backwards compatible with TensorRT 8.0.1, I then downgraded to v1.1.0 and tried using the workspace file from the toolchains/jp_workspaces:\r\n```\r\nworkspace(name = \"Torch-TensorRT\")\r\n\r\nload(\"@bazel_tools//tools/build_defs/repo:git.bzl\", \"git_repository\")\r\nload(\"@bazel_tools//tools/build_defs/repo:http.bzl\", \"http_archive\")\r\n\r\nhttp_archive(\r\n name = \"rules_python\",\r\n sha256 = \"778197e26c5fbeb07ac2a2c5ae405b30f6cb7ad1f5510ea6fdac03bded96cc6f\",\r\n url = \"https://github.com/bazelbuild/rules_python/releases/download/0.2.0/rules_python-0.2.0.tar.gz\",\r\n)\r\n\r\nload(\"@rules_python//python:pip.bzl\", \"pip_install\")\r\n\r\nhttp_archive(\r\n name = \"rules_pkg\",\r\n sha256 = \"038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d\",\r\n urls = [\r\n \"https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz\",\r\n \"https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz\",\r\n ],\r\n)\r\n\r\nload(\"@rules_pkg//:deps.bzl\", \"rules_pkg_dependencies\")\r\n\r\nrules_pkg_dependencies()\r\n\r\ngit_repository(\r\n name = \"googletest\",\r\n commit = \"703bd9caab50b139428cea1aaff9974ebee5742e\",\r\n remote = \"https://github.com/google/googletest\",\r\n shallow_since = \"1570114335 -0400\",\r\n)\r\n\r\n# External dependency for torch_tensorrt if you already have precompiled binaries.\r\nlocal_repository(\r\n name = \"torch_tensorrt\",\r\n path = \"/opt/conda/lib/python3.8/site-packages/torch_tensorrt\",\r\n)\r\n\r\n# CUDA should be installed on the system locally\r\nnew_local_repository(\r\n name = \"cuda\",\r\n build_file = \"@//third_party/cuda:BUILD\",\r\n path = \"/usr/local/cuda-10.2/\",\r\n)\r\n\r\nnew_local_repository(\r\n name = \"cublas\",\r\n build_file = \"@//third_party/cublas:BUILD\",\r\n path = \"/usr\",\r\n)\r\n\r\n####################################################################################\r\n# Locally installed dependencies (use in cases of custom dependencies or aarch64)\r\n####################################################################################\r\n\r\n# NOTE: In the case you are using just the pre-cxx11-abi path or just the cxx11 abi path\r\n# with your local libtorch, just point deps at the same path to satisfy bazel.\r\n\r\n# NOTE: NVIDIA's aarch64 PyTorch (python) wheel file uses the CXX11 ABI unlike PyTorch's standard\r\n# x86_64 python distribution. 
If using NVIDIA's version just point to the root of the package\r\n# for both versions here and do not use --config=pre-cxx11-abi\r\n\r\nnew_local_repository(\r\n name = \"libtorch\",\r\n path = \"/home/user/pytorch/torch\",\r\n build_file = \"third_party/libtorch/BUILD\"\r\n)\r\n\r\n# NOTE: Unused on aarch64-jetson with NVIDIA provided PyTorch distribu\u2020ion\r\nnew_local_repository(\r\n name = \"libtorch_pre_cxx11_abi\",\r\n path = \"/home/user/pytorch/torch\",\r\n build_file = \"third_party/libtorch/BUILD\"\r\n)\r\n\r\nnew_local_repository(\r\n name = \"cudnn\",\r\n path = \"/usr/\",\r\n build_file = \"@//third_party/cudnn/local:BUILD\"\r\n)\r\n\r\nnew_local_repository(\r\n name = \"tensorrt\",\r\n path = \"/usr/\",\r\n build_file = \"@//third_party/tensorrt/local:BUILD\"\r\n)\r\n\r\n#########################################################################\r\n# Development Dependencies (optional - comment out on aarch64)\r\n#########################################################################\r\n\r\npip_install(\r\n name = \"devtools_deps\",\r\n requirements = \"//:requirements-dev.txt\",\r\n)\r\n```\r\nWith the setup.py from v1.0.0\r\n```\r\nimport os\r\nimport sys\r\nimport glob\r\nimport setuptools\r\nfrom setuptools import setup, Extension, find_packages\r\nfrom setuptools.command.build_ext import build_ext\r\nfrom setuptools.command.develop import develop\r\nfrom setuptools.command.install import install\r\nfrom distutils.cmd import Command\r\nfrom wheel.bdist_wheel import bdist_wheel\r\n\r\nfrom torch.utils import cpp_extension\r\nfrom shutil import copyfile, rmtree\r\n\r\nimport subprocess\r\nimport platform\r\nimport warnings\r\n\r\ndir_path = os.path.dirname(os.path.realpath(__file__))\r\n\r\nCXX11_ABI = False\r\n\r\nJETPACK_VERSION = None\r\n\r\n__version__ = '1.0.0'\r\n\r\n\r\ndef get_git_revision_short_hash() -> str:\r\n return subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD']).decode('ascii').strip()\r\n\r\n\r\nif \"--release\" not in sys.argv:\r\n __version__ = __version__ + \"+\" + get_git_revision_short_hash()\r\nelse:\r\n sys.argv.remove(\"--release\")\r\n\r\nif \"--use-cxx11-abi\" in sys.argv:\r\n sys.argv.remove(\"--use-cxx11-abi\")\r\n CXX11_ABI = True\r\n\r\nif platform.uname().processor == \"aarch64\":\r\n if \"--jetpack-version\" in sys.argv:\r\n version_idx = sys.argv.index(\"--jetpack-version\") + 1\r\n version = sys.argv[version_idx]\r\n sys.argv.remove(version)\r\n sys.argv.remove(\"--j", "url": "https://github.com/pytorch/TensorRT/issues/1484", "state": "closed", "labels": [ "question", "channel: linux-jetpack" ], "created_at": "2022-11-28T13:08:28Z", "updated_at": "2022-12-01T11:05:36Z", "user": "MartinPedersenpp" }, { "repo": "pytorch/examples", "number": 1097, "title": "argument -a/--arch: invalid choice: 'efficientnet_b0'", "body": "Error reported: \r\n\r\nmain.py: error: argument -a/--arch: invalid choice: 'efficientnet_b0' (choose from 'alexnet', 'densenet121', 'densenet161', 'densenet169', 'densenet201', 'googlenet', 'inception_v3', 'mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mnasnet1_3', 'mobilenet_v2', 'resnet101', 'resnet152', 'resnet18', 'resnet34', 'resnet50', 'resnext101_32x8d', 'resnext50_32x4d', 'shufflenet_v2_x0_5', 'shufflenet_v2_x1_0', 'shufflenet_v2_x1_5', 'shufflenet_v2_x2_0', 'squeezenet1_0', 'squeezenet1_1', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn', 'vgg19', 'vgg19_bn', 'wide_resnet101_2', 'wide_resnet50_2')\r\n\r\nHowever, the model `efficientnet_bx` is listed in **README** files, is there any 
changes in recent commits?\r\n", "url": "https://github.com/pytorch/examples/issues/1097", "state": "closed", "labels": [], "created_at": "2022-11-28T10:36:54Z", "updated_at": "2022-11-28T10:45:53Z", "comments": 1, "user": "Deeeerek" }, { "repo": "pytorch/functorch", "number": 1066, "title": "Unable to compute derivatives due to calling .item()", "body": "Hello, i am getting the error below whenever i try to compute the jacobian of my network.\r\n\r\n\r\nRuntimeError: vmap: It looks like you're either (1) calling .item() on a Tensor or (2) attempting to use a Tensor in some data-dependent control flow or (3) encountering this error in PyTorch internals. For (1): we don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. For (2): If you're doing some control flow instead, we don't support that yet, please shout over at https://github.com/pytorch/functorch/issues/257 . For (3): please file an issue.\r\n\r\n\r\nthe error can be traced back to the line below. \r\n\r\n`weights = interpolation_weights.prod(-1)`\r\n\r\nIs there a way around this ? \r\n\r\nThank you .\r\n", "url": "https://github.com/pytorch/functorch/issues/1066", "state": "closed", "labels": [], "created_at": "2022-11-27T05:16:57Z", "updated_at": "2022-11-29T10:53:43Z", "comments": 3, "user": "elientumba2019" }, { "repo": "pytorch/examples", "number": 1096, "title": "DDP training question", "body": "Hi, I'm using the tutorial [https://github.com/pytorch/tutorials/blob/master/intermediate_source/ddp_tutorial.rst](url) for DDP train,using 4 gpus in myself code, reference Basic Use Case. But when I finished the modification, it was stuck during run the demo,meanwhile,video memory has been occupied.Could you help me?", "url": "https://github.com/pytorch/examples/issues/1096", "state": "open", "labels": [ "help wanted", "distributed" ], "created_at": "2022-11-25T06:58:55Z", "updated_at": "2023-08-24T06:32:13Z", "comments": 2, "user": "Henryplay" }, { "repo": "pytorch/android-demo-app", "number": 278, "title": "How to change portrait to landscape on camera view in Object Detection App ?", "body": "", "url": "https://github.com/pytorch/android-demo-app/issues/278", "state": "open", "labels": [], "created_at": "2022-11-24T05:00:25Z", "updated_at": "2022-12-15T09:07:27Z", "user": "aravinthk00" }, { "repo": "pytorch/tutorials", "number": 2126, "title": "Incorrect use of \"epoch\" in the Optimizing Model Parameters tutorial", "body": "From the first paragraph of the [Optimizing Model Parameters](https://github.com/pytorch/tutorials/blob/master/beginner_source/basics/optimization_tutorial.py) tutorial:\r\n\r\n> in each iteration (called an epoch) the model makes a guess about the output, calculates the error in its guess (loss), collects the derivatives of the error with respect to its parameters (as we saw in the [previous section](https://pytorch.org/tutorials/beginner/basics/autograd_tutorial.html)), and optimizes these parameters using gradient descent.\r\n\r\nWhat is described in this paragraph is a single optimization step. An epoch is a full pass over the dataset (see e.g. 
https://deepai.org/machine-learning-glossary-and-terms/epoch).\r\n\r\nI propose to simply remove the \"(called an epoch)\" here, as the term is correctly used and explained later in the \"Hyperparameters\" section:\r\n\r\n> Number of Epochs - the number times to iterate over the dataset\r\n\r\n\n\ncc @suraj813", "url": "https://github.com/pytorch/tutorials/issues/2126", "state": "closed", "labels": [ "intro" ], "created_at": "2022-11-22T10:23:34Z", "updated_at": "2022-11-28T21:42:30Z", "comments": 1, "user": "chrsigg" }, { "repo": "pytorch/TensorRT", "number": 1467, "title": "\u2753 [Question] Profiling examples?", "body": "## \u2753 Question\r\n\r\nWhen I'm not using TensorRT, I run my model through an FX interpreter that times each call op (by inserting CUDA events before/after and measuring the elapsed time). I'd like to do something similar after converting/compiling the model to TensorRT, and I see there is some profiling built in with [tensorrt.Proflier](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/Profiler.html) but its usage isn't clear to me. \r\n\r\nIs there an example anywhere on how to time each layer or op with this profiler, or any other means of profiling the TensorRT engine/layers? I don't mind messing with the op converters to do so, but I don't want to have to wrap every op converter my model uses. More generally I think I could use the PyTorch profiler but it would be difficult to parse the output to get clear per-layer/per-op results. ", "url": "https://github.com/pytorch/TensorRT/issues/1467", "state": "closed", "labels": [ "question", "No Activity", "component: runtime" ], "created_at": "2022-11-21T21:13:28Z", "updated_at": "2023-05-04T00:02:17Z", "user": "collinmccarthy" }, { "repo": "pytorch/tutorials", "number": 2122, "title": "using nn.Module(X).argmax(1) - get IndexError", "body": "Hello there, I'm student of NN course, I'm try to implement FFNN (or TDNN) to work on prediction of AR(2)-model, im using PyTorch example, and on my data and NN architecture i got pred.argmax(1) - error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/b0r1ngx/PycharmProjects/ArtificialNeuroNets/group_00201/lab01/lab01_pytorch.py\", line 116, in <module>\r\n first_method()\r\n File \"/home/b0r1ngx/PycharmProjects/ArtificialNeuroNets/group_00201/lab01/lab01_pytorch.py\", line 87, in first_method\r\n test(test_data, time_delay_nn, loss_function)\r\n File \"/home/b0r1ngx/PycharmProjects/ArtificialNeuroNets/group_00201/lab01/lab01_pytorch.py\", line 70, in test\r\n c1 = pred.argmax(1) == y_pred\r\nIndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)\r\n```\r\nwhere it's used in you're examples: \r\nhere in test_loop function - https://github.com/pytorch/tutorials/blob/master/beginner_source/basics/optimization_tutorial.py\r\n\r\nI'm also doesn't think that i get best `Hyperparameter`s / loss_function / optimizer - cos i get bad Accuracy / Avg loss in my case, \r\nplease help me with that:\r\nU can check my code here:\r\n(now im using how its recommended -1 or 0, but there is always 0)\r\nhttps://github.com/b0r1ngx/ArtificialNeuroNets/blob/master/group_00201/lab01/lab01_pytorch.py\r\n\r\nThanks!\n\ncc @jerryzh168 @z-a-f @vkuzo", "url": "https://github.com/pytorch/tutorials/issues/2122", "state": "open", "labels": [ "question", "arch-optimization" ], "created_at": "2022-11-19T16:05:26Z", "updated_at": "2023-03-01T16:22:33Z", "user": "b0r1ngx" }, { "repo": "pytorch/pytorch", "number": 89136, "title": "[FSDP] Adam Gives 
Different Results Where Only Difference Is Flattening", "body": "Consider the following unit test (that relies on some imports from `common_fsdp.py`):\r\n```\r\ndef test(self):\r\n local_model = TransformerWithSharedParams.init(\r\n self.process_group,\r\n FSDPInitMode.NO_FSDP,\r\n CUDAInitMode.CUDA_BEFORE,\r\n deterministic=True,\r\n )\r\n fsdp_model = FSDP(\r\n copy.deepcopy(local_model),\r\n sharding_strategy=ShardingStrategy.NO_SHARD,\r\n )\r\n ddp_model = DDP(local_model, device_ids=[self.rank])\r\n ddp_optim = torch.optim.Adam(ddp_model.parameters(), lr=1e-2)\r\n fsdp_optim = torch.optim.Adam(fsdp_model.parameters(), lr=1e-2)\r\n max_norm = 1\r\n norm_type = 1\r\n device = torch.device(\"cuda\")\r\n for i in range(10):\r\n ddp_optim.zero_grad(set_to_none=True)\r\n fsdp_optim.zero_grad(set_to_none=True)\r\n inp = ddp_model.module.get_input(device)\r\n for model in (ddp_model, fsdp_model):\r\n out = model(*inp)\r\n loss = nn.functional.cross_entropy(\r\n out.view(-1, out.size(-1)), inp[1].view(-1), reduction=\"sum\"\r\n )\r\n loss.backward()\r\n ddp_total_norm = torch.nn.utils.clip_grad_norm_(\r\n ddp_model.parameters(),\r\n max_norm=max_norm,\r\n norm_type=norm_type,\r\n )\r\n fsdp_total_norm = torch.nn.utils.clip_grad_norm_(\r\n fsdp_model.parameters(),\r\n max_norm=max_norm,\r\n norm_type=norm_type,\r\n )\r\n self.assertEqual(ddp_total_norm, fsdp_total_norm)\r\n ddp_flat_grad = torch.cat(tuple(p.grad.flatten() for p in ddp_model.parameters()))\r\n fsdp_flat_grad = torch.cat(tuple(p.grad.flatten() for p in fsdp_model.parameters()))\r\n self.assertEqual(ddp_flat_grad, fsdp_flat_grad)\r\n ddp_flat_param = torch.cat(tuple(p.flatten() for p in ddp_model.parameters()))\r\n fsdp_flat_param = torch.cat(tuple(p.flatten() for p in fsdp_model.parameters()))\r\n self.assertEqual(ddp_flat_param, fsdp_flat_param)\r\n ddp_optim.step()\r\n fsdp_optim.step()\r\n ddp_flat_param = torch.cat(tuple(p.flatten() for p in ddp_model.parameters()))\r\n fsdp_flat_param = torch.cat(tuple(p.flatten() for p in fsdp_model.parameters()))\r\n self.assertEqual(ddp_flat_param, fsdp_flat_param)\r\n```\r\n\r\nOn the `i == 3` iteration, the assertion `self.assertEqual(ddp_flat_param, fsdp_flat_param)` *after* the optimizer steps fails.\r\n```\r\nMismatched elements: 2 / 8427 (0.0%)\r\nGreatest absolute difference: 1.0077477327286033e-05 at index (6610,) (up to 1e-05 allowed)\r\nGreatest relative difference: 8.842818419533154 at index (6610,) (up to 1.3e-06 allowed)\r\n```\r\n\r\nThe unit test initializes a model (`TransformerWithSharedParams`) and constructs `DDP` and `FSDP` (`NO_SHARD`) instances, which should be semantically equivalent. The _only_ relevant difference should be that FSDP has flattened all parameters into one `FlatParameter`.\r\n\r\nWe run a training loop that includes `torch.nn.utils.clip_grad_norm_(max_norm=1, norm_type=1)` and uses Adam optimizer. We have 3 checks: (1) gradient elements match after backward and clipping, (2) parameter elements match immediately before optimizer step, and (3) parameter elements match immediately after optimizer step.\r\n\r\nSince (1) and (2) pass but (3) does not (on the `i == 3` iteration), this suggests that the optimizer step is not producing the same results. As discussed above, the only difference is that the `fsdp_model` parameters are \"bucketed\" into a `FlatParameter` (1D containing all the same elements), while the `ddp_model` parameters preserve the original shapes.\r\n\r\nA couple of notes:\r\n- [!!] 
The mismatch does not happen if we pass `use_orig_params=True` to the FSDP constructor. This is a key observation. For `use_orig_params=True`, the optimizer operates on the parameters with their original shapes, just like DDP. This suggests that operating on the flattened parameter is indeed the cause for the difference.\r\n- The mismatch does not happen when using `SGD` instead of `Adam`.\r\n- The mismatch does not happen if we remove the `torch.nn.utils.clip_grad_norm_()`. However, since we have check (1), this should rule out that `clip_grad_norm_()` is producing mismatching results. Rather, we may be relying on `clip_grad_norm_()` to have the gradients be at a sufficiently small magnitude.\r\n- The mismatch also does happen when using `loss = out.sum()` instead of the `cross_entropy` computation.\r\n\r\nIt requires some nontrivial effort to simplify this repro to be equivalent but not rely on DDP, FSDP, or the FSDP utils from `common_fsdp.py`. I will hold off on that for now.\r\n\r\ncc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501", "url": "https://github.com/pytorch/pytorch/issues/89136", "state": "closed", "labels": [ "oncall: distributed", "module: fsdp" ], "created_at": "2022-11-16T15:16:55Z", "updated_at": "2024-06-11T20:01:26Z", "user": "awgu" }, { "repo": "pytorch/TensorRT", "number": 1452, "title": "\ud83d\udc1b [Bug] FX front-end layer norm, missing plugin", "body": "## Bug Description\r\n\r\nI'm using a ConvNeXt model from the timm library which uses `torch.nn.functional.layer_norm`. I'm getting this warning during conversion: \r\n\r\n```\r\nUnable to find layer norm plugin, fall back to TensorRT implementation\r\n```\r\n\r\nwhich is triggered from [this line](https://github.com/pytorch/TensorRT/blob/e3b992941b3ae5f1863de271fc9032829834ec6a/py/torch_tensorrt/fx/converters/acc_ops_converters.py#L717) because it fails to find the `LayerNormDynamic` plugin. \r\n\r\nDo I need to install TensorRT differently from what's described in the README to install this plugin? 
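To see what this install actually registers, I dumped the plugin registry with roughly the following (a sketch):

```python
import tensorrt as trt

# List the plugin creators TensorRT exposes in this environment; a LayerNorm plugin
# would have to appear here for the FX converter to find it.
trt.init_libnvinfer_plugins(trt.Logger(trt.Logger.WARNING), "")
registry = trt.get_plugin_registry()
print(sorted(c.name for c in registry.plugin_creator_list))
```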
Or where should that be installed from?\r\n\r\nI'm following the instructions for using the pre-compiled binaries (install commands shown below).\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.21.1\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n\r\n```\r\nconda install python=3.10\r\npip install nvidia-pyindex\r\npip install nvidia-tensorrt==8.4.3.1\r\npip install torch==1.12.1+cu116 --find-links https://download.pytorch.org/whl/torch/\r\npip install torch-tensorrt==1.2.0 --find-links https://github.com/pytorch/TensorRT/releases/expanded_assets/v1.2.0\r\n```\r\n - Are you using local sources or building from archives: Local CUDA and cuDNN\r\n - Python version: 3.10\r\n - CUDA version: 11.6\r\n - GPU models and configuration: TitanV\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1452", "state": "closed", "labels": [ "question", "No Activity", "component: plugins" ], "created_at": "2022-11-15T19:37:26Z", "updated_at": "2023-06-10T00:02:28Z", "user": "collinmccarthy" }, { "repo": "pytorch/tutorials", "number": 2117, "title": "Stable Diffusion Question", "body": "I am looking to leverage torch.nn.parallel.DistributedDataParallel per the documentation you have written to integrate dual 3090s into a workflow. I am using the automatic repo and after trying multiple things to update the following code to include what you have in the torch wiki, I have been unsuccessful in switching the cuda current device to leverage the model methodology outlined in your documentation and stackoverflow examples. Do you have any recommendations on what I can read or leverage to test further? I know that Meta has been releasing some wonderful tools I have been using to support the Stable Diffusion project so I hope this is in your purview. If it is not, feel free to ignore.\r\n\r\ndef caching_allocator_alloc(size, device: Union[Device, int] = None, stream=None):\r\n r\"\"\"Performs a memory allocation using the CUDA memory allocator.\r\n\r\n Memory is allocated for a given device and a stream, this\r\n function is intended to be used for interoperability with other\r\n frameworks. Allocated memory is released through\r\n :func:`~torch.cuda.caching_allocator_delete`.\r\n\r\n Args:\r\n size (int): number of bytes to be allocated.\r\n device (torch.device or int, optional): selected device. If it is\r\n ``None`` the default CUDA device is used.\r\n stream (torch.cuda.Stream or int, optional): selected stream. If is ``None`` then\r\n the default stream for the selected device is used.\r\n\r\n .. 
note::\r\n See :ref:`cuda-memory-management` for more details about GPU memory\r\n management.\r\n \"\"\"\r\n \r\n if device is None:\r\n device = torch.cuda.current_device()\r\n device = _get_device_index(0)\r\n \r\n if stream is None:\r\n stream = torch.cuda.current_stream(device)\r\n if isinstance(stream, torch.cuda.streams.Stream):\r\n stream = stream.cuda_stream\r\n if not isinstance(stream, int):\r\n raise TypeError('Invalid type for stream argument, must be '\r\n '`torch.cuda.Stream` or `int` representing a pointer '\r\n 'to a exisiting stream')\r\n with torch.cuda.device(device):\r\n return torch._C._cuda_cudaCachingAllocator_raw_alloc(size, stream)\n\ncc @mrshenli @osalpekar @H-Huang @kwen2501", "url": "https://github.com/pytorch/tutorials/issues/2117", "state": "closed", "labels": [ "question", "distributed" ], "created_at": "2022-11-12T06:18:43Z", "updated_at": "2025-05-12T15:33:35Z", "user": "jasonewest" }, { "repo": "pytorch/TensorRT", "number": 1449, "title": "\u2753 [Question] How do you compile for multiple GPU architectures?", "body": "## \u2753 Question\r\n\r\nHow do you compile for multiple GPU architectures? Or do you need to compile one torchscript per architecture?\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1449", "state": "closed", "labels": [ "question", "No Activity", "component: runtime" ], "created_at": "2022-11-11T22:02:07Z", "updated_at": "2023-05-04T00:02:18Z", "user": "dfung" }, { "repo": "pytorch/kineto", "number": 681, "title": "what is happen when I use torch.profiler.profile with activities=[torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA]", "body": "I use profiler with `activities=[torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA]`, and get time 0.22ms in cpu time , avg 5.14us in cuda time, and when I use `time.time()` with `torch.cuda.synchronize()`,the result is 0.24 ms. What is the difference between these results\uff1f\r\nMy code looks like:\r\n```\r\nactivities = [torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA]\r\ntorch.cuda.synchronize()\r\nstart = time.time()\r\nwith torch.profiler.profile(activities= activities, record_shapes=True, profile_memory=False) as pf:\r\n output = model(input)\r\ntorch.cuda.synchronize()\r\nend = time.time()\r\nruntime = end-start()\r\n```", "url": "https://github.com/pytorch/kineto/issues/681", "state": "closed", "labels": [ "question" ], "created_at": "2022-11-10T07:19:18Z", "updated_at": "2023-10-10T15:13:14Z", "user": "qq1243196045" }, { "repo": "pytorch/functorch", "number": 1060, "title": "aten::all not implemented", "body": "When I vmap \"torch.all\" function, I get the following:\n\n/tmp/ipykernel_39088/2496106444.py:7: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::all. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /__w/functorch/functorch/functorch/csrc/BatchedFallback.cpp:85.)\n f = functorch.vmap(torch.all, in_dims=1)\n\nIs it possible to make it available for v1.12?", "url": "https://github.com/pytorch/functorch/issues/1060", "state": "closed", "labels": [ "actionable", "high priority", "small" ], "created_at": "2022-11-09T13:01:55Z", "updated_at": "2023-01-11T05:56:13Z", "comments": 7, "user": "iliTheFallen" }, { "repo": "pytorch/torchx", "number": 648, "title": "Use GPU with `local_docker`", "body": "## \ud83d\udc1b Bug\r\n\r\nCan't use GPU with the `local_docker` scheduler. 
\r\n\r\nModule (check all that applies):\r\n * [ ] `torchx.spec`\r\n * [ ] `torchx.component`\r\n * [ ] `torchx.apps`\r\n * [ ] `torchx.runtime`\r\n * [x] `torchx.cli`\r\n * [x] `torchx.schedulers`\r\n * [ ] `torchx.pipelines`\r\n * [ ] `torchx.aws`\r\n * [ ] `torchx.examples`\r\n * [ ] `other`\r\n\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. create a `test.py` with \r\n\r\n```python\r\nimport torch\r\nprint(\"torch.cuda.is_available():\", torch.cuda.is_available())\r\n```\r\n\r\n2. create a `Dockerfile`\r\n\r\n```Dockerfile\r\nFROM ghcr.io/pytorch/torchx:0.3.0\r\nCOPY test.py test.py\r\n```\r\n\r\n3. run the following commands\r\n\r\n```bash\r\ndocker build -t test:latest .\r\ndocker run --gpus all test:latest python test.py\r\ntorchx run --scheduler local_cwd utils.python --script test.py\r\ntorchx run --scheduler local_docker utils.python --script test.py\r\n```\r\n```\r\nSending build context to Docker daemon 6.144kB\r\nStep 1/2 : FROM ghcr.io/pytorch/torchx:0.3.0\r\n ---> 343f0f3b1a07\r\nStep 2/2 : COPY test.py test.py\r\n ---> Using cache\r\n ---> fa75170948b2\r\nSuccessfully built fa75170948b2\r\nSuccessfully tagged test:latest\r\ntorch.cuda.is_available(): True\r\ntorchx 2022-11-05 13:29:02 INFO loaded configs from /home/costa/Documents/go/src/github.com/vwxyzjn/test/y/torchx_test/.torchxconfig\r\ntorchx 2022-11-05 13:29:02 INFO Log directory not set in scheduler cfg. Creating a temporary log dir that will be deleted on exit. To preserve log directory set the `log_dir` cfg option\r\ntorchx 2022-11-05 13:29:02 INFO Log directory is: /tmp/torchx_6_h698gw\r\nlocal_cwd://torchx/torchx_utils_python-mfc1scwb7dncd\r\ntorchx 2022-11-05 13:29:02 INFO Waiting for the app to finish...\r\npython/0 torch.cuda.is_available(): True\r\ntorchx 2022-11-05 13:29:04 INFO Job finished: SUCCEEDED\r\ntorchx 2022-11-05 13:29:05 WARNING `gpus = all` was declared in the [local_docker] section of the config file but is not a runopt of `local_docker` scheduler. Remove the entry from the config file to no longer see this warning\r\ntorchx 2022-11-05 13:29:05 INFO loaded configs from /home/costa/Documents/go/src/github.com/vwxyzjn/test/y/torchx_test/.torchxconfig\r\ntorchx 2022-11-05 13:29:05 INFO Checking for changes in workspace `file:///home/costa/Documents/go/src/github.com/vwxyzjn/test/y/torchx_test`...\r\ntorchx 2022-11-05 13:29:05 INFO To disable workspaces pass: --workspace=\"\" from CLI or workspace=None programmatically.\r\ntorchx 2022-11-05 13:29:06 INFO Built new image `sha256:32cf796cecfd488d7e0e5ba5069e9218098bed75597b3b402b9c557a796e5f4a` based on original image `ghcr.io/pytorch/torchx:0.3.0` and changes in workspace `file:///home/costa/Documents/go/src/github.com/vwxyzjn/test/y/torchx_test` for role[0]=python.\r\nlocal_docker://torchx/torchx_utils_python-bq7cx57f1c6wr\r\ntorchx 2022-11-05 13:29:06 INFO Waiting for the app to finish...\r\npython/0 torch.cuda.is_available(): False\r\ntorchx 2022-11-05 13:29:07 INFO Job finished: SUCCEEDED\r\n```\r\n\r\n## Expected behavior\r\n\r\nNotice that torch identifies the GPU device when running with `poetry run torchx run --scheduler local_cwd utils.python --script test.py`, but it fails to do so when running with `poetry run torchx run --scheduler local_docker utils.python --script test.py`. Also, when running `docker run --gpus all test:latest python test.py`, GPU is also recognized. 
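For reference, the working `docker run --gpus all` invocation above corresponds to the `device_requests` argument in the Docker Python SDK, so I would expect the container that `local_docker` creates to need an equivalent setting. A minimal sketch (assuming docker-py is installed and the `test:latest` image built above):

```python
import docker

client = docker.from_env()
# equivalent of `docker run --gpus all test:latest python test.py`
logs = client.containers.run(
    "test:latest",
    "python test.py",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(logs.decode())  # expected: torch.cuda.is_available(): True
```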
\r\n\r\n## Environment\r\n\r\n```Collecting environment information...\r\nPyTorch version: 1.13.0+cu117\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.7\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Pop!_OS 21.10 (x86_64)\r\nGCC version: (Ubuntu 11.2.0-7ubuntu2) 11.2.0\r\nClang version: Could not collect\r\nCMake version: version 3.18.4\r\nLibc version: glibc-2.34\r\n\r\nPython version: 3.9.5 (default, Jul 19 2021, 13:27:26) [GCC 10.3.0] (64-bit runtime)\r\nPython platform: Linux-5.17.5-76051705-generic-x86_64-with-glibc2.34\r\nIs CUDA available: True\r\nCUDA runtime version: 11.3.109\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: \r\nGPU 0: NVIDIA GeForce RTX 3060 Ti\r\nGPU 1: NVIDIA GeForce RTX 3060 Ti\r\n\r\nNvidia driver version: 470.103.01\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.2\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] botorch==0.6.0\r\n[pip3] gpytorch==1.9.0\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.23.4\r\n[pip3] pytorch-lightning==1.5.10\r\n[pip3] torch==1.13.0\r\n[pip3] torch-model-archiver==0.6.0\r\n[pip3] torchmetrics==0.10.2\r\n[pip3] torchserve==0.6.0\r\n[pip3] torchtext==0.14.0\r\n[pip3] torchvision==0.14.0\r\n[pip3] torchx==0.3.0\r\n[conda] Could not collect", "url": "https://github.com/meta-pytorch/torchx/issues/648", "state": "closed", "labels": [ "question", "docker" ], "created_at": "2022-11-05T17:31:18Z", "updated_at": "2022-11-13T01:26:30Z", "comments": 2, "user": "vwxyzjn" }, { "repo": "pytorch/data", "number": 884, "title": "Steps per epoch for training ", "body": "### \ud83d\ude80 The feature\n\nFor huge datasets, an epoch may take a very long time to complete and it's good practice to perform evaluation and model checkpointing every N steps instead of at the end of an epoch. The tricky part lies at resuming training: how to tell the data loader to start from where it was left off? It would be great if torchdata could provide such a feature. 
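For context, the only workaround I can think of today is to rebuild the loader and skip the batches that were already consumed, which still loads the skipped data and does not handle shuffling — exactly what a proper resume API would avoid. A minimal sketch of that workaround:

```python
import itertools
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(100).float().unsqueeze(1))
loader = DataLoader(dataset, batch_size=8, shuffle=False)

steps_done = 5  # pretend training was checkpointed after 5 steps

# naive resume: re-create the iterator and throw away the first `steps_done`
# batches; the skipped batches are still read under the hood, and this breaks
# down entirely once shuffle=True
resumed = itertools.islice(iter(loader), steps_done, None)
for step, (batch,) in enumerate(resumed, start=steps_done):
    pass  # training would continue from step 5 here
```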
\r\n\r\nI have no idea how such a feature could be implemented, but from a user perspective, the interface would be best to resemble the common usage:\r\n\r\n- A `dataloader.state_dict()` method that returns necessary information on where the data loading was left off.\r\n- A `dataloader.load_state_dict(saved_state_dict)` method for loading the saved state_dict.\n\n### Motivation, pitch\n\nSee above.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/884", "state": "closed", "labels": [], "created_at": "2022-11-04T16:39:32Z", "updated_at": "2022-11-04T21:00:30Z", "comments": 6, "user": "netw0rkf10w" }, { "repo": "pytorch/xla", "number": 4157, "title": "How to wrap a model with dynamo", "body": "## \u2753 Questions and Help\r\n\r\nI am trying to add a dynamo model test with\r\n```\r\nimport torch\r\nimport torch_xla\r\nimport torch_xla.core.xla_model as xm\r\nimport torch_xla.utils.utils as xu\r\nimport torch_xla.debug.metrics as met\r\nimport torch._dynamo as dynamo\r\nimport torchvision\r\nimport unittest\r\n\r\nclass DynamoBasicTest(unittest.TestCase):\r\n\r\n @dynamo.optimize('torchxla_trace_once')\r\n def resetnet_18_dynamo(self, data):\r\n model = torchvision.models.resnet18()\r\n #model.eval()\r\n return model(data)\r\n\r\n def test_resnet18(self):\r\n batch_size = xu.getenv_as('BATCH_SIZE', int, defval=4)\r\n sample_count = xu.getenv_as('SAMPLE_COUNT', int, defval=10)\r\n loader = xu.SampleGenerator(\r\n data=(torch.zeros(batch_size, 3, 224,\r\n 224), torch.zeros(batch_size, dtype=torch.int64)),\r\n sample_count=sample_count) \r\n for data, _ in loader:\r\n import pdb; pdb.set_trace()\r\n output = self.resetnet_18_dynamo(data)\r\n```\r\n\r\nI get an error\r\n```\r\nTraceback (most recent call last):\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/optimizations/backends.py\", line 53, in inner\r\n return fn(model, **kwargs)\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/optimizations/backends.py\", line 823, in torchxla_trace_once\r\n return integration.extract_compiled_graph(model, example_inputs)\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/optimizations/torchxla_integration.py\", line 79, in extract_compiled_graph\r\n orig_device = example_inputs[0].device\r\nIndexError: list index out of range\r\n```\r\n\r\nand I saw that `example_inputs` is empty. If I try with a simple example\r\n```\r\n def fn_simple(self, x, y):\r\n a = torch.cos(x)\r\n b = torch.sin(y)\r\n return a + b\r\n\r\n @dynamo.optimize('torchxla_trace_once')\r\n def fn_simple_dynamo(self, x, y):\r\n return self.fn_simple(x, y)\r\n```\r\n\r\nit worked as expected. 
I am wondering what did I missed here.\r\n\r\n@shunting314 @wconstab \r\n", "url": "https://github.com/pytorch/xla/issues/4157", "state": "closed", "labels": [ "dynamo" ], "created_at": "2022-11-04T02:19:01Z", "updated_at": "2022-11-04T22:55:13Z", "user": "JackCaoG" }, { "repo": "pytorch/TensorRT", "number": 1437, "title": "\u2753 [Question] Are the interpolate plugins with align_corners=True still necessary?", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\nhttps://github.com/pytorch/TensorRT/blob/master/core/conversion/converters/impl/interpolate.cpp#L566\r\nThis note is in the aten::upsample_bilinear2d converter:\r\n`Align corners and scale factor behave slightly different together in TRT and PyTorch so run the layer in ATen to maintain consistency between Torch-TensorRT and PyTorch https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.interpolate`\r\n\r\nWith TRT 8.2.3 when I manually disable the plugin implementation and use the TRT resize_layer implementation I don't see any additional inaccuracy in my model. I also don't see any failures in the interpolate unit tests.\r\n\r\nAre these plugins still necessary? What was the nature of the discrepancy with align corners?\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1437", "state": "closed", "labels": [ "question" ], "created_at": "2022-11-02T18:26:01Z", "updated_at": "2022-12-15T17:59:25Z", "user": "mfeliz-cruise" }, { "repo": "pytorch/functorch", "number": 1058, "title": "Cuda Memory Overflow in Jacobian Computation", "body": "Hi,\r\n\r\nI implemented a Jacobian computation using functorch, but encoutnered a memory overflow issue.\r\n\r\nThe function that I want to differentiate is `ResidualFunctional.residual`. I'd like to compute the Jacobian of this function w.r.t. its first argument `inputs`.\r\n\r\nThe output of `ResidualFunctional.residual` is a tensor of size (10000, ) and `inputs` is a tensor of size (1001, ). Thus, the Jacobian is 10000 by 1001, which takes about 74 MB using double precision.\r\n\r\nHowever, `functorch.jacrev` had a memory overflow error on a 24 GB GPU. The error message is shown below. I am wondering why FuncTorch takes so much memory in the reverse mode autodiff, and if there is a solution to this issue.\r\n```\r\ntorch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 38.00 GiB (GPU 0; 23.69 GiB total capacity; 810.80 MiB already allocated; 21.25 GiB free; 824.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n```\r\n\r\nBelow is a working example that reproduce this issue.\r\n\r\nCUDA 11.4\r\nFuncTorch 1.13.0\r\nPyTorch 1.13.0\r\nGPyTorch 1.9.0\r\n\r\nThanks!\r\n\r\n```\r\nimport torch\r\nimport gpytorch\r\n\r\nimport functorch\r\nfrom functorch import make_functional_with_buffers\r\n\r\n\r\nclass ResidualFunctional():\r\n def __init__(self,\r\n kernel, m, d,\r\n outputscale=None, sigma=None,\r\n lengthscale_penalty=None\r\n ):\r\n self.func, _, self.buffers = make_functional_with_buffers(kernel)\r\n self.m = m\r\n self.d = d\r\n\r\n self.outputscale = outputscale\r\n self.sigma = sigma\r\n\r\n def _residual(self, u, x, y, params, sigma):\r\n with gpytorch.settings.trace_mode(), gpytorch.settings.lazily_evaluate_kernels(False):\r\n m = u.size(0)\r\n\r\n func_nl = lambda params, buffers, x1, x2: self.func(params, buffers, x1, x2).evaluate()\r\n\r\n Kxu = func_nl(params, self.buffers, x, u)\r\n A = torch.cat(\r\n [Kxu, sigma * torch.eye(m, device=u.device)],\r\n dim=-2,\r\n )\r\n ybar = torch.cat([y, y.new_zeros(m)], dim=-1)\r\n c = torch.linalg.lstsq(A, ybar.unsqueeze(-1), rcond=None).solution.squeeze()\r\n r = ybar - A @ c\r\n return r\r\n\r\n def residual(self, inputs, x, y):\r\n u = inputs[:self.m * self.d].view(self.m, self.d)\r\n\r\n lengthscale = torch.nn.functional.softplus(inputs[-1])\r\n\r\n return self._residual(u, x, y, (lengthscale, self.outputscale), self.sigma)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n device = \"cuda:0\"\r\n\r\n n = 10000\r\n d = 10\r\n m = 100\r\n\r\n u = torch.randn(m, d, device=device)\r\n x = torch.randn(n, d, device=device)\r\n y = torch.randn(n, device=device)\r\n\r\n kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())\r\n kernel = kernel.to(device)\r\n\r\n functional = ResidualFunctional(\r\n kernel, m=m, d=d,\r\n outputscale=kernel.outputscale, sigma=1e-2,\r\n )\r\n\r\n inputs = torch.cat(\r\n (u.view(-1), kernel.base_kernel.raw_lengthscale.view(-1)),\r\n dim=-1\r\n )\r\n residual = functional.residual(inputs, x, y)\r\n print(residual.shape)\r\n\r\n jacobian = functorch.jacrev(functional.residual, argnums=0)(inputs, x, y)\r\n print(jacobian.shape)\r\n```", "url": "https://github.com/pytorch/functorch/issues/1058", "state": "open", "labels": [], "created_at": "2022-10-31T22:29:07Z", "updated_at": "2022-11-08T14:53:53Z", "comments": 6, "user": "kayween" }, { "repo": "pytorch/pytorch", "number": 88073, "title": "How to export pytorch model to onnx, with input of List[Tuple[Tensor,Tensor]] and output of List[Tuple[Tensor,Tensor]]", "body": "I have no idea how to export this model to onnx. One of the inputs for this model accepts a list of uncertain tuple, each of which contains 2 tensor with size of (2, 1024). This model also returns a list of tuple of two tensors(2, 1024).\r\n\r\nHow can I export it? 
I've already searched in pytorch community, but most of the issues have no replies.\r\n\r\n## Code example\r\n\r\nstate[in] is a list, and state[out] is also a list.\r\n\r\nmodel definition\r\n```python\r\nclass Module(nn.Module):\r\n ...\r\n def forward(self, enc_out, enc_mask, tgt_seq,\r\n state: Optional[List[Tuple[torch.Tensor, torch.Tensor]]] = None,\r\n bias_embedding: Optional[torch.Tensor] = None):\r\n if state is not None:\r\n hid = list()\r\n cell = list()\r\n for h, c in state:\r\n hid.append(h)\r\n cell.append(c)\r\n state_in = (torch.stack(hid, dim=1), torch.stack(cell, dim=1))\r\n else:\r\n state_in = None\r\n logit, attn, state_out = self.decoder(tgt_seq, enc_out, enc_mask, state_in,\r\n bias_embedding=bias_embedding)\r\n hid, cell = state_out\r\n state = [(hid[:, j, :], cell[:, j, :]) for j in range(logit.size(0))]\r\n logit = logit[:, -1, :].squeeze(1)\r\n return torch.log_softmax(logit, -1), attn, state\r\n```\r\n\r\nmodel export\r\n```python\r\n input_names = [\"enc_out\", \"enc_mask\", \"tgt_seq\", \"cache_state\", \"bias_embedding\"]\r\n output_names = [\"dec_out\", \"attn\", \"state\"]\r\n dynamic_axes = {\r\n \"enc_out\": {0: \"batch_size\", 1: \"enc_out_len\"},\r\n \"enc_mask\": {0: \"batch_size\", 1: \"enc_out_len\"},\r\n \"tgt_seq\": {0: \"batch_size\"},\r\n \"dec_out\": {0: \"batch_size\"},\r\n \"attn\": {0: \"batch_size\", 3: \"enc_out_len\"},\r\n \"state\": {1: \"batch_size\"},\r\n }\r\n torch.onnx.export(\r\n model,\r\n (enc_out, enc_mask, tgt_seq, cache_state, bias_embedding),\r\n \"decoder.onnx\",\r\n export_params=True,\r\n opset_version=13,\r\n do_constant_folding=True,\r\n input_names=input_names,\r\n output_names=output_names,\r\n dynamic_axes=dynamic_axes\r\n )\r\n```", "url": "https://github.com/pytorch/pytorch/issues/88073", "state": "closed", "labels": [ "module: onnx", "triaged", "onnx-triaged", "onnx-needs-info" ], "created_at": "2022-10-31T08:22:16Z", "updated_at": "2022-11-22T06:07:07Z", "user": "yszhou2019" }, { "repo": "pytorch/tutorials", "number": 2105, "title": "training fail ", "body": "image https://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/index.html?from=groupmessage\r\n\r\nI train my model with ngc docker.\r\nSometime I train the network (like yolov7 ) , it will let linux disconect and reboot . \r\nHow can I debug it to find cause root?\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/tutorials/issues/2105", "state": "closed", "labels": [ "question" ], "created_at": "2022-10-30T04:10:22Z", "updated_at": "2022-11-14T20:50:47Z", "user": "alicera" }, { "repo": "pytorch/functorch", "number": 1057, "title": "Installing functorch breaks torchaudio", "body": "I'm following along with [this](https://colab.research.google.com/drive/1GNfb01W_xf8JRu78ZKoNnLqiwcrJrbYG#scrollTo=nBj3vMvIhD9t) colab from the [functorch installation docs](https://pytorch.org/functorch/stable/install.html#colab).\r\n\r\nAfter installing and restarting, when I try to import `torchaudio`, the runtime crashes. At first, I got this error:\r\n\r\n```python\r\nOSError: /usr/local/lib/python3.7/dist-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZN2at4_ops7resize_4callERKNS_6TensorEN3c108ArrayRefIlEENS5_8optionalINS5_12MemoryFormatEEE\r\n```\r\n\r\nNow, I'm just getting the runtime crashing with no visible error.\r\n\r\nI know functorch was merged into pytorch proper, but I don't see any instructions about how to use it from there. Would that fix the issue? 
If so, should the main docs be updated?", "url": "https://github.com/pytorch/functorch/issues/1057", "state": "closed", "labels": [ "actionable" ], "created_at": "2022-10-28T18:16:13Z", "updated_at": "2022-12-09T18:59:35Z", "comments": 11, "user": "dellis23" }, { "repo": "pytorch/pytorch", "number": 87862, "title": "torch.where: `out` kwarg support is undocumented", "body": "### \ud83d\udcda The doc issue\r\n\r\nhttps://pytorch.org/docs/stable/generated/torch.where.html doesn't mention anything about `out` kwarg support.\r\n\r\n\r\n\r\nRef:\r\nhttps://github.com/pytorch/pytorch/blob/aaba0bd30641c56db1dc0550b81fbc458db46276/aten/src/ATen/native/native_functions.yaml#L5653\r\n\r\nEg.\r\n```python\r\n>>> torch.where(x < 0, x, -x, out=x)\r\ntensor([-0.6862, -0.6860, -1.4944])\r\n```\r\n\r\n### Suggest a potential alternative/fix\r\n\r\n_No response_\n\ncc @svekars @carljparker", "url": "https://github.com/pytorch/pytorch/issues/87862", "state": "closed", "labels": [ "module: docs", "good first issue", "actionable" ], "created_at": "2022-10-27T14:50:44Z", "updated_at": "2022-10-27T21:03:47Z", "user": "kshitij12345" }, { "repo": "pytorch/pytorch", "number": 87789, "title": "Any ideas on how we can convert a model from huggingface (transformers library )to tensorflow lite?", "body": "### \ud83d\udc1b Describe the bug\n\nI want to convert CamembertQuestionAnsewring model to tensoflow lite, i download it from huggingface platform, because when i want to save the model locally it gives me the model with 'bin' format. \r\ni'm asking here because huggingface use pytorch pretrained models.\r\n\r\n- when i try to convert the model it gives me this error : AttributeError: 'CamembertForQuestionAnswering' object has no attribute 'call' by using tf_model.h5 file. \r\n- Also i can't load it using : tf.keras.models.load_model() it gives me : ValueError: No model config found in the file at <tensorflow.python.platform.gfile.GFile object at 0x7f27cceb1810>.\r\n- when i want to save the transformers model locally it gives me the model with 'bin' format, so i download it from the platform.\n\n### 
Versions\n\nhttps://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf?context=Etalab+est+une+administration+publique+fran%C3%A7aise+qui+fait+notamment+office+de+Chief+Data+Officer+de+l%27%C3%89tat+et+coordonne+la+conception+et+la+mise+en+%C5%93uvre+de+sa+strat%C3%A9gie+dans+le+domaine+de+la+donn%C3%A9e+%28ouverture+et+partage+des+donn%C3%A9es+publiques+ou+open+data%2C+exploitation+des+donn%C3%A9es+et+intelligence+artificielle...%29.+Ainsi%2C+Etalab+d%C3%A9veloppe+et+maintient+le+portail+des+donn%C3%A9es+ouvertes+du+gouvernement+fran%C3%A7ais+data.gouv.fr.+Etalab+promeut+%C3%A9galement+une+plus+grande+ouverture+l%27administration+sur+la+soci%C3%A9t%C3%A9+%28gouvernement+ouvert%29+%3A+transparence+de+l%27action+publique%2C+innovation+ouverte%2C+participation+citoyenne...+elle+promeut+l%E2%80%99innovation%2C+l%E2%80%99exp%C3%A9rimentation%2C+les+m%C3%A9thodes+de+travail+ouvertes%2C+agiles+et+it%C3%A9ratives%2C+ainsi+que+les+synergies+avec+la+soci%C3%A9t%C3%A9+civile+pour+d%C3%A9cloisonner+l%E2%80%99administration+et+favoriser+l%E2%80%99adoption+des+meilleures+pratiques+professionnelles+dans+le+domaine+du+num%C3%A9rique.+%C3%80+ce+titre+elle+%C3%A9tudie+notamment+l%E2%80%99opportunit%C3%A9+de+recourir+%C3%A0+des+technologies+en+voie+de+maturation+issues+du+monde+de+la+recherche.+Cette+entit%C3%A9+charg%C3%A9e+de+l%27innovation+au+sein+de+l%27administration+doit+contribuer+%C3%A0+l%27am%C3%A9lioration+du+service+public+gr%C3%A2ce+au+num%C3%A9rique.+Elle+est+rattach%C3%A9e+%C3%A0+la+Direction+interminist%C3%A9rielle+du+num%C3%A9rique%2C+dont+les+missions+et+l%E2%80%99organisation+ont+%C3%A9t%C3%A9+fix%C3%A9es+par+le+d%C3%A9cret+du+30+octobre+2019.%E2%80%89+Dirig%C3%A9+par+Laure+Lucchesi+depuis+2016%2C+elle+rassemble+une+%C3%A9quipe+pluridisciplinaire+d%27une+trentaine+de+personnes.&question=Comment+s%27appelle+le+portail+open+data+du+gouvernement+%3F", "url": "https://github.com/pytorch/pytorch/issues/87789", "state": "closed", "labels": [], "created_at": "2022-10-26T16:00:34Z", "updated_at": "2022-10-27T05:40:57Z", "user": "BENSAFOUAN-Abdelhalim" }, { "repo": "pytorch/pytorch", "number": 87564, "title": "Meta impl for pirms.where is incorrect", "body": "### \ud83d\udc1b Describe the bug\r\n\r\n```\r\ndevice = \"meta\"\r\n\r\npred = torch.randn(5, 5, device=device) > 0\r\na = torch.rand(5, 5, device=device).t()\r\nout = torch.where(pred, a, 0)\r\n\r\nprint(\"pred.stride()\", pred.stride())\r\nprint(\"a.stride()\", a.stride())\r\nprint(\"out.stride()\", out.stride()) \r\n\r\npred.stride() (5, 1)\r\na.stride() (1, 5)\r\nout.stride() (1, 5)\r\n```\r\nif I have device=\u201ccuda\u201d, the output is \r\n```\r\npred.stride() (5, 1)\r\na.stride() (1, 5)\r\nout.stride() (5, 1)\r\n```\r\n\r\n### Versions\r\n\r\nmaster\r\n\r\n\r\ncc @ezyang @mruberry @ngimel @Lezcano @fdrocha ", "url": "https://github.com/pytorch/pytorch/issues/87564", "state": "closed", "labels": [ "triaged", "module: primTorch", "module: decompositions" ], "created_at": "2022-10-23T01:15:28Z", "updated_at": "2022-10-26T00:48:07Z", "user": "SherlockNoMad" }, { "repo": "pytorch/data", "number": 848, "title": "[RFC] Verify that the docs contain working code and self-contained examples using doctest", "body": "### \ud83d\ude80 The feature\r\n\r\nCurrently there does not seem to be an automatic way to verify that the examples in the documentation are actually working. 
This leads to issues like (https://github.com/pytorch/data/issues/433).\r\n\r\nAn example should also be complete enough so that developers can easily try out the code.\r\n\r\nA solution could be to use the sphinx doctest extension to test the documentation before building it. Docstrings can be continuously migrated from standard reST doctests to test code that runs using the sphinx doctest extension.\r\n\r\n### Motivation, pitch\r\n\r\nWorking examples that are up-to-date boost adoption of the library and make it easier for developers to become proficient in using the library.\r\n\r\nTherefore one could consider using doctest in order to be forced to write self-contained examples that execute without error.\r\n\r\n### Alternatives\r\n\r\nDoctests can be executed in different ways:\r\n\r\n- Invoking plain python to execute the doctests as described [here](https://docs.python.org/3/library/doctest.html)\r\n- Using [pytest --doctest](https://docs.pytest.org/en/7.1.x/how-to/doctest.html) to execute the tests\r\n- Run within the documentation build process as `cd docs && make doctest`\r\n\r\nI would recommend running the doctests while building the documentation using sphinx because it is easy to continuously\r\nmigrate the existing non-tested example code to code being tested.\r\n\r\n\r\n### Additional context\r\n\r\nA minimal example of the RFC can be found\r\n[here](https://github.com/pytorch/data/pull/850). Please\r\nnote that the code is only meant as an example for discussion and might not (yet) meet the quality criteria of a PR.\r\n\r\nThe example implementation consists of the following parts:\r\n\r\n- An updated `docs/Makefile` with a `doctest` target\r\n- Enabling the sphinx extension `sphinx.ext.doctest` in `docs/source/conf.py`\r\n- A minimal example of an updated docstring in `torchdata/dataloader2/adapter.py`\r\n- Adding the `doctest` step to the CI in `.github/workflows/_build_test_upload.yml`\r\n\r\nThe tests can be executed like this: `cd docs && make doctest`.\r\n", "url": "https://github.com/meta-pytorch/data/issues/848", "state": "closed", "labels": [ "documentation", "Better Engineering" ], "created_at": "2022-10-21T14:45:36Z", "updated_at": "2022-10-27T20:55:48Z", "comments": 1, "user": "mathiasburger" }, { "repo": "pytorch/vision", "number": 6779, "title": "How do you put a LibTorch (C++) torch::nn::Module on the CUDA device?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI get an error when I try to put a `torch::nn::Module` on the CUDA device. 
How do I put the model on the CUDA device?\r\n\r\n```\r\n#include <torch/torch.h>\r\nusing namespace torch::indexing;\r\ntorch::Device device(torch::kCUDA);\r\n\r\nstruct Critic_Net : torch::nn::Module {\r\n torch::Tensor next_state_batch__sampled_action;\r\n public:\r\n Critic_Net() {\r\n lin1 = torch::nn::Linear(427, 42);\r\n lin2 = torch::nn::Linear(42, 286);\r\n lin3 = torch::nn::Linear(286, 1);\r\n }\r\n torch::Tensor forward(torch::Tensor next_state_batch__sampled_action) {\r\n auto h = next_state_batch__sampled_action;\r\n h = torch::relu(lin1->forward(h));\r\n h = torch::tanh(lin2->forward(h));\r\n h = lin3->forward(h);\r\n return torch::nan_to_num(h);\r\n }\r\n torch::nn::Linear lin1{nullptr}, lin2{nullptr}, lin3{nullptr};\r\n};\r\n```\r\n\r\nI have tried putting it on the CUDA device like so \r\n`auto critic = Critic_Net();` \r\n`critic->to(device);`\r\n\r\nThis causes `/home/iii/tor/m_gym/multiv_normal.cpp:190:1: error: \u2018critic\u2019 does not name a type\r\n 190 | critic->to(device);\r\n | ^~~~~~` \r\n\r\nI have actually tried to put `->to(device);` behind almost everything everywhere the model shows up and I get these errors. \r\n\r\nI've also tried using `auto critic = torch::jit::load(critic, device);` after [reading this](https://discuss.pytorch.org/t/how-to-load-model-on-specific-device-in-libtorch/94416). \r\nI get this error. \r\nIs putting a model on CUDA possible with `torch::jit::load`? I think this \"model\" is the kind that is saved on a disk and not the kind that is an nn::Module. \r\n\r\n```\r\n error: no matching function for call to \u2018load(Critic_Net&, c10::Device&)\u2019\r\n 186 | auto critico = torch::jit::load(critic, device);\r\n | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~\r\nIn file included from /home/iii/tor/m_gym/libtorch/include/torch/script.h:9,\r\n from /home/iii/tor/m_gym/multiv_normal.cpp:2:\r\n/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:66:1: note: candidate: \u2018torch::jit::Module torch::jit::load(std::istream&, c10::optional<c10::Device>)\u2019\r\n 66 | load(std::istream& in, c10::optional<c10::Device> device = c10::nullopt);\r\n | ^~~~\r\n/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:66:20: note: no known conversion for argument 1 from \u2018Critic_Net\u2019 to \u2018std::istream&\u2019 {aka \u2018std::basic_istream<char>&\u2019}\r\n 66 | load(std::istream& in, c10::optional<c10::Device> device = c10::nullopt);\r\n | ~~~~~~~~~~~~~~^~\r\n/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:68:18: note: candidate: \u2018torch::jit::Module torch::jit::load(std::istream&, c10::optional<c10::Device>, torch::jit::ExtraFilesMap&)\u2019\r\n 68 | TORCH_API Module load(\r\n | ^~~~\r\n/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:68:18: note: candidate expects 3 arguments, 2 provided\r\n/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:78:18: note: candidate: \u2018torch::jit::Module torch::jit::load(const string&, c10::optional<c10::Device>)\u2019\r\n 78 | TORCH_API Module load(\r\n | ^~~~\r\n/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:79:24: note: no known conversion for argument 1 from \u2018Critic_Net\u2019 to \u2018const string&\u2019 {aka \u2018const std::basic_string<char>&\u2019}\r\n 79 | const std::string& filename,\r\n```\r\n \r\n\r\n\r\n\r\n\r\n### Versions\r\n\r\nThis is my LibTorch version, 1.12.1+cu116 \r\n\r\nI don't have any problems putting a tensor on the CUDA device and I 
assume I would not have a problem putting a simpler model on the CUDA device. The issue is the question of where to point this large nn::Module struct to the CUDA device.", "url": "https://github.com/pytorch/vision/issues/6779", "state": "closed", "labels": [ "question" ], "created_at": "2022-10-16T23:36:15Z", "updated_at": "2022-10-17T14:52:20Z", "user": "MotorCityCobra" }, { "repo": "pytorch/pytorch", "number": 87029, "title": "how to add adaptive_max_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'", "body": "### \ud83d\udc1b Describe the bug\r\n\r\ni have added attention mechanism to yolov5 repo, while doing the training I'm getting this issue , how can I solve this error?\r\n\r\nerror\r\n\r\ni tried to train model using yolov5 command ,& i used C3CBAM attention mechanism , & I'm getting this error\r\n\r\n```\r\n!python train.py --img 640 --batch 16 --cfg /content/yolov5/models/yolov5s.yaml --epochs 250 --data coco128.yaml --weights yolov5s.pt --cache\r\n\r\n```\r\n\r\n<img width=\"359\" alt=\"image\" src=\"https://user-images.githubusercontent.com/62583018/196018476-062b6719-2804-4e86-a8c9-c12550a244a5.png\">\r\n\r\n\r\n````\r\nEpoch GPU_mem box_loss obj_loss cls_loss Instances Size\r\n 0% 0/8 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"train.py\", line 637, in <module>\r\n main(opt)\r\n File \"train.py\", line 531, in main\r\n train(opt.hyp, opt, device, callbacks)\r\n File \"train.py\", line 320, in train\r\n scaler.scale(loss).backward()\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/_tensor.py\", line 396, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py\", line 175, in backward\r\n allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass\r\nRuntimeError: adaptive_max_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation, or you can use the 'warn_only=True' option, if that's acceptable for your application. 
You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.\r\n```", "url": "https://github.com/pytorch/pytorch/issues/87029", "state": "closed", "labels": [], "created_at": "2022-10-16T04:37:56Z", "updated_at": "2022-10-16T05:17:15Z", "user": "akashAD98" }, { "repo": "pytorch/pytorch", "number": 87027, "title": "how to add adaptive_max_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'.", "body": "\r\nI'm adding an attention mechanism on yolov5 to train my model, I added C3CBAM ,& im getting this issue, what should I need to do to solve this issue?\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/87027", "state": "closed", "labels": [], "created_at": "2022-10-16T04:01:06Z", "updated_at": "2022-10-16T04:33:34Z", "user": "akashAD98" }, { "repo": "pytorch/data", "number": 831, "title": "document of parameter buffer_size in MaxTokenBucketizer is wrong", "body": "According to the document [MaxTokenBucketizer](https://pytorch.org/data/main/generated/torchdata.datapipes.iter.MaxTokenBucketizer.html#torchdata.datapipes.iter.MaxTokenBucketizer)\r\nbuffer_size \u2013 This restricts how many **tokens** are taken from prior DataPipe to bucketize\r\n\r\nHowever, in the code, [bucketbatcher.py#L277](https://github.com/pytorch/data/blob/84587ff57575fd47fcae61635a3f4ffc1e639941/torchdata/datapipes/iter/transform/bucketbatcher.py#L277)\r\nThe unit of buffer_size is **sample** not **token**", "url": "https://github.com/meta-pytorch/data/issues/831", "state": "closed", "labels": [ "documentation" ], "created_at": "2022-10-14T08:41:20Z", "updated_at": "2022-10-17T17:36:48Z", "comments": 1, "user": "ling0322" }, { "repo": "pytorch/tensorpipe", "number": 457, "title": "Question: how to disable IB at runtime?", "body": "I wonder if there is an environment variable like `NCCL_IB_DISABLE` in NCCL so that I can disable IB at runtime. \r\n\r\nThanks!", "url": "https://github.com/pytorch/tensorpipe/issues/457", "state": "open", "labels": [], "created_at": "2022-10-14T02:22:23Z", "updated_at": "2022-10-14T09:49:57Z", "user": "jasperzhong" }, { "repo": "pytorch/examples", "number": 1082, "title": "Query on loss calculation in word language model", "body": "In the main.py of word language model, I find that in the evaluate function the total_loss is getting multiplied by length of data\r\nhttps://github.com/pytorch/examples/blob/ca1bd9167f7216e087532160fc5b98643d53f87e/word_language_model/main.py#L163\r\n\r\nHowever in the train function, total_loss is not getting multiplied by length of data https://github.com/pytorch/examples/blob/ca1bd9167f7216e087532160fc5b98643d53f87e/word_language_model/main.py#L194\r\n\r\nIs this proper? 
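For what it's worth, here is a minimal sketch (illustrative loss and shapes, assuming the criterion uses the default `reduction='mean'`) of how the two accumulation styles diverge when chunks have unequal length, e.g. the final bptt chunk:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()  # illustrative stand-in, reduction="mean"

# two chunks of unequal length, like a full bptt chunk and a short final one
out_a, tgt_a = torch.randn(35, 10), torch.randint(0, 10, (35,))
out_b, tgt_b = torch.randn(20, 10), torch.randint(0, 10, (20,))

# evaluate()-style: weight each mean loss by chunk length, then divide by the
# total element count -> the true per-element loss over the whole set
weighted = len(out_a) * criterion(out_a, tgt_a).item() + len(out_b) * criterion(out_b, tgt_b).item()
print(weighted / (len(out_a) + len(out_b)))

# train()-style: average the per-chunk mean losses -> gives the short chunk
# the same weight as a full one
plain = criterion(out_a, tgt_a).item() + criterion(out_b, tgt_b).item()
print(plain / 2)
```

If I read main.py right, the train-side total is only divided by `log_interval` for the periodic log line, so maybe the unweighted average is acceptable there — but I would like to confirm.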
", "url": "https://github.com/pytorch/examples/issues/1082", "state": "open", "labels": [ "help wanted" ], "created_at": "2022-10-14T01:23:15Z", "updated_at": "2022-10-17T21:31:55Z", "comments": 0, "user": "AvisP" }, { "repo": "pytorch/functorch", "number": 1043, "title": "Is there a way to parallelize or accelerate a loop of column-by-column jvp?", "body": "Hi, experts.\r\nI am currently calculating a Jacobian column-by-column and calculating the squared sum of each column to calculate the Trace of the Jacobian.\r\nThe code looks something like this:\r\n\r\n```\r\ndef jvp_func(x, tgt):\r\n return jvp(net, (x,), (tgt,))\r\n\r\ntr = 0\r\nfor j in range(x[0].shape[0]):\r\n tgt = torch.zeros_like(x)\r\n tgt[:, j] = 1.\r\n _, grad = vmap(jvp_func)(x, tgt)\r\n tr += torch.sum(grad * grad, dim=1)\r\n```\r\n\r\nAs you can see, my code calculates a batched Jacobian column by column (inside each j loop) and calculates the Trace.\r\n(motivated by this code: https://github.com/facebookresearch/jacobian_regularizer/blob/main/jacobian/jacobian.py)\r\nI am mainly doing this instead of calculating the entire Jacobian at once because the entire Jacobian is huge and it blows up the memory.\r\nHowever, this code is quite slow. I am not sure if this code is doing a lot of redundant computation, e.g., I wonder if net(x) is being calculated repetitively on each loop of j.\r\n\r\nIs there a way to parallelize the j loop, or at least remove any repetitive computation for each j loop to speed up the current code?\r\nI briefly looked at functorch.compile.ts_compile but was not able to make it work, and am not sure if that is something that can be helpful.\r\n\r\nAny suggestions will be highly appreciated!\r\n\r\nThank you,\r\nBest regards,\r\nKiwan", "url": "https://github.com/pytorch/functorch/issues/1043", "state": "open", "labels": [], "created_at": "2022-10-11T00:43:43Z", "updated_at": "2022-10-11T21:34:44Z", "comments": 3, "user": "kwmaeng91" }, { "repo": "pytorch/torchx", "number": 611, "title": "Kubernetes: Support mounting secrets as a volume", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\n\r\n\r\nKubernetes has a concept of a secret that can be mounted as a volume to a pod. \r\nhttps://kubernetes.io/docs/concepts/configuration/secret/\r\n\r\n```\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n name: mypod\r\nspec:\r\n containers:\r\n - name: mypod\r\n image: redis\r\n volumeMounts:\r\n - name: foo\r\n mountPath: \"/etc/foo\"\r\n volumes:\r\n - name: foo\r\n secret:\r\n secretName: mysecret\r\n defaultMode: 0400\r\n```\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\nWe can add a new bind mount type for secrets so a user can add the secret mount as normal.\r\n\r\n```\r\ntorchx run utils.sh --mounts type=secret,name=foo,dst=/etc/foo ...\r\n```\r\n\r\nSpecs https://github.com/pytorch/torchx/blob/main/torchx/specs/api.py#L218-L269 and add new SecretMount\r\n\r\nIntegrate it into kubernetes_scheduler.py at https://github.com/pytorch/torchx/blob/main/torchx/schedulers/kubernetes_scheduler.py#L267\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. 
-->\r\n\r\n* utils.sh mount argument https://github.com/pytorch/torchx/blob/main/torchx/components/utils.py#L83\r\n* Docker also has a slightly different concept of secrets https://docs.docker.com/engine/swarm/secrets/\r\n* AWS Batch has environment variable secrets https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data-secrets.html\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/611", "state": "open", "labels": [ "enhancement", "module: specs", "kubernetes" ], "created_at": "2022-10-10T18:53:20Z", "updated_at": "2022-10-10T18:56:25Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/TensorRT", "number": 1396, "title": "Question about triton example in tutorial", "body": "Why using platform: \"pytorch_libtorch\" while model.pt as Torch TensorRT \r\n\r\n> model.pt as platform: \"pytorch_libtorch\" \r\n\r\ninstead of \r\n\r\n> model.pt as platform: \"tensorrt_plan\" in [serving_torch_tensorrt_with_triton](https://pytorch.org/TensorRT/tutorials/serving_torch_tensorrt_with_triton.html)", "url": "https://github.com/pytorch/TensorRT/issues/1396", "state": "closed", "labels": [ "question", "examples" ], "created_at": "2022-10-10T11:57:33Z", "updated_at": "2022-12-15T17:55:53Z", "user": "allen-ash" }, { "repo": "pytorch/examples", "number": 1077, "title": "Running on Windows", "body": "## \ud83d\udcda Documentation\r\nI'm trying to get DCGAN running on my Windows machine. It appears that the code may not support windows, but this is not mentioned in the readme. Is there a procedure to get it running on Windows?\r\n ", "url": "https://github.com/pytorch/examples/issues/1077", "state": "open", "labels": [], "created_at": "2022-10-07T03:11:19Z", "updated_at": "2023-03-21T23:00:09Z", "comments": 6, "user": "maxbonzulak" }, { "repo": "pytorch/pytorch", "number": 86205, "title": "How to save only parts of the state_dict()", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nHi, I want to save only a small part of the model. \r\ne.g. A layer requires grad but B layer does not. So I only want to save A layer rather than both A and B . 
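Something along these lines is what I am after — a sketch with a toy model, where filtering on `requires_grad` is just my guess at the right criterion:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))
for p in model[0].parameters():  # pretend layer "B" is frozen
    p.requires_grad = False

# keep only the state_dict entries whose parameters still require grad ("A")
trainable = {name for name, p in model.named_parameters() if p.requires_grad}
partial = {k: v for k, v in model.state_dict().items() if k in trainable}
torch.save(partial, "partial.pt")

# restoring later: strict=False tolerates the missing (frozen) keys
model.load_state_dict(torch.load("partial.pt"), strict=False)
```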
Many thanks!\r\n\r\n```\r\ndef model(nn.Module):\r\n ...\r\n\r\nmodel=model()\r\n\r\nmodel.save(model.state_dict())\r\n```\r\n\r\n### Versions\r\n```\r\nPyTorch version: 1.13.0.dev20220709\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: macOS 12.4 (arm64)\r\nGCC version: Could not collect\r\nClang version: 13.1.6 (clang-1316.0.21.2.5)\r\nCMake version: version 3.23.1\r\nLibc version: N/A\r\n\r\nPython version: 3.9.10 | packaged by conda-forge | (main, Feb 1 2022, 21:25:34) [Clang 11.1.0 ] (64-bit runtime)\r\nPython platform: macOS-12.4-arm64-arm-64bit\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.23.2\r\n[pip3] pytorch-ignite==0.4.9\r\n[pip3] pytorch-lightning==1.6.5\r\n[pip3] torch==1.13.0.dev20220709\r\n[pip3] torchaudio==0.14.0.dev20220603\r\n[pip3] torchmetrics==0.9.2\r\n[pip3] torchsummary==1.5.1\r\n[pip3] torchvision==0.14.0.dev20220708\r\n[conda] numpy 1.23.2 pypi_0 pypi\r\n[conda] pytorch-ignite 0.4.9 pypi_0 pypi\r\n[conda] pytorch-lightning 1.6.5 pypi_0 pypi\r\n[conda] torch 1.13.0.dev20220709 pypi_0 pypi\r\n[conda] torchaudio 0.14.0.dev20220603 pypi_0 pypi\r\n[conda] torchmetrics 0.9.2 pypi_0 pypi\r\n[conda] torchsummary 1.5.1 pypi_0 pypi\r\n[conda] torchvision 0.14.0.dev20220708 pypi_0 pypi\r\n```\n\ncc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are", "url": "https://github.com/pytorch/pytorch/issues/86205", "state": "closed", "labels": [ "module: nn", "triaged" ], "created_at": "2022-10-04T13:47:47Z", "updated_at": "2022-10-13T13:46:21Z", "user": "CaffreyR" }, { "repo": "pytorch/pytorch", "number": 86204, "title": "How to perform unstructured interpolation ", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nMy feature request is very simple, I'm not sure if there already exists some approach or implementation to achieve this functionality.\r\n\r\nIn scipy, there is scipy.interpolate.NearestNDInterpolator class or scipy.interpolate.LinearNDInterpolator class to achieve unstructured interpolation, i.e., giving a set of sparse points that distribute non-uniformly in the spatial domain and interpolate values at any given points, however, currently torch seems only support simple grid structured interpolation like grid_sample\n\n### Alternatives\n\nI have found some implementation for this in 1d, but not sure if this is very efficient and support GPU, also how to extend it to 2D.\r\n\r\nhttps://github.com/aliutkus/torchinterp1d\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/86204", "state": "open", "labels": [ "triaged", "module: interpolation" ], "created_at": "2022-10-04T13:03:50Z", "updated_at": "2022-10-10T11:59:06Z", "user": "twangnh" }, { "repo": "pytorch/TensorRT", "number": 1388, "title": "\u2753 [Question] How can we use torch_executed_modules?", "body": "## \u2753 Question\r\n\r\nCould you give us and example of how to use `torch_executed_modules` in `torch_tensorrt.ts.compile`\r\n\r\n## What you have already tried\r\n\r\nI tried many things. 
I would appreciate a little sample of how to use it.\r\nThanks\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1388", "state": "closed", "labels": [ "question", "examples" ], "created_at": "2022-10-03T21:55:23Z", "updated_at": "2022-10-04T16:57:03Z", "user": "mjack3" }, { "repo": "pytorch/functorch", "number": 1037, "title": "Get intermediate derivatives with nested jacobian and has_aux", "body": "Is it possible to get intermediate results with nested jacobian?\r\nSay `functorch.jacfwd `is nested twice with `has_aux=True`, how to get 1st derivative in this case?\r\n\r\n```python\r\nimport torch\r\nimport functorch\r\n\r\ndef foo(x):\r\n y = torch.cos(x)\r\n return y, y\r\n\r\ndef nest(fun, num):\r\n bar = fun\r\n for _ in range(num):\r\n bar = functorch.jacfwd(bar, has_aux=True)\r\n return bar\r\n\r\nx = torch.tensor(0.0)\r\n\r\nprint(nest(foo, 1)(x))\r\n# 1st derivative and value\r\n# (tensor(-0.), tensor(1.000000000000e+00))\r\n\r\nprint(nest(foo, 2)(x))\r\n# 2nd derivative and value, no 1st derivative\r\n# (tensor(-1.000000000000e+00), tensor(1.000000000000e+00))\r\n```", "url": "https://github.com/pytorch/functorch/issues/1037", "state": "closed", "labels": [], "created_at": "2022-10-03T08:29:27Z", "updated_at": "2022-10-03T16:54:06Z", "comments": 2, "user": "i-a-morozov" }, { "repo": "pytorch/vision", "number": 6676, "title": "torchvision.transforms.Normalize has large absolute difference", "body": "### \ud83d\udc1b Describe the bug\n\nBy definition, `torchvision.transforms.Normalize` should produce the same results if all input, std, and mean are divided or multiplied by the same number. When I test the API with the following input, I get the absolute difference up to 24978131.5 and the relative difference up to 3.5765e-08 for the float64 data type. Is this kind of difference expected? 
What's the threshold pytorch uses in tests to determine normal behavior versus buggy behavior?\r\n\r\nReproduce code:\r\n```\r\nimport torch\r\nimport torchvision\r\n\r\ninput = torch.tensor([ 0.0000, 41.3108, 0.0000], dtype=torch.float64).view(3, 1, 1)\r\nstd = torch.tensor([61860.0, 3586.0, 60300.0])\r\nmean = torch.tensor([4287419396147613455, -7376768754095287866, -6969696485275369284])\r\n\r\nr1 = torchvision.transforms.Normalize(mean, std)(input)\r\n\r\ninput = input / 255.0\r\nstd = std / 255.0\r\nmean = mean / 255.0\r\n\r\nr2 = torchvision.transforms.Normalize(mean, std)(input)\r\n\r\nprint(r2 - r1)\r\nprint((r2-r1)/r1)\r\n```\r\nOutput:\r\n```\r\ntensor([[[ 588325.7422]],\r\n\r\n [[24978131.5000]],\r\n\r\n [[ 4133818.0781]]], dtype=torch.float64)\r\ntensor([[[-8.4885e-09]],\r\n\r\n [[ 1.2142e-08]],\r\n\r\n [[ 3.5765e-08]]], dtype=torch.float64)\r\n```\n\n### Versions\n\n```\r\nCollecting environment information...\r\nPyTorch version: 1.13.0.dev20220919+cpu\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.6 LTS (x86_64)\r\nGCC version: Could not collect\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.17\r\n\r\nPython version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-4.15.0-176-generic-x86_64-with-debian-buster-sid\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.21.6\r\n[pip3] torch==1.13.0.dev20220919+cpu\r\n[pip3] torchaudio==0.13.0.dev20220919+cpu\r\n[pip3] torchvision==0.14.0.dev20220919+cpu\r\n[conda] numpy 1.21.6 pypi_0 pypi\r\n[conda] torch 1.13.0.dev20220919+cpu pypi_0 pypi\r\n[conda] torchaudio 0.13.0.dev20220919+cpu pypi_0 pypi\r\n[conda] torchvision 0.14.0.dev20220919+cpu pypi_0 pypi\r\n```\n\ncc @vfdev-5 @datumbox", "url": "https://github.com/pytorch/vision/issues/6676", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2022-10-02T20:56:19Z", "updated_at": "2022-10-03T16:42:13Z", "user": "jiannanWang" }, { "repo": "pytorch/examples", "number": 1075, "title": "Need C++ L2 regularization example", "body": "How to add L2 Regularization to a layer like keras\r\nmodel.add(Dense(kernel_regularizer=regularizers.l2(0.01), activation='elu'))\r\n", "url": "https://github.com/pytorch/examples/issues/1075", "state": "closed", "labels": [ "help wanted" ], "created_at": "2022-10-02T08:35:30Z", "updated_at": "2022-10-04T01:47:36Z", "comments": 1, "user": "bitnick10" }, { "repo": "pytorch/functorch", "number": 1036, "title": "support scan", "body": "it would be really nice to be able to eg take models implemented in jax with `jax.lax.scan` and port them over to torch without having to unroll scans over modules", "url": "https://github.com/pytorch/functorch/issues/1036", "state": "open", "labels": [], "created_at": "2022-09-30T18:02:23Z", "updated_at": "2023-02-13T08:41:10Z", "comments": 3, "user": "GallagherCommaJack" }, { "repo": "pytorch/TensorRT", "number": 1386, "title": "\u2753 [Question] Why my model is not accelerated by using fp16 ? 
", "body": "## \u2753 Question\r\n\r\nI tried the exact same example in the [notebook](https://github.com/pytorch/TensorRT/blob/master/notebooks/Hugging-Face-BERT.ipynb).\r\n\r\n## What you have already tried\r\n\r\nsee the code (I just copy the code in the [.ipynb](https://github.com/pytorch/TensorRT/blob/master/notebooks/Hugging-Face-BERT.ipynb). But the result is very different while I use the same A100 GPU!!!\r\n\r\n```python\r\nfrom transformers import BertTokenizer, BertForMaskedLM\r\nimport torch\r\nimport timeit\r\nimport numpy as np\r\nimport torch_tensorrt\r\nimport torch.backends.cudnn as cudnn\r\n\r\n\r\nenc = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\n\r\n\r\nbatch_size = 4\r\n\r\nbatched_indexed_tokens = [[101, 64]*64]*batch_size\r\nbatched_segment_ids = [[0, 1]*64]*batch_size\r\nbatched_attention_masks = [[1, 1]*64]*batch_size\r\n\r\ntokens_tensor = torch.tensor(batched_indexed_tokens)\r\nsegments_tensor = torch.tensor(batched_segment_ids)\r\nattention_masks_tensor = torch.tensor(batched_attention_masks)\r\n\r\n\r\n\r\n\r\nmlm_model_ts = BertForMaskedLM.from_pretrained('bert-base-uncased', torchscript=True)\r\ntraced_mlm_model = torch.jit.trace(mlm_model_ts, [tokens_tensor, segments_tensor, attention_masks_tensor])\r\n\r\n\r\n\r\nmasked_sentences = ['Paris is the [MASK] of France.',\r\n 'The primary [MASK] of the United States is English.',\r\n 'A baseball game consists of at least nine [MASK].',\r\n 'Topology is a branch of [MASK] concerned with the properties of geometric objects that remain unchanged under continuous transformations.']\r\npos_masks = [4, 3, 9, 6]\r\n\r\nencoded_inputs = enc(masked_sentences, return_tensors='pt', padding='max_length', max_length=128)\r\noutputs = mlm_model_ts(**encoded_inputs)\r\nmost_likely_token_ids = [torch.argmax(outputs[0][i, pos, :]) for i, pos in enumerate(pos_masks)]\r\nunmasked_tokens = enc.decode(most_likely_token_ids).split(' ')\r\nunmasked_sentences = [masked_sentences[i].replace('[MASK]', token) for i, token in enumerate(unmasked_tokens)]\r\nfor sentence in unmasked_sentences:\r\n print(sentence)\r\n\r\nencoded_inputs = enc(masked_sentences, return_tensors='pt', padding='max_length', max_length=128)\r\noutputs = traced_mlm_model(encoded_inputs['input_ids'], encoded_inputs['token_type_ids'], encoded_inputs['attention_mask'])\r\nmost_likely_token_ids = [torch.argmax(outputs[0][i, pos, :]) for i, pos in enumerate(pos_masks)]\r\nunmasked_tokens = enc.decode(most_likely_token_ids).split(' ')\r\nunmasked_sentences = [masked_sentences[i].replace('[MASK]', token) for i, token in enumerate(unmasked_tokens)]\r\nfor sentence in unmasked_sentences:\r\n print(sentence)\r\n\r\ntrt_model = torch_tensorrt.compile(traced_mlm_model, \r\n inputs= [torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32), # input_ids\r\n torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32), # token_type_ids\r\n torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32)], # attention_mask\r\n enabled_precisions= {torch.float32}, # Run with 32-bit precision\r\n workspace_size=2000000000,\r\n truncate_long_and_double=True\r\n)\r\n\r\nenc_inputs = enc(masked_sentences, return_tensors='pt', padding='max_length', max_length=128)\r\nenc_inputs = {k: v.type(torch.int32).cuda() for k, v in enc_inputs.items()}\r\noutput_trt = trt_model(enc_inputs['input_ids'], enc_inputs['token_type_ids'], enc_inputs['attention_mask'])\r\nmost_likely_token_ids_trt = [torch.argmax(output_trt[i, pos, :]) for i, pos in 
enumerate(pos_masks)]\r\nunmasked_tokens_trt = enc.decode(most_likely_token_ids_trt).split(' ')\r\nunmasked_sentences_trt = [masked_sentences[i].replace('[MASK]', token) for i, token in enumerate(unmasked_tokens_trt)]\r\nfor sentence in unmasked_sentences_trt:\r\n print(sentence)\r\n\r\ntrt_model_fp16 = torch_tensorrt.compile(traced_mlm_model, \r\n inputs= [torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32), # input_ids\r\n torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32), # token_type_ids\r\n torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32)], # attention_mask\r\n enabled_precisions= {torch.half}, # Run with 16-bit precision\r\n workspace_size=2000000000,\r\n truncate_long_and_double=True\r\n)\r\n\r\ndef timeGraph(model, input_tensor1, input_tensor2, input_tensor3, num_loops=50):\r\n print(\"Warm up ...\")\r\n with torch.no_grad():\r\n for _ in range(20):\r\n features = model(input_tensor1, input_tensor2, input_tensor3)\r\n\r\n torch.cuda.synchronize()\r\n\r\n print(\"Start timing ...\")\r\n timings = []\r\n with torch.no_grad():\r\n for i in range(num_loops):\r\n start_time = timeit.default_timer()\r\n features = model(input_tensor1, input_tensor2, input_tensor3)\r\n torch.cuda.synchronize()\r\n end_time = timeit.default_timer()\r\n timings.append(end_time - start_time)\r\n # print(\"Iteration {}: {:.6f} s\".format(i, end_time - start_time))\r\n\r\n return timings\r\n\r\n\r\n\r\ndef printStats(graphN", "url": "https://github.com/pytorch/TensorRT/issues/1386", "state": "closed", "labels": [ "question", "No Activity", "performance" ], "created_at": "2022-09-30T16:04:34Z", "updated_at": "2023-01-13T00:02:26Z", "user": "jcyk" }, { "repo": "pytorch/vision", "number": 6664, "title": "Add a function to remove degenerate boxes", "body": "### \ud83d\ude80 The feature\n\nThis function would filter boxes where x2 <= x1 or y2 <= y1\n\n### Motivation, pitch\n\nDegenerate boxes are filtered in at least two places in the current torchvision code:\r\n\r\n* https://github.com/pytorch/vision/blob/f725901dde5bc996fe3d4e163f4d4e7d53720146/torchvision/prototype/transforms/_augment.py\r\n* https://github.com/pytorch/vision/blob/96dbada4d588cabbd24ab1eee57cd261c9b93d20/references/detection/transforms.py\r\n\r\nThis could be refactored, a function ```remove_degenerate_boxes``` could be added in ```ops.boxes``` and exposed publicaly.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\nIf relevant I would be happy to work on it !\n\ncc @vfdev-5 @datumbox", "url": "https://github.com/pytorch/vision/issues/6664", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2022-09-28T19:23:36Z", "updated_at": "2022-09-28T21:54:52Z", "user": "Quintulius" }, { "repo": "pytorch/examples", "number": 1071, "title": "resnet training on imagenet is failing", "body": "## Environment\r\npyTorch - upstream code base > 1.12\r\nUB 20.04\r\nGPU - 4\r\n\r\n\r\n## Steps to Reproduce\r\n\r\n`python imagenet/main.py -a resnet50 --dist-url tcp://127.0.0.1:8080 --dist-backend nccl --multiprocessing-distributed --world-size 1 --rank 0 <imagenet data dir> --epochs 3 --batch-size 256 -j64`\r\n\r\n## Failure signature\r\n731:731 [2] NCCL INFO comm 0x7f0078000ef0 rank 2 nranks 4 cudaDev 2 busId 88000 - Abort COMPLETE\r\n730:730 [1] NCCL INFO comm 0x7f1750000ef0 rank 1 nranks 4 cudaDev 1 busId 3d000 - Abort COMPLETE\r\n732:732 [3] NCCL INFO comm 0x7fe214000ef0 rank 3 nranks 4 cudaDev 3 busId b1000 - Abort COMPLETE\r\n729:729 [0] NCCL INFO comm 0x7f2edc000ef0 
rank 0 nranks 4 cudaDev 0 busId 1a000 - Abort COMPLETE\r\nTraceback (most recent call last):\r\n File \"main.py\", line 516, in <module>\r\n main()\r\n File \"main.py\", line 117, in main\r\n mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 240, in spawn\r\n return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 198, in start_processes\r\n while not context.join():\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 160, in join\r\n raise ProcessRaisedException(msg, error_index, failed_process.pid)\r\ntorch.multiprocessing.spawn.ProcessRaisedException:\r\n\r\n-- Process 1 terminated with the following error:\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 69, in _wrap\r\n fn(i, *args)\r\n File \"/var/lib/jenkins/examples/imagenet/main.py\", line 278, in main_worker\r\n train(train_loader, model, criterion, optimizer, epoch, args)\r\n File \"/var/lib/jenkins/examples/imagenet/main.py\", line 331, in train\r\n loss = criterion(output, target)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1131, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py\", line 1166, in forward\r\n label_smoothing=self.label_smoothing)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py\", line 2970, in cross_entropy\r\n return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu! 
(when checking argument for argument target in method wrapper_nll_loss_forward)\r\n\r\n\r\n## Possible Regression\r\nGit reset to commit 5a06e9cac1728c860b53ebfc6792e0a0e21a5678\r\nis working fine.\r\nhttps://github.com/pytorch/examples/commit/5a06e9cac1728c860b53ebfc6792e0a0e21a5678\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/1071", "state": "closed", "labels": [ "bug", "help wanted" ], "created_at": "2022-09-28T06:16:22Z", "updated_at": "2022-09-30T19:42:39Z", "comments": 1, "user": "pruthvistony" }, { "repo": "pytorch/data", "number": 794, "title": "Does torchdata already work with GCP and Azure blob storage", "body": "### \ud83d\ude80 The feature\r\n\r\nWe already have an S3 integration and it seems like the S3 API already works with both\r\n* Azure: https://devblogs.microsoft.com/cse/2016/05/22/access-azure-blob-storage-from-your-apps-using-s3-api/\r\n* GCP: https://vamsiramakrishnan.medium.com/a-study-on-using-google-cloud-storage-with-the-s3-compatibility-api-324d31b8dfeb\r\n\r\n### Motivation, pitch\r\n\r\nSo ideally we can already support Azure, GCP without doing much\r\n\r\n### Alternatives\r\n\r\nBuild a new integration for each of Azure and GCP using their native APIs\r\n\r\nh/t: @chauhang for the idea", "url": "https://github.com/meta-pytorch/data/issues/794", "state": "closed", "labels": [], "created_at": "2022-09-27T21:37:43Z", "updated_at": "2022-10-20T17:52:35Z", "comments": 7, "user": "msaroufim" }, { "repo": "pytorch/pytorch", "number": 85695, "title": "How to load checkpoint from .pt file", "body": "### \ud83d\udc1b Describe the bug\r\nI finetuned T5-large by pytorch lightning and saved a ckpt file.\r\n```\r\nckpt = torch.load(<ckpt_path>)\r\nprint(ckpt.keys())\r\n\r\ndict_keys(['epoch', 'global_step', 'pytorch-lightning_version', 'state_dict', 'loops', 'callbacks', 'optimizer_states', 'lr_schedulers', 'hparams_name', 'hyper_parameters'])\r\n```\r\nIt does have state_dict which means I can use it as my inference task.\r\nI have tried the following two snippets.\r\n\r\n1.\r\nThis can not run correctly\r\n```\r\nckpt = torch.load(<ckpt_path>)\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained('t5-large')\r\nmodel.load_state_dict(ckpt)\r\nprint(model.lm_head.weight)\r\n```\r\n\r\n2. 
This seems don't load the weight correctly.\r\n```\r\nckpt = torch.load(<ckpt_path>)\r\nmodel_config = AutoConfig.from_pretrained('t5-large')\r\nmodel_2 = AutoModelForSeq2SeqLM.from_pretrained(None,config = model_config,state_dict = ckpt)\r\nprint(model_2.lm_head.weight)\r\n```\r\n\r\n\r\n### Versions\r\n\r\nCollecting environment information...\r\nPyTorch version: 1.12.1+cu102\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.4 LTS (x86_64)\r\nGCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.10\r\n\r\nPython version: 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] (64-bit runtime)\r\nPython platform: Linux-4.15.0-189-generic-x86_64-with-debian-buster-sid\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration:\r\nGPU 0: Quadro RTX 8000\r\nGPU 1: Quadro RTX 8000\r\nGPU 2: Quadro RTX 8000\r\nGPU 3: Quadro RTX 8000\r\nGPU 4: Quadro RTX 8000\r\nGPU 5: Quadro RTX 8000\r\nGPU 6: Quadro RTX 8000\r\nGPU 7: Quadro RTX 8000\r\nGPU 8: Quadro RTX 8000\r\nGPU 9: Quadro RTX 8000\r\n\r\nNvidia driver version: 470.141.03\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.21.6\r\n[pip3] pytorch-lightning==1.7.7\r\n[pip3] torch==1.12.1\r\n[pip3] torchmetrics==0.7.2\r\n[conda] numpy 1.21.6 pypi_0 pypi\r\n[conda] pytorch-lightning 1.7.7 pypi_0 pypi\r\n[conda] torch 1.12.1 pypi_0 pypi", "url": "https://github.com/pytorch/pytorch/issues/85695", "state": "closed", "labels": [], "created_at": "2022-09-27T08:23:40Z", "updated_at": "2022-09-29T17:38:58Z", "user": "ZeyiLiao" }, { "repo": "pytorch/functorch", "number": 1030, "title": "Add support for `tree_map` or document recommended alternative", "body": "I'm working on testing some models using `functorch` along with `torch-mlir` and IREE. I don't see an analog of jax's `tree_map`. 
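For concreteness, the kind of call I'd like to write is sketched below. The closest thing I've found so far is the private `torch.utils._pytree` module, which does handle simple nested containers of tensors, but since it is underscore-prefixed I'm not sure it counts as a supported alternative:

```python
import torch
from torch.utils._pytree import tree_map  # private module, so possibly not a stable API

params = {"w": torch.randn(3, 3), "layers": [torch.zeros(2), torch.ones(4)]}

# Apply a function to every tensor leaf while keeping the nested structure intact,
# analogous to jax.tree_util.tree_map.
doubled = tree_map(lambda t: t * 2, params)
print(doubled["layers"][1])  # tensor([2., 2., 2., 2.])
```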
Is this something it makes sense for `functorch` to implement, or is there a recommended alternative?", "url": "https://github.com/pytorch/functorch/issues/1030", "state": "open", "labels": [], "created_at": "2022-09-26T19:20:21Z", "updated_at": "2022-11-12T15:22:23Z", "comments": 9, "user": "dellis23" }, { "repo": "pytorch/pytorch", "number": 85625, "title": "How to install pytorch with cuda 11.7 in anaconda envirment?", "body": "### \ud83d\udcda The doc issue\r\n![image](https://user-images.githubusercontent.com/32769358/192286577-2ed4b09a-b062-4f21-a6ae-eadab21b5a3e.png)\r\n\r\n![image](https://user-images.githubusercontent.com/32769358/192285880-7ffcbaa7-ad35-418c-a9ca-c7c7e9b25fc4.png)\r\ncould not find the version of cuda 11.7 when use conda or pip \r\n\r\n### Suggest a potential alternative/fix\r\n\r\nadd cuda 11.7 in conda", "url": "https://github.com/pytorch/pytorch/issues/85625", "state": "open", "labels": [ "triaged" ], "created_at": "2022-09-26T13:15:02Z", "updated_at": "2022-10-04T08:23:30Z", "user": "verigle" }, { "repo": "pytorch/TensorRT", "number": 1379, "title": "Why is size of tensorrt compiled INT8 model after QAT is same as size of FP16 model", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\nI have been trying to use INT8 inference for a trained pytorch model.\r\n\r\nI followed this:\r\nhttps://pytorch.org/TensorRT/_notebooks/vgg-qat.html\r\nand\r\nhttps://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/tutorials/quant_resnet50.html\r\n\r\n**My steps are outlined as:**\r\n1) **Train pytorch model**\r\n\r\n- Normal training loop\r\n\r\n2) **Calibrate the model with quant_modules**\r\n\r\n`\r\nquant_modules.initialize(float_module_list=['ConvTranspose2d']) # error when used in quantization, so I add it to the exception list\r\nquant_desc_input = QuantDescriptor(calib_method='histogram')\r\n\r\n# conv and Linear layers to be replaced by there quantized versions\r\nquant_nn.QuantConv2d.set_default_quant_desc_input(quant_desc_input)\r\nquant_nn.QuantLinear.set_default_quant_desc_input(quant_desc_input)\r\n# quant_nn.QuantConvTranspose2d.set_default_quant_desc_input(quant_desc_input)\r\n\r\n# now load pre-trained model\r\nnet1 = model_simple.BevDetNetSimple(input_channel_numbers, settings.N_CHANNELS_PREDICTION_KP, \r\n scale_H=2, scale_W=2, predict_3d_center=True).cuda(DEVICE_ID_GPU)\r\ncuda_dev = \"cuda:{0}\".format(DEVICE_ID_GPU)\r\nnet1.load_state_dict(torch.load(model_weights, map_location=cuda_dev))\r\nprint(net1)\r\n\r\ncalib_data = dataset_classes.CalibDataset(path_calib_dataset_bev=settings.val_bev_save_path)\r\ncalib_loader = DataLoader(calib_data, batch_size=6, drop_last=True)\r\n\r\n# calibrate\r\nwith torch.no_grad():\r\n collect_stats(net1, calib_loader, num_batches=4)\r\n compute_amax(net1, method=\"percentile\", percentile=99.99)\r\n`\r\n\r\n3. **Train the model with quantized layers**\r\n\r\n- Normal training loop, 50 epochs\r\n- Save as torchscript\r\n\r\n`# export to torchscript\r\nquant_nn.TensorQuantizer.use_fb_fake_quant = True\r\nwith torch.no_grad():\r\n jit_model = torch.jit.trace(net1, train_x)\r\n torch.jit.save(jit_model, settings.MODEL_SAVE_PATH_INTERIM + str(epoch) + '_qat.ts')\r\n`\r\n\r\n4. 
**Convert QAT trained torchscript to tensorrt - int8**\r\n\r\n`\r\ndef export_qat_to_trt_int8(path_trained_qat_ts, path_save_ts_trt_int8):\r\n \"\"\"\r\n Function exports the QAT trained model saved as torchscript and weighst to tensorrt usig INT8 precision\r\n \"\"\"\r\n\r\n # load the saved QAT ts model\r\n qat_model = torch.jit.load(path_trained_qat_ts).eval()\r\n \r\n # compile to torchscript\r\n compile_spec = {\"inputs\": [trt.Input([1, 4, 384, 384])],\r\n \"enabled_precisions\": [torch.int8], \r\n \"truncate_long_and_double\":True,\r\n \"sparse_weights\": True}\r\n trt_mod = trt.compile(qat_model, **compile_spec)\r\n\r\n torch.jit.save(trt_mod, path_save_ts_trt_int8)\r\n`\r\n\r\nAfter doing the above steps, I get a model of size 48MB, which is same as that of FP16 model. The runtime is also similar.\r\n\r\nI have then tried PTQ technique for INT8 - this gives me a model of size 28MB, which is expected. Also, the runtime is about half of FP16 model. This is fine. However, the accuracy is not acceptable.\r\n\r\nPlease let me know what am I missing in the QAT way? Why is my model larger and slower wrt PTQ?\r\n\r\n\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.12\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.8.10\r\n - CUDA version: 11.6\r\n - GPU models and configuration: RTX3090/ RTX2080 MAXQ\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1379", "state": "closed", "labels": [ "question", "No Activity", "component: quantization" ], "created_at": "2022-09-26T09:30:28Z", "updated_at": "2023-05-02T15:33:16Z", "user": "SM1991CODES" }, { "repo": "pytorch/data", "number": 782, "title": "Definition of `IterDataPipe` in `pyi` file breaks inheritance path for static type checking", "body": "See comments in https://github.com/pytorch/data/pull/780\r\n\r\n@pmeier \r\nAt list for the first Error, the proper typing should be:\r\n```py\r\ndef load(path: pathlib.Path) -> IterDataPipe[Tuple[str, BinaryIO]]:\r\n if path.is_dir():\r\n dp: IterDataPipe = FileLister(str(path), recursive=True)\r\n else:\r\n dp = IterableWrapper([str(path)])\r\n return FileOpener(dp, mode=\"rb\")\r\n```\r\n\r\nHowever, even with the proper typing shown above, the Error is changed to `Incompatible types in assignment (expression has type \"FileListerIterDataPipe\", variable has type \"IterDataPipe[Any]\")`. And, it doesn't explain what causes the second Error.\r\nSo, I spent a few hours figuring out what is the root cause of the `mypy` Error. In the generated `datapipe.pyi` file, a new `IterDataPipe` class is defined, which overrides the original `IterDataPipe` from the inheritance graph for all other `DataPipe`.\r\n\r\nAll Errors are eliminated when I remove new `IterDataPipe` definition from `datapipe.pyi` and import `IterDataPipe` directly from `torch.utils.data.datapipe`. And, the reason we define new `IterDataPipe` in `pyi` file is attaching all functional APIs to it. 
We need to do it in a different way by keeping the original `IterDataPipe` and extending the class with all functional APIs.\r\n\r\ncc: @NivekT for python interface file\r\n\r\nFor this PR, I will revert it because our typing system needs to be fixed generally.\r\n\r\n_Originally posted by @ejguan in https://github.com/pytorch/data/issues/780#issuecomment-1252595095_", "url": "https://github.com/meta-pytorch/data/issues/782", "state": "open", "labels": [ "Better Engineering" ], "created_at": "2022-09-20T16:23:07Z", "updated_at": "2023-04-11T16:49:04Z", "comments": 3, "user": "ejguan" }, { "repo": "pytorch/functorch", "number": 1024, "title": "Get .item() error without calling .item()", "body": "Hello guys, I'm new to this package and I want to calculate batched Jacobian w.r.t a self-implemented vector function. But I got the following error when I'm doing this.\r\n\r\n_RuntimeError: vmap: It looks like you're calling .item() on a Tensor. We don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. If error is occurring somewhere inside PyTorch internals, please file a bug report._ \r\n\r\nHere is my code. I don't understand where the `.item()` comes from. Is this slicing operation ` q_current[0:3]` wrong? How can I fix this?\r\n\r\n```python\r\nimport torch\r\nfrom functorch import jacrev,vmap\r\n\r\n#batch * len\r\nq_current = torch.randn((4,4*3-1),requires_grad=True)\r\n\r\n\r\ndef geoCompute(q_current):\r\n k1 = q_current[0:3]\r\n return k1\r\n\r\n\r\njacobian = vmap(jacrev(geoCompute))(q_current)\r\n```", "url": "https://github.com/pytorch/functorch/issues/1024", "state": "open", "labels": [], "created_at": "2022-09-20T07:57:25Z", "updated_at": "2022-09-20T12:16:59Z", "comments": 1, "user": "LiXinrong1012" }, { "repo": "pytorch/examples", "number": 1063, "title": "question about drop_last=True on validation mode", "body": "I don't know why this code use drop_last=True on validation mode.\r\nAlso, this code only uses batch_size dividable datas for calculating average top1,5 errors.\r\nAnd then re-generate auxiliary validation data&dataloader for printing remaining logs.\r\n\r\nCan anyone tell me why this code uses this method?", "url": "https://github.com/pytorch/examples/issues/1063", "state": "closed", "labels": [ "question", "triaged" ], "created_at": "2022-09-19T01:41:05Z", "updated_at": "2022-09-23T04:08:12Z", "user": "DY112" }, { "repo": "pytorch/TensorRT", "number": 1362, "title": "\u2753 [Question] Why do you not build & release Windows wheels?", "body": "## \u2753 Question\r\n\r\nJust curious why you only make Linux wheels. It seems like since the last release it totally should have been possible for you guys to pre-build windows wheels. Just curious why this is\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1362", "state": "closed", "labels": [ "question", "channel: windows" ], "created_at": "2022-09-17T14:22:22Z", "updated_at": "2022-09-19T15:53:58Z", "user": "joeyballentine" }, { "repo": "pytorch/torchx", "number": 602, "title": "YAML example of submitting a job using kubernetes", "body": "## \u2753 Questions and Help\r\n\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nBefore submitting, please ensure you have gone through our\r\n[documentation](https://pytorch.org/torchx).\r\n\r\n\r\n### Question\r\nI am new to use torchx with kubernetes scheduling. 
I followed the document to launch the etcd service successfully, which gives me the following results:\r\n\r\n```console\r\nfoo@bar:~$ kubectl get svc\r\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\r\netcd-client ClusterIP 192.168.50.248 <none> 2379/TCP 30m\r\netcd-server ClusterIP 192.168.53.173 <none> 2379/TCP,2380/TCP 30m\r\n```\r\nThis is a little bit different than the example result given by the elastic tutorial (with two clusters): https://github.com/pytorch/elastic/tree/master/kubernetes. \r\n\r\nI am not sure who to write or modify a yaml file to submit a training job similar to the example provided by the elastic tutorial: \r\nhttps://github.com/pytorch/elastic/blob/master/kubernetes/config/samples/imagenet.yaml .\r\n\r\nI wonder if it is possible to provide a similar training yaml file for me to study?\r\n\r\nBest,\r\nYihao\r\n\r\n\r\n ", "url": "https://github.com/meta-pytorch/torchx/issues/602", "state": "closed", "labels": [], "created_at": "2022-09-15T22:42:21Z", "updated_at": "2022-10-20T17:44:15Z", "comments": 1, "user": "yihaocs" }, { "repo": "pytorch/pytorch", "number": 84988, "title": "Document how to use parameters in C++ modular API (was How to use torch.nn.Parameter in libtorch cpp?)", "body": "### \ud83d\udc1b Describe the bug\n\nHI Dear torch team:\r\n in pytorch python env ,we always use torch.nn.Parameter for cache some tensor variable ,like this\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\n memory_value = nn.Parameter(torch.cat([self.init_memory_value.unsqueeze(0) for _ in range(batch_size)], 0).data)\r\n self.mem.init_value_memory(memory_value)\r\n\r\n```\r\nbut when I want to use cpp libtorch 1.12.1 , I am not found [ torch::nn::Parameter() ], I don't how to use Parameter in libtorch cpp env, could you help me ,thanks a lot.\r\n\n\n### Versions\n\nlibtorch 1.12.1\r\ncpp 20\r\nMacOS lastest", "url": "https://github.com/pytorch/pytorch/issues/84988", "state": "closed", "labels": [], "created_at": "2022-09-14T06:31:31Z", "updated_at": "2022-09-16T02:59:51Z", "user": "mullerhai" }, { "repo": "pytorch/TensorRT", "number": 1355, "title": "\u2753 [Question] How can I add torchvision.transforms.functional.gaussian_blur to the conversion?", "body": "## \u2753 Question\r\n\r\nHow could I make torchvision.transforms.functional.gaussian_blur operation compatible with torch_tensorrt ?\r\n\r\n## What you have already tried\r\n\r\nHello. The last step of my forward method is to apply gaussian_blur. Unfortunately this is not compatible with this framework and I must to put it out of the forward method. If I do, the model is correctly parsed to TensorRT engine. If not, I get this error \r\n\r\n```\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - 4: (Unnamed Layer* 613) [Convolution]: two inputs (data and weights) are allowed only in explicit-quantization mode.\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - 4: [network.cpp::validateWeightedLayersInputs::2378] Error Code 4: Internal Error ((Unnamed Layer* 613) [Convolution]: Cannot set more than one input unless network has Q/DQ layers.)\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - 2: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. 
)\r\nTraceback (most recent call last):\r\n File \"/home/jack3/tkh-projects/02-AD/code/TKHAD/kk.py\", line 78, in <module>\r\n trt_model = torch_tensorrt.compile(\r\n File \"/home/jack3/tkh-projects/02-AD/code/TKHAD/env/lib/python3.10/site-packages/torch_tensorrt/_compile.py\", line 115, in compile\r\n return torch_tensorrt.ts.compile(ts_mod, inputs=inputs, enabled_precisions=enabled_precisions, **kwargs)\r\n File \"/home/jack3/tkh-projects/02-AD/code/TKHAD/env/lib/python3.10/site-packages/torch_tensorrt/ts/_compiler.py\", line 113, in compile\r\n compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))\r\nRuntimeError: [Error thrown at core/conversion/conversionctx/ConversionCtx.cpp:147] Building serialized network failed in TensorRT\r\n```\r\nI suppose there is any operation inside gaussian_blur incompatible, although the error is not clear for me.\r\n\r\nThis is the code that convert the model\r\n\r\n```\r\ndummy_input = torch.empty((1, 3, 224 ,224), device=torch.device('cuda'))\r\njit_model = torch.jit.trace(model, dummy_input)\r\n\r\ntrt_model = torch_tensorrt.compile(\r\n jit_model,\r\n \"default\",\r\n [torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],\r\n torch.float32,\r\n truncate_long_and_double = True\r\n)\r\n```\r\n\r\nAnd this is the part of my model with gaussian_blur\r\n\r\n```\r\nfrom torchvision.transforms.functional import gaussian_blur\r\nimport torch\r\n\r\nclass MyModel(torch.nn.Module):\r\n def __init__(self):\r\n ... \r\n self.kernel = 2 * int(4.0 * 4 + 0.5) + 1\r\n\r\n def forward(self, x: torch.Tensor):\r\n ...\r\n \r\n map_scores = gaussian_blur(map_scores, [self.kernel , self.kernel ], [4, 4])\r\n return map_scores\r\n```\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.11.0+cu113\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Python version: 3.10\r\n - CUDA version: 11.7\r\n - Torch_tensorrt Version: 1.1.0\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1355", "state": "closed", "labels": [ "question", "component: converters", "No Activity" ], "created_at": "2022-09-13T09:34:32Z", "updated_at": "2023-04-23T00:02:34Z", "user": "mjack3" }, { "repo": "pytorch/pytorch", "number": 84923, "title": "[FX] How to replace torch.functional with nn.module? 
TypeError: forward() takes 2 positional arguments but 3 were given", "body": "### \ud83d\udc1b Describe the bug\n\nI would like to use `torch.fx` to replace `toirch.functional` into `nn.module` for further model optimization.\r\n\r\nExample:\r\n```Python\r\n# Original\r\nF.adaptive_avg_pool2d(x, 1)\r\n# Target\r\nnn.AdaptiveAvgPool2d(1)\r\n```\r\n\r\nHere is my code:\r\n\r\n```Python\r\nwith model.graph.inserting_before(node):\r\n new_module_str = str(node._prev).split('_')[0] + \".adaptive_avg_pool2d\"\r\n model.add_submodule(new_module_str, nn.AdaptiveAvgPool2d(node.args[1:]))\r\n new_node = model.graph.call_module(new_module_str, node.args)\r\n node.replace_all_uses_with(new_node)\r\nmodel.graph.erase_node((node))\r\n```\r\n\r\nThe generated model can be recompiled through fx \r\n```Python\r\n...\r\n (conv1): Module(\r\n (0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), B=1)\r\n (1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, B=1)\r\n (2): ReLU6(inplace=True)\r\n (adaptive_avg_pool2d): AdaptiveAvgPool2d(output_size=(1,))\r\n )\r\n (conv2): Conv2d(1280, 10, kernel_size=(1, 1), stride=(1, 1), B=1)\r\n\r\n...\r\n conv1_0 = getattr(self.conv1, \"0\")(stage7_residual_7); stage7_residual_7 = None\r\n conv1_1 = getattr(self.conv1, \"1\")(conv1_0); conv1_0 = None\r\n conv1_2 = getattr(self.conv1, \"2\")(conv1_1); conv1_1 = None\r\n conv1_adaptive_avg_pool2d = self.conv1.adaptive_avg_pool2d(conv1_2, 1); conv1_2 = None\r\n conv2 = self.conv2(conv1_adaptive_avg_pool2d); conv1_adaptive_avg_pool2d = None\r\n flatten_replacement = hydro_fx_fuse_flatten_replacement(conv2, 1); conv2 = None\r\n return flatten_replacement\r\n```\r\n\r\nbut can not train:\r\n```bash\r\nTraceback (most recent call last):\r\n File \"/home/xxx/cifar_fuse.py\", line 148, in <module>\r\n train_result = train_epoch(train_loader, model, criterion, optimizer)\r\n File \"/home/xxx/cifar_fuse.py\", line 102, in train_epoch\r\n pred = model(X)\r\n File \"/home/xxx/miniconda3/lib/python3.9/site-packages/torch/fx/graph_module.py\", line 652, in call_wrapped\r\n return self._wrapped_call(self, *args, **kwargs)\r\n File \"/home/xxx/miniconda3/lib/python3.9/site-packages/torch/fx/graph_module.py\", line 277, in __call__\r\n raise e\r\n File \"/home/xxx/miniconda3/lib/python3.9/site-packages/torch/fx/graph_module.py\", line 267, in __call__\r\n return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]\r\n File \"/home/xxx/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"<eval_with_key>.3\", line 157, in forward\r\n File \"/home/xxx/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nTypeError: forward() takes 2 positional arguments but 3 were given\r\n```\r\n\n\n### Versions\n\nVersion: Pytorch 1.12.1\n\ncc @ezyang @SherlockNoMad @soumith", "url": "https://github.com/pytorch/pytorch/issues/84923", "state": "closed", "labels": [ "fx" ], "created_at": "2022-09-13T06:17:55Z", "updated_at": "2022-09-13T07:23:46Z", "user": "Qinghao-Hu" }, { "repo": "pytorch/TensorRT", "number": 1351, "title": "\u2753 [Question] Not enough inputs provided (runtime.RunCudaEngine)", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\ni make a pressure test on my model compiled by torch-tensorrt, it will report errors after 5 minutes, the traceback as blow:\r\n```shell\r\n2022-09-09T09:16:01.618971735Z File 
\"/component/text_detector.py\", line 135, in __call__\r\n2022-09-09T09:16:01.618975181Z outputs = self.net(inp)\r\n2022-09-09T09:16:01.618978313Z File \"/miniconda/envs/python36/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n2022-09-09T09:16:01.618981965Z return forward_call(*input, **kwargs)\r\n2022-09-09T09:16:01.618985142Z RuntimeError: The following operation failed in the TorchScript interpreter.\r\n2022-09-09T09:16:01.618988457Z Traceback of TorchScript, serialized code (most recent call last):\r\n2022-09-09T09:16:01.618991980Z File \"code/__torch__.py\", line 8, in forward\r\n2022-09-09T09:16:01.618995305Z input_0: Tensor) -> Tensor:\r\n2022-09-09T09:16:01.618998495Z __torch___ModelWrapper_trt_engine_ = self_1.__torch___ModelWrapper_trt_engine_\r\n2022-09-09T09:16:01.619001820Z _0 = ops.tensorrt.execute_engine([input_0], __torch___ModelWrapper_trt_engine_)\r\n2022-09-09T09:16:01.619005168Z ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n2022-09-09T09:16:01.619008442Z _1, = _0\r\n2022-09-09T09:16:01.619011485Z return _1\r\n2022-09-09T09:16:01.619014563Z \r\n2022-09-09T09:16:01.619017565Z Traceback of TorchScript, original code (most recent call last):\r\n2022-09-09T09:16:01.619020865Z RuntimeError: [Error thrown at core/runtime/register_trt_op.cpp:101] Expected compiled_engine->exec_ctx->allInputDimensionsSpecified() to be true but got false\r\n2022-09-09T09:16:01.619024625Z Not enough inputs provided (runtime.RunCudaEngine)\r\n```\r\nthen i get an error about cuda memory illegal access:\r\n```shell\r\n2022-09-13T02:32:46.621963863Z File \"/component/text_detector.py\", line 136, in __call__\r\n2022-09-13T02:32:46.621966267Z inp = inp.cuda()\r\n2022-09-13T02:32:46.621968419Z RuntimeError: CUDA error: an illegal memory access was encountered\r\n```\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\nI have tried upgrade the pytorch version from 1.10.0 to 1.10.2, also tried upgrade torch to 1.11.0 python 3.7, but it didn't works.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.10.2\r\n - CPU Architecture: x86\r\n - OS (e.g., Linux): centos 7\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): /\r\n - Are you using local sources or building from archives: no\r\n - Python version: 3.6\r\n - CUDA version: 11.3\r\n - GPU models and configuration: gpu is nvidia-T4 with 16G memory\r\n - Any other relevant information: \r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1351", "state": "closed", "labels": [ "question", "No Activity", "component: runtime" ], "created_at": "2022-09-13T02:39:11Z", "updated_at": "2023-03-26T00:02:17Z", "user": "Pekary" }, { "repo": "pytorch/TensorRT", "number": 1340, "title": "\u2753 [Question] No improvement when I use sparse-weights? ", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n **No speed improvement when I use sparse-weights.**\r\nI just modified this notebook https://github.com/pytorch/TensorRT/blob/master/notebooks/Hugging-Face-BERT.ipynb\r\nAnd add the sparse_weights=True in the compile part. 
I also changed the regional bert-base model when I apply 2:4 sparse on most parts of the FC layers.\r\n![image](https://user-images.githubusercontent.com/32805624/189258468-4600a5f8-1e23-4989-806c-a031757ffbb9.png)\r\n\r\nBut whether I set the \"sparse_weights=True\", the results look like no changes.\r\nHere are some results.\r\n\r\nset sparse_weights=False\r\n![image](https://user-images.githubusercontent.com/32805624/189258701-bff96d33-d344-4365-9b94-2c4c153494ca.png)\r\n\r\nset sparse_weights=True\r\n![image](https://user-images.githubusercontent.com/32805624/189258757-9a2b2a4a-059e-4b98-bf64-8c476f94b98e.png)\r\n\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.13\r\n - CPU Architecture:x86-64\r\n - OS (e.g., Linux):Ubuntu 18.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.8\r\n - CUDA version: 11.7.1\r\n - GPU models and configuration: Nvidia A100 GPU & CUDA Driver Version 515.65.01\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n![image](https://user-images.githubusercontent.com/32805624/189259139-954863df-b625-4de5-8913-c43cea4c2361.png)\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1340", "state": "closed", "labels": [ "question", "No Activity", "performance" ], "created_at": "2022-09-09T02:26:48Z", "updated_at": "2023-03-26T00:02:17Z", "user": "wzywzywzy" }, { "repo": "pytorch/vision", "number": 6545, "title": "add quantized vision transformer model", "body": "### \ud83d\ude80 The feature\n\nhi, thanks for your great work. I hope to be able to add quantized vit model (for ptq or qat).\n\n### Motivation, pitch\n\nIn 'torchvision/models/quantization', there are several quantized model (Eager Mode Quantization) that is very useful for me to learn quantization. In recent years, Transformer model is very popular. I want to learn how to quantized Transformer model, e.g Vision Transformer, Swin Transformer etc, using pytorch official tools like Eager Mode Quantization. I also tried to modify it myself, but failed. I don't know how to quantify 'pos_embedding' (nn.Parameter) and nn.MultiheadAttention module. look forward to your reply.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/6545", "state": "open", "labels": [ "question", "module: models.quantization" ], "created_at": "2022-09-08T09:34:33Z", "updated_at": "2022-09-09T11:17:45Z", "user": "WZMIAOMIAO" }, { "repo": "pytorch/vision", "number": 6543, "title": "Inconsistent use of FrozenBatchNorm in Faster-RCNN?", "body": "Hi,\r\nwhile customizing and training a Faster-RCNN object detection model based on `torchvision.models.detection.faster_rcnn`, I've noticed that the pre-trained model of type `fasterrcnn_resnet50_fpn_v2` always use `nn.BatchNorm2d` normalization layers, while `fasterrcnn_resnet50_fpn` uses `torchvision.models.ops.misc.FrozenBatchNorm2d` when pretrained weights are loaded. I've noticed deteriorating performance of the V2 model when training a COCO pretrained model with low batch size. 
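As a quick sanity check, I counted which normalization layers each constructor actually ends up with (a rough sketch; it assumes the torchvision 0.13 weights API and that `FrozenBatchNorm2d` is importable from `torchvision.ops.misc`):

```python
import torch.nn as nn
from torchvision.ops.misc import FrozenBatchNorm2d
from torchvision.models.detection import fasterrcnn_resnet50_fpn, fasterrcnn_resnet50_fpn_v2

for ctor in (fasterrcnn_resnet50_fpn, fasterrcnn_resnet50_fpn_v2):
    model = ctor(weights="DEFAULT")  # load the COCO-pretrained weights
    counts = {
        "BatchNorm2d": sum(isinstance(m, nn.BatchNorm2d) for m in model.modules()),
        "FrozenBatchNorm2d": sum(isinstance(m, FrozenBatchNorm2d) for m in model.modules()),
    }
    print(ctor.__name__, counts)
```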
I am suspecting that this is related to the un-frozen `nn.BatchNorm2d` layers, and indeed, replacing `nn.BatchNorm2d` with `torchvision.models.ops.misc.FrozenBatchNorm2d` improves the performance for my task. \r\n\r\nThus, my question is: Is this discrepancy in normalization layers intentional, and if yes what could be other reasons for V2 model underperforming compared to the V1 model?\r\n\r\nI'm using pytorch 1.12, torchvision 0.13.\r\n\r\nThanks!\n\ncc @datumbox", "url": "https://github.com/pytorch/vision/issues/6543", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2022-09-07T08:16:00Z", "updated_at": "2024-06-23T16:24:37Z", "user": "MoPl90" }, { "repo": "pytorch/data", "number": 763, "title": "Online doc for DataLoader2/ReadingService and etc.", "body": "### \ud83d\udcda The doc issue\n\nAs we are preparing the next release with `DataLoader2`, we might need to add a few pages for `DL2`, `ReadingService` and all other related functionalities in https://pytorch.org/data/main/\r\n\r\n- [x] DataLoader2\r\n- [x] ReadingService\r\n- [x] Adapter\r\n- [ ] Linter\r\n- [x] Graph function\r\n- [ ] \n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/763", "state": "open", "labels": [ "documentation" ], "created_at": "2022-09-06T15:37:49Z", "updated_at": "2022-11-15T15:13:49Z", "comments": 4, "user": "ejguan" }, { "repo": "pytorch/TensorRT", "number": 1335, "title": "[Question? Bug?] Tried to allocate 166.38 GiB, seems weird", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nI got errors\r\n```\r\n model_new_trt = trt.compile(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch_tensorrt/_compile.py\", line 109, in compile\r\n return torch_tensorrt.ts.compile(ts_mod, inputs=inputs, enabled_precisions=enabled_precisions, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch_tensorrt/ts/_compiler.py\", line 113, in compile\r\n compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))\r\nRuntimeError: The following operation failed in the TorchScript interpreter.\r\nTraceback of TorchScript (most recent call last):\r\n %1 : bool = prim::Constant[value=0]()\r\n %2 : int[] = prim::Constant[value=[0, 0, 0]]()\r\n %4 : Tensor = aten::_convolution(%x, %w, %b, %s, %p, %d, %1, %2, %g, %1, %1, %1, %1)\r\n ~~~~ <--- HERE\r\n return (%4)\r\nRuntimeError: CUDA out of memory. Tried to allocate 166.38 GiB (GPU 0; 31.75 GiB total capacity; 1.31 GiB already allocated; 29.14 GiB free; 1.53 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n```\r\n\r\nConverting Script\r\n``` \r\n\r\n model_new_trt = trt.compile(\r\n model_new,\r\n inputs=[trt.Input(\r\n min_shape=[1, 1, 210, 748, 748],\r\n opt_shape=[1, 1, 210, 748, 748],\r\n max_shape=[1, 1, 210, 748, 748],\r\n dtype=torch.float32\r\n )],\r\n )\r\n```\r\n\r\nMy model takes 28GB on inference forward.\r\nBut the 166GB so huge, is this correct memory usage?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n - Docker : nvcr.io/nvidia/pytorch:22.07-py3\r\n - TRT : 1.2.0a0\r\n - GPU models and configuration: V100 32GB\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1335", "state": "closed", "labels": [ "question", "No Activity", "component: partitioning" ], "created_at": "2022-09-06T15:16:41Z", "updated_at": "2022-12-26T00:02:39Z", "user": "zsef123" }, { "repo": "pytorch/data", "number": 762, "title": "Allow Header(limit=None) ?", "body": "Not urgent at all, just a minor suggestion:\r\n\r\nIn the benchmark scripts I'm currently running I want to limit the number of samples in a data-pipe according to an `args.limit` CLI parameter. I'd be nice to be able to just write:\r\n\r\n```py\r\ndp = Header(dp, limit=args.limit)\r\n```\r\n\r\nand let `Header` be a no-op when `limit=None`. This might be a bit niche, and the alternative is to just protect the call in a `if` block, so I would totally understand if this isn't in scope (and it's really not urgent in any case)", "url": "https://github.com/meta-pytorch/data/issues/762", "state": "closed", "labels": [ "good first issue" ], "created_at": "2022-09-06T11:04:57Z", "updated_at": "2022-12-06T20:20:58Z", "comments": 4, "user": "NicolasHug" }, { "repo": "pytorch/pytorch", "number": 84553, "title": "[ONNX] Change how context is given to symbolic functions", "body": "Current symbolic functions can take a context as an input, pushing graphs to the second argument. To support these functions, we need to annotate the first argument as symbolic context and tell them part in call time by examining the annotations. \r\n\r\nChecking annotations is slow and this process complicates the logic in the caller. \r\n\r\nInstead we can wrap the graph object in a GraphContext, exposing all methods used from the graph and include the context in the GraphContext. This way all the old symbolic functions continue to work and we do not need to do the annotation checking if we know the symbolic function is a \"new function\". \r\n\r\nWe can edit a private field in the functions at registration time to tag them as \"new style\" symbolic functions that always takes a wrapped Graph with context object as input.\r\n\r\nThis also has the added benefit where we no longer need to monkey patch the Graph object to expose the g.op method. Instead the method can be defined in the graph context object. ", "url": "https://github.com/pytorch/pytorch/issues/84553", "state": "closed", "labels": [ "module: onnx", "triaged", "topic: improvements" ], "created_at": "2022-09-05T22:04:52Z", "updated_at": "2022-09-28T22:56:39Z", "user": "justinchuby" }, { "repo": "pytorch/TensorRT", "number": 1332, "title": "\u2753 [Question] Using torch-trt to test bert's qat quantitative model", "body": "## \u2753 Question\r\n\r\nWhen using torch-trt to test Bert's qat quantization ( https://zenodo.org/record/4792496#.YxGrdRNBy3J ) model, I encountered many FakeTensorQuantFunction nodes in the pass, and at the same time triggered many nodes that could not convert TRT, and split the graph into many subgraphs\r\n![image](https://user-images.githubusercontent.com/17673134/188449461-55791f9e-884a-4961-b861-7b82110c0db2.png)\r\n\r\n![image](https://user-images.githubusercontent.com/17673134/188449320-cf8e345d-25af-4d0f-b7af-78e94750da73.png)\r\n\r\nquestion:\r\n1. Can you tell me how to explain the nodes that appear in the pass, and how to explain the symbols (^) in front of these nodes?\r\n2. 
How can these quantization nodes be converted into qat nodes corresponding to torch-trt\uff08 https://github.com/pytorch/TensorRT/blob/master/core/conversion/converters/impl/quantization.cpp \uff09?", "url": "https://github.com/pytorch/TensorRT/issues/1332", "state": "closed", "labels": [ "question", "No Activity", "component: quantization" ], "created_at": "2022-09-05T12:35:41Z", "updated_at": "2023-03-25T00:02:27Z", "user": "lixiaolx" }, { "repo": "pytorch/serve", "number": 1851, "title": "High utilization of hardware ", "body": "HI, I'm trying to use torchserve as a backend with a custom hardware setup. How do you suggest to run such that the hardware is maximally utilized? For example I tried using the benchmarks-ab.py script to test the server for throughput on resnet18 but only achieved ~200 requests per second (tried different batch sizes) while the hardware is capable of crunching at least 10,000 images per second.\r\n\r\nThanks for any help.", "url": "https://github.com/pytorch/serve/issues/1851", "state": "closed", "labels": [ "question", "triaged" ], "created_at": "2022-09-05T05:15:29Z", "updated_at": "2022-09-08T09:13:40Z", "user": "Vert53" }, { "repo": "pytorch/data", "number": 761, "title": "Would TorchData provide GPU support for loading and preprocessing images? ", "body": "### \ud83d\ude80 The feature\n\nWould TorchData provide GPU support for loading and preprocessing images? \n\n### Motivation, pitch\n\nWhen I am learning PyTorch, I find, currently, it do not support using GPU to load images or any other transforms of preprocessing and encoding data.\r\nI want to know whether this would be taken into consideration into the design of TorchData.\n\n### Alternatives\n\nCurrently, NVIDIA-DALI is an impressive alternative for loading and preprocessing images with GPU.\n\n### Additional context\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/761", "state": "open", "labels": [ "topic: new feature", "triaged" ], "created_at": "2022-09-03T09:16:30Z", "updated_at": "2022-11-21T20:06:25Z", "comments": 5, "user": "songyuc" }, { "repo": "pytorch/serve", "number": 1842, "title": "initial parameters transmit", "body": "### \ud83d\ude80 The feature\n\nhow transmit the initial parameters from the first model to laters in workflow.\n\n### Motivation, pitch\n\nhow transmit the initial parameters from the first model to laters in workflow.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/1842", "state": "open", "labels": [ "question", "triaged_wait", "workflowx" ], "created_at": "2022-09-02T14:51:38Z", "updated_at": "2022-09-06T10:42:39Z", "user": "jack-gits" }, { "repo": "pytorch/serve", "number": 1841, "title": "how to register a workflow directly when docker is started.", "body": "### \ud83d\ude80 The feature\n\nhow to register a workflow directly when docker is started.\n\n### Motivation, pitch\n\nhow to register a workflow directly when docker is started.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/1841", "state": "open", "labels": [ "help wanted", "triaged", "workflowx" ], "created_at": "2022-09-02T14:21:34Z", "updated_at": "2023-11-15T06:49:21Z", "user": "jack-gits" }, { "repo": "pytorch/TensorRT", "number": 1328, "title": "\u2753 [Question] How do you ....? ", "body": "## \u2753 Question\r\n\r\nHi,\r\n\r\nI am trying to use torch-tensorrt to optimize my model for inference. 
I first compile the model with torch.jit.script and then covnert it to tesnsorrt. \r\n\r\n```shell\r\nmodel = MoViNet(movinet_c.MODEL.MoViNetA0)\r\nmodel.eval().cuda()\r\nscripted_model = torch.jit.script(model)\r\ntrt_model = torch_tensorrt.compile(model,\r\n inputs = [torch_tensorrt.Input((8, 3, 16, 344, 344))],\r\n enabled_precisions= {torch.half}, # Run with FP16\r\n workspace_size= 1 << 20,\r\n truncate_long_and_double=True,\r\n require_full_compilation=True, #True\r\n )\r\n```\r\n\r\nHowever, the tensorrt model has almost the same speed as the regular PyTorch model. And the torchscript model is about 2 times slower:\r\n\r\n```shell\r\ncur_time = time.time()\r\nwith torch.inference_mode():\r\n for _ in range(100):\r\n x = torch.rand(4, 3, 16, 344, 344).cuda()\r\n detections_batch = model(x)\r\nprint(time.time() - cur_time) #11.20 seconds\r\n\r\ncur_time = time.time()\r\nwith torch.inference_mode():\r\n scripted_model(x)\r\n for _ in range(100):\r\n x = torch.rand(4, 3, 16, 344, 344).cuda()\r\n detections_batch = scripted_model(x)\r\nprint(time.time() - cur_time) #23.76 seconds\r\n\r\ncur_time = time.time()\r\nwith torch.inference_mode():\r\n trt_model(x)\r\n for _ in range(100):\r\n x = torch.rand(4, 3, 16, 344, 344).cuda()\r\n detections_batch = trt_model(x)\r\nprint(time.time() - cur_time) #11.01 seconds \r\n```\r\nI'd really appreciate it if someone can help me understand what could be causing this issue.\r\n\r\n## What you have already tried\r\n\r\nI tried compiling and converting the model layer by layer and it doesn't seem like there is a specific operation or layer that takes too much time, however, each layer adds a little bit (0.5 seconds) to the runtime of the scripted model while it only adds about 0.01 to the runtime of the regular PyTorch model. \r\n\r\n## Environment\r\n\r\nTorch-TensorRT Version: 1.1.0\r\nPyTorch Version: 1.11.0+cu113\r\nCPU Architecture: x86_64\r\nOS: Ubuntu 20.04\r\nHow you installed PyTorch: pip\r\nPython version: 3.8\r\nCUDA version: 11.3\r\nGPU models and configuration: NVIDIA GeForce RTX 3070\r\n\r\n## Additional context\r\n\r\nThis is the model. 
It's taken from here: [MoViNet-pytorch/models.py at main \u00b7 Atze00/MoViNet-pytorch \u00b7 GitHub](https://github.com/Atze00/MoViNet-pytorch/blob/main/movinets/models.py) \r\nI made some changes to resolve the errors I was getting from torch.jit.script and torch-tensorrt.\r\n```shell\r\nclass Swish(nn.Module):\r\n def __init__(self) -> None:\r\n super().__init__()\r\n\r\n def forward(self, x: Tensor) -> Tensor:\r\n return x * torch.sigmoid(x)\r\n\r\nclass Conv3DBNActivation(nn.Sequential):\r\n def __init__(\r\n self,\r\n in_planes: int,\r\n out_planes: int,\r\n *,\r\n kernel_size: Union[int, Tuple[int, int, int]],\r\n padding: Union[int, Tuple[int, int, int]],\r\n stride: Union[int, Tuple[int, int, int]] = 1,\r\n groups: int = 1,\r\n norm_layer: Optional[Callable[..., nn.Module]] = None,\r\n activation_layer: Optional[Callable[..., nn.Module]] = None,\r\n **kwargs: Any,\r\n ) -> None:\r\n super().__init__()\r\n\r\n kernel_size = _triple(kernel_size)\r\n stride = _triple(stride)\r\n padding = _triple(padding)\r\n if norm_layer is None:\r\n norm_layer = nn.Identity\r\n if activation_layer is None:\r\n activation_layer = nn.Identity\r\n self.kernel_size = kernel_size\r\n self.stride = stride\r\n\r\n dict_layers = OrderedDict({\r\n \"conv3d\": nn.Conv3d(in_planes, out_planes,\r\n kernel_size=kernel_size,\r\n stride=stride,\r\n padding=padding,\r\n groups=groups,\r\n **kwargs),\r\n \"norm\": norm_layer(out_planes, eps=0.001),\r\n \"act\": activation_layer()\r\n })\r\n\r\n self.out_channels = out_planes\r\n self.seq_layer = nn.Sequential(dict_layers)\r\n # super(Conv3DBNActivation, self).__init__(dict_layers)\r\n \r\n def forward(self, input):\r\n return self.seq_layer(input)\r\n\r\nclass ConvBlock3D(nn.Module):\r\n def __init__(\r\n self,\r\n in_planes: int,\r\n out_planes: int,\r\n *,\r\n kernel_size: Union[int, Tuple[int, int, int]],\r\n conv_type: str,\r\n padding: Union[int, Tuple[int, int, int]] = 0,\r\n stride: Union[int, Tuple[int, int, int]] = 1,\r\n norm_layer: Optional[Callable[..., nn.Module]] = None,\r\n activation_layer: Optio", "url": "https://github.com/pytorch/TensorRT/issues/1328", "state": "closed", "labels": [ "question", "No Activity", "performance" ], "created_at": "2022-08-31T15:06:50Z", "updated_at": "2022-12-12T00:03:55Z", "user": "ghazalehtrb" }, { "repo": "pytorch/data", "number": 756, "title": "[RFC] More support for functionalities from `itertools`", "body": "### \ud83d\ude80 The feature\r\n\r\nOver time, we have received more and more request for additional `IterDataPipe` (e.g. #648, #754, plus many more). Sometimes, these functionalities are very similar to what is already implemented in [`itertools`](https://docs.python.org/3/library/itertools.html) and [`more-itertools`](https://github.com/more-itertools/more-itertools).\r\n\r\nKeep adding more `IterDataPipe` one at a time seems unsustainable(?). Perhaps, we should draw a line somewhere or provide better interface for users to directly use functions from `itertools`. At the same time, providing APIs with names that are already familiar to Python users can improve the user experience. As @msaroufim mentioned, the Core library does aim to match operators with what is available in `numpy`.\r\n\r\nWe will need to decide on:\r\n1. Coverage - which set of functionalities should we officially in `torchdata`?\r\n2. Implementation - how will users be able to invoke those functions?\r\n\r\n### Coverage\r\n\r\n0. Arbitrary based on estimated user requests/contributions\r\n1. 
`itertools` ~20 functions (some of which already exist in `torchdata`)\r\n - **This seems common enough and reasonable?**\r\n2. `more-itertools` ~100 functions?\r\n - This is probably too much.\r\n\r\nIf we provide a good wrapper, we might not need to worry about the actual coverage too much?\r\n\r\n### Implementation\r\n\r\n0. Keep adding each function as a new `IterDataPipe`\r\n - This is what we have been doing. We can keep doing that but the cost of maintenance will increase over time.\r\n\r\nCurrently, you can use `IterableWrapper`, but it doesn't always work well since it accepts an iterable, and an iterable doesn't guarantee to restart if you call `iter()` on it again.\r\n\r\n```python\r\nfrom torchdata.datapipes.iter import IterableWrapper\r\nfrom itertools import accumulate\r\n\r\nsource_dp = IterableWrapper(range(10))\r\ndp3 = IterableWrapper(accumulate(source_dp), deepcopy=False)\r\nlist(dp3) # [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]\r\nlist(dp3) # []\r\n```\r\nOne idea to work around that is to:\r\n1. Provide a different wrapper that accepts a `Callable` that returns an `Iterable`, which will be iterated over\r\n - Users can use `functool.partial` to pass in arguments (including `DataPipes` if desired)\r\n - **I personally think we should do this since the cost of doing so is low and unlocks other possibilities.**\r\n\r\n2. Create an `Itertools` DataPipe that delegates other DataPipes, it might look some like this:\r\n\r\n```python\r\nclass ItertoolsIterDataPipe(IterDataPipe):\r\n\r\n supported_operations: Dict[str, Callable] = {\r\n \"repeat\": Repeater,\r\n \"chain\": Concater,\r\n \"filterfalse\": filter_false_constructor,\r\n # most/all 20 `itertools` functions here?\r\n }\r\n\r\n def __new__(cls, name, *args, **kwargs):\r\n if name not in cls.supported_operations:\r\n raise RuntimeError(\"Operator is not supported\")\r\n constructor = cls.supported_operations[name]\r\n return constructor(*args, **kwargs)\r\n\r\nsource_dp = IterableWrapper(range(10))\r\ndp1 = source_dp.filter(lambda x: x >= 5)\r\ndp2 = ItertoolsIterDataPipe(\"filterfalse\", source_dp, lambda x: x >= 5)\r\n\r\nlist(dp1) # [5, 6, 7, 8, 9]\r\nlist(dp2) # [0, 1, 2, 3, 4]\r\n```\r\n\r\nThese options are incomplete. 
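For option 1 above, a minimal sketch of what such a factory-based wrapper could look like (the class name is made up for illustration):

```python
from functools import partial
from itertools import accumulate
from typing import Callable, Iterable, TypeVar

from torchdata.datapipes.iter import IterDataPipe

T_co = TypeVar("T_co", covariant=True)


class IterableFactoryWrapper(IterDataPipe[T_co]):
    r"""Wraps a callable that produces a fresh iterable each time it is called."""

    def __init__(self, factory: Callable[[], Iterable[T_co]]) -> None:
        self.factory = factory

    def __iter__(self):
        # A new iterable is created per epoch, so re-iteration works,
        # unlike wrapping an already-exhausted iterator directly.
        yield from self.factory()


dp = IterableFactoryWrapper(partial(accumulate, range(10)))
print(list(dp))  # [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
print(list(dp))  # same result on a second pass
```

This directly addresses the `IterableWrapper(accumulate(...))` re-iteration problem shown above, since the `itertools` call is re-executed on every `__iter__`.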
If you have more ideas, please comment below.\r\n\r\n### Motivation, pitch\r\n\r\nThese functionalities are commonly used and can be valuable for users.\r\n\r\n### Additional context\r\n\r\nCredit to @NicolasHug @msaroufim @pmeier and many others for past feedback and discussion related to this topic.\r\n\r\ncc: @VitalyFedyunin @ejguan ", "url": "https://github.com/meta-pytorch/data/issues/756", "state": "open", "labels": [], "created_at": "2022-08-30T21:30:19Z", "updated_at": "2022-09-08T06:54:28Z", "comments": 5, "user": "NivekT" }, { "repo": "pytorch/TensorRT", "number": 1322, "title": "Error when I'm trying to use torch-tensorrt", "body": "## \u2753 Question\r\n\r\nHi\r\nI'm trying to use torch-tensorrt with the pre built ngc container\r\n\r\n\r\nI built it with 22.04 branch and with 22.04 version of ngc\r\nMy versions are:\r\ncuda 10.2\r\ntorchvision 0.13.1\r\ntorch 1.12.1\r\n\r\nBut I get that error:\r\nTraceback (most recent call last):\r\n File \"main.py\", line 31, in <module>\r\n import torch_tensorrt\r\n File \"/usr/local/lib/python3.8/dist-packages/torch_tensorrt/__init__.py\", line 11, in <module>\r\n from torch_tensorrt._compile import *\r\n File \"/usr/local/lib/python3.8/dist-packages/torch_tensorrt/_compile.py\", line 2, in <module>\r\n from torch_tensorrt import _enums\r\n File \"/usr/local/lib/python3.8/dist-packages/torch_tensorrt/_enums.py\", line 1, in <module>\r\n from torch_tensorrt._C import dtype, DeviceType, EngineCapability, TensorFormat\r\nImportError: /usr/local/lib/python3.8/dist-packages/torch_tensorrt/lib/libtorchtrt.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE\r\n\r\n\r\n\r\nThank's!!", "url": "https://github.com/pytorch/TensorRT/issues/1322", "state": "closed", "labels": [ "question", "channel: NGC" ], "created_at": "2022-08-30T13:09:18Z", "updated_at": "2022-12-15T17:43:52Z", "user": "EstherMalam" }, { "repo": "pytorch/functorch", "number": 1011, "title": "memory_efficient_fusion leads to RuntimeError for higher-order gradients calculation. RuntimeError: You are attempting to call Tensor.requires_grad_() ", "body": "Hi All,\r\n\r\nI've tried improving the speed of my code via using `memory_efficient_fusion`, however, it leads to `Tensor.requires_grad_()` error and I have no idea why. The error is as follows,\r\n```\r\nRuntimeError: You are attempting to call Tensor.requires_grad_() (or perhaps using torch.autograd.functional.* APIs) inside of a function being transformed by a functorch transform. This is unsupported, please attempt to use the functorch transforms (e.g. grad, vjp, jacrev, jacfwd, hessian) or call requires_grad_() outside of a function being transformed instead.\r\n```\r\n\r\nI've attached a 'minimal' reproducible example of this behaviour below. I've tried a few different things but nothing's seems to have worked. I did see in #840 `memory_efficient_fusion` is done within a context manager, however, when using that I get the same error. \r\n\r\nThanks in advance! \r\n\r\nEDIT: When I tried running it, it tried to use the `networkx` package but that wasn't installed by default. So, I had to manually install that (which wasn't a problem), just not sure if installing from source should also include install those packages as well! 
\r\n\r\n```\r\nimport torch\r\nfrom torch import nn\r\n\r\nimport functorch\r\nfrom functorch import make_functional, vmap, jacrev, grad\r\nfrom functorch.compile import memory_efficient_fusion\r\n\r\nimport time\r\n\r\n_ = torch.manual_seed(1234)\r\n\r\n#version info\r\nprint(\"PyTorch version: \", torch.__version__)\r\nprint(\"CUDA version: \", torch.version.cuda)\r\nprint(\"FuncTorch version: \", functorch.__version__)\r\n\r\n#=============================================#\r\n\r\n#time with torch synchronization\r\ndef sync_time() -> float:\r\n torch.cuda.synchronize()\r\n return time.perf_counter()\r\n\r\nclass model(nn.Module):\r\n\r\n def __init__(self, num_inputs, num_hidden):\r\n super(model, self).__init__()\r\n \r\n self.num_inputs=num_inputs\r\n self.func = nn.Tanh()\r\n \r\n self.fc1 = nn.Linear(2, num_hidden)\r\n self.fc2 = nn.Linear(num_hidden, num_inputs)\r\n \r\n def forward(self, x):\r\n \"\"\"\r\n Takes x in [B,A,1] and maps it to sign/logabsdet value in Tuple([B,], [B,])\r\n \"\"\"\r\n \r\n idx=len(x.shape) #creates args for repeat if vmap is used or not\r\n rep=[1 for _ in range(idx)]\r\n rep[-2] = self.num_inputs\r\n g = x.mean(dim=(idx-2), keepdim=True).repeat(*rep)\r\n f = torch.cat((x,g), dim=-1)\r\n\r\n h = self.func(self.fc1(f))\r\n \r\n mat = self.fc2(h)\r\n sgn, logabs = torch.linalg.slogdet(mat)\r\n return sgn, logabs\r\n\r\n#=============================================#\r\n\r\nB=4096 #batch\r\nN=2 #input nodes\r\nH=64 #number of hidden nodes\r\ndevice = torch.device('cuda')\r\n\r\nx = torch.randn(B, N, 1, device=device) #input data\r\n\r\nnet = model(N, H) #our model\r\nnet=net.to(device)\r\n\r\nfnet, params = make_functional(net)\r\n\r\ndef calc_logabs(params, x):\r\n _, logabs = fnet(params, x)\r\n return logabs\r\n\r\ndef calc_dlogabs_dx(params, x):\r\n dlogabs_dx = jacrev(func=calc_logabs, argnums=1)(params, x)\r\n return dlogabs_dx, dlogabs_dx #return aux\r\n\r\ndef local_kinetic_from_log_vmap(params, x):\r\n d2logabs_dx2, dlogabs_dx = jacrev(func=calc_dlogabs_dx, argnums=1, has_aux=True)(params, x)\r\n _local_kinetic = -0.5*(d2logabs_dx2.diagonal(0,-4,-2).sum() + dlogabs_dx.pow(2).sum())\r\n return _local_kinetic \r\n\r\n#memory efficient fusion here\r\n#with torch.jit.fuser(\"fuser2\"): is this needed (from functorch/issues/840)\r\nps_elocal = grad(local_kinetic_from_log_vmap, argnums=0)\r\nps_elocal_fusion = memory_efficient_fusion(grad(local_kinetic_from_log_vmap, argnums=0))\r\n\r\n#ps_elocal_fusion(params, x) #no vmap attempt (throws size mis-match error)\r\n\r\nt1=sync_time()\r\n\r\nvmap(ps_elocal, in_dims=(None, 0))(params, x) #works fine \r\n\r\nt2=sync_time()\r\n\r\nvmap(ps_elocal_fusion, in_dims=(None, 0))(params, x) #error (crashes on this line)\r\n\r\nt3=sync_time()\r\n\r\nprint(\"Laplacian (standard): %4.2e (s)\",t2-t1)\r\nprint(\"Laplacian (fusion): %4.2e (s)\",t3-t2)\r\n```", "url": "https://github.com/pytorch/functorch/issues/1011", "state": "open", "labels": [], "created_at": "2022-08-28T16:56:02Z", "updated_at": "2022-12-22T19:59:22Z", "comments": 3, "user": "AlphaBetaGamma96" }, { "repo": "pytorch/functorch", "number": 1010, "title": "Multiple gradient calculation for single sample", "body": "[According to the README](https://github.com/pytorch/functorch#working-with-nn-modules-make_functional-and-friends), we are able to calculate **per-sample-gradients** with functorch.\r\n\r\nBut what if we want to get multiple gradients for a **single sample**? 
For example, imagine that we are calculating multiple losses.\r\n\r\nWe can split each loss calculation as a different sample, but that implementation is inefficient, especially when the forward pass is expensive. Can we at least re-use forward computations?", "url": "https://github.com/pytorch/functorch/issues/1010", "state": "closed", "labels": [], "created_at": "2022-08-28T14:31:11Z", "updated_at": "2023-01-08T10:23:04Z", "comments": 23, "user": "JoaoLages" }, { "repo": "pytorch/TensorRT", "number": 1317, "title": "caffe2", "body": "Why don't you install caffe2 with pytorch in NGC container 22.08?", "url": "https://github.com/pytorch/TensorRT/issues/1317", "state": "closed", "labels": [ "question", "channel: NGC" ], "created_at": "2022-08-27T15:45:17Z", "updated_at": "2023-01-03T18:30:26Z", "user": "s-mohaghegh97" }, { "repo": "pytorch/serve", "number": 1819, "title": "How to transfer files to a custom handler with curl command", "body": "I have created a custom handler that inputs and outputs wav files. \r\nThe code is as follows\r\n```Python\r\n# custom handler file\r\n\r\n# model_handler.py\r\n\r\n\"\"\"\r\nModelHandler defines a custom model handler.\r\n\"\"\"\r\nimport os\r\nimport soundfile\r\nfrom espnet2.bin.enh_inference import *\r\n\r\nfrom ts.torch_handler.base_handler import BaseHandler\r\n\r\nclass ModelHandler(BaseHandler):\r\n \"\"\"\r\n A custom model handler implementation.\r\n \"\"\"\r\n\r\n def __init__(self):\r\n self._context = None\r\n self.initialized = False\r\n self.model = None\r\n self.device = None\r\n\r\n def initialize(self, context):\r\n \"\"\"\r\n Invoke by torchserve for loading a model\r\n :param context: context contains model server system properties\r\n :return:\r\n \"\"\"\r\n\r\n # load the model\r\n self.manifest = context.manifest\r\n\r\n properties = context.system_properties\r\n model_dir = properties.get(\"model_dir\")\r\n self.device = torch.device(\"cuda:\" + str(properties.get(\"gpu_id\")) if torch.cuda.is_available() else \"cpu\")\r\n\r\n # Read model serialize/pt file\r\n serialized_file = self.manifest['model']['serializedFile']\r\n model_pt_path = os.path.join(model_dir, serialized_file)\r\n\r\n if not os.path.isfile(model_pt_path):\r\n raise RuntimeError(\"Missing the model.pt file\")\r\n\r\n self.model = SeparateSpeech(\"./train_enh_transformer_tf.yaml\", \"./valid.loss.best.pth\", normalize_output_wav=True)\r\n\r\n self.initialized = True\r\n\r\n def preprocess(self,data):\r\n audio_data, rate = soundfile.read(data)\r\n preprocessed_data = audio_data[np.newaxis, :]\r\n\r\n return preprocessed_data\r\n\r\n def inference(self, model_input):\r\n model_output = self.model(model_input)\r\n return model_output\r\n\r\n def postprocess(self, inference_output):\r\n \"\"\"\r\n Return inference result.\r\n :param inference_output: list of inference output\r\n :return: list of predict results\r\n \"\"\"\r\n # Take output from network and post-process to desired format\r\n postprocess_output = inference_output\r\n #wav ni suru\r\n return postprocess_output\r\n\r\n def handle(self, data, context):\r\n model_input = self.preprocess(data)\r\n model_output = self.inference(model_input)\r\n return self.postprocess(model_output)\r\n```\r\n\r\nI transferred the wav file to torhserve with the following command\r\n> curl --data-binary @Mix.wav --noproxy '*' http://127.0.0.1:8080/predictions/denoise_transformer -v\r\n\r\nHowever, I got the following response\r\n```\r\n* Trying 127.0.0.1...\r\n* TCP_NODELAY set\r\n* Connected to 127.0.0.1 (127.0.0.1) port 
8080 (#0)\r\n> POST /predictions/denoise_transformer HTTP/1.1\r\n> Host: 127.0.0.1:8080\r\n> User-Agent: curl/7.58.0\r\n> Accept: */*\r\n> Content-Length: 128046\r\n> Content-Type: application/x-www-form-urlencoded\r\n> Expect: 100-continue\r\n>\r\n< HTTP/1.1 100 Continue\r\n* We are completely uploaded and fine\r\n< HTTP/1.1 500 Internal Server Error\r\n< content-type: application/json\r\n< x-request-id: 445155a4-5971-490a-ba7c-206f8eda5ea0\r\n< Pragma: no-cache\r\n< Cache-Control: no-cache; no-store, must-revalidate, private\r\n< Expires: Thu, 01 Jan 1970 00:00:00 UTC\r\n< content-length: 89\r\n< connection: close\r\n<\r\n{\r\n \"code\": 500,\r\n \"type\": \"ErrorDataDecoderException\",\r\n \"message\": \"Bad end of line\"\r\n}\r\n* Closing connection 0\r\n```\r\n\r\nWhat is wrong?\r\n\r\nI have confirmed that the following command returns the response.\r\n> curl --noproxy '*' http://127.0.0.1:8081/models\r\n```\r\n{\r\n \"models\": [\r\n {\r\n \"modelName\": \"denoise_transformer\",\r\n \"modelUrl\": \"denoise_transformer.mar\"\r\n }\r\n ]\r\n}\r\n```\r\n", "url": "https://github.com/pytorch/serve/issues/1819", "state": "closed", "labels": [ "triaged_wait", "support" ], "created_at": "2022-08-27T10:30:27Z", "updated_at": "2022-08-30T23:40:53Z", "user": "Shin-ichi-Takayama" }, { "repo": "pytorch/data", "number": 754, "title": "A more powerful Mapper than can restrict function application to only part of the datapipe items?", "body": "We often have datapipes that return tuples `(img, target)` where we just want to call transformations on the img, but not the target. Sometimes it's the opposite: I want to apply a function to the target, and not to the img.\r\nThis usually forces us to write wrappers that \"passthrough\" either the img or the target. For example:\r\n\r\n```py\r\n\r\ndef decode_img_only(data): # boilerplate wrapper\r\n img, target = data\r\n img = decode(img)\r\n return img, data\r\n\r\ndef resize_img_only(data): # boilerplate wrapper\r\n img, target = data\r\n img = resize(img)\r\n return img, data\r\n\r\ndef add_label_noise(data): # boilerplate wrapper\r\n img, target = data\r\n target = make_noisy_label(target)\r\n return img, data\r\n\r\ndp = ...\r\ndp = dp.map(decode_img_only).map(resize_img_only).map(add_label_noise)\r\n```\r\n\r\nPerhaps a more convenient way of doing this would be to implement something similar to WebDataset's `map_dict` and `map_tuple`? This would avoid all the boilerplate wrappers. 
For example we could imagine the code above to simply be:\r\n\r\n```py\r\ndp = ...\r\ndp = dp.map_tuple(decode, None).map(resize, None).map(None, make_noisy_label)\r\n# or even\r\ndp = dp.map_tuple(decode, None).map(resize, make_noisy_label)\r\n\r\n# if the datapipes was returning a dict with \"img\" and \"target\" keys this could also be\r\n\r\ndp = dp.map_dict(\"img\"=decode).map_dict(\"img\"=decode, \"target\"=make_noisy_label)\r\n```\r\n\r\nI even think it might be possible to implement all of `map_dict()` and `map_tuple()` functionalities withing the `.map()` function:\r\n- 1 arg == current `map()`\r\n- 1+ arg == `map_tuple()`\r\n- keyword arg == `map_dict()`\r\n\r\nCC @pmeier and @msaroufim to whom this might be of interest", "url": "https://github.com/meta-pytorch/data/issues/754", "state": "open", "labels": [], "created_at": "2022-08-26T21:16:32Z", "updated_at": "2022-08-30T21:48:10Z", "comments": 5, "user": "NicolasHug" }, { "repo": "pytorch/torchx", "number": 589, "title": "Add per workspace runopts/config", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\n\r\nCurrently Workspaces piggyback on the config options for the scheduler. This means that every scheduler is deeply tied to the workspace and we have to copy the options to every runner.\r\n\r\nhttps://github.com/pytorch/torchx/blob/main/torchx/schedulers/kubernetes_scheduler.py#L654-L658\r\n\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\n1. Add a new method to the Workspace base class that allows specifying runopts from them\r\n\r\n```python\r\n@abstractmethod\r\ndef workspace_run_opts(self) -> runopts:\r\n ...\r\n```\r\n\r\n2. Update runner to call the workspace runopts method\r\n\r\n3. Migrate all `image_repo` DockerWorkspace runopts to the class.\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n\r\nhttps://github.com/pytorch/torchx/blob/main/torchx/schedulers/api.py#L187\r\nhttps://github.com/pytorch/torchx/blob/main/torchx/schedulers/docker_scheduler.py\r\nhttps://github.com/pytorch/torchx/blob/main/torchx/workspace/docker_workspace.py\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/589", "state": "open", "labels": [ "enhancement", "module: runner", "docker" ], "created_at": "2022-08-25T18:18:35Z", "updated_at": "2022-08-25T18:18:35Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/pytorch", "number": 84014, "title": "fill_ OpInfo code not used, also, doesn't test the case where the second argument is a Tensor", "body": "Two observations:\r\n1. `sample_inputs_fill_` is no longer used. Can be deleted (https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/common_methods_invocations.py#L1798-L1807)\r\n2. The new OpInfo for fill doesn't actually test the `tensor.fill_(other_tensor)` case. 
Previously we did test this, as shown by `sample_inputs_fill_`\n\ncc @mruberry", "url": "https://github.com/pytorch/pytorch/issues/84014", "state": "open", "labels": [ "module: tests", "triaged" ], "created_at": "2022-08-24T20:39:11Z", "updated_at": "2022-08-24T20:40:39Z", "user": "zou3519" }, { "repo": "pytorch/examples", "number": 1040, "title": "In example DCGAN, curl timed out ", "body": "Your issue may already be reported!\r\nPlease search on the [issue tracker](https://github.com/pytorch/serve/examples) before creating one.\r\n\r\n## Context\r\n<!--- How has this issue affected you? What are you trying to accomplish? -->\r\n<!--- Providing context helps us come up with a solution that is most useful in the real world -->\r\n* Pytorch version: 1.12.1\r\n* Operating System and version: 20.04.4 LTS (Focal Fossa)\r\n\r\n## Your Environment\r\n<!--- Include as many relevant details about the environment you experienced the bug in -->\r\n* Installed using source? [yes/no]: no\r\n* Are you planning to deploy it using docker container? [yes/no]: yes\r\n* Is it a CPU or GPU environment?: GPU\r\n* Which example are you using: DCGAN\r\n* Link to code or data to repro [if any]:\r\n\r\n## Expected Behavior\r\n<!--- If you're describing a bug, tell us what should happen -->\r\ndcgan finishes without errors\r\n\r\n## Current Behavior\r\n<!--- If describing a bug, tell us what happens instead of the expected behavior -->\r\ndcgan fails with exceptions\r\n\r\n## Possible Solution\r\n<!--- Not obligatory, but suggest a fix/reason for the bug -->\r\n\r\n## Steps to Reproduce\r\n<!--- Provide a link to a live example, or an unambiguous set of steps to -->\r\n<!--- reproduce this bug. Include code to reproduce, if relevant -->\r\n1. cd examples\r\n2. bash run_python_examples.sh \"install_deps, dcgan\"\r\n\r\n## Failure Logs [if any]\r\n<!--- Provide any relevant log snippets or files here. -->\r\n```\r\nDownloading classroom train set\r\n--\r\n181 | curl: /opt/conda/lib/libcurl.so.4: no version information available (required by curl)\r\n182 | % Total % Received % Xferd Average Speed Time Time Time Current\r\n183 | Dload Upload Total Spent Left Speed\r\n...\r\ncurl: (18) transfer closed with 3277022655 bytes remaining to read\r\n\r\n...\r\nSome examples failed:\r\n\r\ncouldn't unzip classroom\r\n```\r\n\r\nI know this is a thrid-party repo issue, which I have already raised in [lsun repo](https://github.com/fyu/lsun/issues/46)\r\nIs it possible that you could have a solution on your end? The request speed of the domain http://dl.yf.io is just slow in general.\r\n\r\nThank you!", "url": "https://github.com/pytorch/examples/issues/1040", "state": "open", "labels": [ "data" ], "created_at": "2022-08-23T18:34:09Z", "updated_at": "2022-08-24T02:46:44Z", "comments": 1, "user": "ShiboXing" }, { "repo": "pytorch/TensorRT", "number": 1303, "title": "How to correctly format input for Fp16 inference using torch-tensorrt C++", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. 
-->\r\n\r\nHi, I am using the following to export a torch scripted model to Fp16 tensorrt which will then be used in a C++ environment.\r\n\r\n`network.load_state_dict(torch.load(path_weights, map_location=\"cuda:0\"))\r\n network.eval().cuda()\r\n\r\n dummy_input = torch.rand(1, 6, 320, 224).cuda()\r\n network_traced = torch.jit.trace(network, dummy_input) # converting to plain torchscript\r\n\r\n # convert/ compile to trt\r\n compile_settings = {\r\n \"inputs\": [torchtrt.Input([1, 6, 320, 224])],\r\n \"enabled_precisions\": {torch.half},\r\n \"workspace_size\": 6 << 22\r\n }\r\n\r\n trt_ts_module = torchtrt.compile(network_traced, inputs=[torchtrt.Input((1, 6, 320, 224), dtype=torch.half)],\r\n enabled_precisions={torch.half},\r\n workspace_size=6<<22)\r\n torch.jit.save(trt_ts_module, trt_ts_save_path)`\r\n\r\nIs this correct?\r\n\r\n\r\nIf yes, then what is the correct way to cast the input tensor in C++?\r\nDo I need to convert it to torck::kHalf explicitly? Or can the inputs stay as FP32\r\n\r\nPlease let me know.\r\n\r\nHere is my code for loading the CNN for inference:\r\n\r\n`try {\r\n // Deserialize the ScriptModule from a file using torch::jit::load().\r\n trt_ts_mod_cnn = torch::jit::load(trt_ts_module_path);\r\n trt_ts_mod_cnn.to(torch::kCUDA);\r\n cout << trt_ts_mod_cnn.type() << endl;\r\n cout << trt_ts_mod_cnn.dump_to_str(true, true, false) << endl;\r\n } catch (const c10::Error& e) {\r\n std::cerr << \"error loading the model from : \" << trt_ts_module_path << std::endl;\r\n // return -1;\r\n }\r\n auto inBEVInference = torch::rand({1, bevSettings.N_CHANNELS_BEV, bevSettings.N_ROWS_BEV, bevSettings.N_COLS_BEV},\\\r\n {at::kCUDA}).to(torch::kFloat32);\r\n // auto inBEVInference = torch::rand({1, bevSettings.N_CHANNELS_BEV, bevSettings.N_ROWS_BEV, bevSettings.N_COLS_BEV},\\\r\n // {at::kCUDA}).to(torch::kFloat16);\r\n std::vector<torch::jit::IValue> trt_inputs_ivalues;\r\n trt_inputs_ivalues.push_back(inBEVInference);\r\n auto outputs = trt_ts_mod_cnn.forward(trt_inputs_ivalues).toTuple();\r\n auto kp = outputs->elements()[0].toTensor();\r\n auto hwl = outputs->elements()[1].toTensor();\r\n auto rot = outputs->elements()[2].toTensor();\r\n auto dxdy = outputs->elements()[3].toTensor();\r\n cout << \"Size KP out -> \" << kp.sizes() << endl;\r\n cout << \"Size HWL out -> \" << hwl.sizes() << endl;\r\n cout << \"Size ROT out -> \" << rot.sizes() << endl;\r\n cout << \"Size DXDY out -> \" << dxdy.sizes() << endl;`\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.11.0+cu113\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Linux, Ubuntu 20.04, docker container\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives: local\r\n - Python version: 3.8.10\r\n - CUDA version: Cuda compilation tools, release 11.4, V11.4.152 (on the linux system)\r\n - GPU models and configuration: RTX2080 MaxQ\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1303", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2022-08-23T14:05:05Z", "updated_at": "2022-12-04T00:02:10Z", "user": "SM1991CODES" }, { "repo": "pytorch/examples", "number": 1039, "title": "FileNotFoundError: Couldn't find any class folder in /content/train2014.", "body": "Your issue may already be reported!\r\nPlease search on the [issue tracker](https://github.com/pytorch/serve/examples) before creating one.\r\n\r\nI wanna train new style model \r\nrun this cmd\r\n\r\n!unzip train2014.zip -d /content\r\n\r\n!python /content/examples/fast_neural_style/neural_style/neural_style.py train --dataset /content/train2014 --style-image /content/A.jpg --save-model-dir /content --epochs 2 --cuda 1\r\n\r\n\r\n## Context\r\n<!--- How has this issue affected you? What are you trying to accomplish? -->\r\n<!--- Providing context helps us come up with a solution that is most useful in the real world -->\r\n* Pytorch version:\r\n* Operating System and version:\r\n\r\n## Your Environment\r\n\r\nColab\r\nhttps://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/style_transfer_inference.ipynb#scrollTo=EozMXwIV9iOJ\r\n\r\ngot this error\r\n\r\nTraceback (most recent call last):\r\n File \"/content/examples/fast_neural_style/neural_style/neural_style.py\", line 249, in <module>\r\n main()\r\n File \"/content/examples/fast_neural_style/neural_style/neural_style.py\", line 243, in main\r\n train(args)\r\n File \"/content/examples/fast_neural_style/neural_style/neural_style.py\", line 43, in train\r\n train_dataset = datasets.ImageFolder(args.dataset, transform)\r\n File \"/usr/local/lib/python3.7/dist-packages/torchvision/datasets/folder.py\", line 316, in __init__\r\n is_valid_file=is_valid_file,\r\n File \"/usr/local/lib/python3.7/dist-packages/torchvision/datasets/folder.py\", line 145, in __init__\r\n classes, class_to_idx = self.find_classes(self.root)\r\n File \"/usr/local/lib/python3.7/dist-packages/torchvision/datasets/folder.py\", line 219, in find_classes\r\n return find_classes(directory)\r\n File \"/usr/local/lib/python3.7/dist-packages/torchvision/datasets/folder.py\", line 43, in find_classes\r\n raise FileNotFoundError(f\"Couldn't find any class folder in {directory}.\")\r\nFileNotFoundError: Couldn't find any class folder in /content/train2014.\r\n\r\nHow can I fix it?\r\nthx", "url": "https://github.com/pytorch/examples/issues/1039", "state": "open", "labels": [ "bug", "data" ], "created_at": "2022-08-23T07:33:17Z", "updated_at": "2023-06-08T03:09:42Z", "comments": 2, "user": "sevaroy" }, { "repo": "pytorch/functorch", "number": 1006, "title": "RuntimeError: CUDA error: no kernel image is available for execution on the device", "body": "Hi, I have cuda 11.7 on my system and I am trying to install functorch, since the stable version of pytorch for cuda 11.7 is not available [here](https://pytorch.org/get-started/previous-versions/), I just run `pip install functorch` which also installs the compatible version of pytorch.\r\n\r\nBut when I run my code that uses the GPU, I get the following error :\r\n\r\n`RuntimeError: CUDA error: no kernel image is available for execution on the device` \r\n\r\nIs it possible to use functorch in my case?", "url": "https://github.com/pytorch/functorch/issues/1006", "state": "closed", "labels": [], "created_at": "2022-08-21T19:30:34Z", "updated_at": "2022-08-24T13:58:45Z", "comments": 8, "user": "ykemiche" }, { "repo": "pytorch/TensorRT", "number": 
1295, "title": "Jetpack 5.0.2", "body": "## \u2753 Question\r\n\r\nIs it known yet whether Torch TensorRT is compatible with NVIDIA Jetpack 5.0.2 on NVIDIA Jetson devices?\r\n\r\n## What you have already tried\r\n\r\nI am trying to install torch-tensorrt for Python on my Jetson Xavier NX with Jetpack 5.0.2. Followed the instructions for the Jetpack 5.0 install and have successfully run everything up until ```python3 setup.py install --use-cxx11-abi``` which ran all the way until it got to \u201cAllowing ninja to set a default number of workers\u201d which it hung on for quite some time until eventually erroring out with the output listed below. Any advice would be much appreciated.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.13.0a0+08820cb0.nv22.07\r\n - CPU Architecture: aarch64\r\n - OS (e.g., Linux): Jetson Linux (Ubuntu)\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives: Honestly don't know the difference\r\n - Python version: 3.8.10\r\n - CUDA version: 11.4\r\n - GPU models and configuration: Jetson Xavier NX\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n```\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\n[1/4] c++ -MMD -MF /home/nvidia/TensorRT/py/build/temp.linux-aarch64-3.8/torch_tensorrt/csrc/tensorrt_classes.o.d -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -UNDEBUG -I/home/nvidia/TensorRT/pytorch_tensorrt/csrc -I/home/nvidia/TensorRT/pytorch_tensorrt/include -I/home/nvidia/TensorRT/py/../bazel-TRTorch/external/tensorrt/include -I/home/nvidia/TensorRT/py/../bazel-Torch-TensorRT/external/tensorrt/include -I/home/nvidia/TensorRT/py/../bazel-TensorRT/external/tensorrt/include -I/home/nvidia/TensorRT/py/../bazel-tensorrt/external/tensorrt/include -I/home/nvidia/TensorRT/py/../ -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include/TH -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.4/include -I/usr/include/python3.8 -c -c /home/nvidia/TensorRT/py/torch_tensorrt/csrc/tensorrt_classes.cpp -o /home/nvidia/TensorRT/py/build/temp.linux-aarch64-3.8/torch_tensorrt/csrc/tensorrt_classes.o -Wno-deprecated -Wno-deprecated-declarations -D_GLIBCXX_USE_CXX11_ABI=1 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE=\"_gcc\"' '-DPYBIND11_STDLIB=\"_libstdcpp\"' '-DPYBIND11_BUILD_ABI=\"_cxxabi1013\"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14\r\nFAILED: /home/nvidia/TensorRT/py/build/temp.linux-aarch64-3.8/torch_tensorrt/csrc/tensorrt_classes.o\r\nc++ -MMD -MF /home/nvidia/TensorRT/py/build/temp.linux-aarch64-3.8/torch_tensorrt/csrc/tensorrt_classes.o.d -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -UNDEBUG 
-I/home/nvidia/TensorRT/pytorch_tensorrt/csrc -I/home/nvidia/TensorRT/pytorch_tensorrt/include -I/home/nvidia/TensorRT/py/../bazel-TRTorch/external/tensorrt/include -I/home/nvidia/TensorRT/py/../bazel-Torch-TensorRT/external/tensorrt/include -I/home/nvidia/TensorRT/py/../bazel-TensorRT/external/tensorrt/include -I/home/nvidia/TensorRT/py/../bazel-tensorrt/external/tensorrt/include -I/home/nvidia/TensorRT/py/../ -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include/TH -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.4/include -I/usr/include/python3.8 -c -c /home/nvidia/TensorRT/py/torch_tensorrt/csrc/tensorrt_classes.cpp -o /home/nvidia/TensorRT/py/build/temp.linux-aarch64-3.8/torch_tensorrt/csrc/tensorrt_classes.o -Wno-deprecated -Wno-deprecated-declarations -D_GLIBCXX_USE_CXX11_ABI=1 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE=\"_gcc\"' '-DPYBIND11_STDLIB=\"_libstdcpp\"' '-DPYBIND11_BUILD_ABI=\"_cxxabi1013\"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14\r\nc++: fatal error: Killed signal terminated program cc1plus\r\ncompilation terminated.\r\n[2/4] c++ -MMD -MF /home/nvidia/TensorRT/py/build/temp.linux-aarch64-3.8/torch_tensorrt/csrc/tensorrt_backend.o.d -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=forma", "url": "https://github.com/pytorch/TensorRT/issues/1295", "state": "closed", "labels": [ "question" ], "created_at": "2022-08-21T03:33:07Z", "updated_at": "2022-08-22T00:38:23Z", "user": "HugeBob" }, { "repo": "pytorch/pytorch", "number": 83721, "title": "How to export a simple model using List.__contains__ to ONNX", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nWhen using torch.jit.script, the message shows that \\_\\_contains__ method is not supported.\r\n\r\nThis is a reduced part of my model, the function should be tagged with torch.jit.script because there's a for loop using list.\\_\\_contains__\r\n\r\nAnd I want to export it to an onnx file but failed with the following output.\r\n\r\n### Code\r\n````python\r\nfrom typing import List, Dict\r\nimport torch\r\n\r\nx = torch.tensor([[59, 26, 32, 31, 58, 37, 12, 8, 8, 32, 27, 27, 35, 9, 3, 44, 22, 36,\r\n 22, 61, 51, 35, 15, 13, 14, 32, 22, 21, 9]], dtype=torch.long)\r\n\r\nnums = [3, 4, 5, 6, 7, 8, 9, 14, 15, 16, 17, 18, 22, 23, 24, 25, 26, 27,\r\n 28, 29, 30, 31, 37, 38, 39, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57]\r\n\r\n\r\n@torch.jit.script\r\ndef batch(x, l: List[int]):\r\n for i in range(len(x)):\r\n for j in range(len(x[i])):\r\n if x[i, j] in l:\r\n x[i, j] *= 2\r\n return x\r\n\r\n\r\nclass Module1(torch.nn.Module):\r\n def forward(self, x):\r\n return batch(x, nums)\r\n\r\n\r\nm1 = Module1()\r\nprint(m1(x))\r\n\r\ntorch.onnx.export(m1,\r\n (x),\r\n \"2.onnx\",\r\n verbose=True,\r\n input_names=[\"x\"],\r\n dynamic_axes={\r\n \"x\": {\r\n 1: \"frames\",\r\n },\r\n },\r\n opset_version=11,\r\n )\r\n````\r\n\r\n### Output\r\n````\r\nTraceback (most recent call last):\r\n File \"E:\\My Files\\Projects\\Python\\test\\test.py\", line 28, in <module>\r\n torch.onnx.export(m1,\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\__init__.py\", line 350, in export\r\n return utils.export(\r\n File 
\"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\utils.py\", line 163, in export\r\n _export(\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\utils.py\", line 1074, in _export\r\n graph, params_dict, torch_out = _model_to_graph(\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\utils.py\", line 731, in _model_to_graph\r\n graph = _optimize_graph(\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\utils.py\", line 308, in _optimize_graph\r\n graph = _C._jit_pass_onnx(graph, operator_export_type)\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\__init__.py\", line 416, in _run_symbolic_function\r\n return utils._run_symbolic_function(*args, **kwargs)\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\utils.py\", line 1401, in _run_symbolic_function\r\n return symbolic_fn(ctx, g, *inputs, **attrs)\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\symbolic_opset9.py\", line 5064, in Loop\r\n torch._C._jit_pass_onnx_block(\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\__init__.py\", line 416, in _run_symbolic_function\r\n return utils._run_symbolic_function(*args, **kwargs)\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\utils.py\", line 1401, in _run_symbolic_function\r\n return symbolic_fn(ctx, g, *inputs, **attrs)\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\symbolic_opset9.py\", line 5064, in Loop\r\n torch._C._jit_pass_onnx_block(\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\__init__.py\", line 416, in _run_symbolic_function\r\n return utils._run_symbolic_function(*args, **kwargs)\r\n File \"C:\\CodeEnv\\miniconda3\\envs\\dfs\\lib\\site-packages\\torch\\onnx\\utils.py\", line 1421, in _run_symbolic_function\r\n raise symbolic_registry.UnsupportedOperatorError(\r\ntorch.onnx.symbolic_registry.UnsupportedOperatorError: Exporting the operator ::__contains_ to ONNX opset version 11 is not supported. 
Please feel free to request support or submit a pull request on PyTorch GitHub.\r\n````\r\n\r\n### Versions\r\n\r\nPyTorch version: 1.12.1+cu113\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.3\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Microsoft Windows 10 \u5bb6\u5ead\u4e2d\u6587\u7248\r\nGCC version: (x86_64-posix-seh-rev0, Built by MinGW-W64 project) 8.1.0\r\nClang version: Could not collect\r\nCMake version: version 3.23.2\r\nLibc version: N/A\r\n\r\nPython version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:51:29) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)\r\nPython platform: Windows-10-10.0.19044-SP0\r\nIs CUDA available: True\r\nCUDA runtime version: 11.5.119\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070\r\nNvidia driver version: 512.78\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.22.4\r\n[pip3] pytorch-lightning==0.7.1\r\n[pip3] torch==1.12.1+cu113\r\n[pip3] torchaudio==0.12.1+cu113\r\n[pip3] torchvision==0.13.1+cu113\r\n[conda] numpy 1.22.4 pypi_0 pypi\r\n[conda] pytorch-lightning 0.7.1 pypi_0 pypi\r\n[conda] torch 1.12.1+cu113 ", "url": "https://github.com/pytorch/pytorch/issues/83721", "state": "closed", "labels": [ "module: onnx", "triaged", "onnx-needs-info" ], "created_at": "2022-08-19T03:05:43Z", "updated_at": "2024-04-01T16:53:35Z", "user": "SineStriker" }, { "repo": "pytorch/pytorch", "number": 83685, "title": "How to use accessors for fast elementwise write?", "body": "### \ud83d\udcda The doc issue\n\n![image](https://user-images.githubusercontent.com/69435296/185450482-d4c8a081-68c2-4b59-9b38-3f3e7b191ad7.png)\r\n\r\nAs seen above from Libtorch documentation, accessors can be used for fast element wise read operations on Libtorch tensors.\r\nHowever, is there a similar functionality for write operations as well?\r\n\r\nThe use case is when preparing a data frame, we could directly use a CPU tensor, write into it and then just copy it to CUDA.\r\nPresently I make a normal array, copy array to CPU tensor using from_blob() and then transfer it to CUDA.\r\n\r\nBest Regards\r\nSambit\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/83685", "state": "closed", "labels": [], "created_at": "2022-08-18T16:50:28Z", "updated_at": "2022-08-24T20:36:19Z", "user": "SM1991CODES" }, { "repo": "pytorch/TensorRT", "number": 1282, "title": "\u2753 [Question] How do you solve the error: Expected Tensor but got Uninitialized?", "body": "## \u2753 Question\r\n\r\nCurrently, I am compiling a custom segmentation model using torch_tensorrt.compile(), using a model script obtained from jit. 
The code to compile is as follows:\r\n\r\n```\r\nscripted_model = torch.jit.freeze(torch.jit.script(model))\r\n\r\ninputs = [torch_tensorrt.Input(\r\n min_shape=[2, 3, 600, 400],\r\n opt_shape=[2, 3, 600, 400],\r\n max_shape=[2, 3, 600, 400],\r\n dtype=torch.float,\r\n )]\r\nenabled_precisions = {torch.float, torch.half}\r\n\r\nwith torch_tensorrt.logging.debug():\r\n trt_ts_module = torch_tensorrt.compile(scripted_model, inputs=inputs, enabled_precisions=enabled_precisions)\r\n```\r\n\r\nThe code fails to compile at the following step:\r\n```\r\n a = self.compression(torch.cat(x_list, 1))\r\n b = self.shortcut(x)\r\n\r\n c = a + b\r\n\r\n return c\r\n```\r\n, throwing the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 118, in <module>\r\n trt_ts_module = torch_tensorrt.compile(scripted_model, inputs=inputs, enabled_precisions=enabled_precisions)\r\n File \"/home/oem/.pyenv/versions/ddrnet/lib/python3.8/site-packages/torch_tensorrt/_compile.py\", line 115, in compile\r\n return torch_tensorrt.ts.compile(ts_mod, inputs=inputs, enabled_precisions=enabled_precisions, **kwargs)\r\n File \"/home/oem/.pyenv/versions/ddrnet/lib/python3.8/site-packages/torch_tensorrt/ts/_compiler.py\", line 113, in compile\r\n compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))\r\nRuntimeError: Expected Tensor but got Uninitialized\r\n```\r\n\r\nIt seems that some variable is uninitialized. However, the strange thing is that replacing the previous code with the following code pieces both compile:\r\n```\r\n a = self.compression(torch.cat(x_list, 1))\r\n\r\n return a\r\n```\r\nand\r\n```\r\n b = self.shortcut(x)\r\n\r\n return b\r\n```\r\nSo, somehow taking the sum of these two tensors results in a failure to compile. Do you have any suggestions I can try such that this step compiles as well?\r\n\r\n## What you have already tried\r\nTried adding the following two parameters to the compilation step as well:\r\n``` \r\ntrt_ts_module = torch_tensorrt.compile(scripted_model, inputs=inputs, enabled_precisions=enabled_precisions, torch_executed_ops=[\"prim::ListConstruct\"], min_block_size=1)\r\ntrt_ts_module = torch_tensorrt.compile(scripted_model, inputs=inputs, enabled_precisions=enabled_precisions, torch_executed_ops=[\"prim::ListConstruct\"])\r\ntrt_ts_module = torch_tensorrt.compile(scripted_model, inputs=inputs, enabled_precisions=enabled_precisions, min_block_size=1)\r\n```\r\n, but these resulted in different errors, thus I decided not to use these parameters for now.\r\n\r\n## Environment\r\n - PyTorch Version (e.g., 1.0): 1.11.0+cu113\r\n - Torch-TensorRT version: 1.1.0\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Ubuntu 20.04 (kernel: 5.4.0-124-generic)\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip, from within a virtual environment (pyenv)\r\n - Are you using local sources or building from archives: No\r\n - Python version: 3.8.13\r\n - CUDA version: 11.7 (Nvidia Driver: 515.65.01)\r\n - GPU models and configuration: Nvidia RTX A2000\r\n\r\nLooking forwards to your answer, thanks in advance.\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1282", "state": "closed", "labels": [ "question" ], "created_at": "2022-08-18T13:04:23Z", "updated_at": "2022-10-11T15:42:19Z", "user": "Mark-M2L" }, { "repo": "pytorch/data", "number": 742, "title": "[Discussion] Is the implementation of `cycler` efficient? 
", "body": "TL;DR: It seems in most cases users might be better off using `.flatmap(lambda x: [x for _ in n_repeat])` rather than `.cycle(n_repeat)`.\r\n\r\nHere is the [implementation](https://github.com/pytorch/data/blob/main/torchdata/datapipes/iter/util/cycler.py), basically `Cycler` reads from the source DataPipe for `n` number of times.\r\n\r\nThings to consider:\r\n1. This means repeating certain operations (e.g. reading from disk, complicated transformation) for `n` number of times, unless you use `in_memory_cache`.\r\n2. If `shuffle` is used afterwards, I believe `.flatmap(lambda x: [x for _ in n_repeat])` is strictly better than `.cycle(n_repeat)`.\r\n3. For `input = [0, 1, 2]`, the major difference is that `.cycle` returns `[0, 1, 2, 0, 1, 2]` compared to `.flatmap(...)` returning `[0, 0, 1, 1, 2, 2]`.\r\n\r\nQuestions:\r\n1. Should we change the implementation?\r\n2. Should we add something like `.repeat()` which basically does `.flatmap(lambda x: [x for _ in n_repeat])`?\r\n3. Should we advise users to use `.flatmap(...)` instead unless they specifically want the ordering of `[0, 1, 2, 0, 1, 2]`?\r\n\r\n\r\n@VitalyFedyunin @ejguan Let me know what you think.", "url": "https://github.com/meta-pytorch/data/issues/742", "state": "closed", "labels": [], "created_at": "2022-08-17T22:55:30Z", "updated_at": "2022-08-30T18:57:10Z", "comments": 4, "user": "NivekT" }, { "repo": "pytorch/data", "number": 736, "title": "Fix & Implement xdoctest", "body": "### \ud83d\udcda The doc issue\n\nThere is a PR https://github.com/pytorch/pytorch/pull/82797 landed into PyTorch core, which adds the functionality to validate if the example in comment is runnable.\r\n\r\nHowever, in the example of PyTorch Core, we normally refer `torchdata` in all examples for the sake of unification of importing path rather than directly importing `DataPipes` from pytorch core. This would cause `xdoctest` always failing. TBH, I don't know how to solve this problem without changing it back to `import torch.data.utils....`.\r\n\r\nBut, for `torchdata` project, we can do the similar work as a BE project to enable all doc test over the examples to prevent any failing test in our documentation.\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/736", "state": "open", "labels": [ "Better Engineering" ], "created_at": "2022-08-16T13:49:46Z", "updated_at": "2022-08-16T19:04:24Z", "comments": 0, "user": "ejguan" }, { "repo": "pytorch/TensorRT", "number": 1272, "title": "\u2753 [Question] How can I debug the error: Unable to freeze tensor of type Int64/Float64 into constant layer, try to compile model with truncate_long_and_double enabled", "body": "## \u2753 Question\r\n\r\nConverting a model to Tensor Engine with the next code does not work\r\n\r\nInput:\r\n```\r\ntrt_model = ttrt.compile(traced_model, \"default\",\r\n [ttrt.Input((1, 3, 224, 224), dtype=torch.float32)],\r\n torch.float32, truncate_long_and_double=False)\r\n```\r\nOutput:\r\n\r\n`RuntimeError: [Error thrown at core/conversion/converters/converter_util.cpp:167] Unable to freeze tensor of type Int64/Float64 into constant layer, try to compile model with truncate_long_and_double enabled\r\n`\r\n\r\nRunning with `truncate_long_and_double=True` works but I want to understand what is going on wrong. 
So I ran\r\n\r\n```\r\nttrt.logging.set_reportable_log_level(ttrt.logging.Level.Debug)\r\ntrt_model = ttrt.compile(traced_model, \"default\",\r\n [ttrt.Input((1, 3, 224, 224), dtype=torch.float32)],\r\n torch.float32, truncate_long_and_double=False)\r\n```\r\n\r\nbut the output is not as clear as I expected (Next comment). Can you explain me the possible things that could raise this type of error? Sorry for the long logs, I worked in a 'tiny' version of the model to try make them shorter before writing here >.<\r\n\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): 1.11.0+cu113\r\n - OS (e.g., Linux): 22.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.10\r\n - CUDA version: 11.3\r\n - GPU models and configuration: RTX3090", "url": "https://github.com/pytorch/TensorRT/issues/1272", "state": "closed", "labels": [ "question" ], "created_at": "2022-08-16T13:40:02Z", "updated_at": "2022-08-22T18:14:00Z", "user": "mjack3" }, { "repo": "pytorch/data", "number": 732, "title": "Recommended way to shuffle intra and inter archives?", "body": "Say I have a bunch of archives containing samples. In my case each archive is a pickle file containing a list of samples, but it could be a tar or something else.\r\n\r\nI want to shuffle between archives (inter) and within archives (intra). My current way of doing it is below. Is there a more canonical solution?\r\n\r\n```py\r\nfrom torchdata.dataloader2 import DataLoader2, adapter\r\nfrom torchdata.datapipes.iter import IterDataPipe, FileLister, IterableWrapper\r\nfrom pathlib import Path\r\n\r\nimport pickle\r\n\r\n# Create archives\r\nroot = Path(\"/tmp/dataset/\")\r\nwith open(root / \"1.pkl\", \"wb\") as f:\r\n pickle.dump(list(range(10)), f)\r\nwith open(root / \"2.pkl\", \"wb\") as f:\r\n pickle.dump(list(range(10, 20)), f)\r\n\r\nclass PickleLoaderDataPipe(IterDataPipe):\r\n def __init__(self, source_datapipe):\r\n self.source_datapipe = source_datapipe\r\n\r\n def __iter__(self):\r\n for path in self.source_datapipe:\r\n with open(path, \"rb\") as f:\r\n yield pickle.load(f) # <- this is a list\r\n\r\nclass ConcaterIterable(IterDataPipe):\r\n # Same as unbatch(), kinda\r\n def __init__(self, source_datapipe):\r\n self.source_datapipe = source_datapipe\r\n\r\n def __iter__(self):\r\n for iterable in self.source_datapipe:\r\n yield from iterable\r\n\r\ndef intra_archive_shuffle(archive_content):\r\n return IterableWrapper(archive_content).shuffle()\r\n \r\n \r\ndp = FileLister(str(root), masks=[\"*.pkl\"])\r\ndp = dp.shuffle() # inter-archive shuffling\r\ndp = PickleLoaderDataPipe(dp)\r\ndp = dp.map(intra_archive_shuffle)\r\ndp = ConcaterIterable(dp) # Note: unbatch doesn't work because it's a datapipe of datapipes\r\n\r\nprint(list(dp))\r\n```", "url": "https://github.com/meta-pytorch/data/issues/732", "state": "open", "labels": [], "created_at": "2022-08-15T17:14:39Z", "updated_at": "2022-08-16T13:02:46Z", "comments": 8, "user": "NicolasHug" }, { "repo": "pytorch/pytorch", "number": 83392, "title": "How to turn off determinism just for specific operations, e.g. upsampling through bilinear interpolation?", "body": "This is the error caused by upsampling through bilinear interpolation when trying to use deterministic algorithms:\r\n\r\n`RuntimeError: upsample_bilinear2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. 
You can turn off determinism just for this operation if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.`\r\n\r\nHow to turn off determinism just for upsampling_bilinear2d (and any other operation)? Thanks!\n\ncc @ngimel @mruberry @kurtamohler", "url": "https://github.com/pytorch/pytorch/issues/83392", "state": "open", "labels": [ "module: cuda", "triaged", "module: determinism" ], "created_at": "2022-08-14T12:15:32Z", "updated_at": "2022-08-15T04:42:36Z", "user": "Jingling1" }, { "repo": "pytorch/TensorRT", "number": 1253, "title": "\u2753 [Question] How to load a TRT_Module in python environment on Windows which has been compiled on C++ Windows ? ", "body": "## \u2753 Question\r\n\r\nI have compiled torch_trt module using libtorch on C++ windows platform. This module is working perfectly on C++ for inference, however I want to use it in Python program on windows platform. How to load this module on python?\r\n\r\nWhen I tried to load it with torch.jit.load() or torch.jit.load() it is throwing following error:\r\n\r\n `File ~\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\torch\\serialization.py:711, in load(f, map_location, pickle_module, **pickle_load_args)\r\n 707 warnings.warn(\"'torch.load' received a zip file that looks like a TorchScript archive\"\r\n 708 \" dispatching to 'torch.jit.load' (call 'torch.jit.load' directly to\"\r\n 709 \" silence this warning)\", UserWarning)\r\n 710 opened_file.seek(orig_position)\r\n--> 711 return torch.jit.load(opened_file)\r\n 712 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)\r\n 713 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)\r\n\r\nFile ~\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\torch\\jit\\_serialization.py:164, in load(f, map_location, _extra_files)\r\n 162 cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)\r\n 163 else:\r\n--> 164 cpp_module = torch._C.import_ir_module_from_buffer(\r\n 165 cu, f.read(), map_location, _extra_files\r\n 166 )\r\n 168 # TODO: Pretty sure this approach loses ConstSequential status and such\r\n 169 return wrap_cpp_module(cpp_module)\r\n\r\nRuntimeError: \r\nUnknown type name '__torch__.torch.classes.tensorrt.Engine':\r\n File \"code/__torch__/movinets/models.py\", line 4\r\n __parameters__ = []\r\n __buffers__ = []\r\n __torch___movinets_models_MoViNet_trt_engine_ : __torch__.torch.classes.tensorrt.Engine\r\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n def forward(self_1: __torch__.movinets.models.MoViNet_trt,\r\n input_0: Tensor) -> Tensor:`\r\n\r\n\r\n## What you have already tried\r\n\r\nSince torch_trt is not supported for Python on windows I picked `libtorchtrt_runtime.so` from linux `python3.8/site-packages/torch_tensorrt/lib/libtorchtrt_runtime.so` path and loaded on python on windows through torch.ops.load_library(). 
However it throws another error\r\n\r\n`File \"\\video_play.py\", line 189, in get_torch_tensorrt_converted_model torch.ops.load_library(\"libtorchtrt_runtime.so\") File \"C:\\Users\\NomanAnjum\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\torch\\_ops.py\", line 255, in load_library ctypes.CDLL(path) File \"C:\\Users\\NomanAnjum\\AppData\\Local\\Programs\\Python\\Python310\\lib\\ctypes\\__init__.py\", line 374, in __init__ self._handle = _dlopen(self._name, mode) OSError: [WinError 193] %1 is not a valid Win32 application`\r\n\r\n## Environment\r\n\r\nWindows 11\r\n\r\nCPU : i9-11980HK x86-64\r\n\r\nGPU : RTX 3080 Mobile\r\n\r\nCuda : 11.5.2\r\n\r\nCudnn : 8.3.1\r\n\r\nLibtorch : 1.11\r\n\r\nTensor_RT : 8.4.1.5\r\n\r\nVisual Studio 2019\r\n\r\nPython 3.10,3.8\r\n\r\n\r\n#### Is there a way to load it in python??", "url": "https://github.com/pytorch/TensorRT/issues/1253", "state": "closed", "labels": [ "question", "No Activity", "channel: windows" ], "created_at": "2022-08-11T06:26:05Z", "updated_at": "2023-02-26T00:02:28Z", "user": "ghost" }, { "repo": "pytorch/pytorch", "number": 83227, "title": "QAT the bias is the int32, how to set the int8?", "body": "### \ud83d\udc1b Describe the bug\n\ni try to do quantization, the weight is int8 ,but the bias is int32, i want to set the bias ---> int8, what i need to do ?\r\nthanks\n\n### Versions\n\nhelp me, thanks\n\ncc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo", "url": "https://github.com/pytorch/pytorch/issues/83227", "state": "closed", "labels": [ "oncall: quantization" ], "created_at": "2022-08-11T03:11:12Z", "updated_at": "2022-08-11T23:10:24Z", "user": "aimen123" }, { "repo": "pytorch/functorch", "number": 999, "title": "vmap and forward-mode AD fail sometimes on in-place views", "body": "## The Problem\r\n\r\n```py\r\nimport torch\r\nfrom functorch import jvp, vmap\r\nfrom functools import partial\r\n\r\nB = 2\r\n\r\ndef f(x, y):\r\n x = x.clone()\r\n view = x[0]\r\n x.copy_(y)\r\n return view, x\r\n\r\ndef push_jvp(x, y, yt):\r\n return jvp(partial(f, x), (y,), (yt,))\r\n\r\nx = torch.randn(2, B, 6)\r\ny = torch.randn(2, 6, B)\r\nyt = torch.randn(2, 6, B)\r\nouts, tangents = vmap(push_jvp, in_dims=(1, 2, 2))(x, y, yt)\r\n```\r\nraises the following:\r\n```\r\nRuntimeError: vmap: Calling Tensor.as_strided is not supported unless the batch dims being vmapped over are at the front of\r\nthe tensor (in memory layout). When they are not at the front of the tensor this operation can be error prone so we actively\r\n discourage it; please file us a bug report and/or try to express the as_strided operation in terms of PyTorch view operatio\r\nns\r\n```\r\n\r\nIf I am understanding what is going on correctly, the root cause of the problem is that, ignoring vmap for a second, in `x.copy_(y)`, x is a regular Tensor and y is a dual tensor:\r\n- the copy_ causes x.tangent to be a copy of y.tangent\r\n- then, the tangent on the base (x) gets propagated to the views. This happens by calling .as_strided. `view.tangent` gets assigned `x.tangent.as_strided(something)`\r\n\r\nNow, if `y.tangent` is a BatchedTensor, then calling `as_strided` on it may raise the above error message.\r\n\r\n## Is this actually a problem?\r\n\r\nPreviously, our approach was to say that vmap x jvp composition only works when the user must only vmap over dimension 0. However, that's not quite correct -- if the user users non-contiguous tensors, then it'll run into this problem. 
Also, vmap x jvp can produce tensors where the batch dimension is not at 0, so the user has no control over this.\r\n\r\n## Potential solutions\r\n\r\n1. When a tangent gets propagated to views as a result of an in-place operation, instead of calling `as_strided`, we should call the original view operation. This means we should save the original view operation somewhere.\r\n1. (From Jeffrey) An alternative to (1) is: instead of calling as_strided, figure out what the correct non-as_strided view operation(s) are by reading the sizes/sizes/storage_offset, and call that instead.\r\n1. It is possible to write a batching rule for a \"safe as_strided\". An as_strided call is safe if it does not expose memory that was not previously exposed in the Tensor. We would (a) add a `safe_as_strided` operator, (b) save some metadata on if a view Tensor was created from a base through a chain of \"safe\" operations or not, and (c) dispatch to either `safe_as_strided` or `as_strided`\r\n\r\nThoughts? cc @soulitzer @albanD ", "url": "https://github.com/pytorch/functorch/issues/999", "state": "open", "labels": [], "created_at": "2022-08-10T17:45:17Z", "updated_at": "2022-08-16T20:46:48Z", "comments": 9, "user": "zou3519" }, { "repo": "pytorch/pytorch", "number": 83135, "title": "torch.nn.functional.avg_pool{1|2|3}d error message does not match what is described in the documentation", "body": "### \ud83d\udcda The doc issue\n\nParameter 'kernel_size' and 'stride' of torch.nn.functional.avg_pool{1|2|3}d can be a single number or a tuple. However, I found that error message only mentioned tuple of ints which means parameter 'kernel_size' and 'stride' can be only int number or tuple of ints.\r\n\r\n```\r\nimport torch\r\nresults={}\r\narg_1 = torch.rand([1, 1, 7], dtype=torch.float32)\r\narg_2 = 8.0\r\narg_3 = 2\r\narg_4 = 0\r\narg_5 = True\r\narg_6 = True\r\nresults['res'] = torch.nn.functional.avg_pool1d(arg_1,arg_2,arg_3,arg_4,arg_5,arg_6,)\r\n#TypeError: avg_pool1d(): argument 'kernel_size' (position 2) must be tuple of ints, not float\r\n```\r\n\r\n```\r\nimport torch\r\nresults={}\r\narg_1 = torch.rand([16, 528, 16, 16], dtype=torch.float32)\r\narg_2 = 32.0\r\narg_3 = 13.0\r\narg_4 = 0\r\narg_5 = False\r\narg_6 = True\r\narg_7 = None\r\nresults['res'] = torch.nn.functional.avg_pool2d(arg_1,arg_2,arg_3,arg_4,arg_5,arg_6,arg_7,)\r\n#TypeError: avg_pool2d(): argument 'stride' (position 3) must be tuple of ints, not float\r\n```\r\n\r\n```\r\nimport torch\r\nresults={}\r\narg_1 = torch.rand([20, 16, 50, 44, 31], dtype=torch.float32)\r\narg_2_0 = 3.0\r\narg_2_1 = 2\r\narg_2_2 = 2\r\narg_2 = [3.0,2,2]\r\narg_3_0 = 2\r\narg_3_1 = 1\r\narg_3_2 = 2\r\narg_3 = [2,1,2]\r\narg_4 = 0\r\narg_5 = False\r\narg_6 = True\r\narg_7 = None\r\nresults['res'] = torch.nn.functional.avg_pool3d(arg_1,arg_2,arg_3,arg_4,arg_5,arg_6,arg_7,)\r\n#TypeError: avg_pool3d(): argument 'kernel_size' must be tuple of ints, but found element of type float at pos 1\r\n```\n\n### Suggest a potential alternative/fix\n\nIt would be great if the doc could be written as follows:\r\n\r\nkernel_size \u2013 size of the pooling region. Can be a int number or a tuple (kT, kH, kW).\r\nstride \u2013 stride of the pooling operation. 
Can be a int number or a tuple (sT, sH, sW).\r\n\r\nOr modify the error message so that it matches the document description.\n\ncc @svekars @holly1238 @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are", "url": "https://github.com/pytorch/pytorch/issues/83135", "state": "open", "labels": [ "module: docs", "module: nn", "triaged" ], "created_at": "2022-08-10T01:11:59Z", "updated_at": "2022-08-10T12:57:45Z", "user": "cheyennee" }, { "repo": "pytorch/test-infra", "number": 516, "title": "[CI] Use job summaries to display how to replicate failures on specific configs", "body": "For configs such as slow, dynamo, and parallel-native, reproducing a CI error is more involved than just rerunning the command locally. We should use tools (like job summaries) to give people the context they'd need to repro a bug.", "url": "https://github.com/pytorch/test-infra/issues/516", "state": "open", "labels": [], "created_at": "2022-08-09T18:15:15Z", "updated_at": "2022-11-15T19:51:40Z", "user": "janeyx99" }, { "repo": "pytorch/TensorRT", "number": 1243, "title": "\u2753 [Question] How to correctly configure LD_LIBRARY_PATH ", "body": "## \u2753 Question\r\n\r\nHello, after installing torch_tensorrt on my jetson xavier using jetpack 4.6, I cannot import it. I am having a similar issue to other bugs that have been reported and answered. I am wondering though, how do you correctly add tensorrt to LD_LIBRARY_PATH? (Proposed solution from other bugs).\r\n\r\n## What you have already tried\r\n\r\nThe tensorrt package is stored in /usr/lib/python3.6/dist-packages/tensorrt\r\n\r\nI try adding this to LD_LIBRARY_PATH like so:\r\n`export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/python3.6/dist-packages/tensorrt`\r\n\r\nThis addition hadn't changed the import error, unfortunately.\r\n\r\n## Environment\r\n\r\n> Jetpack 4.6\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1243", "state": "closed", "labels": [ "question" ], "created_at": "2022-08-08T20:24:55Z", "updated_at": "2022-08-08T20:35:00Z", "user": "kneatco" }, { "repo": "pytorch/TensorRT", "number": 1235, "title": "\u2753 [Question] How do you debug errors in the compilation step? ", "body": "## \u2753 Question\r\n\r\nHello all, \r\n\r\nAfter training a model, I decided to use torch_tensorrt to test and hopefully increase inference speed. When compiling the custom model, I get the following error: `RuntimeError: Trying to create tensor with negative dimension -1: [-1, 3, 600, 400]`. This did not occur when doing inference in regular PyTorch. 
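My own (unconfirmed) guess is that the -1 comes from some part of the model that materializes a new tensor from a runtime shape, which would turn into -1 once the batch dimension is treated as dynamic; a purely hypothetical pattern (not taken from my actual model) would be:\r\n\r\n```python\r\nimport torch\r\n\r\nclass HypotheticalPattern(torch.nn.Module):\r\n    # Hypothetical illustration only, not from my model.\r\n    def forward(self, x):\r\n        # Building a tensor from a runtime shape; with a dynamic batch\r\n        # dimension this size could be reported as -1 during conversion.\r\n        pad = torch.zeros(x.shape[0], 3, 600, 400, device=x.device)\r\n        return torch.cat([x, pad], dim=1)\r\n```\r\n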
Further the following warning was issued (before receiving the error):\r\n```WARNING: [Torch-TensorRT] - For input x.1, found user specified input dtype as Float16, however when inspecting the graph, the input type expected was inferred to be Float\r\nThe compiler is going to use the user setting Float16\r\nThis conflict may cause an error at runtime due to partial compilation being enabled and therefore\r\ncompatibility with PyTorch's data type convention is required.\r\nIf you do indeed see errors at runtime either:\r\n- Remove the dtype spec for x.1\r\n- Disable partial compilation by setting require_full_compilation to True```\r\n\r\nThe code to compile is as follows:\r\n```inputs = [torch_tensorrt.Input(\r\n min_shape=[2, 3, 600, 400],\r\n opt_shape=[4, 3, 600, 400],\r\n max_shape=[8, 3, 600, 400],\r\n dtype=torch.half,\r\n )]\r\nenabled_precisions = {torch.float, torch.half}\r\ntrt_ts_module = torch_tensorrt.compile(model, inputs=inputs, enabled_precisions=enabled_precisions)\r\n```\r\n\r\nMy question is: what can I do to properly debug this error?\r\n\r\n## What you have already tried\r\n- Use `mobilenet_v2`, as specified in the example https://pytorch.org/TensorRT/tutorials/use_from_pytorch.html#use-from-pytorch. This model compiles successfully.\r\n- Change the input size (change the batch size, i.e. the first dimension, use shapes of >= 100). This gave the same error.\r\n- Set `require_full_compilation` to True, which was not fruitful either.\r\n\r\n## Environment\r\n - PyTorch Version (e.g., 1.0): 1.11.0+cu113\r\n - Torch-TensorRT version: 1.1.0\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Ubuntu 20.04 (kernel: 5.4.0-122-generic)\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): `pip`, from within a virtual environment (`pyenv`)\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.8.13\r\n - CUDA version: 11.4.4 (Driver: 470.82.01)\r\n - GPU models and configuration: Nvidia RTX A2000\r\n - Any other relevant information: TensorRT has been installed via pip, to install torch_tensorrt (and getting it to import in Python), I followed the answer in the following issue: https://github.com/pytorch/TensorRT/issues/1026#issuecomment-1119561746\r\n\r\nLooking forwards to your answer, thanks in advance.\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1235", "state": "closed", "labels": [ "question" ], "created_at": "2022-08-05T15:23:53Z", "updated_at": "2022-08-08T17:04:30Z", "user": "Mark-M2L" }, { "repo": "pytorch/TensorRT", "number": 1233, "title": "\u2753 [Question] How to install \"tensorrt\" package? ", "body": "## \u2753 Question\r\n\r\nI'm trying to install `torch-tensorrt` on a Jetson AGX Xavier. I first installed `pytorch` 1.12.0 and `torchvision` 0.13.0 following this [guide](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048). Then I installed `torch-tensorrt` following this [guide](https://pytorch.org/TensorRT/tutorials/installation.html#installation), and the compilation completed succesfully.\r\n\r\nWhen I try to import `torch_tensorrt` it throws an error, saying it can't find a module named `tensorrt`. 
Where I can find this package?\r\n\r\n## Environment\r\n\r\nI'm using a Jetson AGX Xavier with Jetpack 5.0.1.", "url": "https://github.com/pytorch/TensorRT/issues/1233", "state": "closed", "labels": [ "question", "component: dependencies", "channel: linux-jetpack" ], "created_at": "2022-08-05T08:40:18Z", "updated_at": "2022-12-15T17:36:36Z", "user": "domef" }, { "repo": "pytorch/data", "number": 718, "title": "Recommended practice to shuffle data with datapipes differently every epoch", "body": "### \ud83d\udcda The doc issue\n\nI was trying `torchdata` 0.4.0 and I found that shuffling with data pipes will always yield the same result across different epochs, unless I shuffle it again at the beginning of every epoch.\r\n\r\n```python\r\n# same_result.py\r\nimport torch\r\nimport torchdata.datapipes as dp\r\nX = torch.randn(200, 5)\r\ndpX = dp.map.SequenceWrapper(X)\r\ndpXS = dpX.shuffle()\r\nfor _ in range(5):\r\n for i in dpXS:\r\n print(i) # always prints the same value\r\n break\r\n\r\n# different_result.py\r\nimport torch\r\nimport torchdata.datapipes as dp\r\nX = torch.randn(200, 5)\r\ndpX = dp.map.SequenceWrapper(X)\r\nfor _ in range(5):\r\n dpXS = dpX.shuffle()\r\n for i in dpXS:\r\n print(i) # prints different values\r\n break\r\n```\r\n\r\nI wonder what is the recommended practice to shuffle the data at the beginning of every epoch? Neither the documentation nor the examples seem to answer this question.\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/718", "state": "closed", "labels": [], "created_at": "2022-08-05T02:12:25Z", "updated_at": "2022-09-13T21:18:49Z", "comments": 4, "user": "BarclayII" }, { "repo": "pytorch/pytorch", "number": 82751, "title": "Refactor how errors decide whether to append C++ stacktrace", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nPer @zdevito's comment in https://github.com/pytorch/pytorch/pull/82665/files#r936022305, we should refactor the way C++ stacktrace is appended to errors.\r\n\r\nCurrently, in https://github.com/pytorch/pytorch/blob/752579a3735ce711ccaddd8d9acff8bd6260efe0/torch/csrc/Exceptions.h, each error goes through a try/catch and the C++ stacktrace is conditioned on whether cpp stacktraces are enabled or not.\r\n\r\nInstead, specific exceptions can have a flag that determines whether cpp stacktrace is added or not. Most errors would set this in their constructor based on the env variable, but for certain types of errors which always report cpp stacktrace, this can just be set to true and this field can be checked when reporting errors.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/82751", "state": "open", "labels": [ "triaged", "better-engineering" ], "created_at": "2022-08-03T20:28:56Z", "updated_at": "2022-08-03T20:28:56Z", "user": "rohan-varma" }, { "repo": "pytorch/data", "number": 712, "title": "Add Examples of Common Preprocessing Steps with IterDataPipe (such as splitting a data set into two)", "body": "### \ud83d\udcda The doc issue\r\n\r\nThere are a few common steps that users often would like to do while preprocessing data, such as [splitting their data set](https://pytorch.org/docs/stable/data.html#torch.utils.data.random_split) into train and eval. There are documentation in PyTorch Core about how to do these things with `Dataset`. We should add the same to our documentation, specifically for `IterDataPipe`. 
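As one concrete candidate for such an entry, a deterministic train/eval split can be built from `demux` (rough sketch only; the hash-based routing rule below is just for illustration, and any pure function of the sample would do):

```python
from torchdata.datapipes.iter import IterableWrapper

source_dp = IterableWrapper(range(10))

# Route roughly 80% of the samples to train and 20% to eval, deterministically.
train_dp, eval_dp = source_dp.demux(
    num_instances=2,
    classifier_fn=lambda x: 0 if hash(x) % 5 < 4 else 1,
)

print(list(train_dp))  # samples routed to bucket 0 (train)
print(list(eval_dp))   # samples routed to bucket 1 (eval)
```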
Or create a link to PyTorch Core's documentation for reference when that is appropriate. This issue is driven by common questions we have received either in person or on the forum.\r\n\r\nIf we find that any functionality is missing for `IterDataPipe`, we should implement them.\r\n", "url": "https://github.com/meta-pytorch/data/issues/712", "state": "closed", "labels": [ "documentation" ], "created_at": "2022-08-02T23:58:09Z", "updated_at": "2022-10-20T17:49:41Z", "comments": 9, "user": "NivekT" }, { "repo": "pytorch/data", "number": 709, "title": "Update tutorial about shuffling before sharding", "body": "### \ud83d\udcda The doc issue\n\nThe [tutorial](https://pytorch.org/data/beta/tutorial.html#working-with-dataloader) needs to update the actual reason about shuffling before sharding. It's not accurate.\r\nShuffling before sharding is required to achieve global shuffling rather than only shuffling inside each shard.\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/709", "state": "closed", "labels": [ "documentation" ], "created_at": "2022-08-02T17:53:56Z", "updated_at": "2022-08-04T22:06:36Z", "comments": 2, "user": "ejguan" }, { "repo": "pytorch/data", "number": 707, "title": "Map-style DataPipe to read from s3", "body": "### \ud83d\ude80 The feature\n\n[Amazon S3 plugin for PyTorch ](https://aws.amazon.com/blogs/machine-learning/announcing-the-amazon-s3-plugin-for-pytorch/)proposes S3Dataset which is a Map-style PyTorch Dataset. I was looking for a similar feature in torchdata but only found [S3FileLoader](https://pytorch.org/data/main/generated/torchdata.datapipes.iter.S3FileLoader.html#torchdata.datapipes.iter.S3FileLoader) which doesn't meet my requirements.\r\n \r\n Is there any implementation of a Map-style DataPipe I am missing? Or any method to do a similar thing with the existing tools?\r\n \r\nThe main requirement is that I need to read images from s3, apply a transformation, and keep them syncronized with a list of labels. 
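For what it is worth, the existing iterable-style pieces seem able to cover that requirement (sketch only: the bucket URLs and labels are placeholders, `S3FileLoader` requires the S3 extension to be built, and the byte-level `decode` stands in for a real image transform):

```python
from torchdata.datapipes.iter import IterableWrapper, S3FileLoader

def decode(stream):
    # placeholder: a real pipeline would decode and transform an image here
    return stream.read()

urls = IterableWrapper([
    's3://my-bucket/images/000.jpg',  # placeholder objects
    's3://my-bucket/images/001.jpg',
])
labels = IterableWrapper([0, 1])      # kept in the same order as the URLs

images = S3FileLoader(urls)           # yields (url, stream) pairs
samples = images.zip(labels).map(lambda x: (decode(x[0][1]), x[1]))
```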
\r\n\r\n Thank you\n\n### Motivation, pitch\n\nMap-style DataPipe to read from s3 to complement the existing itetable style Datapipe to read from s3.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/707", "state": "closed", "labels": [], "created_at": "2022-08-02T12:58:21Z", "updated_at": "2022-08-04T13:31:32Z", "comments": 10, "user": "gombru" }, { "repo": "pytorch/tutorials", "number": 1993, "title": "Problem with the torchtext library text classification example", "body": "The first section of the [tutorial](https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html) suggests\r\n`\r\nimport torch\r\nfrom torchtext.datasets import AG_NEWS\r\ntrain_iter = iter(AG_NEWS(split='train'))\r\n`\r\n\r\nwhich does not work yielding\r\n`TypeError: _setup_datasets() got an unexpected keyword argument 'split'`\r\n\r\nI might highlight as well that the string doc for AG_NEWS mentions\r\n`train_dataset, test_dataset = torchtext.datasets.AG_NEWS(ngrams=3)`\n\ncc @pytorch/team-text-core @Nayef211", "url": "https://github.com/pytorch/tutorials/issues/1993", "state": "closed", "labels": [ "question", "module: torchtext", "docathon-h1-2023", "medium" ], "created_at": "2022-08-02T08:46:49Z", "updated_at": "2023-06-12T19:42:05Z", "user": "EssamWisam" }, { "repo": "pytorch/tutorials", "number": 1991, "title": "Some typos in and a question from TorchScript tutorial", "body": "Hi, I first thank for this tutorial.\r\n\r\nHere are some typos in the tutorial:\r\n\r\n1.`be` in https://github.com/pytorch/tutorials/blob/7976ab181fd2a97b2775574eec284d1fc8abcfe0/beginner_source/Intro_to_TorchScript_tutorial.py#L42 should be `by`.\r\n\r\n2.`succintly` in https://github.com/pytorch/tutorials/blob/7976ab181fd2a97b2775574eec284d1fc8abcfe0/beginner_source/Intro_to_TorchScript_tutorial.py#L114 should be `succinctly`.\r\n\r\n3.In https://github.com/pytorch/tutorials/blob/7976ab181fd2a97b2775574eec284d1fc8abcfe0/beginner_source/Intro_to_TorchScript_tutorial.py#L206-L207 `TracedModule` is wrongly stated to be an instance of `ScriptModule`. I suggest that this line become:\r\n```\r\n# instance of ``torch.jit.TracedModule`` (which is a subclass of ``torch.jit.ScriptModule``)\r\n```\r\n\r\n4.Second part in https://github.com/pytorch/tutorials/blob/7976ab181fd2a97b2775574eec284d1fc8abcfe0/beginner_source/Intro_to_TorchScript_tutorial.py#L322-L323 seems somewhat ambiguous to me. What does it mean by the second `inline`?\n\ncc @svekars", "url": "https://github.com/pytorch/tutorials/issues/1991", "state": "closed", "labels": [ "grammar" ], "created_at": "2022-08-01T13:21:26Z", "updated_at": "2022-10-13T22:49:41Z", "comments": 2, "user": "sadra-barikbin" }, { "repo": "pytorch/data", "number": 705, "title": "Set better defaults for `MultiProcessingReadingService`", "body": "### \ud83d\ude80 The feature\r\n\r\n```python\r\nclass MultiProcessingReadingService(ReadingServiceInterface):\r\n num_workers: int = get_number_of_cpu_cores()\r\n pin_memory: bool = True\r\n timeout: float\r\n worker_init_fn: Optional[Callable[[int], None]] # Remove this?\r\n prefetch_factor: int = profile_optimal_prefetch_factor(model : nn.Module)\r\n persistent_workers: bool = True\r\n``` \r\n\r\nI can add these, opening this issue to discuss whether it's a good idea to change defaults. 
\r\n\r\n+: Users get better out of the box performance with `torchdata`\r\n-: backward compatibility issues when moving from `dataloaderv1` to `dataloaderv2`\r\n\r\n### Motivation, pitch\r\n\r\nThere are many issues on discuss, stack overflow, and blogs describing how people should configure data loaders for optimized performance. Since a lot of the tricks haven't changed like `pin_memory = true` or `num_workers = num_cpu_cores` or `persistent_workers=true` and since we're in the process of developing `dataloaderv2` now may be a good time to revisit these default values \r\n\r\n* https://www.jpatrickpark.com/post/prefetcher/#:~:text=The%20prefetch_factor%20parameter%20only%20controls,samples%20prefetched%20across%20all%20workers.)\r\n* https://stackoverflow.com/questions/53998282/how-does-the-number-of-workers-parameter-in-pytorch-dataloader-actually-work\r\n* https://discuss.pytorch.org/t/when-to-set-pin-memory-to-true/19723\r\n\r\n### Alternatives\r\n\r\n1. Instead of setting reasonable defaults, we can instead extend the `linter.py` to suggest some of these tips if we notice some sources of slowdowns\r\n2. Do nothing, suggest people read documentation when configuring performance\r\n\r\n### Additional context\r\n\r\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/705", "state": "open", "labels": [ "enhancement" ], "created_at": "2022-07-31T22:46:33Z", "updated_at": "2022-08-02T22:07:18Z", "comments": 1, "user": "msaroufim" }, { "repo": "pytorch/pytorch", "number": 82542, "title": "Is there Doc that explains how to call an extension op in another extension implementation?", "body": "### \ud83d\udcda The doc issue\n\nFor example, there is an extension op which is installed from public repo via `pip install torch-scatter`, and in Python code, it's easy to use this extension:\r\n\r\n```py\r\nimport torch\r\noutput = torch.ops.torch_scatter.scatter_max(x, index)\r\n```\r\n\r\nHowever, I'm writing an C++ extension and want to call this extension as well, but I cannot find any doc that guides how to do this, or I don't know whether Pytorch C++ extension can even support it or not. Briefly, this is something I'd like to do in extension function:\r\n\r\n```cpp\r\ntorch::Tensor my_op(torch::Tensor x, torch::Tensor y, torch::Tensor z) {\r\n auto temp = torch::ops::torch_scatter::scatter_max(z, y.view(-1)); // not working\r\n ..\r\n return temp;\r\n}\r\n```\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @svekars @holly1238 @jbschlosser", "url": "https://github.com/pytorch/pytorch/issues/82542", "state": "open", "labels": [ "module: docs", "module: cpp", "triaged" ], "created_at": "2022-07-31T06:20:02Z", "updated_at": "2022-08-03T15:18:05Z", "user": "ghostplant" }, { "repo": "pytorch/pytorch", "number": 82524, "title": "how to build libtorch from source?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\n where is the source? 
I want to build libtorch-win-shared-with-deps-1.12.0%2Bcu116.zip\r\n\r\n### Versions\r\n\r\nas in the title\r\n\r\ncc @malfet @seemethere @svekars @holly1238 @jbschlosser", "url": "https://github.com/pytorch/pytorch/issues/82524", "state": "closed", "labels": [ "module: build", "module: docs", "module: cpp", "triaged" ], "created_at": "2022-07-30T08:01:18Z", "updated_at": "2022-08-21T01:50:37Z", "user": "xoofee" }, { "repo": "pytorch/data", "number": 703, "title": "Read Parquet Files Directly from S3?", "body": "### \ud83d\ude80 The feature\n\nThe `ParquetDataFrameLoader` allows us to read parquet files from the local file system, but I don't think it supports reading parquet files from (for example) an S3 bucket.\r\n\r\nMake this possible.\n\n### Motivation, pitch\n\nI would like to train my models on parquet files stored in an S3 bucket.\n\n### Alternatives\n\nYou could probably download the parquet file locally and then use the `ParquetDataFrameLoader`?\n\n### Additional context\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/703", "state": "open", "labels": [ "enhancement", "feature" ], "created_at": "2022-07-30T06:07:01Z", "updated_at": "2022-08-03T19:09:09Z", "comments": 2, "user": "vedantroy" }, { "repo": "pytorch/TensorRT", "number": 1213, "title": "\u2753 [Question] Is it ok to build v1.1.0 with cuda10.2 not default cuda11.3?", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nIs it ok to build v1.1.0 with cuda10.2 not default cuda11.3?\r\nIt's hard to upgrade latest gpu driver for some machine which is shared by many people. So cuda10.2 is preferred.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version: 1.11.0\r\n - CUDA version: 10.2 \r\n - TensorRT: 8.2.4.2 \r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1213", "state": "closed", "labels": [ "question", "component: dependencies" ], "created_at": "2022-07-28T12:05:22Z", "updated_at": "2022-08-12T01:46:32Z", "user": "wikiwen" }, { "repo": "pytorch/TensorRT", "number": 1212, "title": "\ud83d\udc1b [Bug] Encountered bug when using Torch-TensorRT", "body": "## \u2753 Question\r\n\r\nHello There, \r\nI've tried to run torch_TensorRT on ubuntu and windows as well. On windows I compiled it with [this](https://github.com/gcuendet/Torch-TensorRT/tree/add-cmake-support) pull request and it is working good on C++. The resulting trt_module on ubuntu is loading flawlessly on python and can be saved and loaded from disk for future use. This is not the case with Windows C++ module, the resulting trt_model on windows via C++ program is working perfectly with C++ but it is not getting loaded on python. My question is, python library is using C binaries to perform this task and resulting model is getting loaded on python, why it is not same in other case? Am I missing something? 
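One note before the repro details: the `Unknown type name '__torch__.torch.classes.tensorrt.Engine'` error shown further down usually means the extension that registers that class was never loaded in the Python process, so a minimal thing to check (assuming the torch_tensorrt Python package is installed on the Windows side) is:

```python
import torch
import torch_tensorrt  # importing it registers torch.classes.tensorrt.Engine with TorchScript

model = torch.jit.load('NewTRTModel.ts')
```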
\r\n\r\n## What I have tried\r\n\r\n### Working \r\n#### Compiling and Loading TRT Model On Python Ubuntu:\r\n\r\n```\r\ntrt_model_fp32 = torch_tensorrt.compile(torch_script_module, truncate_long_and_double=True,\r\n inputs=[torch_tensorrt.Input((1, 3, 8, 290, 290), dtype=torch.float32)],\r\n enabled_precisions=torch.float16, \r\n workspace_size=1 << 34,\r\n require_full_compilation=True,\r\n\r\n )\r\ntorch.jit.save(trt_model_fp32, \"NewTRTModel.ts\")\r\nmodel = torch.jit.load(\"NewTRTModel.ts\")\r\n```\r\n\r\n### Not Working\r\n#### Compiling TRT Model On C++ Windows and then Loading on Python:\r\n```\r\n\r\n const torch::Device device = torch::Device(torch::kCUDA, 0);\r\n torch::jit::script::Module model;\r\n\r\n std::cout << \"Trying to load the model\" << std::endl;\r\n try {\r\n model = torch::jit::load(model_path, device);\r\n model.to(device);\r\n model.eval();\r\n }\r\n catch (const c10::Error& e) {\r\n std::cerr << e.what() << std::endl;\r\n }\r\n auto inp = std::vector<int64_t>{ 1, 3, 8, 290, 290 };\r\n auto input = torch_tensorrt::Input(inp);\r\n auto compile_settings = torch_tensorrt::ts::CompileSpec({ input });\r\n compile_settings.enabled_precisions = { torch::kFloat16 };\r\n \r\n // Compile module\r\n std::cout << \"Compiling...\" << std::endl;\r\n auto trt_mod = torch_tensorrt::ts::compile(model, compile_settings);\r\n // Save module for later\r\n trt_mod.save(\"/NewTRTModel.ts\");\r\n\r\n#### Loading On Python: \r\nmodel = torch.load(\"NewTRTModel.ts\")\r\nmodel = torch.jit.load(\"NewTRTModel.ts\")\r\n```\r\n\r\n## Error\r\n\r\n```\r\nRuntimeError Traceback (most recent call last)\r\nInput In [3], in <cell line: 1>()\r\n----> 1 model = torch.load(\"NewTRTModel.ts\")\r\n\r\nFile ~\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\torch\\serialization.py:711, in load(f, map_location, pickle_module, **pickle_load_args)\r\n 707 warnings.warn(\"'torch.load' received a zip file that looks like a TorchScript archive\"\r\n 708 \" dispatching to 'torch.jit.load' (call 'torch.jit.load' directly to\"\r\n 709 \" silence this warning)\", UserWarning)\r\n 710 opened_file.seek(orig_position)\r\n--> 711 return torch.jit.load(opened_file)\r\n 712 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)\r\n 713 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)\r\n\r\nFile ~\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\torch\\jit\\_serialization.py:164, in load(f, map_location, _extra_files)\r\n 162 cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)\r\n 163 else:\r\n--> 164 cpp_module = torch._C.import_ir_module_from_buffer(\r\n 165 cu, f.read(), map_location, _extra_files\r\n 166 )\r\n 168 # TODO: Pretty sure this approach loses ConstSequential status and such\r\n 169 return wrap_cpp_module(cpp_module)\r\n\r\nRuntimeError: \r\nUnknown type name '__torch__.torch.classes.tensorrt.Engine':\r\n File \"code/__torch__/movinets/models.py\", line 4\r\n __parameters__ = []\r\n __buffers__ = []\r\n __torch___movinets_models_MoViNet_trt_engine_ : __torch__.torch.classes.tensorrt.Engine\r\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n def forward(self_1: __torch__.movinets.models.MoViNet_trt,\r\n input_0: Tensor) -> Tensor:\r\n\r\n```\r\n\r\n## Environment\r\n\r\nWindows 11\r\n\r\nCPU : i9-11980HK x86-64\r\n\r\nGPU : RTX 3080 Mobile\r\n\r\nCuda : 11.5.2\r\n\r\nCudnn : 8.3.1\r\n\r\nLibtorch : 1.11\r\n\r\nTensor_RT : 8.4.1.5\r\n\r\nVisual Studio 2019\r\n\r\n\r\n", "url": 
"https://github.com/pytorch/TensorRT/issues/1212", "state": "closed", "labels": [ "question", "No Activity", "channel: windows" ], "created_at": "2022-07-28T06:31:49Z", "updated_at": "2022-11-02T18:44:43Z", "user": "ghost" }, { "repo": "pytorch/TensorRT", "number": 1209, "title": "\u2753 [Question] How do you install an older TensorRT package?", "body": "## \u2753 Question\r\n\r\nHow do you install an older TensorRT package? I'm using Pytorch 1.8 and TensorRT version 0.3.0 matches that Pytorch version.\r\n\r\n\r\n## What you have already tried\r\n\r\nI tried:\r\n\r\n```\r\npip3 install torch-tensorrt==v0.3.0 -f https://github.com/pytorch/TensorRT/releases\r\nLooking in links: https://github.com/pytorch/TensorRT/releases\r\nERROR: Could not find a version that satisfies the requirement torch-tensorrt==v0.3.0 (from versions: 0.0.0, 0.0.0.post1, 1.0.0, 1.1.0)\r\nERROR: No matching distribution found for torch-tensorrt==v0.3.0\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1209", "state": "closed", "labels": [ "question" ], "created_at": "2022-07-27T16:40:14Z", "updated_at": "2022-08-01T15:54:24Z", "user": "JinLi711" }, { "repo": "pytorch/pytorch", "number": 82304, "title": "How to use SwiftShader to test vulkan mobile models ?", "body": "### \ud83d\udcda The doc issue\n\nIn this tutorial [here](https://pytorch.org/tutorials/prototype/vulkan_workflow.html),\r\n\r\nIt's pointed out at the end that it will be possible to use SwiftShader to test pytorch_mobile models on Vulkan backend without needing to go to mobile.\r\n\r\nHow?\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/82304", "state": "closed", "labels": [ "oncall: mobile" ], "created_at": "2022-07-27T10:18:59Z", "updated_at": "2022-07-28T22:40:59Z", "user": "MohamedAliRashad" }, { "repo": "pytorch/TensorRT", "number": 1207, "title": "\u2753 [Question] cmake do not find torchtrt?", "body": "## \u2753 Question\r\n\r\nHi, im trying to compile from source and test with c++.\r\n\r\nI built using locally installed cuda 10.2 , tensort 8.2 and libtorch cxx11 abi, compile using `bazel build //:libtorchtrt -c opt` \r\nIt looks like the installation was successful.\r\n```\r\nINFO: Analyzed target //:libtorchtrt (0 packages loaded, 0 targets configured).\r\nINFO: Found 1 target...\r\nTarget //:libtorchtrt up-to-date:\r\n bazel-bin/libtorchtrt.tar.gz\r\nINFO: Elapsed time: 248.908s, Critical Path: 35.38s\r\nINFO: 217 processes: 2 internal, 215 linux-sandbox.\r\nINFO: Build completed successfully, 217 total actions\r\n\r\n```\r\n\r\nBut when i test with c++, the cmake can not find torchtrt, seems like the installation was not correctly\uff1f\uff1f\uff1f\r\nIs there anyone who can tell me what do i miss??? 
thank u.\r\n\r\nthis is my cmakelist file, it works without `find_package(torchtrt REQUIRED)`\r\n\r\n```\r\nproject(example)\r\ncmake_minimum_required(VERSION 3.0)\r\n\r\nset(CMAKE_CXX_STANDARD 14)\r\n\r\nset(Torch_DIR /home/xs/libtorch/share/cmake/Torch) \r\nfind_package(Torch REQUIRED)\r\nfind_package(torchtrt REQUIRED)\r\n\r\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}\")\r\n\r\nadd_executable(example main.cpp)\r\ntarget_link_libraries(example \"${TORCH_LIBRARIES}\")\r\n```\r\nErrors:\r\n```\r\nCMake Error at CMakeLists.txt:10 (find_package):\r\n By not providing \"Findtorchtrt.cmake\" in CMAKE_MODULE_PATH this project has\r\n asked CMake to find a package configuration file provided by \"torchtrt\",\r\n but CMake did not find one.\r\n\r\n Could not find a package configuration file provided by \"torchtrt\" with any\r\n of the following names:\r\n\r\n torchtrtConfig.cmake\r\n torchtrt-config.cmake\r\n\r\n Add the installation prefix of \"torchtrt\" to CMAKE_PREFIX_PATH or set\r\n \"torchtrt_DIR\" to a directory containing one of the above files. If\r\n \"torchtrt\" provides a separate development package or SDK, be sure it has\r\n been installed.\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.11.0\r\n - CPU Architecture: x86-64\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip whl file from PyTorch.org\r\n - Build command you used (if compiling from source): compiling from source\r\n - Are you using local sources or building from archives: local sources\r\n - Python version: 3.7.0\r\n - CUDA version: 10.2\r\n - GPU models and configuration: gtx 1050ti\r\n - Any other relevant information:\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1207", "state": "closed", "labels": [ "question", "component: build system" ], "created_at": "2022-07-27T09:14:51Z", "updated_at": "2023-09-14T17:40:25Z", "user": "xsxsmm" }, { "repo": "pytorch/torchx", "number": 569, "title": "[Ray] Elastic Launch on Ray Cluster", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\nSupport elastic training on Ray Cluster.\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\nTraining can tolerate node failures.\r\nThe number of worker nodes can expand as the size of the cluster grows.\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\nBased on current implementation, there will be two major steps for this feature:\r\n- [ ] #559 Support expanding the placement groups for command actors on the fly \r\n- [ ] Support fault tolerance which depends on the implementation of ray.\r\nRay Placement Group supports fault tolerance, and its logic is when a node dead, GCS will reschedule the placement groups on that node to other nodes. And it introduces a problem: how do we know when a node is dead and which placement groups are being created, since we must restart the command actor on those placement groups who have been rescheduled, the reason is that those placement groups will never be removed until the training ends, and it reserves resources cannot be used by others. Currently there are two possible ways to achieve this:\r\n 1. Disable the fault tolerance feature of Ray Placement Group, then we need find a way to monitor the living placement groups.\r\n 2. 
Let the Ray GCS notifies the main process when some placement groups are being rescheduled, and we will be able to restart the command actors on those placement groups once they have been rescheduled.\r\n\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n[Ray Placement Group](https://docs.ray.io/en/latest/ray-core/placement-group.html#fault-tolerance)\r\n[Support expanding the placement groups for command actors on the fly](https://github.com/pytorch/torchx/pull/559)\r\n[Enable Notification on Node failure](https://github.com/ray-project/ray/issues/27076)\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/569", "state": "open", "labels": [ "enhancement", "ray" ], "created_at": "2022-07-27T04:32:41Z", "updated_at": "2022-11-05T18:22:51Z", "comments": 0, "user": "ntlm1686" }, { "repo": "pytorch/data", "number": 693, "title": "Changing decoding method in StreamReader ", "body": "### \ud83d\udc1b Describe the bug\n\nHi,\r\n\r\nWhen decoding from a file stream in `StreamReader`, torchdata automatically assumes the incoming bytes are UTF-8. However, in the case of alternate encoding's this will error (in my case `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xec in position 3: invalid continuation byte`). How do we change the decoding method to fit the particular data stream?\n\n### Versions\n\n```\r\nVersions of relevant libraries:\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.23.0\r\n[pip3] pytorch-lightning==1.6.4\r\n[pip3] torch==1.11.0\r\n[pip3] torchdata==0.3.0\r\n[pip3] torchmetrics==0.9.1\r\n[pip3] torchvision==0.12.0\r\n[conda] numpy 1.23.0 pypi_0 pypi\r\n[conda] pytorch-lightning 1.6.4 pypi_0 pypi\r\n[conda] torch 1.11.0 pypi_0 pypi\r\n[conda] torchdata 0.3.0 pypi_0 pypi\r\n[conda] torchmetrics 0.9.1 pypi_0 pypi\r\n[conda] torchvision 0.12.0 pypi_0 pypi\r\n```", "url": "https://github.com/meta-pytorch/data/issues/693", "state": "open", "labels": [], "created_at": "2022-07-27T00:33:29Z", "updated_at": "2022-07-27T13:18:15Z", "comments": 2, "user": "is-jlehrer" }, { "repo": "pytorch/data", "number": 690, "title": "Unable to vectorize datapipe operations", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nLet `t` be an input dataset that associates strings (model input) to integers (model output):\r\n\r\n```python\r\nt = [(\"a\", 567), (\"b\", 908), (\"c\", 887)]\r\n```\r\n\r\nI now wrap `t` in a `SequenceWrapper`, to use it as part of a DataPipe:\r\n\r\n```python\r\nimport torchdata.datapipes as dp\r\n\r\npipeline = dp.map.SequenceWrapper(t, deepcopy=False)\r\n```\r\n\r\nNow, I have a datapipe giving me tuples:\r\n\r\n```python\r\n>>> pipeline[0]\r\n('a', 567)\r\n```\r\n\r\nAfter that, I am willing to do some preprocessing. However, since I have a huge dataset I want to vectorize the following operations: for that, I use `.batch`:\r\n\r\n```python\r\nbatched_pipeline = pipeline.batch(batch_size=2)\r\n```\r\n\r\nBy vectorizing, I mean grouping the X values (the strings) and the Y values (integers) together so that I can apply a custom logic to the input and the output at the same time, and in batch.\r\nHowever, the `.batch()` function returns the following:\r\n\r\n```python\r\n>>> batched_pipeline[0]\r\n[('a', 567), ('b', 908)]\r\n```\r\n\r\nWhich really makes no sense because why would I want the whole line batched? 
Just so that I can iterate over it right after?\r\nIn my opinion, `.batch()` only makes sense if the different **slices** (see TensorFlow's `Dataset.from_tensor_slices()` which does handle that) are batched separately.\r\n\r\nSo what do you think? Is there something I am missing?\r\n\r\nThanks in advance!\r\n\r\n<details>\r\n<summary>Versions</summary>\r\n\r\nPyTorch version: 1.12.0+cu116\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.6\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Red Hat Enterprise Linux release 8.5 (Ootpa) (x86_64)\r\nGCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-3)\r\nClang version: 12.0.1 (Red Hat 12.0.1-2.module+el8.5.0+12651+6a7729ff)\r\nCMake version: version 3.20.2\r\nLibc version: glibc-2.28\r\n\r\nPython version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-4.18.0-348.el8.x86_64-x86_64-with-glibc2.28\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration:\r\nGPU 0: NVIDIA A100-SXM4-80GB\r\nGPU 1: NVIDIA A100-SXM4-80GB\r\nGPU 2: NVIDIA A100-SXM4-80GB\r\nGPU 3: NVIDIA A100-SXM4-80GB\r\nGPU 4: NVIDIA A100-SXM4-80GB\r\nGPU 5: NVIDIA A100-SXM4-80GB\r\nGPU 6: NVIDIA A100-SXM4-80GB\r\nGPU 7: NVIDIA A100-SXM4-80GB\r\n\r\nNvidia driver version: 515.48.07\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] light-the-torch==0.4.0\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.23.0\r\n[pip3] torch==1.12.0+cu116\r\n[pip3] torchaudio==0.12.0\r\n[pip3] torchdata==0.4.0\r\n[pip3] torchmetrics==0.9.2\r\n[pip3] torchtext==0.13.0\r\n[pip3] torchvision==0.13.0\r\n[conda] light-the-torch 0.4.0 pypi_0 pypi\r\n[conda] numpy 1.23.0 pypi_0 pypi\r\n[conda] torch 1.12.0+cu116 pypi_0 pypi\r\n[conda] torchaudio 0.12.0 pypi_0 pypi\r\n[conda] torchdata 0.4.0 pypi_0 pypi\r\n[conda] torchmetrics 0.9.2 pypi_0 pypi\r\n[conda] torchtext 0.13.0 pypi_0 pypi\r\n[conda] torchvision 0.13.0 pypi_0 pypi\r\n\r\n</details>", "url": "https://github.com/meta-pytorch/data/issues/690", "state": "open", "labels": [], "created_at": "2022-07-26T13:30:39Z", "updated_at": "2022-07-26T15:50:26Z", "comments": 2, "user": "BlueskyFR" }, { "repo": "pytorch/xla", "number": 3760, "title": "How to load a gpu trained model on TPU for evaluation", "body": "## \u2753 Questions and Help\r\nHello,\r\nI am loading a GPU trained model on map_location=cpu and then doing \"model.to(device)\" where device is xm.xla_device(n=device_num,devkind=\"TPU\") but on testing the cpu processing time and the tpu processing time is the same. 
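For reference, the load-and-evaluate pattern in question roughly looks like the sketch below (the Linear layer and checkpoint path only stand in for the real network and GPU-trained weights); the inputs also have to be moved to the XLA device, and `xm.mark_step()` or a blocking fetch such as `.cpu()` is needed before timing, otherwise the lazy execution model can make the measurement misleading:

```python
import torch
import torch_xla.core.xla_model as xm

model = torch.nn.Linear(4, 2)                      # stand-in for the real network
torch.save(model.state_dict(), 'gpu_trained.pt')   # stand-in for the GPU-trained checkpoint

device = xm.xla_device()
model.load_state_dict(torch.load('gpu_trained.pt', map_location='cpu'))
model = model.to(device).eval()

x = torch.randn(8, 4).to(device)                   # inputs must live on the XLA device too
with torch.no_grad():
    out = model(x)
xm.mark_step()                                     # force the lazy graph to actually run
print(out.cpu())                                   # fetching the result blocks until the TPU is done
```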
Please let me know what I can do about it.\r\n\r\nThank you", "url": "https://github.com/pytorch/xla/issues/3760", "state": "open", "labels": [], "created_at": "2022-07-26T01:25:45Z", "updated_at": "2022-07-26T02:22:58Z", "user": "Preethse" }, { "repo": "pytorch/data", "number": 689, "title": "Distributed training tutorial with DataLoader2", "body": "### \ud83d\udcda The doc issue\n\nI am not sure how to implement distributed training.\n\n### Suggest a potential alternative/fix\n\nIf there was a simple example that showed how to use DDP with the torchdata library it would be super helpful.", "url": "https://github.com/meta-pytorch/data/issues/689", "state": "closed", "labels": [ "documentation" ], "created_at": "2022-07-25T22:44:56Z", "updated_at": "2023-02-01T17:59:08Z", "comments": 9, "user": "MatthewCaseres" }, { "repo": "pytorch/TensorRT", "number": 1203, "title": "\u2753 [Question] How do you install torch-tensorrt (Import error. no libvinfer_plugin.so.8 file error)? ", "body": "## \u2753 Question\r\n\r\n<!-- ImportError: libnvinfer_plugin.so.8: cannot open shared object file: No such file or directory -->\r\n\r\n### ImportError: libnvinfer_plugin.so.8: cannot open shared object file: No such file or directory\r\n\r\ncuda and cudnn is installed well.\r\nI installed pytorch and nvidia-tensorrt well in conda environment \r\nand then install torch-tensorrt via pip\r\n\r\n```\r\npip3 install torch-tensorrt -f https://github.com/pytorch/TensorRT/releases\r\n```\r\nbut when I import torch-tensorrt, it gives importError \r\nImportError: libnvinfer_plugin.so.8: cannot open shared object file: No such file or directory\r\n```\r\n>>> import torch_tensorrt\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/user_name/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch_tensorrt/__init__.py\", line 11, in <module>\r\n from torch_tensorrt._compile import *\r\n File \"/home/user_name/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch_tensorrt/_compile.py\", line 2, in <module>\r\n from torch_tensorrt import _enums\r\n File \"/home/user_name/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch_tensorrt/_enums.py\", line 1, in <module>\r\n from torch_tensorrt._C import dtype, DeviceType, EngineCapability, TensorFormat\r\nImportError: libnvinfer_plugin.so.8: cannot open shared object file: No such file or directory\r\n```\r\n\r\n\r\n## What you have already tried\r\n\r\n```\r\n>>> import torch\r\n>>> torch.cuda.is_available()\r\nTrue\r\n>>> torch.cuda.device_count()\r\n8\r\n>>> torch.cuda.current_device()\r\n0\r\n>>> torch.cuda.get_device_name(0)\r\n'NVIDIA RTX A5000'\r\n>>> torch.__version__\r\n'1.12.0'\r\n>>> import tensorrt\r\n```\r\n\r\n\r\n\r\n<!-- checked cuda, cudnn, pytorch versions are right. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.12\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda\r\n - Build command you used (if compiling from source):\r\n\r\n\r\n - Are you using local sources or building from archives: \r\n - Python version: 3.9.7\r\n - CUDA version: 11.4\r\n - GPU models and configuration: NVIDIA RTX A5000\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1203", "state": "closed", "labels": [ "question" ], "created_at": "2022-07-25T03:03:35Z", "updated_at": "2024-04-30T02:16:10Z", "user": "YOONAHLEE" }, { "repo": "pytorch/TensorRT", "number": 1202, "title": "\u2753 [Question] interpolate isn't suported?", "body": "## \u2753 Question\r\n\r\ndoes anyone succeed compile [torch.nn.functional.interpolate](https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html) with torch_tensort>1.x.x?\r\n\r\nin the release note, it is written that nearest and bilinear interpolation are supported\r\n\r\nif you can compile it, please share with me the example code. thank you!", "url": "https://github.com/pytorch/TensorRT/issues/1202", "state": "closed", "labels": [ "question", "component: converters" ], "created_at": "2022-07-24T21:35:34Z", "updated_at": "2022-07-27T23:46:37Z", "user": "yokosyun" }, { "repo": "pytorch/functorch", "number": 982, "title": "GPU Memeory", "body": "```\r\nfunc_model, params = make_functional(model)\r\n\r\nfor param in params:\r\n param.requires_grad_(False)\r\n\r\ndef compute_loss(params, data, targets):\r\n data = data.unsqueeze(dim=0)\r\n preds = func_model(params, data)\r\n loss = loss_fn(preds, targets)\r\n return loss\r\n\r\nper_sample_info = vmap(grad_and_value(compute_loss, has_aux=False), (None, 0, 0),randomness='different')(params, images, labels)\r\nper_sample_grads = per_sample_info[0]\r\nper_sample_losses = per_sample_info[1].detach_()\r\n \r\ngrads = torch.cat([g.detach().view(b,-1) for g in per_sample_grads], dim=1)\r\n```\r\nIt seems that when I get grads, the usage of gpu memory nearly doubles which is not what I want. Looking forward to some advice.\r\n\r\n", "url": "https://github.com/pytorch/functorch/issues/982", "state": "closed", "labels": [], "created_at": "2022-07-24T04:45:50Z", "updated_at": "2022-07-24T10:37:22Z", "comments": 0, "user": "kwwcv" }, { "repo": "pytorch/pytorch", "number": 82041, "title": "[Misleading] The doc started using Tensorflow terminology in the document to explain how to use the Pytorch code.", "body": "### \ud83d\udcda The doc issue\n\n![image](https://user-images.githubusercontent.com/21982975/180585585-6077a456-c2d9-4e15-a1f6-014b93feb13b.png)\r\n the model must be executed in inference mode and operate on input tensors that do not collect gradient tape information (e.g., running with torch.no_grad).\n\n### Suggest a potential alternative/fix\n\n the model must be executed in inference mode and operate on input tensors that do not collect gradient tape information (e.g., running with torch.no_grad).\r\nChange it to be:\r\nthe model must be executed in inference mode and operate on input tensors that does not accumulate gradient. (e.g, setting the model with torch.no_grad).\n\ncc @svekars @holly1238", "url": "https://github.com/pytorch/pytorch/issues/82041", "state": "open", "labels": [ "module: docs", "triaged" ], "created_at": "2022-07-23T01:43:39Z", "updated_at": "2022-07-24T16:35:11Z", "user": "AliceSum" }, { "repo": "pytorch/torchx", "number": 567, "title": "[exploratory] TorchX Dashboard", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nAdd a new `torchx dashboard` command that will launch a local HTTP server that allows users to view all of their jobs with statuses, logs and integration with any ML specific extras such as artifacts, Tensorboard, etc.\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? 
provide background context -->\r\n\r\nCurrently the interface for TorchX is only via programmatic or via the CLI. It would also be nice to have a UI dashboard that could be used to monitor all of your job as well as support deeper integrations such as experiment tracking and metrics.\r\n\r\nRight now if users want to use a UI they have to use their platform specific one (i.e aws batch/ray dashboard) and many don't have one (slurm/volcano).\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\nThis would be a fairly simple interface built on top of something such as Flask (https://flask.palletsprojects.com/en/2.1.x/quickstart/). \r\n\r\nPages:\r\n\r\n* `/` the main page with a list of all of the users jobs and filters\r\n* `/<scheduler>/<jobid>` an overview of the job, the job def and the status with a tab for logs, artifacts and any other URLs that are logged\r\n* `/<scheduler>/<jobid>/logs` - view the logs\r\n* `/<scheduler>/<jobid>/external/<metadata key>` - iframes based off of external services such as tensorboard etc \r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\nProviding a way to view URLs for external services via the terminal.\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n\r\n* https://docs.ray.io/en/latest/ray-core/ray-dashboard.html#logical-view", "url": "https://github.com/meta-pytorch/torchx/issues/567", "state": "open", "labels": [ "enhancement", "RFC", "cli" ], "created_at": "2022-07-22T19:28:51Z", "updated_at": "2022-08-02T21:23:14Z", "comments": 1, "user": "d4l3k" }, { "repo": "pytorch/torchx", "number": 566, "title": "add a TORCHX_JOB_ID environment variable to all jobs launched via runner", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nAs part of the future experiment tracking we want to be able to have the application know it's own identity. When we launch a job we return the full job id (i.e. `kubernetes://session/app_id`) but the app itself doesn't have this exact same job ID. We do provide an `app_id` macro that can be used in the app def for both env and arguments but it's up to the app owner to manually add that.\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\n\r\nIf we add a `TORCHX_JOB_ID` environment variable it allows us to write more standardized integrations for experiment tracking that use the job ID as a key. There's no added cost from an extra environment variable and will enable deeper automatic integrations into other libraries.\r\n\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\nAdd a new environment variable to Runner.dryrun\r\n\r\nhttps://github.com/pytorch/torchx/blob/main/torchx/runner/api.py#L241\r\n\r\nthat uses the macros.app_id to add the full job ID using the scheduler and session information form the runner.\r\n\r\nhttps://github.com/pytorch/torchx/blob/main/torchx/specs/api.py#L156\r\n\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. 
-->\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/566", "state": "open", "labels": [ "enhancement", "module: runner", "tracking" ], "created_at": "2022-07-22T18:22:24Z", "updated_at": "2022-07-22T21:28:02Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/functorch", "number": 979, "title": "ImportError: ~/.local/lib/python3.9/site-packages/functorch/_C.so: undefined symbol: _ZNK3c1010TensorImpl16sym_sizes_customEv", "body": "Hi All,\r\n\r\nI was running an older version of PyTorch ( - built from source) with FuncTorch ( - built from source), and somehow I've broken the older version of functorch. When I import functorch I get the following error,\r\n```\r\nimport functorch\r\n#returns ImportError: ~/.local/lib/python3.9/site-packages/functorch/_C.so: undefined symbol: _ZNK3c1010TensorImpl16sym_sizes_customEv\r\n```\r\n\r\nThe version I had of `functorch` was `0.2.0a0+9d6ee76`, is there a way to perhaps re-install to fix this ImportError? I do have the latest version of PyTorch/FuncTorch in a separate conda environment but I wanted to check how it compares to the older version in this 'older' conda environment PyTorch/Functorch were versions ,1.12.0a0+git7c2103a and 0.2.0a0+9d6ee76 respectively.\r\n\r\nIs there a way to download a specific version of `functorch` with `https://github.com/pytorch/functorch.git` ? Or another way to fix this issue?", "url": "https://github.com/pytorch/functorch/issues/979", "state": "closed", "labels": [], "created_at": "2022-07-22T14:51:13Z", "updated_at": "2022-07-25T19:22:04Z", "comments": 24, "user": "AlphaBetaGamma96" }, { "repo": "pytorch/TensorRT", "number": 1199, "title": " Cant import torch_tensorrt", "body": " ERROR:\r\nfrom torch.fx.passes.pass_manager import PassManager\r\n\r\n ModuleNotFoundError: No module named 'torch.fx.passes.pass_manager'\r\n\r\n\r\n\r\n - PyTorch Version : 1.11\r\n - CPU Architecture: jetson AGX xavier\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch: nvidia forum wheel\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:3.8\r\n - CUDA version: 11.4\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1199", "state": "closed", "labels": [ "question", "channel: linux-jetpack", "component: fx" ], "created_at": "2022-07-22T08:00:34Z", "updated_at": "2022-09-02T18:04:29Z", "user": "sanath-tech" }, { "repo": "pytorch/TensorRT", "number": 1198, "title": "\u2753 [Question] Where can we get VGG-16 checkpoint pretrained on CIFAR-10 ? 
", "body": "## \u2753 Question\r\n\r\nTo get $pwd/vgg16_ckpts/ckpt_epoch110.pth, I tried to run the script named [python3 finetune_qat.py](https://github.com/pytorch/TensorRT/tree/v1.1.1/examples/int8/training/vgg16#quantization-aware-fine-tuning-for-trying-out-qat-workflows).\r\n\r\nHowever, the script needs VGG-16 pretrained model at 100-epoch as follows: \r\n```bash\r\nLoading from checkpoint $(PATH_TOTensorRT)/examples/int8/training/vgg16/vgg16_ckpts/ckpt_epoch100.pth\r\n```\r\nThen where can we download the checkpoint-epoch 100 model?\r\nI failed to download it from other internet site ", "url": "https://github.com/pytorch/TensorRT/issues/1198", "state": "closed", "labels": [ "question" ], "created_at": "2022-07-22T05:06:34Z", "updated_at": "2022-07-22T05:13:32Z", "user": "zinuok" }, { "repo": "pytorch/TensorRT", "number": 1197, "title": "\u2753 [Question] Where can we get 'trained_vgg16_qat.jit.pt' ?", "body": "## \u2753 Question\r\n\r\nWhere can we get 'trained_vgg16_qat.jit.pt' ?\r\nthe link in [test_qat_trt_accuracy.py](https://github.com/pytorch/TensorRT/blob/master/tests/py/test_qat_trt_accuracy.py#L74)\r\ndoesn't work now.", "url": "https://github.com/pytorch/TensorRT/issues/1197", "state": "closed", "labels": [ "question" ], "created_at": "2022-07-22T04:38:53Z", "updated_at": "2022-07-22T04:46:46Z", "user": "zinuok" }, { "repo": "pytorch/serve", "number": 1753, "title": "how to return the predictions in JSON format(in JSON string and JSON header)?", "body": "I was using torchserve to production service, I was able to return the predictions with a JSON string, but I was unable to get the response with a JSON header. ", "url": "https://github.com/pytorch/serve/issues/1753", "state": "closed", "labels": [ "triaged_wait", "support" ], "created_at": "2022-07-22T04:04:26Z", "updated_at": "2022-07-24T16:50:32Z", "user": "Vincentwei1021" }, { "repo": "pytorch/functorch", "number": 977, "title": "Hessian (w.r.t inputs) calculation in PyTorch differs from FuncTorch", "body": "Hi All,\r\n\r\nI've been trying to calculate the Hessian of the output of my network with respect to its inputs within FuncTorch. I had a version within PyTorch that supports batches, however, they seem to disagree with each other and I have no idea why they don't give the same results. Something is clearly wrong, I know my PyTorch version is right so either there's an issue in my version of FuncTorch or I've implemented it wrong in FuncTorch. \r\n\r\nAlso, how can I use the `has_aux` flag in `jacrev` to return the jacobian from the first `jacrev` so I don't have to repeat the jacobian calculation?\r\n\r\nThe only problem with my example is that it uses `torch.linalg.slogdet` and from what I remember FuncTorch can't vmap over `.item()`. I do have my own fork of pytorch where I edited the backward to remove the `.item()` call so it works with vmap. 
Although, it's not the greatest implementation as I just set it to the default `nonsingular_case_backward` like so,\r\n```\r\nTensor slogdet_backward(const Tensor& grad_logabsdet,\r\n const Tensor& self,\r\n const Tensor& signdet, const Tensor& logabsdet) {\r\n auto singular_case_backward = [&](const Tensor& grad_logabsdet, const Tensor& self) -> Tensor {\r\n Tensor u, sigma, vh;\r\n std::tie(u, sigma, vh) = at::linalg_svd(self, false);\r\n Tensor v = vh.mH();\r\n // sigma has all non-negative entries (also with at least one zero entry)\r\n // so logabsdet = \\sum log(abs(sigma))\r\n // but det = 0, so backward logabsdet = \\sum log(sigma)\r\n auto gsigma = grad_logabsdet.unsqueeze(-1).div(sigma);\r\n return svd_backward({}, gsigma, {}, u, sigma, vh);\r\n };\r\n\r\n auto nonsingular_case_backward = [&](const Tensor& grad_logabsdet, const Tensor& self) -> Tensor {\r\n // TODO: replace self.inverse with linalg_inverse\r\n return unsqueeze_multiple(grad_logabsdet, {-1, -2}, self.dim()) * self.inverse().mH();\r\n };\r\n\r\n auto nonsingular = nonsingular_case_backward(grad_logabsdet, self);\r\n return nonsingular;\r\n}\r\n```\r\n\r\nMy 'minimal' reproducible script is below with the output shown below that. It computes the Laplacian via a PyTorch method and via FuncTorch for a single sample of size `[A,1]` where `A` is the number of input nodes to the network.\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\nfrom torch import Tensor\r\nimport functorch\r\nfrom functorch import jacrev, jacfwd, hessian, make_functional, vmap\r\nimport time \r\n\r\n_ = torch.manual_seed(0)\r\n\r\nprint(\"PyTorch version: \", torch.__version__)\r\nprint(\"CUDA version: \", torch.version.cuda)\r\nprint(\"FuncTorch version: \", functorch.__version__)\r\n\r\ndef sync_time() -> float:\r\n torch.cuda.synchronize()\r\n return time.perf_counter()\r\n\r\nB=1 #batch\r\nA=3 #input nodes\r\n\r\ndevice=torch.device(\"cuda\")\r\n\r\nclass model(nn.Module):\r\n\r\n def __init__(self, num_inputs, num_hidden):\r\n super(model, self).__init__()\r\n \r\n self.num_inputs=num_inputs\r\n self.func = nn.Tanh()\r\n \r\n self.fc1 = nn.Linear(2, num_hidden)\r\n self.fc2 = nn.Linear(num_hidden, num_inputs)\r\n \r\n def forward(self, x):\r\n \"\"\"\r\n Takes x in [B,A,1] and maps it to sign/logabsdet value in Tuple([B,], [B,])\r\n \"\"\"\r\n \r\n idx=len(x.shape)\r\n rep=[1 for _ in range(idx)]\r\n rep[-2] = self.num_inputs\r\n g = x.mean(dim=(idx-2), keepdim=True).repeat(*rep)\r\n f = torch.cat((x,g), dim=-1)\r\n\r\n h = self.func(self.fc1(f))\r\n \r\n mat = self.fc2(h)\r\n sgn, logabs = torch.linalg.slogdet(mat)\r\n return sgn, logabs\r\n\r\nnet = model(A, 64)\r\nnet = net.to(device)\r\n\r\nfnet, params = make_functional(net)\r\n\r\ndef logabs(params, x):\r\n _, logabs = fnet(params, x)\r\n #print(\"functorch logabs: \",logabs)\r\n return logabs\r\n\r\n\r\ndef kinetic_pytorch(xs: Tensor) -> Tensor:\r\n \"\"\"Method to calculate the local kinetic energy values of a netork function, f, for samples, x.\r\n The values calculated here are 1/f d2f/dx2 which is equivalent to d2log(|f|)/dx2 + (dlog(|f|)/dx)^2\r\n within the log-domain (rather than the linear-domain).\r\n\r\n :param xs: The input positions of the many-body particles\r\n :type xs: class: `torch.Tensor`\r\n \"\"\"\r\n xis = [xi.requires_grad_() for xi in xs.flatten(start_dim=1).t()]\r\n xs_flat = torch.stack(xis, dim=1)\r\n\r\n _, ys = net(xs_flat.view_as(xs))\r\n #print(\"pytorch logabs: \",ys)\r\n ones = torch.ones_like(ys)\r\n\r\n #df_dx calculation\r\n (dy_dxs, ) = 
torch.autograd.grad(ys, xs_flat, ones, retain_graph=True, create_graph=True)\r\n\r\n\r\n #d2f_dx2 calculation (diagonal only)\r\n lay_ys = sum(torch.autograd.grad(dy_dxi, xi, ones, retain_graph=True, create_graph=False)[0] \\\r\n for xi, dy_dxi in zip(xis, (dy_dxs[..., i] for i in range(len(xis))))\r\n )\r\n #print(\"(PyTorch): \",lay_ys, dy_dxs)\r\n \r\n ek_local_per_walker = -0.5 * (lay_ys + dy_dxs.pow(2).sum(-1)) #move const out of loop?\r\n return ek_local_per_walker\r\n \r\njacjaclogabs = jacrev(jacrev(logabs, argnums=1), argnums=1)\r\njaclogabs = jacrev(logabs, argnums=1)\r\n \r\ndef kinetic_functorch(params, x):\r\n d2f_dx2 = vmap(jacjaclogabs, in_dims=(None, 0))(par", "url": "https://github.com/pytorch/functorch/issues/977", "state": "closed", "labels": [], "created_at": "2022-07-21T12:11:09Z", "updated_at": "2022-08-01T19:37:18Z", "comments": 18, "user": "AlphaBetaGamma96" }, { "repo": "pytorch/benchmark", "number": 1046, "title": "How to add an new backend?", "body": "Hello, I want to add an new backend to run benchmark **without** modify this repo's code. In torchdynamo repo, I use @create_backend decorator to finish this, but I can't find suitable interface in this repo. ", "url": "https://github.com/pytorch/benchmark/issues/1046", "state": "closed", "labels": [], "created_at": "2022-07-20T08:45:36Z", "updated_at": "2022-07-27T22:47:49Z", "user": "zzpmiracle" }, { "repo": "pytorch/TensorRT", "number": 1189, "title": "\u2753 [Question]Why the GPU memory has doubled when I loaded model from Torch-TensorRT by Pytorch? ", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nWhen I'm using Pytorch to load model from Torch-TensorRT(torch.jit.load (*.ts)) file, the model's GPU memory has doubled(1602MB to 3242MB of GPU Memory from Nvidia-smi). At the same time, the gradient of model tensors are both not included. What I'm concern is that the context memory of torch is not reused, is restart a new context memory of torch. \r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):1.10.0\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Python version: 3.7\r\n - CUDA version:11.2\r\n - Any other relevant information: torch-tensorrt version: 1.1.0\r\n- NVIDIA GPU: Tesla v100\r\n\r\n## Additional context\r\n\r\nimport torch\r\nimport torch_tensorrt\r\n\r\n# memory is 1.6G\r\na= torch.randn()\r\na= torch.randn([1,1,224,224])\r\na.cuda()\r\n\r\n# memory become 3.2G\r\nmodel = torch.jit.load()\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1189", "state": "closed", "labels": [ "question", "No Activity", "performance" ], "created_at": "2022-07-19T10:21:14Z", "updated_at": "2023-03-26T00:02:18Z", "user": "Jancapcc" }, { "repo": "pytorch/TensorRT", "number": 1188, "title": "\u2753 [Question] Cannot install torch-tensorrt package", "body": "Hi! Can someone explain why this is error\r\n\r\n```shell\r\n(tf-gpu-11.6) C:\\Users\\myxzlpltk>pip install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases\r\nLooking in links: https://github.com/NVIDIA/Torch-TensorRT/releases\r\nCollecting torch-tensorrt\r\n Using cached torch-tensorrt-0.0.0.post1.tar.gz (9.0 kB)\r\n Preparing metadata (setup.py) ... 
error\r\n error: subprocess-exited-with-error\r\n\r\n \u00d7 python setup.py egg_info did not run successfully.\r\n \u2502 exit code: 1\r\n \u2570\u2500> [13 lines of output]\r\n Traceback (most recent call last):\r\n File \"<string>\", line 2, in <module>\r\n File \"<pip-setuptools-caller>\", line 34, in <module>\r\n File \"C:\\Users\\myxzlpltk\\AppData\\Local\\Temp\\pip-install-t86xj3rx\\torch-tensorrt_a472ada85c9e492d8f4d7d614046053d\\setup.py\", line 125, in <module>\r\n raise RuntimeError(open(\"ERROR.txt\", \"r\").read())\r\n RuntimeError:\r\n ###########################################################################################\r\n The package you are trying to install is only a placeholder project on PyPI.org repository.\r\n To install Torch-TensorRT please run the following command:\r\n\r\n $ pip install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases\r\n ###########################################################################################\r\n\r\n [end of output]\r\n\r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\nerror: metadata-generation-failed\r\n\r\n\u00d7 Encountered error while generating package metadata.\r\n\u2570\u2500> See above for output.\r\n\r\nnote: This is an issue with the package mentioned above, not pip.\r\nhint: See above for details.\r\n```", "url": "https://github.com/pytorch/TensorRT/issues/1188", "state": "closed", "labels": [ "question", "channel: windows" ], "created_at": "2022-07-19T01:48:13Z", "updated_at": "2024-02-26T17:16:23Z", "user": "myxzlpltk" }, { "repo": "pytorch/TensorRT", "number": 1186, "title": "\u2753 [Question] Python Package for V1.1.1 Release? ", "body": "## \u2753 Question\r\n\r\nDoes the latest release include the python package for supporting JP5.0 too?\r\n\r\n - PyTorch Version (e.g., 1.0): 1.11\r\n - CPU Architecture: Arm64\r\n - Python version: 3.8\r\n - CUDA version: 11.4\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1186", "state": "closed", "labels": [ "question", "release: patch", "channel: linux-jetpack" ], "created_at": "2022-07-18T15:20:13Z", "updated_at": "2022-07-18T21:47:06Z", "user": "haichuanwang001" }, { "repo": "pytorch/data", "number": 661, "title": "DataLoader2 with reading service", "body": "For user dev and onboarding experience of the data component, we will provide examples, tutorials, up-to-date documentations as well as the operational support. We added a simple train loop example. This is to further track adding the uscase and example of DataLoader2 with different reading services.", "url": "https://github.com/meta-pytorch/data/issues/661", "state": "closed", "labels": [ "documentation" ], "created_at": "2022-07-15T17:29:41Z", "updated_at": "2022-11-10T23:07:24Z", "comments": 2, "user": "dahsh" }, { "repo": "pytorch/data", "number": 655, "title": "DataLoader2 with OSS datasets/datapipes", "body": "For user dev and onboarding experience of the data component, we will provide examples, tutorials, up-to-date documentations as well as the operational support. We added a simple train loop example. 
This is to further track adding the uscase and example of DataLoader2 with open source datasets/datapipes.", "url": "https://github.com/meta-pytorch/data/issues/655", "state": "closed", "labels": [], "created_at": "2022-07-14T17:51:13Z", "updated_at": "2022-11-10T23:06:20Z", "comments": 2, "user": "dahsh" }, { "repo": "pytorch/torchx", "number": 557, "title": "how does i run the script and use script args", "body": "## \u2753 Questions and Help\r\nhow does i run the script and use the script_args --\r\n torchx run --scheduler local_cwd --scheduler_args log_dir=/tmp dist.ddp -j 1x2 --script dlrm_main.py --epoch 30\r\n\r\nwhen i test dlrm by next code\r\n\r\n```shell\r\n torchx run --scheduler local_cwd --scheduler_args log_dir=/tmp dist.ddp -j 1x2 --script dlrm_main.py --epoch 30\r\n\r\n```\r\n![image](https://user-images.githubusercontent.com/6194818/178942300-1fe84175-cee5-42cf-8331-455c8716d863.png)\r\n\r\n### Question\r\nthe error is :\r\nusage: torchx run <run args...> ddp [--help] [--script SCRIPT] [-m M] [--image IMAGE] [--name NAME] [-h H] [--cpu CPU] [--gpu GPU] [--memMB MEMMB] [-j J] [--env ENV] [--max_retries MAX_RETRIES] [--rdzv_port RDZV_PORT]\r\n [--mounts MOUNTS]\r\n ...\r\ntorchx run <run args...> ddp : error: unrecognized arguments: --epoch\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/557", "state": "closed", "labels": [], "created_at": "2022-07-14T08:50:39Z", "updated_at": "2023-07-03T19:51:50Z", "comments": 3, "user": "davidxiaozhi" }, { "repo": "pytorch/examples", "number": 1022, "title": "How to build a generator for a layout 2 image GANs with images of size 256 and 512", "body": "Hello I am new to GANs and I need you help : \r\nPlease could you help me to make the model accept the image size of 256x256 and 512x512 \r\n\r\nI included the generator model for 128x128\r\n\r\n`import torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nfrom math import *\r\nfrom models.bilinear import crop_bbox_batch\r\n\r\n\r\ndef get_z_random(batch_size, z_dim, random_type='gauss'):\r\n if random_type == 'uni':\r\n z = torch.rand(batch_size, z_dim) * 2.0 - 1.0\r\n elif random_type == 'gauss':\r\n z = torch.randn(batch_size, z_dim)\r\n return z\r\n\r\n\r\ndef transform_z_flat(batch_size, time_step, z_flat, obj_to_img):\r\n # restore z to batch with padding\r\n z = torch.zeros(batch_size, time_step, z_flat.size(1)).to(z_flat.device)\r\n for i in range(batch_size):\r\n idx = (obj_to_img.data == i).nonzero()\r\n if idx.dim() == 0:\r\n continue\r\n idx = idx.view(-1)\r\n n = idx.size(0)\r\n z[i, :n] = z_flat[idx]\r\n return z\r\n\r\n\r\nclass ConditionalBatchNorm2d(nn.Module):\r\n def __init__(self, num_features, num_classes):\r\n super().__init__()\r\n self.num_features = num_features\r\n self.bn = nn.BatchNorm2d(num_features, affine=False)\r\n self.embed = nn.Embedding(num_classes, num_features * 2)\r\n self.embed.weight.data[:, :num_features].normal_(1, 0.02) # Initialise scale at N(1, 0.02)\r\n self.embed.weight.data[:, num_features:].zero_() # Initialise bias at 0\r\n\r\n def forward(self, x, y):\r\n out = self.bn(x)\r\n gamma, beta = self.embed(y).chunk(2, 1)\r\n out = gamma.view(-1, self.num_features, 1, 1) * out + beta.view(-1, self.num_features, 1, 1)\r\n return out\r\n\r\n\r\nclass ResidualBlock(nn.Module):\r\n \"\"\"Residual Block with instance normalization.\"\"\"\r\n\r\n def __init__(self, dim_in, dim_out):\r\n super(ResidualBlock, self).__init__()\r\n self.main = nn.Sequential(\r\n nn.Conv2d(dim_in, dim_out, kernel_size=3, stride=1, padding=1, 
bias=False),\r\n nn.BatchNorm2d(dim_out, affine=True, track_running_stats=True),\r\n nn.ReLU(inplace=True),\r\n nn.Conv2d(dim_out, dim_out, kernel_size=3, stride=1, padding=1, bias=False),\r\n nn.BatchNorm2d(dim_out, affine=True, track_running_stats=True))\r\n\r\n def forward(self, x):\r\n return x + self.main(x)\r\n\r\n\r\nclass ConvLSTMCell(nn.Module):\r\n\r\n def __init__(self, input_size, input_dim, hidden_dim, kernel_size, bias):\r\n \"\"\"\r\n Initialize ConvLSTM cell.\r\n Parameters\r\n ----------\r\n input_size: (int, int)\r\n Height and width of input tensor as (height, width).\r\n input_dim: int\r\n Number of channels of input tensor.\r\n hidden_dim: int\r\n Number of channels of hidden state.\r\n kernel_size: (int, int)\r\n Size of the convolutional kernel.\r\n bias: bool\r\n Whether or not to add the bias.\r\n \"\"\"\r\n\r\n super(ConvLSTMCell, self).__init__()\r\n\r\n self.height, self.width = input_size\r\n self.input_dim = input_dim\r\n self.hidden_dim = hidden_dim\r\n\r\n self.kernel_size = kernel_size\r\n self.padding = kernel_size[0] // 2, kernel_size[1] // 2\r\n self.bias = bias\r\n\r\n self.conv = nn.Conv2d(in_channels=self.input_dim + self.hidden_dim,\r\n out_channels=4 * self.hidden_dim,\r\n kernel_size=self.kernel_size,\r\n padding=self.padding,\r\n bias=self.bias)\r\n\r\n def forward(self, input_tensor, cur_state):\r\n h_cur, c_cur = cur_state\r\n\r\n combined = torch.cat([input_tensor, h_cur], dim=1) # concatenate along channel axis\r\n\r\n combined_conv = self.conv(combined)\r\n cc_i, cc_f, cc_o, cc_g = torch.split(combined_conv, self.hidden_dim, dim=1)\r\n i = torch.sigmoid(cc_i)\r\n f = torch.sigmoid(cc_f)\r\n o = torch.sigmoid(cc_o)\r\n g = torch.tanh(cc_g)\r\n\r\n c_next = f * c_cur + i * g\r\n h_next = o * torch.tanh(c_next)\r\n\r\n return h_next, c_next\r\n\r\n def init_hidden(self, batch_size, device):\r\n return (torch.zeros(batch_size, self.hidden_dim, self.height, self.width).to(device),\r\n torch.zeros(batch_size, self.hidden_dim, self.height, self.width).to(device))\r\n\r\n\r\nclass ConvLSTM(nn.Module):\r\n\r\n def __init__(self, input_size, input_dim, hidden_dim, kernel_size, batch_first=False, bias=True, return_all_layers=False):\r\n super(ConvLSTM, self).__init__()\r\n\r\n self._check_kernel_size_consistency(kernel_size)\r\n\r\n if isinstance(hidden_dim, list):\r\n num_layers = len(hidden_dim)\r\n elif isinstance(hidden_dim, int):\r\n num_layers = 1\r\n\r\n # Make sure that both `kernel_size` and `hidden_dim` are lists having len == num_layers\r\n kernel_size = self._extend_for_multilayer(kernel_size, num_layers)\r\n hidden_dim = self._extend_for_multilayer(hidden_di", "url": "https://github.com/pytorch/examples/issues/1022", "state": "closed", "labels": [], "created_at": "2022-07-13T15:45:09Z", "updated_at": "2022-07-16T17:13:15Z", "user": "TahaniFennir" }, { "repo": "pytorch/data", "number": 648, "title": "Chainer/Concater from single datapipe?", "body": "The `Concater` datapipe takes multiple DPs as input. Is there a class that would take a **single** datapipe of iterables instead? 
Something like this:\r\n\r\n```py\r\nclass ConcaterIterable(IterDataPipe):\r\n def __init__(self, source_datapipe):\r\n self.source_datapipe = source_datapipe\r\n\r\n def __iter__(self):\r\n for iterable in self.source_datapipe:\r\n yield from iterable\r\n```\r\n\r\nBasically:\r\n\r\n[`itertools.chain` ](https://docs.python.org/3/library/itertools.html#itertools.chain)== `Concater`\r\n[`itertools.chain.from_iterable`](https://docs.python.org/3/library/itertools.html#itertools.chain.from_iterable) == `ConcaterIterable`\r\n\r\n\r\n\r\nMaybe a neat way of implementing this would be to keep a single `Concater` class, which would fall back to the `ConcaterIterable` behaviour if it's passed only one DP as input?\r\n\r\n\r\n-----\r\n\r\n\r\nDetails: I need this for my benchmarking on manifold where each file is a big pickle archive of multiple images. My DP builder looks like this:\r\n\r\n```py\r\ndef make_manifold_dp(root, dataset_size):\r\n handler = ManifoldPathHandler()\r\n dp = IoPathFileLister(root=root)\r\n dp.register_handler(handler)\r\n\r\n dp = dp.shuffle(buffer_size=dataset_size).sharding_filter()\r\n\r\n dp = IoPathFileOpener(dp, mode=\"rb\")\r\n dp.register_handler(handler)\r\n\r\n dp = PickleLoaderDataPipe(dp)\r\n dp = ConcaterIterable(dp) # <-- Needed here!\r\n return dp\r\n```", "url": "https://github.com/meta-pytorch/data/issues/648", "state": "closed", "labels": [ "good first issue" ], "created_at": "2022-07-13T14:19:43Z", "updated_at": "2023-03-14T20:25:01Z", "comments": 9, "user": "NicolasHug" }, { "repo": "pytorch/pytorch", "number": 81395, "title": "How to Do Semi-Asynchronous or Asynchronous Training with Pytorch", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nWhen PyTorch is used for distributed training, DDP is normally good enough for most situations. However, when if performance of different nodes differs, the performance of the whole training will be decided by the worst node. E.g. worker 0 needs 1 second for a forward and backward pass while worker 1 needs 2 seconds, the time for one step will be 2 seconds.\r\n\r\nSo I am wondering if there is way to do semi-asynchronous training with Pytorch?\n\n### Alternatives\n\nThere is a similar library called [hivemind](tps://github.com/learning-at-home/hivemind), but it is designed for Internet while we prefer to run the training job in our cluster.\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/81395", "state": "closed", "labels": [], "created_at": "2022-07-13T09:42:48Z", "updated_at": "2022-07-13T16:50:59Z", "user": "lsy643" }, { "repo": "pytorch/data", "number": 647, "title": "Update out-of-date example and colab", "body": "### \ud83d\udcda The doc issue\n\nThe examples for Text/Vision/Audio are out-of-date: https://github.com/pytorch/data/tree/main/examples\r\nThe colab attached in README needs to be updated as well:\r\n- How to install torchdata\r\n- Example needs shuffle + sharding_filter\n\n### Suggest a potential alternative/fix\n\nNone", "url": "https://github.com/meta-pytorch/data/issues/647", "state": "closed", "labels": [], "created_at": "2022-07-12T21:09:53Z", "updated_at": "2023-02-02T14:39:40Z", "comments": 5, "user": "ejguan" }, { "repo": "pytorch/functorch", "number": 956, "title": "Batching rule for searchsorted implementation", "body": "Hi,\r\n\r\nThanks for the great work, really enjoying functorch in my work. 
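A small, self-contained usage sketch of the `ConcaterIterable` proposed above (nothing beyond the snippet in the issue is assumed), flattening a datapipe whose elements are themselves iterables, i.e. the `itertools.chain.from_iterable` behaviour:

```python
from torchdata.datapipes.iter import IterableWrapper, IterDataPipe

class ConcaterIterable(IterDataPipe):
    def __init__(self, source_datapipe):
        self.source_datapipe = source_datapipe

    def __iter__(self):
        for iterable in self.source_datapipe:
            yield from iterable

dp = IterableWrapper([[0, 1, 2], [3, 4], [5]])
flat = ConcaterIterable(dp)
print(list(flat))  # [0, 1, 2, 3, 4, 5]
```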
I have encountered the following when using vmap on a function which uses torch.searchsorted:\r\n\r\nUserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::searchsorted.Tensor. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /Users/runner/work/functorch/functorch/functorch/csrc/BatchedFallback.cpp:85.)\r\n\r\nLooking forward to the implementation.", "url": "https://github.com/pytorch/functorch/issues/956", "state": "closed", "labels": [ "actionable" ], "created_at": "2022-07-12T06:36:04Z", "updated_at": "2022-07-18T13:49:42Z", "comments": 6, "user": "mingu6" }, { "repo": "pytorch/data", "number": 637, "title": "[TODO] Create dependency on TorchArrow?", "body": "\nThis issue is generated from the TODO line\n\nhttps://github.com/pytorch/data/blob/2f29adba451e1b87f1c0c654557d9dd98673fdd8/torchdata/datapipes/iter/util/dataframemaker.py#L15\n\n\n ", "url": "https://github.com/meta-pytorch/data/issues/637", "state": "open", "labels": [], "created_at": "2022-07-11T17:34:07Z", "updated_at": "2022-07-11T17:34:07Z", "comments": 0, "user": "VitalyFedyunin" }, { "repo": "pytorch/data", "number": 580, "title": "[Linter] Ability to disable some lints", "body": "### \ud83d\ude80 The feature\r\n\r\nThere are several options to disable specific linters. \r\n\r\nOption 1. Disable with `linter-ignore: code`\r\n\r\nPros: \r\n- Similar to known syntax of various linters\r\n\r\nCons: \r\n- Need to modify code of datasets to disable something\r\n\r\n```\r\ndatapipe = datapipe.sharding_filter().shuffle() # linter-ignore: shuffle-shard\r\n```\r\n\r\nOption 2. Global & Context disables\r\n\r\nPros: \r\n- Can control datasets without modification of the code\r\n\r\nCons: \r\n- Global might disable important errors\r\n- Context requires additional indent \r\n- Syntax feels weird \r\n- Annoying to disable construct time linters (see below)\r\n\r\n```\r\nfrom torchdata import linter\r\nlinter.disable('shuffle-shard') # global\r\nwith linter.disable('shuffle-shard'): # context based\r\n dl = DataLoader2(...)\r\n```\r\n\r\nOption 3. DLv2 argument / ReadingService argument\r\n\r\nPros: \r\n- Local to specific DataLoader\r\n- Can control datasets without modication of the code\r\n\r\nCons: \r\n- Syntax feels weird\r\n- Some linters might trigger/not in various ReadingServices \r\n- Annoying to disable construct time linters (see below)\r\n\r\n```\r\ndl = DataLoader2(dp_graph, [adapter], disable_lint = ['shuffle-shard'])\r\n```\r\n\r\nOption 4. 
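Until a batching rule lands, one hedged workaround sketch: `torch.searchsorted` already accepts a batched `sorted_sequence` whose leading dimensions match the query tensor's, so in many cases the batch dimension can be handled directly instead of through `vmap`.

```python
# Hedged workaround: let searchsorted handle the batch dimension itself.
import torch

boundaries = torch.sort(torch.randn(4, 10), dim=-1).values  # (batch, 10), sorted per row
queries = torch.randn(4, 7)                                  # (batch, 7)
idx = torch.searchsorted(boundaries, queries)                # (batch, 7), no vmap needed
print(idx.shape)
```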
DataPipe 'attribute'\r\n\r\nPros: \r\n- Can be defined by DataSet developer or by the user\r\n- Can impact construct time error handling\r\n\r\nCons: \r\n- Syntax feels weird\r\n\r\n```datapipe = datapipe.sharding_filter().shuffle().disable_lint('shuffle-shard')```\r\n \r\nand/or (as we can have an adapter to do the same job)\r\n\r\n```dl = DataLoader(dp_graph,[DisableLint('shuffle-shard')], ...)```\r\n\r\nPersonally, I prefer the last variant, but I'm open to discussion.", "url": "https://github.com/meta-pytorch/data/issues/580", "state": "open", "labels": [], "created_at": "2022-07-08T17:25:25Z", "updated_at": "2022-07-15T21:23:17Z", "comments": 3, "user": "VitalyFedyunin" }, { "repo": "pytorch/pytorch", "number": 81103, "title": "[Discussion] How to add MPS extension with custom kernel?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHi,\r\nI am working on adding MPS op for MPS backend with a custom kernel.\r\nHere is an example:\r\n\r\nhttps://github.com/grimoire/TorchMPSCustomOpsDemo\r\n\r\nI am new to Metal. I am not sure if it is a good way (or the right way) to add such op. There are something I want to discuss:\r\n\r\n## Device and CommandQueue\r\n\r\nSince PyTorch has not exposed the MPS-related API, I have to copy some head [from torch csrc](https://github.com/grimoire/TorchMPSCustomOpsDemo/tree/master/csrc/pytorch/mps). The library is build with `MPSDevice::getInstance()->device()` and the command is commit to `getCurrentMPSStream()`. I am not sure if I should flush on commit or not.\r\n\r\n## LibraryFromUrl vs LibraryFromSource\r\n\r\nIt seems that Metal library can not be linked together with the other object file. So I have to:\r\n\r\nEither load it at runtime, which leads to the problem of how to find the relative location of the `.metallib`.\r\n\r\n```objc\r\n// load from url\r\nNSURL* metal_url = [NSURL fileURLWithPath: utl_str];\r\nlibrary->_library = [at::mps::MPSDevice::getInstance()->device() newLibraryWithURL: metal_url error:&error];\r\n```\r\n\r\nOr build it at runtime. Which might take a long time to compile the kernel at runtime.\r\n\r\n```objc\r\n// build library from source string\r\nNSString* code_str = [NSString stringWithCString: sources.c_str()];\r\nlibrary->_library = [at::mps::MPSDevice::getInstance()->device() newLibraryWithSource: code_str options: nil error:&error];\r\n```\r\n\r\n## BuildExtension\r\n\r\nIf we does not build metal kernel at runtime, we need to setup the compiler for metal kernel in the `setup.py`.\r\n\r\nSince the `build_ext` provided by Python and PyTorch does not support build Metal, I patched the `UnixCCompiler` in `BuildExtension` to add the support. 
Both `compile` and `link` need to be updated:\r\n\r\n```python\r\n\r\n # compile\r\n def darwin_wrap_single_compile(obj, src, ext, cc_args, extra_postargs,\r\n pp_opts) -> None:\r\n cflags = copy.deepcopy(extra_postargs)\r\n try:\r\n original_compiler = self.compiler.compiler_so\r\n\r\n if _is_metal_file(src):\r\n # use xcrun metal to compile metal file to `.air`\r\n metal = ['xcrun', 'metal']\r\n self.compiler.set_executable('compiler_so', metal)\r\n if isinstance(cflags, dict):\r\n cflags = cflags.get('metal', [])\r\n else:\r\n cflags = []\r\n elif isinstance(cflags, dict):\r\n cflags = cflags['cxx']\r\n\r\n original_compile(obj, src, ext, cc_args, cflags, pp_opts)\r\n finally:\r\n self.compiler.set_executable('compiler_so', original_compiler)\r\n \r\n # link\r\n def darwin_wrap_single_link(target_desc,\r\n objects,\r\n output_filename,\r\n output_dir=None,\r\n libraries=None,\r\n library_dirs=None,\r\n runtime_library_dirs=None,\r\n export_symbols=None,\r\n debug=0,\r\n extra_preargs=None,\r\n extra_postargs=None,\r\n build_temp=None,\r\n target_lang=None):\r\n if osp.splitext(objects[0])[1].lower() == '.air':\r\n for obj in objects:\r\n assert osp.splitext(obj)[1].lower(\r\n ) == '.air', f'Expect .air file, but get {obj}.'\r\n # link `.air` with xcrun metallib\r\n linker = ['xcrun', 'metallib']\r\n self.compiler.spawn(linker + objects + ['-o', output_filename])\r\n else:\r\n return original_link(target_desc, objects, output_filename,\r\n output_dir, libraries, library_dirs,\r\n runtime_library_dirs, export_symbols,\r\n debug, extra_preargs, extra_postargs,\r\n build_temp, target_lang)\r\n```\r\n\r\nThe code looks ... ugly. Hope there is a better way to do that.\r\n\r\n\r\nSo ... any advice?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @malfet @zou3519 @kulinseth @albanD", "url": "https://github.com/pytorch/pytorch/issues/81103", "state": "closed", "labels": [ "module: cpp-extensions", "triaged", "enhancement", "topic: docs", "module: mps" ], "created_at": "2022-07-08T12:32:14Z", "updated_at": "2023-07-28T17:11:42Z", "user": "grimoire" }, { "repo": "pytorch/pytorch.github.io", "number": 1071, "title": "Where is documented the resize and crop in EfficientNet for torchvision v0.12.0", "body": "## \ud83d\udcda Documentation\r\n\r\nHello, I do not see in any place what resize and center crop were done for training the efficientNet_bx models.\r\nWhere is that information? \r\n\r\nI saw it in the torchvision v0.13.0 documentation or code ([for example](https://github.com/pytorch/vision/blob/main/torchvision/models/efficientnet.py#L522))\r\n\r\nMany of us have still projects in the older version.\r\n\r\nThanks\r\n", "url": "https://github.com/pytorch/pytorch.github.io/issues/1071", "state": "closed", "labels": [], "created_at": "2022-07-08T12:20:23Z", "updated_at": "2022-07-22T22:06:23Z", "user": "mjack3" }, { "repo": "pytorch/vision", "number": 6249, "title": "Error when create_feature_extractor in AlexNet", "body": "### \ud83d\udc1b Describe the bug\n\nWhen I try to obtain the feature of layer \"classifier.4\" in AlexNet, the program has reported an error. 
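For recovering the resize/crop used by the pretrained EfficientNet checkpoints, a hedged sketch: in torchvision >= 0.13 each weights enum carries its preprocessing, and the printed values are a reasonable reference for the same checkpoints shipped in v0.12 (worth cross-checking against the v0.12 reference scripts).

```python
# Hedged sketch: read the inference transforms off the weights enums (torchvision >= 0.13).
from torchvision.models import EfficientNet_B0_Weights, EfficientNet_B4_Weights

for w in (EfficientNet_B0_Weights.IMAGENET1K_V1, EfficientNet_B4_Weights.IMAGENET1K_V1):
    print(w, w.transforms())  # repr shows resize_size, crop_size, mean/std
```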
The code is as follows:\r\n```\r\nimport torch\r\nfrom torchvision.models import alexnet, AlexNet_Weights\r\nfrom torchvision.models.feature_extraction import create_feature_extractor\r\n\r\nmodel = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1)\r\nextractor = create_feature_extractor(model, {'classifier.4': 'feat'})\r\nimg = torch.rand(3,224,224)\r\nout = extractor(img)\r\n```\r\n\r\n**Error message**\r\n```\r\nRuntimeError: mat1 and mat2 shapes cannot be multiplied (256x36 and 9216x4096)\r\n```\r\n\r\nI guess it is because that the shape of output from \"flatten\" of AlexNet is 256x36 rather than 9216.\n\n### Versions\n\n```\r\nCollecting environment information...\r\nPyTorch version: 1.12.0+cu116\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.6\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.4 LTS (x86_64)\r\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-5.4.0-117-generic-x86_64-with-glibc2.31\r\nIs CUDA available: True\r\nCUDA runtime version: 11.6.55\r\nGPU models and configuration:\r\nGPU 0: NVIDIA GeForce RTX 3090\r\nGPU 1: NVIDIA GeForce RTX 3090\r\nGPU 2: NVIDIA GeForce RTX 3090\r\nGPU 3: NVIDIA GeForce RTX 3090\r\nGPU 4: NVIDIA GeForce RTX 3090\r\nGPU 5: NVIDIA GeForce RTX 3090\r\nGPU 6: NVIDIA GeForce RTX 3090\r\nGPU 7: NVIDIA GeForce RTX 3090\r\n\r\nNvidia driver version: 510.73.08\r\ncuDNN version: Probably one of the following:\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.4.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.0\r\n/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.0\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.22.3\r\n[pip3] torch==1.12.0+cu116\r\n[pip3] torchmetrics==0.9.1\r\n[pip3] torchtext==0.12.0\r\n[pip3] torchvision==0.13.0+cu116\r\n[conda] blas 1.0 mkl defaults\r\n[conda] cudatoolkit 11.3.1 h2bc3f7f_2 defaults\r\n[conda] mkl 2021.4.0 h06a4308_640 defaults\r\n[conda] mkl-service 2.4.0 py39h7f8727e_0 defaults\r\n[conda] mkl_fft 1.3.1 py39hd3c417c_0 defaults\r\n[conda] mkl_random 1.2.2 py39h51133e4_0 defaults\r\n[conda] numpy 1.22.3 py39he7a7128_0 defaults\r\n[conda] numpy-base 1.22.3 py39hf524024_0 defaults\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] torch 1.12.0+cu116 pypi_0 pypi\r\n[conda] torchmetrics 0.9.1 pypi_0 pypi\r\n[conda] torchtext 0.12.0 py39 pytorch\r\n[conda] torchvision 0.13.0+cu116 pypi_0 pypi\r\n```\n\ncc @datumbox", "url": "https://github.com/pytorch/vision/issues/6249", "state": "closed", "labels": [ "question", "module: models", "topic: feature extraction" ], "created_at": "2022-07-08T09:28:06Z", "updated_at": "2022-07-08T10:11:43Z", "user": "githwd2016" }, { "repo": "pytorch/vision", "number": 6247, "title": "Probable missing argument for swin transformer", "body": "Hello, \r\n\r\nWhen I inspect the swin transformer codes in the original swin repo, mmdetection or detectron2, I have noticed that there is a parameter called 
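A hedged reading of the error above: `256x36` is exactly what AlexNet's `flatten` produces when the batch dimension is missing (256 channels times 6*6 spatial), so the extractor itself looks fine and adding a leading batch dimension should resolve it.

```python
# Hedged fix sketch: add the batch dimension the classifier expects.
import torch
from torchvision.models import alexnet, AlexNet_Weights
from torchvision.models.feature_extraction import create_feature_extractor

model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1)
extractor = create_feature_extractor(model, {"classifier.4": "feat"})
img = torch.rand(1, 3, 224, 224)  # note the leading batch dimension
out = extractor(img)
print(out["feat"].shape)  # torch.Size([1, 4096])
```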
`drop_path_rate` which I cannot see in the in the torchvision repo. Maybe, I am overlooking. Is there a similar parameter and is it an important parameter? \r\n\r\nThanks in advance\n\ncc @datumbox", "url": "https://github.com/pytorch/vision/issues/6247", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2022-07-08T08:21:58Z", "updated_at": "2022-07-11T13:17:40Z", "user": "artest08" }, { "repo": "pytorch/functorch", "number": 940, "title": "Question on how to batch over both: inputs and tangent vectors", "body": "I want to compute the jacobian vector product of a function F from R^d to R^D. But I need to do this at a batch of points x_1, ..., x_n in R^d and a batch of tangent vectors v_1, ..., v_m in R^d. Namely, for all i = 1, ..., n and j = 1, ..., m I need to compute the nxm jacobian vector products: J_F(x_i) * v_j.\r\n\r\nIs there a way to do this by using vmap twice to loop over the batches x_i and v_j?", "url": "https://github.com/pytorch/functorch/issues/940", "state": "open", "labels": [], "created_at": "2022-07-07T14:57:28Z", "updated_at": "2022-07-12T17:47:23Z", "user": "sgstepaniants" }, { "repo": "pytorch/serve", "number": 1725, "title": "Serving other framework models with Torchserve?", "body": "Hi everyone.\r\n\r\nAs in the title, I want to ask if torchserve can serve other framework models or pytorch models only?\r\n\r\nFor example, I have a model written in mxnet. This is the snippet code of `initialize` method in my custom handler.\r\n```python\r\ndef initialize(self, context):\r\n properties = context.system_properties\r\n if (torch.cuda.is_available() and\r\n properties.get(\"gpu_id\") is not None):\r\n ctx_id = properties.get(\"gpu_id\")\r\n else:\r\n ctx_id = -1\r\n\r\n self.manifest = context.manifest\r\n model_dir = properties.get(\"model_dir\")\r\n prefix = os.path.join(model_dir, \"model/resnet-50\")\r\n\r\n # load model\r\n sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, 0)\r\n if ctx_id >= 0:\r\n self.ctx = mx.gpu(ctx_id)\r\n else:\r\n self.ctx = mx.cpu()\r\n self.model = mx.mod.Module(symbol=sym,\r\n context=self.ctx,\r\n label_names=None)\r\n self.model.bind(\r\n data_shapes=[('data', (1, 3, 640, 640))],\r\n for_training=False\r\n )\r\n self.model.set_params(arg_params, aux_params)\r\n self.initialized = True\r\n```\r\nFor some reason, pretrained mxnet model can't be loaded. But that same model works fine in my training and inferencing script. 
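For batching over both points and tangents, a hedged sketch using nested `vmap` around `functorch.jvp` is below; `F` is a toy stand-in for the real function, and the result has shape `(n, m, D)` with the outer `vmap` over points and the inner one over tangents.

```python
# Hedged sketch: all-pairs Jacobian-vector products J_F(x_i) @ v_j via nested vmap.
import torch
from functorch import vmap, jvp

d, D, n, m = 3, 5, 4, 6
W = torch.randn(D, d)

def F(x):                      # toy function R^d -> R^D
    return torch.tanh(W @ x)

xs = torch.randn(n, d)         # batch of points
vs = torch.randn(m, d)         # batch of tangent vectors

def jvp_single(x, v):
    return jvp(F, (x,), (v,))[1]

all_jvps = vmap(vmap(jvp_single, in_dims=(None, 0)), in_dims=(0, None))(xs, vs)
print(all_jvps.shape)          # (n, m, D)
```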
This is the error log.\r\n```\r\n2022-07-06T15:48:07,468 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File \"/apps/conda/huyvd/envs/insightface/lib/python3.8/site-packages/ts/model_loader.py\", line 151, in load\r\n2022-07-06T15:48:07,468 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - initialize_fn(service.context)\r\n2022-07-06T15:48:07,469 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File \"/tmp/models/4b6bbba5e16445ffbe70f89282a0d30a/handler.py\", line 34, in initialize\r\n2022-07-06T15:48:07,469 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, 0)\r\n2022-07-06T15:48:07,469 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File \"/apps/conda/huyvd/envs/insightface/lib/python3.8/site-packages/mxnet/model.py\", line 476, in load_checkpoint\r\n2022-07-06T15:48:07,470 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - symbol = sym.load('%s-symbol.json' % prefix)\r\n2022-07-06T15:48:07,470 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File \"/apps/conda/huyvd/envs/insightface/lib/python3.8/site-packages/mxnet/symbol/symbol.py\", line 3054, in load\r\n2022-07-06T15:48:07,470 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - check_call(_LIB.MXSymbolCreateFromFile(c_str(fname), ctypes.byref(handle)))\r\n2022-07-06T15:48:07,471 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File \"/apps/conda/huyvd/envs/insightface/lib/python3.8/site-packages/mxnet/base.py\", line 246, in check_call\r\n2022-07-06T15:48:07,471 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - raise get_last_ffi_error()\r\n2022-07-06T15:48:07,471 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - mxnet.base.MXNetError: Traceback (most recent call last):\r\n2022-07-06T15:48:07,472 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File \"../include/dmlc/././json.h\", line 718\r\n2022-07-06T15:48:07,472 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - MXNetError: Check failed: !is_->fail(): Error at Line 32, around ^``, Expect number\r\n```\r\n", "url": "https://github.com/pytorch/serve/issues/1725", "state": "closed", "labels": [ "help wanted", "question" ], "created_at": "2022-07-06T09:08:44Z", "updated_at": "2022-07-13T07:58:10Z", "user": "vuongdanghuy" }, { "repo": "pytorch/TensorRT", "number": 1166, "title": "\u2753 [Question] How to run Torch-Tensorrt on JETSON AGX ORIN?", "body": "## \u2753 Question\r\n**Not able to run Torch-Tensorrt on Jetson AGX ORIN**\r\nAs per the [release note](https://github.com/pytorch/TensorRT/discussions/1043), it is mentioned that current release doesn't have support for Jetpack 5.0DP but ORIN only supports Jetpack 5.0DP (I might be wrong but inferring from this [Jetpack Archives.](https://developer.nvidia.com/embedded/jetpack-archive). **Is there a way to run Torch-Tensort on ORIN?** if not what's the possible timeline for new release with this support? \r\n\r\n## What you have already tried\r\n\r\nI did tried building for python, as suggested in the repo, it enables `import torch_tensorrt` but doesn't supports any attributes. \r\n\r\n## Environment\r\n - PyTorch Version (e.g., 1.0): 1.11 \r\n - CPU Architecture: arm64\r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): tried both, wheels provided [here ](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048) and building from source(instruction from [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048)). 
\r\n - Build command you used (if compiling from source): python3 setup.py --use_cxx11_abi (however, this refers to jetpack 4.6 by default) \r\n - Python version: 3.8\r\n - CUDA version: 11.4\r\n - GPU models and configuration: Jetson ORIN \r\n - Any other relevant information:\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1166", "state": "closed", "labels": [ "question", "channel: linux-jetpack" ], "created_at": "2022-07-05T19:46:00Z", "updated_at": "2022-08-11T02:55:46Z", "user": "krmayankb" }, { "repo": "pytorch/functorch", "number": 933, "title": "Cannot import vmap after new release", "body": "I am installing functorch on google colab; when I don't specify the version, it installs version 0.2.2 and PyTorch version 1.12.0, and uninstall currently installed PyTorch 1.11.0 on colab. But, in the line where I import vmap, it throws an error that functorch is not compatible with PyTorch 1.12.0:\r\n\r\n```\r\nRuntimeError Traceback (most recent call last)\r\n[<ipython-input-1-0691ca18293b>](https://localhost:8080/#) in <module>()\r\n 3 \r\n 4 from torchsummary import summary\r\n----> 5 from functorch import vmap\r\n 6 import torch\r\n 7 import torch.nn as nn\r\n\r\n[/usr/local/lib/python3.7/dist-packages/functorch/__init__.py](https://localhost:8080/#) in <module>()\r\n 20 if torch_cuda_version not in pytorch_cuda_restrictions:\r\n 21 raise RuntimeError(\r\n---> 22 f\"We've detected an installation of PyTorch 1.12 with {verbose_torch_cuda_version} support. \"\r\n 23 \"This functorch 0.2.0 binary is not compatible with the PyTorch installation. \"\r\n 24 \"Please see our install page for suggestions on how to resolve this: \"\r\n\r\nRuntimeError: We've detected an installation of PyTorch 1.12 with CUDA 10.2 support. This functorch 0.2.0 binary is not compatible with the PyTorch installation. Please see our install page for suggestions on how to resolve this: https://pytorch.org/functorch/stable/install.html\r\n```\r\n\r\nI tried the older version, functorch 0.1.1 with PyTorch 1.11.0, but it also gives some errors during the import:\r\n```\r\nImportError Traceback (most recent call last)\r\n[<ipython-input-3-abbd2ba6241c>](https://localhost:8080/#) in <module>()\r\n 3 \r\n 4 from torchsummary import summary\r\n----> 5 from functorch import vmap\r\n 6 import torch\r\n 7 import torch.nn as nn\r\n\r\n[/usr/local/lib/python3.7/dist-packages/functorch/__init__.py](https://localhost:8080/#) in <module>()\r\n 5 # LICENSE file in the root directory of this source tree.\r\n 6 import torch\r\n----> 7 from . import _C\r\n 8 \r\n 9 # Monkey patch PyTorch. 
This is a hack, we should try to upstream\r\n\r\nImportError: /usr/local/lib/python3.7/dist-packages/functorch/_C.so: undefined symbol: _ZNK3c1010TensorImpl5sizesEv\r\n```\r\n\r\nNote: I was able to use vmap from older version just a few hours ago, then I came to notebook started it and now it doesn't work", "url": "https://github.com/pytorch/functorch/issues/933", "state": "open", "labels": [], "created_at": "2022-07-05T18:47:06Z", "updated_at": "2022-08-08T14:31:27Z", "comments": 4, "user": "KananMahammadli" }, { "repo": "pytorch/vision", "number": 6239, "title": "n classes in ConvNeXt model ", "body": "### \ud83d\udc1b Describe the bug\n\n\r\nHI,\r\n\r\nI'm trying to train a ConvNeXt tiny model as a binary classifier by loading the model architecture and pretrained weights from torchvision.models.\r\n\r\nI use the following two lines of code to load the model and change the number of output nodes:\r\n\r\n>num_classes=2\r\nmodel_ft = models.convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT)\r\nmodel_ft.classifier[2].out_features = num_classes\r\n\r\nAnd when I print this layer of the mode I get:\r\n\r\n>print(model_ft.classifier[2])\r\n\r\n>Linear(in_features=768, out_features=2, bias=True)\r\n\r\nThis suggests that the change had been made. However, when I train the model, the output has dimensions of 42 x 1,000. _i.e. batch_size_ x n classes in ImageNet:\r\n\r\n>batch_size=42\r\noutputs = model(inputs)\r\nprint(outputs.size())\r\n\r\n>torch.Size([42, 1000])\r\n\r\nAny thoughts on how solve this problem?\r\n\r\nCheers,\r\nJamie\r\n\r\np.s. it seems like the issue might be that the number of classes is hard coded as 1000 in:\r\npytorch/vision/tree/main/torchvision/models/convnext.py\r\nLines 90:100\r\n\r\n>class ConvNeXt(nn.Module):\r\ndef init(\r\nself,\r\nblock_setting: List[CNBlockConfig],\r\nstochastic_depth_prob: float = 0.0,\r\nlayer_scale: float = 1e-6,\r\nnum_classes: int = 1000,\r\nblock: Optional[Callable[..., nn.Module]] = None,\r\nnorm_layer: Optional[Callable[..., nn.Module]] = None,\r\n**kwargs: Any,\r\n) -> None:\r\n\r\n\n\n### Versions\n\nPytorch version: 1.13.0.dev20220624\r\nPython 3.8\r\n\r\n\n\ncc @datumbox", "url": "https://github.com/pytorch/vision/issues/6239", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2022-07-05T17:47:40Z", "updated_at": "2022-07-06T08:13:15Z", "user": "jrsykes" }, { "repo": "pytorch/vision", "number": 6235, "title": "Creating a `cache-dataset` for Video classification.", "body": "Hello, now I am trying to test the video classification model R(2+1)D on Kinetics400. However the speed of loading data is so slow. I believe the loading speed can be improved by caching the data but I am not sure how to cache video files. In the code also, it is mentioned. I want to know to cache video files? is cache dataset creating feature also included in future updates? \r\nThank you !\n\ncc @datumbox", "url": "https://github.com/pytorch/vision/issues/6235", "state": "closed", "labels": [ "question", "module: reference scripts", "module: video" ], "created_at": "2022-07-05T04:27:54Z", "updated_at": "2022-07-05T08:28:20Z", "user": "yakhyo" }, { "repo": "pytorch/audio", "number": 2526, "title": "Need more detail and tutorial on how to use the language model to decrease the word rate error.", "body": "### \ud83d\udcda The doc issue\r\n\r\n1. How do we build our own language model and add it to the language model, such as wav2vec2? However many of the solutions from the doc require using another library.\r\n\r\n2. 
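On the ConvNeXt head question above: assigning to `out_features` only rewrites an attribute, it does not reallocate the layer's weight, so the model keeps producing 1000 logits even though the printed repr shows the new value. A hedged sketch of the usual fix, replacing the whole `Linear`:

```python
# Hedged sketch: replace the classification head instead of editing out_features.
import torch
import torch.nn as nn
from torchvision import models
from torchvision.models import ConvNeXt_Tiny_Weights

num_classes = 2
model = models.convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT)
in_feats = model.classifier[2].in_features          # 768 for convnext_tiny
model.classifier[2] = nn.Linear(in_feats, num_classes)

out = model(torch.randn(4, 3, 224, 224))
print(out.shape)  # torch.Size([4, 2])
```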
If 1 requires training the language model again, then It looks like we can use our own text file for the language model to form a bean search \r\n![image](https://user-images.githubusercontent.com/21982975/177036688-d44e4b55-496e-486a-86a7-53535c22e435.png)\r\nhttps://github.com/facebookresearch/fairseq/issues/3157\r\n\r\nI was working on a project for deaf students to have a subtitle. You know they found out that after wav2vec2 using a language model, such as n-gram the word rate error will be dropped. Thus, I was thinking to add a lecture note or textbook to decrease the WRE for college class subtitling. But a lot of language model implementation for Pytorch audio model requires other library, such as KenLM. But I was thinking if it is a n-gram model, it shouldn't be difficult to have it in Pytorch. If we want to deploy it in other language, such as Javascript, it will require ONNX in Pytorch, so we may need to write the language model in Pytorch rather than in KenLM\r\n\r\n\r\nFirst, this has been asked that it looks like we do not need to train the language model(such as n-gram) again. We just need to put the text file that has all the possible words that we want the n-gram model to do the beam search.\r\n![image](https://user-images.githubusercontent.com/21982975/177036345-903194f1-bb50-473b-8e59-dfb5ef19ba38.png)\r\nBut you can see the doc only gives you one line of code to \"short cut\" everything without telling the user how to use their own text file.\r\n\r\nAgain, if we look at the doc, we see \"Builds CTC beam search decoder from Flashlight\". Thus, how do we use our own language model? Again, my point of using my own language model is not because I have some powerful transformer models. It is I need to be clear on how the model handles the process of wav2vec2 output to text with the language model.\r\nThus this issue was proposed and asked, and I feel it was not explained detailly.\r\nhttps://github.com/facebookresearch/fairseq/issues/3157\r\n\r\n\r\nSuggestion: I prefer HuBERT since it is smaller than Wav2vec2.", "url": "https://github.com/pytorch/audio/issues/2526", "state": "open", "labels": [], "created_at": "2022-07-03T11:05:05Z", "updated_at": "2022-07-18T21:02:59Z", "user": "AliceSum" }, { "repo": "pytorch/tutorials", "number": 1961, "title": "Update SpaCy to latest.", "body": "The old `spacy==2.3.2` is out of date, and I cannot install it (due to build failure). Is it possible to remove the version constraint?", "url": "https://github.com/pytorch/tutorials/issues/1961", "state": "closed", "labels": [ "dependencies" ], "created_at": "2022-07-02T11:01:23Z", "updated_at": "2022-12-09T17:47:43Z", "comments": 2, "user": "evan0greenup" }, { "repo": "pytorch/tutorials", "number": 1960, "title": "Question: how to run individual tutorial?", "body": "I don't make to `make doc`, I just want to run a specific individual tutorial.\r\n\r\nIs it safe to directly run it as script?", "url": "https://github.com/pytorch/tutorials/issues/1960", "state": "closed", "labels": [ "question" ], "created_at": "2022-07-02T10:59:34Z", "updated_at": "2022-08-01T21:15:19Z", "user": "evan0greenup" }, { "repo": "pytorch/TensorRT", "number": 1156, "title": "\u2753 [Question] Support for CUDA 11.6?", "body": "## Does latest version support CUDA 11.6\u2753\r\n\r\nPytorch officially supports CUDA 11.6, however docs say torch_tensort supports CUDA 11.3 at max. But in some issues it is said that CUDA version 11.6 is used. 
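On plugging your own n-gram into the TorchAudio decoder, a hedged sketch is below. It assumes the torchaudio >= 0.12 `ctc_decoder` API (which wraps Flashlight) and placeholder paths for a lexicon, token list, and KenLM binary built from your own text (e.g. lecture notes); the weights shown are illustrative, not tuned values.

```python
# Hedged sketch: CTC beam-search decoding with a custom KenLM n-gram.
from torchaudio.models.decoder import ctc_decoder

decoder = ctc_decoder(
    lexicon="lexicon.txt",          # word -> spelling in acoustic-model tokens (placeholder)
    tokens="tokens.txt",            # output tokens of the wav2vec2 / HuBERT model (placeholder)
    lm="lecture_notes_4gram.bin",   # KenLM binary built from your text file (placeholder)
    lm_weight=3.23,
    word_score=-0.26,
    beam_size=500,
)
# emissions: (batch, time, num_tokens) log-probabilities from the acoustic model
# hypotheses = decoder(emissions)
```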
Is CUDA 11.6 officially supported by torch_tensorrt?\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): any\r\n - CPU Architecture:\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.8 or 3.9\r\n - CUDA version: 11.6\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1156", "state": "closed", "labels": [ "question", "component: dependencies" ], "created_at": "2022-07-01T11:41:24Z", "updated_at": "2022-08-12T03:16:44Z", "user": "alercelik" }, { "repo": "pytorch/data", "number": 564, "title": "[RFC] Restricting `IterDataPipe` to have method `__iter__` as a generator function without method `__next__`", "body": "### \ud83d\ude80 The feature\r\n\r\n** Note that this is a RFC to solely discuss the design. There is currently no plan to implement this feature. This issue serves as a developer documentation of the current design and the complexity/issue that we encounter with certain aspects of `IterDataPipe`. It also provides a space to discuss what we can potentially do.\r\n\r\nThe overarching goal is to simplify certain aspects of `IterDataPipe` while providing flexibility for users.\r\n\r\nThe proposed feature is to restrict `IterDataPipe`, such that it must have a method `__iter__` that is a generator function and it cannot have the method `__next__`. All built-in `IterDataPipe` is already implemented that way, so this will only impact custom `IterDataPipe` that users create.\r\n\r\nAlternate solutions are also discussed below. We welcome suggestions as well!\r\n\r\n### Motivation, pitch\r\n\r\nFor context, currently, there are 3 main types of `IterDataPipe` that is allowed. The ones with:\r\n1. `__iter__` is a generator function (e.g. use `yield`)\r\n2. `__iter__` that returns an iterator but is not a generator function\r\n3. `__iter__` returns `self` and a `__next__` method exists\r\n\r\nNote that it is possible for users to have `__next__` but not have `__iter__` returning `self`, but that is not recommended and have unexpected behaviors. All built-in DataPipes belong to type 1.\r\n\r\nThe fact that there are 3 types of `IterDataPipe` makes the implementation of [`hook_iterator`](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/datapipes/_hook_iterator.py) very complicated.\r\n\r\nThe hook is called every time `__iter__` of an `IterDataPipe` is invoked. The hook tries to do a few things:\r\n* Enforce the single iterator per `IterDataPipe` constraint (seeoperations to related to `valid_iterator_id`) and reset the DataPipe as needed\r\n* Count the number of elements yielded\r\n* Allow performance profiling of operations\r\n\r\nThe fact that there is no restriction on how users can implement `__iter__` and `__next__` for custom DataPipes means `hook_iterator` must be complicated in order to handle the many corner cases that can happen. As you can see, we have a long code block to manage the behavior of type 1, and have a custom class to manage the behavior of type 2 and 3. 
The behavior of the method `__next__` (type 3) is difficult to control and can lead to unexpected behaviors if users aren't careful.\r\n\r\nIf we are able to restrict `IterDataPipe`, the implementation of those functionalities within `hook_iterator` will be much cleaner at the cost of providing less flexibility for `IterDataPipe`. I believe users also will be less likely to run into errors if we have such restriction.\r\n\r\n### Alternatives\r\n\r\nSuggestion from @ejguan:\r\nCreate a class called `DataPipeIterator`, which contains `__self__` and `__next__`. `__iter__` from DataPipe always return a specific DataPipeIterator object. This might resolve the most of our problem.\r\n\r\n### Additional context\r\n\r\nSuch restriction will likely break some downstream usages. Whatever we do, we will proceed carefully.\r\n\r\nPerformance impact is also an aspect that we must consider as well.\r\n\r\nFeedback and suggestions are more than welcomed. Let us know if you have experienced issues while using `torchdata` or have a bad experience while implementing new features.", "url": "https://github.com/meta-pytorch/data/issues/564", "state": "open", "labels": [], "created_at": "2022-06-30T20:39:17Z", "updated_at": "2022-06-30T20:41:30Z", "comments": 0, "user": "NivekT" }, { "repo": "pytorch/vision", "number": 6221, "title": "Customize FasterRCNN", "body": "Hi,\r\n\r\nI've been trying, unsuccessfully to customize a bit the implementation of FasterRCNN proposed by torchvision. For example, one thing I would like to do, would be to write a customized [postprocess_detections ](https://github.com/pytorch/vision/blob/87cde716b7f108f3db7b86047596ebfad1b88380/torchvision/models/detection/roi_heads.py#L668) function that return confidence for all labels and not only the one with highest confidence.\r\n\r\nIn the past I've managed to successfully overwrite the loss function by doing something like \r\n```\r\nmodel = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)\r\ntorchvision.models.detection.roi_heads.fastrcnn_loss = custom_loss\r\n```\r\n\r\nBut the postprocess_detections function is within the RoIHeads class. 
If I try to replace the RoIHead class before defining my model I get this error:\r\n```\r\ntorchvision.models.detection.roi_heads.RoIHeads = RoIHeadsCustom\r\nmodel = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(\r\n pretrained=True\r\n)\r\n```\r\n\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"test2.py\", line 80, in <module>\r\n model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(\r\n File \"/home/paul/.local/lib/python3.8/site-packages/torchvision/models/detection/faster_rcnn.py\", line 470, in fasterrcnn_mobilenet_v3_large_fpn\r\n return _fasterrcnn_mobilenet_v3_large_fpn(weights_name, pretrained=pretrained, progress=progress,\r\n File \"/home/paul/.local/lib/python3.8/site-packages/torchvision/models/detection/faster_rcnn.py\", line 393, in _fasterrcnn_mobilenet_v3_large_fpn\r\n model = FasterRCNN(backbone, num_classes, rpn_anchor_generator=AnchorGenerator(anchor_sizes, aspect_ratios),\r\n File \"/home/paul/.local/lib/python3.8/site-packages/torchvision/models/detection/faster_rcnn.py\", line 222, in __init__\r\n roi_heads = RoIHeads(\r\n File \"/home/paul/.local/lib/python3.8/site-packages/torchvision/models/detection/roi_heads.py\", line 512, in __init__\r\n super(RoIHeads, self).__init__()\r\nTypeError: super(type, obj): obj must be an instance or subtype of type\r\n```\r\n\r\n\r\nBut if I define it afterwards, the object is already created and the custom class is not taken into account\r\n```\r\nmodel = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(\r\n pretrained=True\r\n)\r\ntorchvision.models.detection.roi_heads.RoIHeads = RoIHeadsCustom\r\n```\r\n\r\nIf anyone has some ideas on how to easily customize torchvision models that would be a great help. The only solution I'm seeing is creating a fork of torchvision, which I'd rather avoid.\r\nThanks.\n\ncc @datumbox @YosuaMichael", "url": "https://github.com/pytorch/vision/issues/6221", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2022-06-30T09:40:50Z", "updated_at": "2022-07-06T14:15:49Z", "user": "paullixo" }, { "repo": "pytorch/TensorRT", "number": 1150, "title": "\u2753 [Question] The same inputs producing very different outputs via pytorch & TensorRT.", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nHey, guys!\r\nI'm new to TensorRT, after the environment setup. I'm very excited to try the official demo in this page. [Resnet50-example.](https://pytorch.org/TensorRT/_notebooks/Resnet50-example.html). I got very different outputs when inference with the same inputs via pytorch & TensorRT.\r\nBut when I use efficientnet_b3 as the model, the results are same.\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version: 1.11.0+cu113\r\n - TensorRT Version: 8.4.1.5\r\n - torch_tensorrt. Version: 1.1.0\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Ubuntu 20.04.2 LTS\r\n - How you installed PyTorch : pip\r\n - How you installed TensorRT: pip\r\n - Are you using local sources or building from archives: No\r\n - Python version: 3.8.8\r\n - CUDA version: 11.4 \r\n - GPU models and configuration: NVIDIA GeForce RTX 3090\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
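One hedged way around the class-swapping error above: bind a replacement method onto the already-constructed `roi_heads` instance instead of replacing the class before construction. `custom_postprocess_detections` below is a hypothetical function with the same signature as `RoIHeads.postprocess_detections`.

```python
# Hedged sketch: patch the method on the instance rather than swapping the class.
import types
import torchvision

def custom_postprocess_detections(self, class_logits, box_regression, proposals, image_shapes):
    # e.g. keep per-class scores instead of only the argmax; here we simply
    # delegate to the original implementation as a starting point.
    return torchvision.models.detection.roi_heads.RoIHeads.postprocess_detections(
        self, class_logits, box_regression, proposals, image_shapes
    )

model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)
model.roi_heads.postprocess_detections = types.MethodType(
    custom_postprocess_detections, model.roi_heads
)
```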
-->\r\nHere is my model convert code from PyTorch to TensorRT\r\n```python\r\nimport time\r\nimport numpy as np\r\nimport torch\r\ntorch.manual_seed(1989)\r\nimport tensorrt\r\nimport torch_tensorrt\r\nfrom torchvision import models\r\n\r\n\r\n\r\nif __name__ == '__main__':\r\n # 1 get pytorch model\r\n model = models.resnet50(pretrained=False)\r\n #model = models.efficientnet_b3(pretrained=False)\r\n model = model.eval().to('cuda')\r\n\r\n # 2 conver to tensorrt model\r\n input_shape=(1,3,224,224)\r\n ts_model = torch.jit.script(model)\r\n trt_model = torch_tensorrt.compile(\r\n model, \r\n inputs=[torch_tensorrt.Input(input_shape, dtype=torch.float32)],\r\n enabled_precisions = torch.float32,\r\n workspace_size = 1 << 22\r\n )\r\n print('Convert over.')\r\n #torch.jit.save(trt_model, 'trt_model.pt')\r\n #trt_model = torch.jit.load('trt_model.pt')\r\n\r\n # 3 check speedup\r\n inputs = torch.randn(input_shape).to('cuda')\r\n benchmark(model, inputs, dtype='fp32')\r\n benchmark(ts_model, inputs, dtype='fp32')\r\n benchmark(trt_model, inputs, dtype='fp32')\r\n```\r\n\r\nAnd here is the benchmark function for the same inputs.\r\n```python\r\ndef benchmark(model, inputs, dtype='fp32', nwarmup=50, nruns=3000):\r\n model.eval()\r\n if dtype=='fp16':\r\n inputs = inputs.half()\r\n \r\n print(\"Warm up ...\")\r\n with torch.no_grad():\r\n for _ in range(nwarmup):\r\n outputs = model(inputs)\r\n torch.cuda.synchronize()\r\n print(\"Start timing ...\")\r\n timings = []\r\n with torch.no_grad():\r\n for i in range(1, nruns+1):\r\n start_time = time.time()\r\n outputs = model(inputs)\r\n torch.cuda.synchronize()\r\n end_time = time.time()\r\n timings.append(end_time - start_time)\r\n if i%1000==0:\r\n print('Iteration %d/%d, avg batch time %.2f ms'%(i, nruns, np.mean(timings)*1000))\r\n print(outputs[0][:8])\r\n```\r\n\r\nAnd here are the strange outputs that I got. \ud83e\udd2f\r\n\r\nFor efficientnet_b3\r\n>WARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.4.0\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.4.0\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. 
This function will always return 1.\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.4.0\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.4.0\r\nConvert over.\r\nWarm up ...\r\nStart timing ...\r\nIteration 1000/3000, avg batch time 10.76 ms\r\nIteration 2000/3000, avg batch time 10.75 ms\r\nIteration 3000/3000, avg batch time 10.75 ms\r\ntensor([ 2.5864e-15, -2.6358e-15, 4.9805e-15, 6.8343e-15, 3.6509e-16,\r\n 1.3975e-15, 1.7666e-15, -2.6696e-15], device='cuda:0')\r\nWarm up ...\r\nStart timing ...\r\nIteration 1000/3000, avg batch time 6.92 ms\r\nIteration 2000/3000, avg batch time 6.92 ms\r\nIteration 3000/3000, avg batch time 6.92 ms\r\ntensor([ 2.5864e-15, -2.6358e-15, 4.9805e-15, 6.8343e-15, 3.6509e-16,\r\n 1.3975e-15, 1.7666e-15, -2.6696e-15], device='cuda:0')\r\nWarm up ...\r\nStart timing ...\r\nIteration 1000/3000, avg batch time 0.59 ms\r\nIteration 2000/3000, avg batch time 0.59 ms\r\nIteration 3000/3000, avg batch time 0.59 ms\r\ntensor([ 2.5864e-15, -2.6358e-15, 4.9805e-15, 6.8343e-15, 3.6509e-16,\r\n 1.3975e-15, 1.7666e-15, -2.6696e-15], devic", "url": "https://github.com/pytorch/TensorRT/issues/1150", "state": "closed", "labels": [ "bug", "question", "No Activity", "performance" ], "created_at": "2022-06-29T10:10:01Z", "updated_at": "2023-03-26T00:02:20Z", "user": "Amoko" }, { "repo": "pytorch/vision", "number": 6216, "title": "EfficientNet_v2 models not loading through torchvision", "body": "### \ud83d\udc1b Describe the bug\n\nI am trying to train efficient_v2 classification models on custom dataset using \r\n[this script](https://github.com/pytorch/vision/tree/f75272fa704452a1d9405126c3a09e2d7432d489/references/classification)\r\nI used following command \r\n```\r\npython3 train.py --model efficientnet_v2 --batch-size 128 --lr 0.5 --lr-scheduler cosineanne\r\nalinglr --lr-warmup-epochs 5 --lr-warmup-method linear --auto-augment ta_wide --epochs 600 --random-erase 0.1 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 --w\r\neight-decay 0.00002 --norm-weight-decay 0.0 --train-crop-size 384 --model-ema --val-crop-size 480 --val-resize-size 480\r\n```\r\n\r\nI get following error\r\n```\r\nTraceback (most recent call last):\r\n File \"train.py\", line 501, in <module>\r\n main(args)\r\n File \"train.py\", line 224, in main\r\n model = torchvision.models.__dict__[args.model](weights=args.weights, num_classes=num_classes)\r\nKeyError: 'efficientnet_v2'\r\n```\r\n\r\n\n\n### Versions\n\nCollecting environment information...\r\nPyTorch version: 1.10.0+cu102\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.3 LTS (x86_64)\r\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0\r\nClang version: 10.0.0-4ubuntu1 \r\nCMake version: version 3.19.4\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime)\r\nPython platform: Linux-5.4.0-1030-aws-x86_64-with-glibc2.29\r\nIs CUDA available: True\r\nCUDA runtime version: 11.5.119\r\nGPU models and configuration: GPU 0: Tesla T4\r\nNvidia driver version: 510.73.05\r\ncuDNN version: Probably one of the 
following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.2\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.2\r\n/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn.so.8.3.1\r\n/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.3.1\r\n/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.3.1\r\n/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.3.1\r\n/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.3.1\r\n/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.3.1\r\n/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.3.1\r\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn.so.8.3.1\r\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.3.1\r\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.3.1\r\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.3.1\r\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.3.1\r\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.3.1\r\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.3.1\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.20.0\r\n[pip3] torch==1.10.0\r\n[pip3] torchaudio==0.8.0\r\n[pip3] torchvision==0.11.1\r\n[conda] Could not collect\n\ncc @datumbox", "url": "https://github.com/pytorch/vision/issues/6216", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2022-06-29T09:12:09Z", "updated_at": "2022-06-29T11:09:27Z", "user": "suyashhchougule" }, { "repo": "pytorch/serve", "number": 1713, "title": "How to specify which gpu is to be used for serve? 
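On the KeyError above, a hedged note: torchvision exposes the V2 architectures as `efficientnet_v2_s` / `_m` / `_l` (added in 0.13; they do not exist in the installed 0.11.1), so there is no model named plain `efficientnet_v2`. Upgrading torchvision and passing e.g. `--model efficientnet_v2_s` to the training script should resolve it. A quick check:

```python
# Hedged sketch: list the available V2 builders (requires torchvision >= 0.13).
import torchvision

print([k for k in torchvision.models.__dict__ if k.startswith("efficientnet_v2")])
# ['efficientnet_v2_s', 'efficientnet_v2_m', 'efficientnet_v2_l']
model = torchvision.models.efficientnet_v2_s(weights=None)
```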
", "body": "### \ud83d\ude80 The feature\r\n\r\n\r\n```console\r\n:~$ lspci | grep VGA\r\n0000:00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P Integrated Graphics Controller (rev 0c)\r\n0000:01:00.0 VGA compatible controller: NVIDIA Corporation GA103M [GeForce RTX 3080 Ti Mobile] (rev a1)\r\n:~$ glxinfo | egrep -i \"device|memory\"\r\n Device: Mesa Intel(R) Graphics (ADL GT2) (0x46a6)\r\n Video memory: 29872MB\r\n Unified memory: yes\r\n GL_AMD_performance_monitor, GL_AMD_pinned_memory, \r\n GL_EXT_framebuffer_object, GL_EXT_framebuffer_sRGB, GL_EXT_memory_object, \r\n GL_EXT_memory_object_fd, GL_EXT_packed_depth_stencil, GL_EXT_packed_float, \r\n GL_AMD_pinned_memory, GL_AMD_query_buffer_object, \r\n GL_EXT_gpu_program_parameters, GL_EXT_gpu_shader4, GL_EXT_memory_object, \r\n GL_EXT_memory_object_fd, GL_EXT_multi_draw_arrays, \r\n GL_EXT_memory_object, GL_EXT_memory_object_fd, GL_EXT_multi_draw_arrays, \r\n:~$ nvidia-smi\r\nCommand 'nvidia-smi' not found, but can be installed with:\r\nsudo apt install nvidia-utils-418-server # version 418.226.00-0ubuntu4, or\r\nsudo apt install nvidia-utils-390 # version 390.151-0ubuntu0.22.04.1\r\nsudo apt install nvidia-utils-450-server # version 450.191.01-0ubuntu0.22.04.1\r\nsudo apt install nvidia-utils-470 # version 470.129.06-0ubuntu0.22.04.1\r\nsudo apt install nvidia-utils-470-server # version 470.129.06-0ubuntu0.22.04.1\r\nsudo apt install nvidia-utils-510 # version 510.73.05-0ubuntu0.22.04.1\r\nsudo apt install nvidia-utils-510-server # version 510.73.08-0ubuntu0.22.04.1\r\n```\r\n\r\n### Motivation, pitch\r\n\r\nJust wanna **NVIDIA driver** for **torchserve** and the other one **Intel** for display if possible ???\r\n\r\n### Alternatives\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_", "url": "https://github.com/pytorch/serve/issues/1713", "state": "closed", "labels": [ "triaged_wait", "support" ], "created_at": "2022-06-28T19:35:33Z", "updated_at": "2022-07-07T02:13:46Z", "user": "jiapei-nexera" }, { "repo": "pytorch/vision", "number": 6206, "title": "Wrong for pytorch-nightly version", "body": "### \ud83d\udc1b Describe the bug\n\nThe wrong is below:\r\nTraceback (most recent call last):\r\n File \"/home/hxj/PycharmProjects/ImageNetTrain/main.py\", line 9, in <module>\r\n weights = P.models.ResNet50_Weights.IMAGENET1K_V1\r\nAttributeError: module 'torchvision.prototype.models' has no attribute 'ResNet50_Weights'\n\n### Versions\n\npytorch-nightly 1.13\n\ncc @datumbox", "url": "https://github.com/pytorch/vision/issues/6206", "state": "open", "labels": [ "question", "module: models" ], "created_at": "2022-06-27T08:44:00Z", "updated_at": "2022-06-27T08:55:49Z", "user": "wwwsent" }, { "repo": "pytorch/data", "number": 550, "title": "DataLoader2 should reset when a new iterator is created?", "body": "When a new iterator is created, `DataLoader2` currently resumes from when it was left off rather than resetting and starting from the beginning again (see code snippet below). This is divergent from the behavior of the original `DataLoader`. 
Users likely expect the latter behavior and we should properly reset the state of `DataLoader2` when a new iterator is created.\r\n\r\n```python\r\nfrom torchdata.dataloader2 import DataLoader2\r\nfrom torchdata.datapipes.iter import IterableWrapper\r\n\r\n\r\ndl = DataLoader2(IterableWrapper(range(10)))\r\n\r\nfor i in iter(dl):\r\n print(i)\r\n if i == 4:\r\n print('--------------')\r\n break\r\n\r\nfor i in iter(dl):\r\n print(i)\r\n```\r\n\r\ncc: @VitalyFedyunin ", "url": "https://github.com/meta-pytorch/data/issues/550", "state": "closed", "labels": [], "created_at": "2022-06-24T18:19:09Z", "updated_at": "2022-08-26T21:02:39Z", "comments": 1, "user": "NivekT" }, { "repo": "pytorch/data", "number": 549, "title": "DataLoader2.__len__() ?", "body": "This is somewhat related to https://github.com/pytorch/data/issues/533\r\n\r\nAs described in https://github.com/pytorch/data/issues/533#issuecomment-1163381945, we like to check the `len()` of the DataLoader in torchvision in our logging utils.\r\n\r\nAre there plans to implement `__len__()` on `DataLoader2`?", "url": "https://github.com/meta-pytorch/data/issues/549", "state": "open", "labels": [], "created_at": "2022-06-24T17:10:39Z", "updated_at": "2022-07-06T19:21:39Z", "comments": 1, "user": "NicolasHug" }, { "repo": "pytorch/data", "number": 538, "title": "Warn about pickle-ablity when using `dp.map(some_local_function)` ?", "body": "`torchdata` issues a warning about pickle when we use lambdas (which is great!)\r\nAnother kind of function that isn't compatible with pickle are local functions. Would it be possible to throw the same warning there?\r\n\r\n\r\n```py\r\nimport torchdata\r\nimport pickle\r\n\r\ndef make_dp():\r\n\r\n def f(x): # local function, not pickleable\r\n return x\r\n\r\n return torchdata.datapipes.iter.IterableWrapper(range(40)).map(f)\r\n\r\ndp = make_dp() # no warning\r\n\r\nf = \"/tmp/dp\"\r\npickle.dump(dp, open(f, \"wb\")) # fails\r\n```", "url": "https://github.com/meta-pytorch/data/issues/538", "state": "closed", "labels": [], "created_at": "2022-06-23T13:02:33Z", "updated_at": "2022-06-27T21:48:29Z", "comments": 1, "user": "NicolasHug" }, { "repo": "pytorch/data", "number": 533, "title": "`len(dataloader)` in distributed setting is different with datapipes and with map-style datasets", "body": "In a distributed setting, `len(dataloader)` will return:\r\n\r\n- `len(dataset) // (batch_size * num_GPUs)` if `dataset` is a map-style dataset\r\n- `len(dataset) // batch_size` if `dataset` is a datapipe\r\n\r\nThis discrepancy makes it a bit difficult to work with torchvision's training recipes, where we often need the size of the dataloader.\r\n\r\nBelow is an illustration of this discrepancy - you can run the snippet (even without a GPU) with `torchrun --nproc_per_node 4 script.py`\r\n\r\n```py\r\n# Run this with e.g. 
`torchrun --nproc_per_node 4 script.py`\r\nimport torch.utils.data as data\r\nimport torch.distributed as dist\r\nimport torchdata\r\n\r\n\r\ndef replace_print():\r\n import builtins as __builtin__\r\n builtin_print = __builtin__.print\r\n def print(*args, **kwargs):\r\n if dist.get_rank() == 0:\r\n builtin_print(f\"[GPU 0]\", *args, **kwargs)\r\n\r\n __builtin__.print = print\r\n\r\n\r\n# Setting up DDP - you can ignore this\r\ndist.init_process_group(backend=\"gloo\")\r\nreplace_print()\r\ndist.barrier()\r\n\r\n\r\nsize = 800\r\ndp = torchdata.datapipes.iter.IterableWrapper(range(size)).sharding_filter()\r\ndl = data.DataLoader(dp, batch_size=10, num_workers=4, drop_last=True)\r\nprint(f\"with dp, {len(dl) = }\")\r\n# Gives : 80\r\n\r\nds = list(range(size))\r\ndl = data.DataLoader(ds, batch_size=10, num_workers=4, drop_last=True, sampler=data.DistributedSampler(ds, shuffle=False))\r\nprint(f\"with mapstyle, {len(dl) = }\")\r\n# Gives: 20\r\n\r\n```", "url": "https://github.com/meta-pytorch/data/issues/533", "state": "open", "labels": [], "created_at": "2022-06-22T16:32:01Z", "updated_at": "2022-06-22T16:57:09Z", "comments": 2, "user": "NicolasHug" }, { "repo": "pytorch/pytorch", "number": 80007, "title": "when forward use **kwargs\uff0chow to construct the example_ Inputs parameter in jit.trace?", "body": "### \ud83d\udc1b Describe the bug\n\nimport torch\r\n\r\nclass Model(nn.Module):\r\n def forward(self, **kwargs):\r\n # kwargs contains more than dozens of tensors\r\n pass\r\n\r\nmodel = Model()\r\ntrace_model = torch.jit.trace(model, example_inputs=??)\n\n### Versions\n\nPyTorch version: 1.6.0+cu101\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.3 LTS (x86_64)\r\nGCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.10.2\r\nLibc version: glibc-2.26\r\n\r\nPython version: 3.7.5 (default, Nov 7 2019, 10:50:52) [GCC 8.3.0] (64-bit runtime)\r\nPython platform: Linux-3.10.0-1.3.2.el7.x86_64-x86_64-with-Ubuntu-18.04-bionic\r\nIs CUDA available: True\r\nCUDA runtime version: 10.1.243\r\nGPU models and configuration: \r\nGPU 0: GeForce RTX 2080 Ti\r\nGPU 1: GeForce RTX 2080 Ti\r\n\r\nNvidia driver version: 440.44\r\ncuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.18.5\r\n[pip3] torch==1.6.0+cu101\r\n[pip3] torchvision==0.7.0+cu101\r\n[conda] blas 1.0 mkl \r\n[conda] mkl 2021.2.0 h06a4308_296 \r\n[conda] mkl-service 2.3.0 py38h27cfd23_1 \r\n[conda] mkl_fft 1.3.0 py38h42c9631_2 \r\n[conda] mkl_random 1.2.1 py38ha9443f7_2 \r\n[conda] numpy 1.20.1 py38h93e21f0_0 \r\n[conda] numpy-base 1.20.1 py38h7d8b39e_0 \r\n[conda] numpydoc 1.1.0 pyhd3eb1b0_1", "url": "https://github.com/pytorch/pytorch/issues/80007", "state": "open", "labels": [ "oncall: jit" ], "created_at": "2022-06-22T03:20:17Z", "updated_at": "2023-03-11T03:33:15Z", "user": "zyDotwei" }, { "repo": "pytorch/TensorRT", "number": 1138, "title": "problem build in jetson nano jetpack4.6", "body": "## \u2753 Question\r\n\r\nHello\r\nI tried to compile the torch-tensorrt on the jetson nano I got this error \r\nsuggestions please\r\nThanks\r\n \r\n\r\nbazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6 --verbose_failures\r\n\r\n\r\njetson@jetson-desktop:~/TensorRT$ bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6 
--verbose_failures\r\nINFO: Analyzed target //:libtorchtrt (10 packages loaded, 2870 targets configured).\r\nINFO: Found 1 target...\r\nERROR: /home/jetson/TensorRT/core/lowering/BUILD:10:11: Compiling core/lowering/register_trt_placeholder_ops.cpp failed: (Exit 1): gcc failed: error executing command \r\n (cd /home/jetson/.cache/bazel/_bazel_jetson/8770c998fbff2b8d5ee14d56a02ce872/sandbox/linux-sandbox/66/execroot/Torch-TensorRT && \\\r\n exec env - \\\r\n PATH=/home/jetson/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin \\\r\n PWD=/proc/self/cwd \\\r\n /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer '-std=c++0x' -MD -MF bazel-out/aarch64-fastbuild/bin/core/lowering/_objs/lowering/register_trt_placeholder_ops.pic.d '-frandom-seed=bazel-out/aarch64-fastbuild/bin/core/lowering/_objs/lowering/register_trt_placeholder_ops.pic.o' -fPIC -iquote . -iquote bazel-out/aarch64-fastbuild/bin -iquote external/tensorrt -iquote bazel-out/aarch64-fastbuild/bin/external/tensorrt -iquote external/cuda -iquote bazel-out/aarch64-fastbuild/bin/external/cuda -iquote external/cudnn -iquote bazel-out/aarch64-fastbuild/bin/external/cudnn -iquote external/libtorch -iquote bazel-out/aarch64-fastbuild/bin/external/libtorch -Ibazel-out/aarch64-fastbuild/bin/external/libtorch/_virtual_includes/ATen -Ibazel-out/aarch64-fastbuild/bin/external/libtorch/_virtual_includes/c10_cuda -Ibazel-out/aarch64-fastbuild/bin/external/libtorch/_virtual_includes/c10 -isystem external/tensorrt/include/aarch64-linux-gnu -isystem bazel-out/aarch64-fastbuild/bin/external/tensorrt/include/aarch64-linux-gnu -isystem external/cuda/include -isystem bazel-out/aarch64-fastbuild/bin/external/cuda/include -isystem external/cudnn/include -isystem bazel-out/aarch64-fastbuild/bin/external/cudnn/include -isystem external/libtorch/include -isystem bazel-out/aarch64-fastbuild/bin/external/libtorch/include -isystem external/libtorch/include/torch/csrc/api/include -isystem bazel-out/aarch64-fastbuild/bin/external/libtorch/include/torch/csrc/api/include '-fdiagnostics-color=always' '-std=c++14' -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__=\"redacted\"' '-D__TIMESTAMP__=\"redacted\"' '-D__TIME__=\"redacted\"' -c core/lowering/register_trt_placeholder_ops.cpp -o bazel-out/aarch64-fastbuild/bin/core/lowering/_objs/lowering/register_trt_placeholder_ops.pic.o)\r\n# Configuration: 308cf0c0559d698e898984ad86ba68902429f53ed3b621b21d0881d53f6d42af\r\n# Execution platform: @local_config_platform//:host\r\n\r\nUse --sandbox_debug to see verbose messages from the sandbox\r\ncore/lowering/register_trt_placeholder_ops.cpp:16:34: error: invalid user-defined conversion from 'torch::jit::<lambda(torch::jit::Stack&)>' to 'torch::jit::OperationCreator {aka std::function<void(std::vector<c10::IValue>*)> (*)(const torch::jit::Node*)}' [-fpermissive]\r\n aliasAnalysisFromSchema()),\r\n ^\r\ncore/lowering/register_trt_placeholder_ops.cpp:15:24: note: candidate is: torch::jit::<lambda(torch::jit::Stack&)>::operator void (*)(torch::jit::Stack&)() const <near match>\r\n [](Stack& stack) { /*noop*/ },\r\n ^\r\ncore/lowering/register_trt_placeholder_ops.cpp:15:24: note: no known conversion from 'void (*)(torch::jit::Stack&) {aka void (*)(std::vector<c10::IValue>&)}' to 'torch::jit::OperationCreator {aka std::function<void(std::vector<c10::IValue>*)> (*)(const torch::jit::Node*)}'\r\nIn file included from 
external/libtorch/include/torch/csrc/jit/runtime/custom_operator.h:5:0,\r\n from core/lowering/register_trt_placeholder_ops.cpp:1:\r\nexternal/libtorch/include/torch/csrc/jit/runtime/operator.h:98:3: note: initializing argument 2 of 'torch::jit::Operator::Operator(std::__cxx11::string, torch::jit::OperationCreator, c10::AliasAnalysisKind)'\r\n Operator(\r\n ^~~~~~~~\r\nTarget //:libtorchtrt failed to build\r\nINFO: Elapsed time: 115.163s, Critical Path: 73.60s\r\nINFO: 16 processes: 5 internal, 11 linux-sandbox.\r\nFAILED: Build did NOT complete successfully\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n \r\n - PyTorch v1.8.0\r\n - Jetson nano\r\n - How you installed PyTorch ( `pip`):\r\n\r\n\r\n## Additional context\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1138", "state": "closed", "labels": [ "question", "channel: linux-jetpack" ], "created_at": "2022-06-21T16:45:05Z", "updated_at": "2022-09-02T18:08:45Z", "user": "Sylia-C" }, { "repo": "pytorch/functorch", "number": 892, "title": "Figure out how to get test coverage for more compositions of transforms", "body": "## Motivation\r\n\r\nCurrently, we only test the following compositions:\r\n- vmap\r\n- jvp\r\n- vjp\r\n- vmap x jvp\r\n- vmap x vjp\r\n- vjp x vjp\r\n- vjp x vmap\r\n\r\nThis has caught most of our bugs, but users still come to us with code that doesn't work due to it not being one of the above compositions. For example:\r\n- vmap x vmap can still error out even if just vmap works\r\n- vmap x vjp x vjp can error out if there is some backward operator (e.g. convolution_backward) that has a backward formula that is not composite compliant. Ditto for vmap x jvp x vjp.\r\n\r\n## The Ask\r\n\r\nFigure to get better test coverage for more compositions of transforms\r\n\r\n## Possibly related: OpInfos\r\n\r\nThis also is related to better OpInfo testing. OpInfos do not cover all aten operators. One way for us to really get good coverage using our existing tests is to add OpInfos for torch.ops.aten operations. For example, instead of checking the batching rule of torch.ops.aten.convolution_backward via a vmap x vjp test, it would be sufficient for us to just run a vmap test for torch.ops.aten.convolution_backward.\r\n\r\n", "url": "https://github.com/pytorch/functorch/issues/892", "state": "closed", "labels": [ "actionable" ], "created_at": "2022-06-21T14:34:58Z", "updated_at": "2022-09-15T15:01:19Z", "user": "zou3519" }, { "repo": "pytorch/serve", "number": 1701, "title": "curl 404 ResourceNotFoundException", "body": "Hello,\r\nI am stuck with an error that I am not sure what does it mean. \r\nwhen I do `curl \"http://localhost:8080/models\"` I get : \r\n`{\r\n \"code\": 404,\r\n \"type\": \"ResourceNotFoundException\",\r\n \"message\": \"Requested resource is not found, please refer to API document.\"\r\n}`\r\n\r\nI make an `.mar` file for my model with \r\n`\r\ntorch-model-archiver -f \\\r\n --model-name=classifier \\\r\n --version=1.0 \\\r\n --serialized-file=pytorch_model.bin \\\r\n --handler=custom_handler.py \\\r\n --extra-files \"config.json,index_to_name.json,special_tokens_map.json,tokenizer_config.json,tokenizer.json,training_args.bin,vocab.txt\" \\\r\n --export-path=model_store\r\n`\r\nAll of those files are stored in the same directory. \r\n\r\nWhen i run the serve `torchserve --start --model-store model_store --models classifier=classifier.mar` I dont get any error. 
normally when I do `curl \"http://localhost:8080/models\"` I will get my classifier but I instead I get that message.\r\n\r\nis there anything that I am missing here? or should I add something?\r\nI want to mention that I am using a handler (custom_handler.py) from [GoogleCloudPlatform](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/community-content/pytorch_text_classification_using_vertex_sdk_and_gcloud/pytorch-text-classification-vertex-ai-train-tune-deploy.ipynb). also, `curl localhost:8080/ping` give me `Healthy`\r\nThanks!", "url": "https://github.com/pytorch/serve/issues/1701", "state": "open", "labels": [ "help wanted", "question" ], "created_at": "2022-06-21T14:15:31Z", "updated_at": "2023-01-31T16:04:09Z", "user": "ma-batita" }, { "repo": "pytorch/serve", "number": 1699, "title": "How to properly understand MaxBatchDelay", "body": "From the documentation https://github.com/pytorch/serve/blob/master/docs/management_api.md#register-a-model\r\nThe parameter `maxBatchDelay` is the maximum delay for batch aggregation. It will wait this amount of time before aggregating all the requests (please correct me if I am wrong) into batches. Now, on the user side, if I set this number high, like 5000, then TorchServe will have enough time to receive possibly a large number of requests, then aggregates them. However, a large number like 5000 also means that the total time for the user to send requests and receive inference results will also be higher, and much higher if I set this number to 50. A user/client for sure wants to have as little time as possible before having the results, but setting maxBatchDelay low would also mean TorchServe wouldn't have enough time to aggregate. \r\n\r\nHow to properly understand this issue? Do I need a better metrics to measure the total time for the client, or should I set maxBatchDelay high? Or something else? ", "url": "https://github.com/pytorch/serve/issues/1699", "state": "closed", "labels": [ "documentation", "benchmark" ], "created_at": "2022-06-20T19:01:44Z", "updated_at": "2023-08-18T02:53:37Z", "user": "Hegelim" }, { "repo": "pytorch/serve", "number": 1698, "title": "Confused about Cumulative Inference Duration vs. PredictionTime", "body": "### \ud83d\udcda The doc issue\r\n\r\nI am running a model on TorchServe and I am trying to see how long it takes for inference. \r\nIf I use logging and view the logs, then I can see there is something called PredictionTime:\r\n![image](https://user-images.githubusercontent.com/55818214/174660754-3e3915a1-a3b6-4e28-9720-2bfd654f17b7.png)\r\n\r\nHowever, if I use the Metrics API, then I got something called \"Cumulative Inference Duration\"\r\n![image](https://user-images.githubusercontent.com/55818214/174660836-b4af8609-09ee-4ae7-ad8d-fdb7ab1aaf97.png)\r\n\r\nAnd in terms of values those 2 are very different. So I am not sure which one should I use to measure the total inference time for my requests? \r\n\r\nBtw, there is also something else called `HandlerTime` in the logs\r\n![image](https://user-images.githubusercontent.com/55818214/174662801-1fc649c9-37b5-44a6-9305-c0d8930cfedd.png)\r\n\r\nWhat does it mean? Where can I find related information about what are the meanings of these metrics? 
\r\n\r\nThanks, \r\n### Suggest a potential alternative/fix\r\n\r\n_No response_", "url": "https://github.com/pytorch/serve/issues/1698", "state": "open", "labels": [ "help wanted", "question" ], "created_at": "2022-06-20T18:35:03Z", "updated_at": "2022-07-08T18:50:45Z", "user": "Hegelim" }, { "repo": "pytorch/data", "number": 523, "title": "Document how to create a DataLoader when reading data from S3", "body": "### \ud83d\udcda The doc issue\n\nI find the provided example [here](https://github.com/pytorch/data/blob/main/torchdata/datapipes/iter/load/README.md#example) a bit confusing. \r\n\r\n```\r\nfrom torchdata.datapipes.iter import S3FileLister, S3FileLoader\r\n\r\ns3_prefixes = ['s3://bucket-name/folder/', ...]\r\ndp_s3_urls = S3FileLister(s3_prefixes)\r\ndp_s3_files = S3FileLoader(s3_urls) # outputs in (url, StreamWrapper(BytesIO))\r\n# more datapipes to convert loaded bytes, e.g.\r\ndatapipe = StreamWrapper(dp_s3_files).parse_csv(delimiter=' ')\r\n\r\nfor d in datapipe: # Start loading data\r\n pass\r\n```\r\n\r\nFirst, I think there is a mistake in the example: `s3_urls` should be `dp_s3_urls`?\r\nSecond, it is not clear why `parse_csv(delimiter=' ')` is being used.\r\nLast, I can't access my data after creating the `datapipe`. It would be great to have an example similar to [this one of the old plugin](https://github.com/aws/amazon-s3-plugin-for-pytorch/blob/master/examples/s3_cv_transform.py)\r\n\r\nI believe that an example of how to load a S3 folder containing images into a `torch.utils.data.DataLoader` would be very useful for new users (like me).\n\n### Suggest a potential alternative/fix\n\nProvide an example that starts with a S3 url of a folder with some images, and produce a dataloader with such images.", "url": "https://github.com/meta-pytorch/data/issues/523", "state": "closed", "labels": [], "created_at": "2022-06-20T15:52:53Z", "updated_at": "2022-06-23T00:17:12Z", "comments": 4, "user": "enric1994" }, { "repo": "pytorch/TensorRT", "number": 1136, "title": "\u2753 [Question] unable to save the model in TorchScript format? 
", "body": "## \u2753 Question\r\nI'm trying to save my model as TorchScript format, unfortunately getting error.\r\n\r\n## What you have already tried\r\n```torch.jit.script(model)```\r\n\r\n## Environment\r\npython\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):1.11.0+cu113\r\n - CPU Architecture:\r\n - OS (e.g., Linux): ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:3.9.7\r\n - CUDA version:11.7\r\n - GPU models and configuration: RTX GEFORCE 2060\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\ncould you please help me save the model in TorchScript format\r\n@dignakov @narendasan @peri044 \r\n![Screenshot from 2022-06-20 11-29-31](https://user-images.githubusercontent.com/74839416/174584985-ca331c70-5a45-4811-860d-157878aa322b.png)\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1136", "state": "closed", "labels": [ "question", "bug: triaged [not a bug]" ], "created_at": "2022-06-20T10:43:11Z", "updated_at": "2022-07-05T20:57:03Z", "user": "IamExperimenting" }, { "repo": "pytorch/TensorRT", "number": 1134, "title": "\u2753 [Question] Why TensorRT model is slower?", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nWhy TensorRT model is slower? I have tried TensorRT in a MHA (multihead attention) model, but found it is even slower than the jit scripted model.\r\n## What you have already tried\r\nI tested the original model, the jit scripted model, the jit model after optimization, and the TensorRT model. Then, I found the tensorrt model is not as fast as I expected. The model here is a simple MHA module modified from `fairseq` so it could pass the compilation.\r\n```py\r\nimport time\r\nimport tmp_attn\r\nimport torch\r\nimport tensorrt\r\nimport torch_tensorrt as torch_trt\r\n\r\n\r\ndef timer(m, i):\r\n st = time.time()\r\n for _ in range(10000):\r\n m(i, i, i)\r\n ed = time.time()\r\n return ed - st\r\n\r\n\r\nt1 = torch.randn(64, 1, 1280, device=\"cuda:0\")\r\nmodel = tmp_attn.MultiheadAttention(1280, 8).to(\"cuda:0\")\r\nmodel2 = torch.jit.script(model)\r\nmodel3 = torch.jit.optimize_for_inference(model2)\r\nmodel4 = torch_trt.compile(model, inputs=[t1, t1, t1]).to(\"cuda:0\")\r\n\r\nprint(\"Original Model\", timer(model, t1))\r\nprint(\"Jit Script Model\", timer(model2, t1))\r\nprint(\"Jit Script Model after optimization\", timer(model3, t1))\r\nprint(\"TensorRT Model\", timer(model4, t1))\r\n```\r\n<!-- A clear and concise description of what you have already done. 
-->\r\nI ran these models 10000 times and record the spent time.\r\nThe output is:\r\nOriginal Model 5.6981117725372314\r\nJit Script Model 4.5694739818573\r\nJit Script Model after optimization 3.3332810401916504\r\nTensorRT Model 4.772718667984009\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.11.0\r\n - CPU Architecture: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz\r\n - OS (e.g., Linux): Linux, CentOS7\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda\r\n - Build command you used (if compiling from source): / \r\n - Are you using local sources or building from archives: No\r\n - Python version: 3.7\r\n - CUDA version: 11.7\r\n - GPU models and configuration:\r\n - TensorRT version: 8.2.5.1\r\n - Torch_tensorrt version: 1.1.0\r\n\r\n## Additional context\r\nThe code of MHA is here. \r\n`tmp_attn.py`\r\n\r\n\r\n[tmp_attn.py.zip](https://github.com/pytorch/TensorRT/files/8938221/tmp_attn.py.zip)\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1134", "state": "closed", "labels": [ "question", "No Activity", "performance" ], "created_at": "2022-06-20T06:55:23Z", "updated_at": "2023-11-09T09:01:52Z", "user": "geekinglcq" }, { "repo": "pytorch/TensorRT", "number": 1133, "title": "\u2753 [Question] How to install torch_tensorrt python API in ubuntu 20.04? ", "body": "## \u2753 Question\r\n\r\nI want to install ```torch_tensorrt``` python API in ubuntu 20.04. could you please provide step by a step installation procedure? I tried \r\n```pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases```\r\n\r\nwhen I try to import the module \r\n```import torch_tensorrt```\r\n\r\nI'm getting the below error,\r\n\r\n\r\n![Screenshot from 2022-06-19 15-41-46](https://user-images.githubusercontent.com/74839416/174486567-a3e92ba9-0636-49ed-ba2c-4d5ebfc2da22.png)\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.11.0\r\n - CPU Architecture:\r\n - OS (e.g., Linux): LINUX\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:no\r\n - Python version: 3.7.13\r\n - CUDA version: 11.3.1\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n@narendasan @peri044 \r\n", "url": "https://github.com/pytorch/TensorRT/issues/1133", "state": "closed", "labels": [ "question", "component: build system", "component: packaging", "component: dependencies" ], "created_at": "2022-06-19T14:50:36Z", "updated_at": "2022-12-15T17:24:39Z", "user": "IamExperimenting" }, { "repo": "pytorch/serve", "number": 1692, "title": "TorchServe How to Curl Multiple Images Properly", "body": "I am using TorchServe to potentially serve a model from MMOCR (https://github.com/open-mmlab/mmocr), and I have several questions:\r\n1. I tried to do inference on hundreds of images together using batch mode by using & to concatenate curl commands together, such as suggested here https://github.com/pytorch/serve/issues/1235#issuecomment-938231201. However, this doesn't provide a neat solution if I have hundreds of curls concatenated together. 
I can of course have a super long command that looks like \r\n```\r\ncurl -X POST http://localhost:8080/predictions/ABINet -T image1.png & curl -X POST http://localhost:8080/predictions/ABINet -T image2.png & curl -X POST http://localhost:8080/predictions/ABINet -T image3.png & curl -X POST http://localhost:8080/predictions/ABINet -T image4.png &... \r\n```\r\nBut I don't think this is the right way to go. \r\nMy questions are: is using & really parallel? What is a good/suggested way to do inference on hundreds of images? What is a Pythonic way to do this (maybe using requests/subprocess)? \r\n\r\n2. I used config.properties file that looks like below\r\n```\r\nInference address: http://127.0.0.1:8080\r\nManagement address: http://127.0.0.1:8081\r\nMetrics address: http://127.0.0.1:8082\r\nload_models=ABINet.mar\r\nmodels={\\\r\n \"ABINet\": {\\\r\n \"1.0\": {\\\r\n \"defaultVersion\": true,\\\r\n \"marName\": \"ABINet.mar\",\\\r\n \"runtime\": \"python\",\\\r\n \"minWorkers\": 1,\\\r\n \"maxWorkers\": 8,\\\r\n \"batchSize\": 200,\\\r\n \"maxBatchDelay\": 50,\\\r\n \"responseTimeout\": 120,\\\r\n \"max_request_size\": 65535000\\\r\n }\\\r\n }\\\r\n}\r\n```\r\nI noticed that each time I do inference (using `curl -X POST http://localhost:8080/predictions/ABINet T image1.png & curl -X POST http://localhost:8080/predictions/ABINet T image2.png &...` hundreds of times concatenated), the GPU usage will increase, and the memory wouldn't be released after the inference is done. \r\n\r\nFor example, if I want to do inference on 300 images with config.properties that looks like\r\n```\r\nInference address: http://127.0.0.1:8080\r\nManagement address: http://127.0.0.1:8081\r\nMetrics address: http://127.0.0.1:8082\r\nload_models=ABINet.mar\r\nmodels={\\\r\n \"ABINet\": {\\\r\n \"1.0\": {\\\r\n \"defaultVersion\": true,\\\r\n \"marName\": \"ABINet.mar\",\\\r\n \"runtime\": \"python\",\\\r\n \"minWorkers\": 4,\\\r\n \"maxWorkers\": 8,\\\r\n \"batchSize\": 600,\\\r\n \"maxBatchDelay\": 50,\\\r\n \"responseTimeout\": 120,\\\r\n \"max_request_size\": 65535000\\\r\n }\\\r\n }\\\r\n}\r\n```\r\nusing `gpustat`, after I start torchserve, before I run the first inference, the GPU usage looks like\r\n\r\n![image](https://user-images.githubusercontent.com/55818214/174396193-5a2b1e3b-d4e3-4eff-a9d7-1bf3be2fdfcd.png)\r\n\r\nAfter running the inference the 1st time, the GPU usage looks like\r\n\r\n![image](https://user-images.githubusercontent.com/55818214/174396264-c4ba61d4-25d2-4d40-aaf0-061ae43cb503.png)\r\n\r\nAfter running the inference the 2nd time, \r\n\r\n![image](https://user-images.githubusercontent.com/55818214/174396318-bc5ff7fb-18f0-493d-b109-e7ef8b6a1608.png)\r\n\r\nSo if I do this inference on hundreds of images for 3 times, it will break and error like\r\n```\r\n{\r\n \"code\": 503,\r\n \"type\": \"ServiceUnavailableException\",\r\n \"message\": \"Model \\\"ABINet\\\" has no worker to serve inference request. Please use scale workers API to add workers.\"\r\n}\r\n```\r\nNow, I tried registering model with `initial_workers` as suggested here https://github.com/pytorch/serve/issues/29 but with no luck. \r\nMy questions are: \r\n* How to set this config.properties properly to handle this situation? How would I know what to set for batchsize and maxBatchDelay? \r\n* How to allow torchserve to release memory after one inference? Is there something similar to `gc.collect()` or `torch.cuda.reset_peak_memory_stats(device=None)`?\r\n* How does TorchServe work under the hood? 
If I send a request with hundreds of images, say, 600, will TorchServe take all in or take only whatever portion it can take? Or will it automatically partition the request (say, take 300 the first time, then take the rest 300)?\r\n\r\nI am attaching the MMOCR custom handler for reference\r\n```\r\nclass MMOCRHandler(BaseHandler):\r\n threshold = 0.5\r\n\r\n def initialize(self, context):\r\n properties = context.system_properties\r\n self.map_location = 'cuda' if torch.cuda.is_available() else 'cpu'\r\n self.device = torch.device(self.map_location + ':' +\r\n str(properties.get('gpu_id')) if torch.cuda.\r\n is_available() else self.map_location)\r\n self.manifest = context.manifest\r\n\r\n model_dir = properties.get('model_dir')\r\n serialized_file = self.manifest['model']['serializedFile']\r\n checkpoint = os.path.join(model_dir, serialized_file)\r\n self.config_file = os.path.join(model_dir, 'config.py')\r\n\r\n self.model = init_detector(self.config_file, checkpoint, self.device)\r\n self.initialized = True\r\n\r\n ", "url": "https://github.com/pytorch/serve/issues/1692", "state": "open", "labels": [ "documentation", "help wanted", "perf" ], "created_at": "2022-06-17T18:54:26Z", "updated_at": "2024-08-04T15:18:11Z", "user": "Hegelim" }, { "repo": "pytorch/TensorRT", "number": 1129, "title": "\u2753 [Question] Torch traced model conversion with List[torch.Tensor] input", "body": "Is it possible to convert a torch traced model that accepts List[torch.Tensor] type of input to trt ts module? \r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1129", "state": "closed", "labels": [ "question", "component: core" ], "created_at": "2022-06-17T08:17:26Z", "updated_at": "2022-08-12T01:53:15Z", "user": "ArmenGhambaryan" }, { "repo": "pytorch/functorch", "number": 882, "title": "Can I use jvp with vmap?", "body": "Hi, experts.\r\n\r\nI want to use jvp with vmap, so that I can run jvp for each sample in a batch.\r\nHowever, unlike the jacrev example, jvp does not return a callable function, so I am not sure if it is compatible with vmap.\r\nIt seems like vjp returns a function like jacrev, so might be usable, but can I use jvp with vmap?\r\nIt is not clear to me whether vjp and jvp is interchangeable -- I don't see how I can use vjp instead to achieve what I need.\r\n\r\nThank you for the help!", "url": "https://github.com/pytorch/functorch/issues/882", "state": "closed", "labels": [], "created_at": "2022-06-16T18:40:05Z", "updated_at": "2022-06-16T19:00:52Z", "comments": 2, "user": "kwmaeng91" }, { "repo": "pytorch/torchx", "number": 520, "title": "[torchx/ray] Is elastic training on ray clusters supported?", "body": "## \ud83d\udc1b Bug\r\nHi, I would like to know the current state of running elastic training on ray clusters.\r\n\r\nI tried to repeat some experiments([notebook](https://colab.research.google.com/drive/1vVCpgQ9z_1SN8K9CJxUT2LtvUDN0AlND?usp=sharing)) in this [blog](https://www.anyscale.com/blog/large-scale-distributed-training-with-torchx-and-ray) on my ray cluster, but I got unexpected behavior.\r\n- I EXPECT to see when use custom component and the cluster has fewer available nodes than the job requested, the submitted job continues running with current nodes, and when there are new nodes become available, they join can join the training process. What I OBSERVED is the job failed and got the error below:\r\n ```\r\n TimeoutError: Placement group creation timed out. Make sure your cluster either has enough resources or use an autoscaling cluster. 
Current resources available: {'memory': 18038862642.0, 'CPU': 8.0, 'node:10.130.6.66': 0.999, 'object_store_memory': 15071908982.0, 'GPU': 1.0, 'node:10.130.6.67': 1.0}, resources requested by the placement group: [{'CPU': 2.0}, {'CPU': 2.0}, {'CPU': 2.0}, {'CPU': 2.0}, {'CPU': 2.0}]\r\n ```\r\n- When use the built-in `dist.ddp` component, even if there are enough computation resources, the ray job status always shows succeed, but from the ray job logs, the expected output never appears, and the only information in the log is\r\n ```\r\n Waiting for placement group to start.\r\n ```\r\n- When use custom component and the cluster has the required resources, the submitted job has expected log information in the log file, but the job will never stop, when I check the ray job status, it always shown\r\n ```\r\n Status for job 'raysubmit_kqtEAYVSmx4c1XgD': RUNNING\r\n Status message: Job is currently running.\r\n ```\r\n\r\n### Question\r\n<!-- your question here -->\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nModule (check all that applies):\r\n * [ ] `torchx.spec`\r\n * [x] `torchx.component`\r\n * [ ] `torchx.apps`\r\n * [ ] `torchx.runtime`\r\n * [ ] `torchx.cli`\r\n * [x] `torchx.schedulers`\r\n * [ ] `torchx.pipelines`\r\n * [ ] `torchx.aws`\r\n * [ ] `torchx.examples`\r\n * [ ] `other`\r\n\r\n\r\n## To Reproduce\r\n\r\nI tried two ways to launch a TorchX job on ray:\r\n\r\n```bash\r\n# Use custom component\r\n# Required resouses are defined in the component.py file\r\ntorchx run -s ray \\ # use ray scheduler\r\n -cfg dashboard_address=addr-of-cluster:8265,working_dir=. \\ # ray scheduler arguments\r\n component.py:trainer # use custom component\r\n\r\n# Use built-in dist.ddp component\r\ntorchx run -s ray \\ # use ray scheduler\r\n -cfg dashboard_address=addr-of-cluster:8265,working_dir=. \\ # ray scheduler arguments\r\n dist.ddp \\ # use dist.ddp component\r\n -j 4x1 \\ # nproc and nnodes\r\n --script ./compute_world_size.py # a distributed script\r\n```\r\n\r\nA detailed description of the command is [here](https://pytorch.org/torchx/latest/quickstart.html).\r\n\r\nThe provisioned ray cluster:\r\n\r\n```python\r\n\"headCPU\": \"4\",\r\n\"headGPU\": \"0\",\r\n\"headMemory\": \"12Gi\",\r\n\"headMaxMemory\": \"24Gi\", \r\n\"workerMinCount\": 1, \r\n\"workerMaxCount\": 4,\r\n\"workerCPU\": \"4\",\r\n\"workerGPU\": \"0\",\r\n\"workerMemory\": \"12Gi\",\r\n\"workerMaxMemory\": \"24Gi\"\r\n```\r\n\r\nPerformed following experiments:\r\n\r\n- **(Autoscaling)** To test if torchx will trigger ray autoscaler to provide more nodes than the minimum nodes, I launched a job that requires 4 nodes.\r\nThe results are listed below:\r\n\r\n - [Custom component](torchx-ray/component.py):\r\n - Ray job status:\r\n\r\n ```shell\r\n Status for job 'raysubmit_kqtEAYVSmx4c1XgD': RUNNING\r\n Status message: Job is currently running.\r\n ```\r\n\r\n - Ray job logs:\r\n\r\n ```shell\r\n Waiting for placement group to start.\r\n (scheduler +1s) Tip: use `ray status` to view detailed cluster status. 
To disable these messages, set RAY_SCHEDULER_EVENTS=0.\r\n (scheduler +1s) Adding 3 nodes of type worker_node.\r\n (scheduler +21s) Resized to 20 CPUs, 4 GPUs.\r\n (CommandActor pid=223, ip=10.130.6.73) initializing `gloo` process group\r\n (CommandActor pid=223, ip=10.130.6.73) successfully initialized process group\r\n (CommandActor pid=223, ip=10.130.6.73) rank: 3, actual world_size: 4, computed world_size: 4\r\n (CommandActor pid=221, ip=10.131.6.32) initializing `gloo` process group\r\n (CommandActor pid=221, ip=10.131.6.32) successfully initialized process group\r\n (CommandActor pid=221, ip=10.131.6.32) rank: 1, actual world_size: 4, computed world_size: 4\r\n (CommandActor pid=222, ip=10.130.6.74) initializing `gloo` process group\r\n (CommandActor pid=222, ip=10.130.6.74) successfully initialized process group\r\n (CommandActor pid=222, ip=10.130.6.74) rank: 0, actual world_size: 4, computed world_size: 4\r\n (CommandActor pid=225, ip=10.131.6.30) initializing `gloo` process group\r\n (CommandActor pid=225, ip=10.131.6.30) successfully initialized process group\r\n (CommandActor pid=225, ip=10.131.6.30) rank: 2, actual world_siz", "url": "https://github.com/meta-pytorch/torchx/issues/520", "state": "open", "labels": [ "question", "ray" ], "created_at": "2022-06-15T18:25:55Z", "updated_at": "2022-06-22T21:34:39Z", "comments": 7, "user": "ntlm1686" }, { "repo": "pytorch/serve", "number": 1687, "title": "How to install torchserve from source ???", "body": "### \ud83d\ude80 The feature\n\nWithout using\r\n`pip install torchserve` and `docker pull pytorch/torchserve`, how can I install **torchserve** using this open source ??\r\nI can build `model-archiver` and `workflow-archiver`, but how can I build out `torchserve` from source ?\n\n### Motivation, pitch\n\nWithout using\r\n`pip install torchserve` and `docker pull pytorch/torchserve`, how can I install **torchserve** using this open source ??\r\nI can build `model-archiver` and `workflow-archiver`, but how can I build out `torchserve` from source ?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/1687", "state": "closed", "labels": [], "created_at": "2022-06-14T18:03:48Z", "updated_at": "2022-06-15T03:05:30Z", "user": "jiapei-nexera" }, { "repo": "pytorch/examples", "number": 1012, "title": "Using SLURM for Imagenet training on multiple nodes", "body": "In the pytorch imagenet example of this repo, it says that for multiple nodes we have to run the command on each node like below:\r\n\r\n![image](https://user-images.githubusercontent.com/10924797/173546864-66c56fa9-3aef-4f26-9e06-12866db2220f.png)\r\n\r\nSince I am using a shared HPC cluster with SLURM, I cannot actively know which nodes my training will use so I'm not sure how to run these two commands. 
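What I have sketched so far (untested; the flags are just the ones shown in the screenshot above, and it assumes SLURM exposes the usual `SLURM_NODEID`, `SLURM_NNODES` and `SLURM_JOB_NODELIST` variables) is a small per-node wrapper that derives the rank instead of hard-coding it:

```python
# slurm_launch.py -- hypothetical wrapper, run once per node, e.g. via
#   srun --nodes=2 --ntasks-per-node=1 python slurm_launch.py
import os
import subprocess

rank = int(os.environ["SLURM_NODEID"])        # 0 on the first node, 1 on the second, ...
world_size = int(os.environ["SLURM_NNODES"])  # number of nodes in the allocation
# take the first host in the allocation as the rendezvous address
master = subprocess.run(
    ["scontrol", "show", "hostnames", os.environ["SLURM_JOB_NODELIST"]],
    capture_output=True, text=True, check=True,
).stdout.splitlines()[0]

subprocess.run(
    ["python", "main.py", "-a", "resnet50",
     "--dist-url", f"tcp://{master}:23456",   # any free port
     "--dist-backend", "nccl",
     "--multiprocessing-distributed",
     "--world-size", str(world_size),
     "--rank", str(rank),
     "/path/to/imagenet"],                    # placeholder for the dataset folder
    check=True,
)
```

I am not sure this is the intended way to do it, though.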
How can I run these two commands on the separate nodes using SLURM?", "url": "https://github.com/pytorch/examples/issues/1012", "state": "closed", "labels": [ "distributed" ], "created_at": "2022-06-14T09:39:59Z", "updated_at": "2022-07-10T20:11:43Z", "comments": 2, "user": "b0neval" }, { "repo": "pytorch/pytorch", "number": 79495, "title": "How to stacked RGB images", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHi, pytorch support teams.\r\n\r\nI want to stack a RGB images.\r\nI want to construct a 3D or 4D RGB tensor.\r\nAnd, create a GAN model using these tensor.\r\nHow do I define how to create such a tensor?\r\nI would like to stack the attached 2D RGB images.\r\nOr can you extract each RGB element from a 3D image as a 3D tensor?\r\n\r\nKind regards,\r\nyoshimura.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n![RGB image](https://user-images.githubusercontent.com/68062970/173480331-c74cf544-d349-4c10-a30e-d53735d7c00e.png)\r\n", "url": "https://github.com/pytorch/pytorch/issues/79495", "state": "closed", "labels": [], "created_at": "2022-06-14T02:40:40Z", "updated_at": "2022-06-14T18:01:50Z", "user": "kazuma0606" }, { "repo": "pytorch/tutorials", "number": 1945, "title": "Calculating accuracy.", "body": "How can i calculate the accuracy of the model on seq2seq with attention chatbot?", "url": "https://github.com/pytorch/tutorials/issues/1945", "state": "closed", "labels": [ "question" ], "created_at": "2022-06-13T22:34:03Z", "updated_at": "2022-08-17T20:26:00Z", "user": "OmarHaitham520" }, { "repo": "pytorch/torchx", "number": 514, "title": "Launching hello world job on Kubernetes and getting logs", "body": "## \ud83d\udcda Documentation\r\n\r\n## Link\r\n<!-- link to the problematic documentation -->\r\nhttps://pytorch.org/torchx/0.1.0rc2/quickstart.html\r\n\r\n## What does it currently say?\r\n<!-- copy paste the section that is wrong -->\r\n`torchx run --scheduler kubernetes my_component.py:greet --image \"my_app:latest\" --user \"your name\"`\r\nThe documentation lacks information about getting logs for the hello world example with Kubernetes cluster.\r\n\r\n## What should it say?\r\n<!-- the proposed new documentation -->\r\nThe user should have a kubectl CLI configured. Refer to [this](https://kubernetes.io/docs/reference/kubectl/) \r\n\r\nTo get the logs of hello world job:\r\n`kubectl logs <pod name>`\r\n\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/514", "state": "open", "labels": [ "documentation" ], "created_at": "2022-06-13T14:20:20Z", "updated_at": "2022-06-13T16:50:35Z", "comments": 1, "user": "vishakha-ramani" }, { "repo": "pytorch/TensorRT", "number": 1114, "title": "How can i compile CUDA C in this project\u2753 [Question] How do you ....? ", "body": "## \u2753 Question\r\n\r\nI want compile tensorrt plugin in this project. But I do not know how to use bazel to compile the cuda c.\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. 
-->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1114", "state": "closed", "labels": [ "question" ], "created_at": "2022-06-13T11:27:52Z", "updated_at": "2022-06-20T22:11:37Z", "user": "p517332051" }, { "repo": "pytorch/serve", "number": 1684, "title": "How to decode the gRPC PredictionResponse string efficiently", "body": "### \ud83d\udcda The doc issue\n\nThere is no documentation about decoding the received bytes form PredictionResponse into torch tensor efficiently. Currently, the only working solution is using `ast.literal_eval`, which is extremely slow. \r\n\r\n```\r\nresponse = inference_stub.Predictions(\r\n inference_pb2.PredictionsRequest(model_name=model_name, input=input_data))\r\npredictions = torch.astensor(literal_eval(response.prediction.decode('utf-8')))\r\n```\r\n\r\nUsing methods like numpy.fromstring, numpy.frombuffer or torch.frombuffer returns the following error:\r\n\r\n```\r\n> np.fromstring(response.prediction.decode(\"utf-8\"))\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\nValueError: string size must be a multiple of element size\r\n```\r\n\r\nThe following returns an incorrect tensor values. 
The number of elements are not the same as expected number of elements.\r\n```\r\ntorch.frombuffer(response.prediction, dtype = torch.float32)\r\n```\r\n\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/serve/issues/1684", "state": "open", "labels": [ "documentation" ], "created_at": "2022-06-13T10:47:16Z", "updated_at": "2022-09-20T11:50:44Z", "user": "IamMohitM" }, { "repo": "pytorch/pytorch", "number": 79384, "title": "torch.load() fails on MPS backend (\"don't know how to restore data location\")", "body": "### \ud83d\udc1b Describe the bug\n\n```bash\r\n# warning: 5.8GB file\r\nwget https://huggingface.co/Cene655/ImagenT5-3B/resolve/main/model.pt\r\n```\r\n\r\n```python\r\nimport torch\r\ntorch.load('./model.pt', map_location='mps')\r\n```\r\n\r\nError thrown [from serialization.py](https://github.com/pytorch/pytorch/blob/bd1a35dfc894eced537b825e5569836e6a91266d/torch/serialization.py#L178):\r\n\r\n```\r\nException has occurred: RuntimeError (note: full exception trace is shown but execution is paused at: _run_module_as_main)\r\ndon't know how to restore data location of torch.storage._UntypedStorage (tagged with mps)\r\n File \"/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py\", line 178, in default_restore_location\r\n raise RuntimeError(\"don't know how to restore data location of \"\r\n File \"/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py\", line 970, in restore_location\r\n return default_restore_location(storage, map_location)\r\n File \"/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py\", line 1001, in load_tensor\r\n wrap_storage=restore_location(storage, location),\r\n File \"/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py\", line 1019, in persistent_load\r\n load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))\r\n File \"/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py\", line 1049, in _load\r\n result = unpickler.load()\r\n File \"/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py\", line 712, in load\r\n return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)\r\n File \"/Users/birch/git/imagen-pytorch-cene/repro.py\", line 2, in <module>\r\n torch.load('./ImagenT5-3B/model.pt', map_location='mps')\r\n File \"/Users/birch/anaconda3/envs/torch-nightly/lib/python3.9/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/Users/birch/anaconda3/envs/torch-nightly/lib/python3.9/runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/Users/birch/anaconda3/envs/torch-nightly/lib/python3.9/runpy.py\", line 268, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"/Users/birch/anaconda3/envs/torch-nightly/lib/python3.9/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/Users/birch/anaconda3/envs/torch-nightly/lib/python3.9/runpy.py\", line 197, in _run_module_as_main (Current frame)\r\n return _run_code(code, main_globals, None,\r\n```\r\n\r\nI think the solution will involve adding a [`register_package()` entry](https://github.com/pytorch/pytorch/blob/bd1a35dfc894eced537b825e5569836e6a91266d/torch/serialization.py#L160-L161) for the mps backend.\n\n### Versions\n\n```\r\nPyTorch version: 1.13.0.dev20220610\r\nIs debug build: 
False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: macOS 12.4 (arm64)\r\nGCC version: Could not collect\r\nClang version: 13.0.0 (clang-1300.0.29.30)\r\nCMake version: version 3.22.1\r\nLibc version: N/A\r\n\r\nPython version: 3.9.12 (main, Jun 1 2022, 06:34:44) [Clang 12.0.0 ] (64-bit runtime)\r\nPython platform: macOS-12.4-arm64-64bit\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] imagen-pytorch==0.0.0\r\n[pip3] numpy==1.22.4\r\n[pip3] torch==1.13.0.dev20220610\r\n[pip3] torchaudio==0.14.0.dev20220603\r\n[pip3] torchvision==0.14.0.dev20220609\r\n[conda] numpy 1.23.0rc2 pypi_0 pypi\r\n[conda] torch 1.13.0.dev20220606 pypi_0 pypi\r\n[conda] torchaudio 0.14.0.dev20220603 pypi_0 pypi\r\n[conda] torchvision 0.14.0a0+f9f721d pypi_0 pypi\r\n```\n\ncc @mruberry @kulinseth @albanD", "url": "https://github.com/pytorch/pytorch/issues/79384", "state": "closed", "labels": [ "module: serialization", "triaged", "module: mps" ], "created_at": "2022-06-12T19:30:24Z", "updated_at": "2022-08-06T09:25:21Z", "user": "Birch-san" }, { "repo": "pytorch/pytorch", "number": 79332, "title": "How to reimplement same behavior in AdaptiveAvgPooling2D", "body": "### \ud83d\udcda The doc issue\n\nHi, am trying written an op which should mimic behavior in Pytorch's AdaptiveAvgPooling, but I can not align the result.\r\n\r\nHere is what I do:\r\n\r\n```\r\ndef test_pool():\r\n a = np.fromfile(\"in.bin\", dtype=np.float32)\r\n a = np.reshape(a, [1, 12, 25, 25])\r\n a = torch.as_tensor(a)\r\n\r\n b = F.adaptive_avg_pool2d(a, [7, 7])\r\n print(b)\r\n print(b.shape)\r\n\r\n avg_pool = torch.nn.AvgPool2d([7, 7], [3, 3])\r\n c = avg_pool(a)\r\n print(c)\r\n print(c.shape)\r\n```\r\n\r\nthe `b` and `c` are not equal.\r\n\r\nMy algorithm was:\r\n\r\n```\r\nk = output_size // input_size\r\nstride = input_size - (output_size - 1) * k\r\npadding = 0\r\n```\r\n\r\nI think there maybe some gap in real algorithm in pytorch. But can not found any where said it.\r\n\r\nso, please make me clarify.\n\n### Suggest a potential alternative/fix\n\nDetails in adaptiveavgpool2d", "url": "https://github.com/pytorch/pytorch/issues/79332", "state": "closed", "labels": [], "created_at": "2022-06-11T02:06:59Z", "updated_at": "2022-08-18T11:39:51Z", "user": "lucasjinreal" }, { "repo": "pytorch/functorch", "number": 867, "title": "Why is using vmap(jacrev) for BatchNorm2d in non-tracking mode not working?", "body": "Hi, experts.\r\nI am trying to use vmap(jacrev) to calculate the per-sample jacobian in a batch for my network during inference. However, when there is BatchNorm2d, it does not work. Because during inference, BatchNorm2d is simply applying the statistics previously tracked (and not doing any inter-sample operations), I think it should work just as any other simple operation from my understanding. 
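One thing I am unsure about here is that with `track_running_stats=False` there are no stored statistics at all, so the only per-sample formulation I could come up with is to add a batch dimension inside the mapped function, i.e. each sample becomes its own batch of one (a sketch only, reusing `layers` and `x` from the minimal code below; note this makes the normalization statistics per-sample, which may not be what one wants):

```python
def per_sample(sample):
    # sample is 3D (C, H, W) inside vmap; give BatchNorm2d the 4D input it expects
    return layers(sample.unsqueeze(0)).squeeze(0)

j = vmap(jacrev(per_sample))(x)
```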
Is there a way for me to make it work, or is there anything I am misunderstanding?\r\n\r\nBelow is my minimal code:\r\n```\r\nfrom functorch import jacrev, vmap\r\nimport torch\r\nfrom torch import nn\r\n\r\nlayers = nn.Sequential(\r\n nn.Conv2d(3, 3, kernel_size=(3, 3)),\r\n nn.BatchNorm2d(3, track_running_stats=False),\r\n )\r\n\r\nx = torch.randn(4, 3, 30, 30)\r\nj = vmap(jacrev(layers))(x)\r\n```\r\n\r\nAnd I get this error in the bn layer\r\n`ValueError: expected 4D input (got 3D input)`\r\n\r\nI think this should fundamentally be doable, and just might be because of how vmap and jacrev is implemented.\r\nIs there any simple workaround, or am I misunderstanding anything?\r\n\r\nThank you for any help", "url": "https://github.com/pytorch/functorch/issues/867", "state": "closed", "labels": [], "created_at": "2022-06-11T00:15:32Z", "updated_at": "2022-07-18T18:44:14Z", "comments": 6, "user": "kwmaeng91" }, { "repo": "pytorch/pytorch", "number": 79106, "title": "How to find the code in '...'?", "body": "https://github.com/pytorch/pytorch/blob/4305f8e9bda34f18eb7aacab51c63651cfc61802/torch/storage.py#L34\r\n\r\nHere, I want to read the detailed code in `.cuda` func, however, I do not find any code about this api?\ud83d\ude22\r\n\r\nHope someone could help me\uff01\u2764\r\n\n\ncc @ngimel", "url": "https://github.com/pytorch/pytorch/issues/79106", "state": "closed", "labels": [ "module: cuda", "triaged" ], "created_at": "2022-06-08T02:49:10Z", "updated_at": "2022-06-13T20:44:10Z", "user": "juinshell" }, { "repo": "pytorch/data", "number": 574, "title": "Support offloading data pre-processing to auxiliary devices", "body": "### \ud83d\ude80 The feature, motivation and pitch\r\n\r\nOccasionally one might find that their GPU is idle due to a bottleneck on the input data pre-processing pipeline (which might include data loading/filtering/manipulation/augmentation/etc). In these cases one could improve resource utilization by offloading some of the pre-processing to auxiliary CPU devices.\r\nI have demonstrated how to do this using gRPC in the following blog post: https://towardsdatascience.com/overcoming-ml-data-preprocessing-bottlenecks-with-grpc-ca30fdc01bee\r\n\r\nTensorFlow has built in (experimental) support for this feature (https://www.tensorflow.org/api_docs/python/tf/data/experimental/service) that enables offloading in a few simple steps.\r\n\r\nThe request here is to include PyTorch APIs for offloading data pre-processing in a manner that would be simple and straight forward to the user... Similar to the TensorFlow APIs (though preferably without any limitations on pre-processing workload) .\r\n\r\n\r\n\r\n\r\n### Alternatives\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_\n\ncc @SsnL @VitalyFedyunin @ejguan @NivekT", "url": "https://github.com/meta-pytorch/data/issues/574", "state": "open", "labels": [ "feature", "module: dataloader", "triaged", "module: data" ], "created_at": "2022-06-07T10:12:00Z", "updated_at": "2022-07-06T18:12:47Z", "comments": 2, "user": "czmrand" }, { "repo": "pytorch/kineto", "number": 615, "title": "How to limit the scope of the profiler?", "body": "I am wondering if it is possible to limit the scope of the profiler to a particular part of the neural network. 
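The closest thing I have found so far is labelling a region with `record_function`, but that only tags the events rather than restricting what gets recorded, so I don't think it shrinks the trace (a sketch; the sub-module split and the label name are made up for illustration):

```python
import torch.profiler as profiler

with profiler.profile(
    activities=[profiler.ProfilerActivity.CPU, profiler.ProfilerActivity.CUDA],
) as p:
    for sample in dataloader:
        with profiler.record_function("suspect_block"):  # label only this part
            features = model.backbone(sample)            # hypothetical sub-module
        model.head(features)                             # hypothetical sub-module
```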
Currently, I am trying to analyze the bottleneck of my model using the following pseudocode:\r\n\r\n```\r\nimport torch.profiler as profiler\r\n with profiler.profile(\r\n activities=[\r\n profiler.ProfilerActivity.CPU,\r\n profiler.ProfilerActivity.CUDA,\r\n ],\r\n profile_memory=True,\r\n schedule=profiler.schedule(wait=5, warmup=2, active=1, repeat=1), \r\n on_trace_ready=profiler.tensorboard_trace_handler(tensorboard_logdir)\r\n ) as p:\r\n for sample in dataloader:\r\n model(sample)\r\n```\r\n\r\nHowever, the trace I created is still way too large (~800MB) for the tensorboard to function properly. Apparently tensorboard is only able to load the trace if it is smaller than about 500 MB, so I am thinking about limiting the trace of the profiler to only look at part of the neural net that leads to the issue. However, it seems like a warmup is necessary, so inserting the profiler.profile within a network will generate inaccurate results. Is there a way to limit the scope of the profiler without breaking the interface?", "url": "https://github.com/pytorch/kineto/issues/615", "state": "closed", "labels": [], "created_at": "2022-06-06T20:34:35Z", "updated_at": "2022-06-21T17:57:42Z", "user": "hyhuang00" }, { "repo": "pytorch/torchx", "number": 510, "title": "Implement an HPO builtin", "body": "## Description\r\nAdd a builtin component for launching HPO (hyper-parameter optimization) jobs. At a high-level something akin to:\r\n\r\n```\r\n# for grid search\r\n$ torchx run -s kubernetes hpo.grid_search --paramspacefile=~/parameters.json --component dist.ddp\r\n\r\n# for bayesian search\r\n$ torchx run -s kubernetes hpo.bayesian ...\r\n```\r\n\r\nIn both cases we use the Ax/TorchX integration to run the HPO driver job. (see motivation section below for details)\r\n\r\n## Motivation/Background\r\nTorchX already integrates with Ax that supports both bayesian and grid_search HPO. Some definitions before we get started:\r\n\r\n1. Ax: Experiment - ([docs](https://ax.dev/docs/glossary.html#experiment)) Defines the HPO search space and holds the optimizer state. Vends out the next set of parameters to search based on the observed results (relevant for Bayesian and Bandit optimizations, not so much for grid search).\r\n2. Ax: Trials - ([docs](https://ax.dev/docs/glossary.html#trial)) A step in an experiment, aka a (training) job that runs with a specific set of hyper-parameters as vended out by the optimizer in the experiment\r\n3. Ax: Runner - ([docs](https://ax.dev/docs/glossary.html#runner)) Responsible for launching trials.\r\n\r\nAx/TorchX integration is done at the Runner level. We implemented an [`ax/TorchXRunner`](https://ax.dev/api/runners.html#module-ax.runners.torchx) that implements Ax's `Runner` interface (do not confuse this with the TorchX runner. TorchX itself defines a runner concept). The `ax/TorchXRunner` runs the ax Trials using TorchX.\r\n\r\nThe [`ax/TorchXRunnerTest`](https://github.com/facebook/Ax/blob/main/ax/runners/tests/test_torchx.py#L72) serves as a full end-to-end example of how everything works. In summary the test runs a bayesian HPO to minimize the [\"booth\" function](https://en.wikipedia.org/wiki/Test_functions_for_optimization). **Note that in practice this function is replaced by your \"trainer\"**. 
The main module that computes the booth function given the parameters `x_1` and `x_2` as inputs is defined in [`torchx.apps.utils.booth`](https://github.com/pytorch/torchx/blob/main/torchx/apps/utils/booth_main.py).\r\n\r\nThe abridged code looks something like this:\r\n ```python\r\n parameters: List[Parameter] = [\r\n RangeParameter(\r\n name=\"x1\",\r\n lower=-10.0,\r\n upper=10.0,\r\n parameter_type=ParameterType.FLOAT,\r\n ),\r\n RangeParameter(\r\n name=\"x2\",\r\n lower=-10.0,\r\n upper=10.0,\r\n parameter_type=ParameterType.FLOAT,\r\n ),\r\n ]\r\n experiment = Experiment(\r\n name=\"torchx_booth_sequential_demo\",\r\n search_space=SearchSpace(parameters=self._parameters),\r\n optimization_config=OptimizationConfig(\r\n objective = Objective(metric=TorchXMetric(name=\"booth_eval\"),\r\n minimize=True,\r\n ),\r\n runner=TorchXRunner(\r\n tracker_base=self.test_dir,\r\n component=utils.booth,\r\n scheduler=\"local_cwd\",\r\n cfg={\"prepend_cwd\": True},\r\n ),\r\n )\r\n\r\n scheduler = Scheduler( \r\n experiment=experiment,\r\n generation_strategy=choose_generation_strategy(search_space=experiment.search_space),\r\n options=SchedulerOptions(),\r\n )\r\n\r\n for _ in range(3):\r\n scheduler.run_n_trials(max_trials=2) \r\n scheduler.report_results()\r\n ```\r\n\r\n## Detailed Proposal\r\nThe task here is to essentially create pre-packaged applications for the code above. We can define a two types of HPO apps by the \"strategy\" used:\r\n1. hpo.grid_search\r\n2. hpo.bayesian\r\n\r\nEach application will come with a companion \"component\" (e.g. `hpo.grid_search` and `hpo.bayesian`). The applications should be designed to take as input:\r\n\r\n1. parameter space\r\n2. what the objective function is (e.g. trainer)\r\n3. torchx cfgs (e.g. scheduler, scheduler runcfg, etc)\r\n4. ax experiment configs\r\n\r\nThe challenge is to be able to correctly and sanely \"parameterize\" the application in such a way that allows the user to sanely pass these argument from the CLI. For complex parameters such as parameter space, one might consider taking a file in a specific format rather than conjuring up a complex string encoding to pass as CLI input. 
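To make the parameterization concrete, a hypothetical signature for the grid-search component could look like the following (every name and default here is illustrative, not an existing API):

```python
import torchx.specs as specs

def grid_search(
    paramspacefile: str,          # path to a file describing the search space
    component: str = "dist.ddp",  # the trial component each parameter set is run with
    scheduler: str = "kubernetes",
    max_trials: int = 10,
) -> specs.AppDef:
    # Returns an AppDef that runs the Ax driver (Experiment + Scheduler),
    # which in turn launches the trials through the ax TorchXRunner.
    ...
```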
\r\n\r\nFor instance for the `20 x 20` for `x_1` and `x_2` in the example above, rather than taking the parameter space as:\r\n```\r\n$ torchx run hpo.bayesian --parameter_space x_1=-10:10,x2_=-10:10\r\n```\r\n\r\nOne can take it as a well defined python parameter file:\r\n```\r\n# params.py\r\n# just defines the parameters using the regular Ax APIs\r\nparameters: List[Parameter] = [\r\n RangeParameter(\r\n name=\"x1\",\r\n lower=-10.0,\r\n upper=10.0,\r\n parameter_type=ParameterType.FLOAT,\r\n ),\r\n RangeParameter(\r\n name=\"x2\",\r\n low", "url": "https://github.com/meta-pytorch/torchx/issues/510", "state": "open", "labels": [ "enhancement", "module: components" ], "created_at": "2022-06-03T20:06:10Z", "updated_at": "2022-10-27T01:55:08Z", "comments": 0, "user": "kiukchung" }, { "repo": "pytorch/vision", "number": 6124, "title": "How to timing 'model.to(device)' correctly?", "body": "I am using pytorch's api in my python code to measure time for different layers of resnet152 to device(GPU, V-100).However, I cannot get a stable result.\r\nHere is my code:\r\n```python\r\nimport torch.nn as nn\r\ndevice = torch.device('cuda:3' if torch.cuda.is_available() else 'cpu')\r\nmodel = torchvision.models.resnet152(pretrained=True)\r\n\r\ndef todevice(_model_, _device_=device):\r\n T0 = time.perf_counter()\r\n _model_.to(_device_)\r\n torch.cuda.synchronize()\r\n T1 = time.perf_counter()\r\n print(\"model to device %s cost:%s ms\" % (_device_, ((T1 - T0) * 1000)))\r\n\r\nmodel1 = nn.Sequential(*list(resnet152.children())[:6])\r\ntodevice(model1)\r\n```\r\nWhen I use the code to test at different time, I can always get different answers, some of them are ridiculous, even to `200ms`.\r\nAlso, there are 4 GPU(Tesla V100) in my lab, I don't know whether other extra GPUs will affect my result.\r\nCould you tell me how to timing `model.to(device)` correctly? Is there anything wrong with my code or my lab environment?", "url": "https://github.com/pytorch/vision/issues/6124", "state": "closed", "labels": [ "question" ], "created_at": "2022-06-02T11:55:14Z", "updated_at": "2022-06-06T08:34:34Z", "user": "juinshell" }, { "repo": "pytorch/functorch", "number": 848, "title": "AOTAutograd makes unsafe assumptions on how the backward pass will look like", "body": "## Context: how AOTAutograd works today\r\n\r\nGiven a function `f`:\r\n- AOTAutograd traces out `run_forward_and_backward_f(*args, *grad_outputs)` to produce `forward_and_backward_trace`\r\n- AOTAutograd partitions `forward_and_backward_trace` into a forward_trace and a backward_trace\r\n- AOTAutograd compiles the forward_trace and backward_trace separately\r\n- The compiled_forward_trace and compiled_backward_trace are stitched into an autograd.Function\r\n\r\n## The Problem\r\n\r\nIn order to trace `run_forward_and_backward_f(*args, *grad_outputs)`, AOTAutograd needs to construct a Proxy for the grad_outputs. This ends up assuming properties of the grad_output: for example, AOTAutograd assumes that the grad_outputs are contiguous.\r\n\r\nThere are some more adversarial examples that we could construct. If the backward formula of at::sin were instead:\r\n```\r\ndef sin_backward(grad_output, input):\r\n if grad_output.is_sparse():\r\n return grad_output * input.sin()\r\n return grad_output * input.cos()\r\n```\r\nthen, depending on the properties of the input, the backward that should get executed is different. 
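Returning to the `model.to(device)` timing question above (pytorch/vision#6124): a minimal sketch, assuming a CUDA device is available, of measuring the transfer with synchronization before and after the copy, a fresh CPU copy per run, and the CUDA context initialized up front so the first measurement is not dominated by context creation; reporting the median over several runs smooths out the remaining jitter.

```python
import copy
import time
import torch
import torchvision

def time_to_device(model, device="cuda:0", repeats=10):
    torch.cuda.init()                      # pay context-creation cost up front
    timings = []
    for _ in range(repeats):
        m = copy.deepcopy(model)           # fresh CPU copy each run
        torch.cuda.synchronize(device)
        t0 = time.perf_counter()
        m.to(device)
        torch.cuda.synchronize(device)     # wait for the async copies to finish
        timings.append((time.perf_counter() - t0) * 1000)
        del m
    return timings

times = time_to_device(torchvision.models.resnet152())   # weights not needed just to time the copy
print(f"median {sorted(times)[len(times) // 2]:.1f} ms over {len(times)} runs")
```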
If AOTAutograd assumes that the Proxy is dense and contiguous, then the backward pass of the generated autograd.Function would be incorrect.\r\n\r\n## Potential proposal\r\n\r\nProposal: delay tracing the backward pass until the backward pass is invoked.\r\n\r\nSo, given a function `f`:\r\n- AOTAutograd constructs a trace of f (that includes intermediates as outputs), `forward_trace`\r\n- AOTAutograd constructs an autograd.Function that has `compiled(forward_trace)` as the forward pass\r\n\r\nThe autograd.Function's backward pass, when invoked:\r\n- traces out `run_forward_and_backward_f(*args, *grad_outputs)` to produce `forward_and_backward_trace`\r\n- takes the difference of `forward_and_backward_trace` and `forward_trace` to produce `backward_trace`.\r\n- compiles `backward_trace` into `compiled_backward_trace`\r\n- then invokes it.\r\n\r\nThings that we haven't mentioned that will need to be thought about:\r\n- how does AOTAutograd's rematerialization come into play here?\r\n\r\nThings that we haven't mentioned that should be orthogonal:\r\n- caching. `compiled(forward_trace)` needs a cache that uses the inputs as keys (among other things), `compiled(backward_trace)` needs a cache that takes the (inputs, grad_outputs) as keys.\r\n- what if the backward is user-defined (e.g., autograd.Function) and isn't traceable? See https://github.com/pytorch/pytorch/issues/93723 for ideas\r\n\r\n## Alternatives\r\n\r\nKeep the current scheme (AOTAutograd traces out both the forward+backward pass at the time of the forward), but somehow prove to ourselves that the produced trace of the backward pass is always correct.\r\n\r\ncc @Chillee @anijain2305 @ezyang @anjali411 @albanD ", "url": "https://github.com/pytorch/functorch/issues/848", "state": "open", "labels": [], "created_at": "2022-06-01T18:18:28Z", "updated_at": "2023-02-01T01:10:36Z", "comments": 4, "user": "zou3519" }, { "repo": "pytorch/pytorch", "number": 78365, "title": "How to calculate the gradient of the previous layer when the gradient of the latter layer is given?", "body": "Hi, there. Can someone help me solve this problem? if the gradients of a certain layer is known, how can I use the API in torch to calculate the gradient of the previous layer?I would appreciate it if anyone could reply me in time.", "url": "https://github.com/pytorch/pytorch/issues/78365", "state": "closed", "labels": [], "created_at": "2022-05-26T16:05:40Z", "updated_at": "2022-05-31T14:46:40Z", "user": "mankasto" }, { "repo": "pytorch/data", "number": 469, "title": "Suggestion: Dataloader with RPC-based workers", "body": "### \ud83d\ude80 The feature\r\n\r\nA dataloader which communicates with its workers via torch.distributed.rpc API.\r\n\r\n### Motivation, pitch\r\n\r\nPresently, process-based workers for Dataloader mean the workers live on the same server/PC as the process consuming that data. This incurs the following limitations:\r\n- the pre-processing workload cannot scale beyond the GPU server capacity\r\n- with random sampling, each worker might eventually see all the dataset, which is not cache friendly\r\n\r\n### Alternatives\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\nA proof of concept is available ~~[here](https://github.com/nlgranger/data/blob/rpc_dataloader/torchdata/rpc/dataloader.py)~~ -> https://github.com/CEA-LIST/RPCDataloader\r\n\r\nI have not yet tested how efficient this is compared to communicating the preprocessed batch data via process pipes. 
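A toy sketch of the RPC round-trip such a remote-worker loader performs (this is not the linked proof of concept; names, sizes, and the fake preprocessing are made up): the trainer asks data workers for batches via `rpc_async` and overlaps the waits, while the workers only host the preprocessing function.

```python
import os
import torch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp

def load_batch(indices):
    # stand-in for decode/augment/collate running on the remote worker
    return torch.stack([torch.full((3, 8, 8), float(i)) for i in indices])

def run(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    if rank == 0:  # trainer
        rpc.init_rpc("trainer", rank=rank, world_size=world_size)
        futures = [rpc.rpc_async(f"worker{(i % (world_size - 1)) + 1}",
                                 load_batch, args=(list(range(i, i + 4)),))
                   for i in range(8)]
        for fut in futures:
            batch = fut.wait()   # in real code, overlap this with GPU compute
            print(batch.shape)
    else:          # data worker: just hosts load_batch
        rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)
    rpc.shutdown()               # blocks until all outstanding RPCs drain

if __name__ == "__main__":
    mp.spawn(run, args=(3,), nprocs=3, join=True)
```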
Obviously the use of shared-memory is lost when the worker is remote but the TensorPipe rpc backend might be able to take advantage of other fast transfer methods (GPUDirect, rmda?).\r\n\r\nThe load distribution scheme used in this first implementation is round-robin. I have not yet put thoughts on how to make this modifiable both in term of implementation and API.", "url": "https://github.com/meta-pytorch/data/issues/469", "state": "closed", "labels": [], "created_at": "2022-05-26T11:14:13Z", "updated_at": "2024-01-30T09:29:17Z", "comments": 2, "user": "nlgranger" }, { "repo": "pytorch/examples", "number": 1010, "title": "Accessing weights of a pre-trained model", "body": "Hi, \r\n Can you share how to print weights and biases for each layer of a pre-trained Alexnet model?\r\n\r\nRegards,\r\nNivedita", "url": "https://github.com/pytorch/examples/issues/1010", "state": "closed", "labels": [], "created_at": "2022-05-26T06:50:13Z", "updated_at": "2022-06-02T00:11:56Z", "comments": 1, "user": "nivi1501" }, { "repo": "pytorch/TensorRT", "number": 1091, "title": "\u2753 [Question] Linking error with PTQ function", "body": "## \u2753 Question\r\n\r\nI am getting a linking error when using `torch_tensorrt::ptq::make_int8_calibrator`. I am using the Windows build based on CMake, so I'm not sure if it's a problem with the way it was built, but I suspect not since I can use functions from ::torchscript just fine.\r\n\r\nI am trying to create a barebones program to test ptq based on examples/int8/ptq/main.cpp, and I get this linker error whenever `torch_tensorrt::ptq::make_int8_calibrator` is used. Any help would be greatly appreciated.\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): 1.11+cu113\r\n - OS (e.g., Linux): Windows 10\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): libtorch from pytorch.org\r\n - CUDA version: 11.3\r\n\r\n## Additional context\r\n\r\nThis is the linker error that I get:\r\n> Severity\tCode\tDescription\tProject\tFile\tLine\tSuppression State\r\nError\tLNK2019\tunresolved external symbol \"__declspec(dllimport) class torch_tensorrt::ptq::Int8Calibrator<class nvinfer1::IInt8EntropyCalibrator2,class std::unique_ptr<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler>,struct std::default_delete<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler> > > > __cdecl torch_tensorrt::ptq::make_int8_calibrator<class nvinfer1::IInt8EntropyCalibrator2,class std::unique_ptr<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler>,struct std::default_delete<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class 
torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler> > > >(class std::unique_ptr<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler>,struct std::default_delete<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler> > >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,bool)\" (__imp_??$make_int8_calibrator@VIInt8EntropyCalibrator2@nvinfer1@@V?$unique_ptr@V?$StatelessDataLoader@V?$MapDataset@V?$MapDataset@VCIFAR10@datasets@@U?$Normalize@VTensor@at@@@transforms@data@torch@@@datasets@data@torch@@U?$Stack@U?$Example@VTensor@at@@V12@@data@torch@@@transforms@34@@datasets@data@torch@@VRandomSampler@samplers@34@@data@torch@@U?$default_delete@V?$StatelessDataLoader@V?$MapDataset@V?$MapDataset@VCIFAR10@datasets@@U?$Normalize@VTensor@at@@@transforms@data@torch@@@datasets@data@torch@@U?$Stack@U?$Example@VTensor@at@@V12@@data@torch@@@transforms@34@@datasets@data@torch@@VRandomSampler@samplers@34@@data@torch@@@std@@@std@@@ptq@torch_tensorrt@@YA?AV?$Int8Calibrator@VIInt8EntropyCalibrator2@nvinfer1@@V?$unique_ptr@V?$StatelessDataLoader@V?$MapDataset@V?$MapDataset@VCIFAR10@datasets@@U?$Normalize@VTensor@at@@@transforms@data@torch@@@datasets@data@torch@@U?$Stack@U?$Example@VTensor@at@@V12@@data@torch@@@transforms@34@@datasets@data@torch@@VRandomSampler@samplers@34@@data@torch@@U?$default_delete@V?$StatelessDataLoader@V?$MapDataset@V?$MapDataset@VCIFAR10@datasets@@U?$Normalize@VTensor@at@@@transforms@data@torch@@@datasets@data@torch@@U?$Stack@U?$Example@VTensor@at@@V12@@data@torch@@@transforms@34@@datasets@data@torch@@VRandomSampler@samplers@34@@data@torch@@@std@@@std@@@01@V?$unique_ptr@V?$StatelessDataLoader@V?$MapDataset@V?$MapDataset@VCIFAR10@datasets@@U?$Normalize@VTensor@at@@@transforms@data@torch@@@datasets@data@torch@@U?$Stack@U?$Example@VTensor@at@@V12@@data@torch@@@transforms@34@@", "url": "https://github.com/pytorch/TensorRT/issues/1091", "state": "closed", "labels": [ "question", "component: quantization", "channel: windows" ], "created_at": "2022-05-26T01:19:17Z", "updated_at": "2022-09-02T17:45:50Z", "user": "jonahclarsen" }, { "repo": "pytorch/torchx", "number": 503, "title": "add `torchx list` command and `Runner.list` APIs", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nAdd a `torchx list` and `Runner/Scheduler.list` methods. This would allow listing all jobs the user has launched and see their status when tracking multiple different jobs. \r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\n\r\nCurrently users have to use the scheduler specific tools like `sacct/vcctl/ray job list` to see all of their jobs. 
Adding this would allow users to just interact via the torchx interface and not have to worry about interacting with other tools.\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\nWe'd likely want something similar to https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.ContainerCollection.list\r\n\r\nFilters may be hard to support across all schedulers so we probably want to limit it to just a few common ones or none at all initially. We also want to filter so we only return torchx jobs instead of all jobs on the scheduler.\r\n\r\nLimiting it to jobs that the user owns would also be nice to have though may not be feasible for all schedulers.\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n\r\n* https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.ContainerCollection.list\r\n* https://slurm.schedmd.com/sacct.html\r\n* https://docs.aws.amazon.com/batch/latest/APIReference/API_ListJobs.html\r\n* https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md#list_namespaced_custom_object\r\n* https://docs.ray.io/en/master/cluster/jobs-package-ref.html#jobsubmissionclient\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/503", "state": "closed", "labels": [ "enhancement", "module: runner", "cli" ], "created_at": "2022-05-25T21:02:11Z", "updated_at": "2022-09-21T21:52:31Z", "comments": 10, "user": "d4l3k" }, { "repo": "pytorch/TensorRT", "number": 1089, "title": "I wonder if torch_tensorrt support mixed precisions for different layer", "body": "**Is your feature request related to a problem? Please describe.**\r\nI write a converter and plugin, but plugin only support fp32, then if I convert with enabled_precisions: torch.int8, then error happend\r\n\r\n**Describe the solution you'd like**\r\nif different layer can use different precisions, i can use fp32 this plugin layer and int8 other layers", "url": "https://github.com/pytorch/TensorRT/issues/1089", "state": "closed", "labels": [ "question" ], "created_at": "2022-05-25T10:07:21Z", "updated_at": "2022-05-30T06:05:07Z", "user": "pupumao" }, { "repo": "pytorch/data", "number": 454, "title": "Make `IterToMap` loading more lazily", "body": "### \ud83d\ude80 The feature\n\nCurrently, `IterToMap` starts to load all data from prior `IterDataPipe` when the first `__getitem__` is invoked here.\r\nhttps://github.com/pytorch/data/blob/13b574c80e8732744fee6ab9cb7e35b5afc34a3c/torchdata/datapipes/iter/util/converter.py#L78\r\n\r\nWe can stop loading data from prior `IterDataPipe` whenever we find the requested index. 
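A rough sketch of what such a lazier `__getitem__` could look like (class and attribute names are illustrative, not the actual torchdata internals): drain the source datapipe only until the requested key shows up, cache whatever was seen along the way, and remember whether the source is exhausted. The `_exhausted` marker here is one form of the flag mentioned next.

```python
from typing import Iterator, Optional

class LazyIterToMap:
    """Illustrative stand-in, not the torchdata implementation."""

    def __init__(self, datapipe):
        self.datapipe = datapipe            # source of (key, value) pairs
        self._map = {}
        self._itr: Optional[Iterator] = None
        self._exhausted = False             # "don't try to reload" flag

    def __getitem__(self, key):
        if key in self._map:
            return self._map[key]
        if self._itr is None and not self._exhausted:
            self._itr = iter(self.datapipe)
        while self._itr is not None:
            try:
                k, v = next(self._itr)
            except StopIteration:
                self._itr, self._exhausted = None, True
                break
            self._map[k] = v                # cache everything seen so far
            if k == key:
                return v
        raise IndexError(f"{key!r} is not a valid key for this MapDataPipe.")
```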
And, we might need to add a flag to prevent loading data multiple times.\n\n### Motivation, pitch\n\nThis would improve the performance if users simply iterate over the `MapDataPipe` as we don't need to pre-load everything at the beginning of the iteration, basically, simulating the behavior of `IterDataPipe`.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/454", "state": "open", "labels": [ "help wanted" ], "created_at": "2022-05-24T14:14:30Z", "updated_at": "2022-06-02T08:24:35Z", "comments": 7, "user": "ejguan" }, { "repo": "pytorch/data", "number": 453, "title": "Fix installation document for nightly and official release", "body": "### \ud83d\udcda The doc issue\n\nIn https://github.com/pytorch/data#local-pip-or-conda, we talk about the commands would install nightly pytorch and torchdata, which is actually the official release.\r\n\r\nWe should change this part and add another section for nightly installation\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/453", "state": "closed", "labels": [ "documentation" ], "created_at": "2022-05-24T14:07:13Z", "updated_at": "2022-05-24T17:33:20Z", "comments": 0, "user": "ejguan" }, { "repo": "pytorch/torchx", "number": 498, "title": "Document .torchxconfig behavior in home directory", "body": "## \ud83d\udcda Documentation\r\n\r\n## Link\r\n<!-- link to the problematic documentation -->\r\n\r\nhttps://pytorch.org/torchx/main/runner.config.html\r\n\r\nContext: https://fb.workplace.com/groups/140700188041197/posts/326515519459662/?comment_id=328106399300574&reply_comment_id=328113552633192\r\n\r\n## What does it currently say?\r\n<!-- copy paste the section that is wrong -->\r\n\r\n```\r\nThe CLI only picks up .torchxconfig files from the current-working-directory (CWD) so chose a directory where you typically run torchx from.\r\n```\r\n\r\n## What should it say?\r\n<!-- the proposed new documentation -->\r\n\r\nIt should explain how it can also be read from home and how the options are merged together.\r\n\r\n## Why?\r\n<!-- (if not clear from the proposal) why is the new proposed documentation more correct/improvement over the existing one? -->\r\n\r\nBehavior is unclear to users.", "url": "https://github.com/meta-pytorch/torchx/issues/498", "state": "open", "labels": [ "documentation" ], "created_at": "2022-05-23T18:39:05Z", "updated_at": "2022-06-16T00:04:19Z", "comments": 2, "user": "d4l3k" }, { "repo": "pytorch/serve", "number": 1647, "title": "How to return n images instead of 1? ", "body": "Hi,\r\n\r\nI am trying to deploy a DALL-E type model, in which you get as input a text and you receive as output a couple of images.\r\n\r\n\r\n```\r\noutputs = []\r\nfor i, image in enumerate(images):\r\n byte_output = io.BytesIO()\r\n output.convert('RGB').save(byte_output, format='JPEG')\r\n bin_img_data = byte_output.getvalue()\r\n \r\n outputs.append(bin_img_data)\r\n\r\nreturn [outputs]\r\n```\r\n\r\n \r\n This does not work and results in a failure, with the logs from torchserve saying 'object of type bytearray is not json serializable'\r\n \r\n However, changing `return [outputs]` into `return [outputs[0]]` makes it work. What can I do regarding this? 
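One way around the "not JSON serializable" failure described above (a sketch, assuming the handler works with a list of PIL images): TorchServe expects exactly one JSON-serializable entry per request in the batch, so pack all generated images into a single structure of base64 strings instead of returning raw bytes per image.

```python
import base64
import io

def postprocess(self, images):
    # One response entry per request: a dict holding every generated image
    # as a base64-encoded JPEG string, which json.dumps can handle.
    encoded = []
    for image in images:
        buf = io.BytesIO()
        image.convert("RGB").save(buf, format="JPEG")
        encoded.append(base64.b64encode(buf.getvalue()).decode("utf-8"))
    return [{"images": encoded}]
```

The client then recovers each image with `base64.b64decode` on the entries of `"images"`.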
", "url": "https://github.com/pytorch/serve/issues/1647", "state": "closed", "labels": [], "created_at": "2022-05-23T15:13:07Z", "updated_at": "2022-05-23T17:21:30Z", "user": "mhashas" }, { "repo": "pytorch/data", "number": 436, "title": "Is our handling of open files safe?", "body": "Our current strategy is to wrap all file handles in a [`StreamWrapper`](https://github.com/pytorch/pytorch/blob/88fca3be5924dd089235c72e651f3709e18f76b8/torch/utils/data/datapipes/utils/common.py#L154). It dispatches all calls to wrapped object and adds a `__del__` method:\r\n\r\n```py\r\nclass StreamWrapper:\r\n def __init__(self, file_obj):\r\n self.file_obj = file_obj\r\n\r\n def __del__(self):\r\n try:\r\n self.file_obj.close()\r\n except Exception:\r\n pass\r\n```\r\n\r\nIt will be called as soon as there are no more references to instance. The rationale is that if this happens we can close the wrapped file object. Since the `StreamWrapper` has a reference to the file object, GC should never try to delete the file object before `__del__` of the `StreamWrapper` is called. Thus, we should never delete an open file object.\r\n\r\nUnfortunately, the reasoning above seems not to be correct. In some cases, it seems GC will delete the file object before the `StreamWrapper` is deleted. This will emit a warning which the `torchvision` test suite will turn into an error. This was discussed at length in pytorch/vision#5801 and includes minimum requirements to reproduce the issue. Still, there was no minimal reproduction outside of the test environment found. The issue was presumably fixed in pytorch/pytorch#76345, but was popping up again in https://github.com/pytorch/data/runs/6500848588#step:9:1977.\r\n\r\nThus, I think it is valid question to ask if our approach is safe at all. It would be a quite bad UX if a user gets a lot of unclosed file warnings although they used `torchdata` or in extension `torchvision.datasets` as documented.", "url": "https://github.com/meta-pytorch/data/issues/436", "state": "closed", "labels": [], "created_at": "2022-05-23T10:37:11Z", "updated_at": "2023-01-05T15:05:51Z", "comments": 3, "user": "pmeier" }, { "repo": "pytorch/TensorRT", "number": 1076, "title": "\u2753 [Question] What am I missing to install TensorRT v1.1.0 in a Jetson with JetPack 4.6", "body": "## \u2753 Question\r\n\r\nI am getting some errors trying to install TensorRT v1.1.0 in a Jetson with JetPack 4.6 for using with Python3\r\n\r\n## What you have already tried\r\n\r\nI followed the Official installation of Pytorch v1.10.0 by using binaries according to the [ offical Nvidia Forum](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048). Then, I followed the official steps of this repository which are:\r\n\r\n1. Install Bazel - successfully\r\n2. Build Natively on aarch64 (Jetson) - Here I am getting the problem \r\n\r\n## Environment\r\n\r\n - PyTorch Version :1.10.0\r\n - OS (e.g., Linux): Ubuntu 18.04\r\n - How you installed PyTorch: Using pip3 according to [Nvidia Forum](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048)\r\n - Python version: 3.6\r\n - CUDA version: 10.2\r\n - TensorRT version: 8.2.1.8\r\n - CUDNN version: 8.2.0.1\r\n - GPU models and configuration: Jetson NX with JetPack 4.6\r\n - Any other relevant information: Installation is clean. 
I am using the CUDA and TensorRT that come flashed wich JetPack.\r\n\r\n## Additional context\r\nStarting from a clean installation of JetPack and Torch 1.10.0 installed by using official binaries, I describe the installation steps I did for using this repository with the errors I am getting.\r\n\r\n### 1- Install Bazel\r\n\r\n```\r\ngit clone -b v1.1.0 https://github.com/pytorch/TensorRT.git\r\nsudo apt-get install openjdk-11-jdk\r\nexport BAZEL_VERSION=$(cat /home/tkh/TensorRT.bazelversion)\r\nmkdir bazel\r\ncd bazel\r\ncurl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip\r\nunzip bazel-$BAZEL_VERSION-dist.zip\r\nbash ./compile.sh\r\ncp output/bazel /usr/local/bin/\r\n```\r\n\r\nAt this point I can see `bazel 5.1.1- (@non-git)` with `bazel --version`. \r\n\r\n### 2- Build Natively on aarch64 (Jetson)\r\n\r\nThen, I modified my WORKSPACE file of this repository in this way\r\n\r\n```\r\nworkspace(name = \"Torch-TensorRT\")\r\n\r\nload(\"@bazel_tools//tools/build_defs/repo:http.bzl\", \"http_archive\")\r\nload(\"@bazel_tools//tools/build_defs/repo:git.bzl\", \"git_repository\")\r\n\r\nhttp_archive(\r\n name = \"rules_python\",\r\n sha256 = \"778197e26c5fbeb07ac2a2c5ae405b30f6cb7ad1f5510ea6fdac03bded96cc6f\",\r\n url = \"https://github.com/bazelbuild/rules_python/releases/download/0.2.0/rules_python-0.2.0.tar.gz\",\r\n)\r\n\r\nload(\"@rules_python//python:pip.bzl\", \"pip_install\")\r\n\r\nhttp_archive(\r\n name = \"rules_pkg\",\r\n sha256 = \"038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d\",\r\n urls = [\r\n \"https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz\",\r\n \"https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz\",\r\n ],\r\n)\r\n\r\nload(\"@rules_pkg//:deps.bzl\", \"rules_pkg_dependencies\")\r\n\r\nrules_pkg_dependencies()\r\n\r\ngit_repository(\r\n name = \"googletest\",\r\n commit = \"703bd9caab50b139428cea1aaff9974ebee5742e\",\r\n remote = \"https://github.com/google/googletest\",\r\n shallow_since = \"1570114335 -0400\",\r\n)\r\n\r\n# External dependency for torch_tensorrt if you already have precompiled binaries.\r\nlocal_repository(\r\n name = \"torch_tensorrt\",\r\n path = \"/opt/conda/lib/python3.8/site-packages/torch_tensorrt\"\r\n)\r\n\r\n# CUDA should be installed on the system locally\r\nnew_local_repository(\r\n name = \"cuda\",\r\n build_file = \"@//third_party/cuda:BUILD\",\r\n path = \"/usr/local/cuda-10.2/\",\r\n)\r\n\r\nnew_local_repository(\r\n name = \"cublas\",\r\n build_file = \"@//third_party/cublas:BUILD\",\r\n path = \"/usr\",\r\n)\r\n#############################################################################################################\r\n# Tarballs and fetched dependencies (default - use in cases when building from precompiled bin and tarballs)\r\n#############################################################################################################\r\n\r\n\r\n####################################################################################\r\n# Locally installed dependencies (use in cases of custom dependencies or aarch64)\r\n####################################################################################\r\n\r\n# NOTE: In the case you are using just the pre-cxx11-abi path or just the cxx11 abi path\r\n# with your local libtorch, just point deps at the same path to satisfy bazel.\r\n\r\n# NOTE: NVIDIA's aarch64 PyTorch (python) wheel file uses the CXX11 ABI unlike 
PyTorch's standard\r\n# x86_64 python distribution. If using NVIDIA's version just point to the root of the package\r\n# for both versions here and do not use --config=pre-cxx11-abi\r\n\r\nnew_local_repository(\r\n name = \"libtorch\",\r\n path = \"/home/tkh-ad/.local/lib/python3.6/site-packages/torch\",\r\n build_file = \"third_party/libtorch/BUILD\"\r\n)\r\n\r\nnew_local_repository(\r\n name = \"libtorch_pre_cxx11_abi\",\r\n path = \"/home/tkh-ad/.local/lib/python3.6/site-packages/torch\",\r\n build_file = \"third_party/libtorch/BUILD\"\r\n)\r\n\r\nnew_local_repository(\r\n name = \"cudnn\",\r\n path = \"/usr/local/cud", "url": "https://github.com/pytorch/TensorRT/issues/1076", "state": "closed", "labels": [ "question", "channel: linux-jetpack" ], "created_at": "2022-05-20T13:56:30Z", "updated_at": "2022-05-20T22:35:42Z", "user": "mjack3" }, { "repo": "pytorch/data", "number": 433, "title": "HashChecker example is broken", "body": "https://github.com/pytorch/data/blob/6a8415b1ced33e5653f7a38c93f767ac8e1c7e79/torchdata/datapipes/iter/util/hashchecker.py#L36-L48\r\n\r\nRunning this will raise a `StopIteration`. The reason is simple: we want to read from a stream that was already exhausted by the hash checking. The docstring tells us that much\r\n\r\nhttps://github.com/pytorch/data/blob/6a8415b1ced33e5653f7a38c93f767ac8e1c7e79/torchdata/datapipes/iter/util/hashchecker.py#L32-L33\r\n\r\nand we correctly set `rewind=False`.", "url": "https://github.com/meta-pytorch/data/issues/433", "state": "closed", "labels": [ "documentation", "good first issue" ], "created_at": "2022-05-20T11:44:59Z", "updated_at": "2022-05-23T22:29:38Z", "comments": 1, "user": "pmeier" }, { "repo": "pytorch/functorch", "number": 823, "title": "Dynamic shape error in vmap with jacrev of jacrev", "body": "I'd like to compute the following expression in a vectorized way: first take the derivative wrt. to the data, and then take the derivative of this expression wrt. the parameters. I tried implementing it like this\r\n```\r\nfunc, params, buffer = make_functional_with_buffers(network)\r\nvmap(jacrev(jacrev(func, 2), 0), (None, None, 0))(params, buffers, data)\r\n```\r\n\r\nbut this isn't working since I get this error message:\r\n\r\n> RuntimeError: vmap: We do not support batching operators that can support dynamic shape. 
Attempting to batch over indexing with a boolean mask.\r\n\r\nI'm a bit surprised since I expected a second application of `jacrev` shouldn't change how `vmap` interacts with the function, but I guess that was incorrect.\r\n\r\n**Edit**:\r\nI also tried replacing this expression above using the `hessian` operation (and just ignoring the small computational overhead of computing the double derivatives I'm not interested in)\r\n```\r\nvmap(hessian(func, (0, 2)), (None, None, 0))(params, buffers, data)\r\n```\r\nbut that code resulted in the same error.\r\nCan you please point me to information about how to solve this problem?", "url": "https://github.com/pytorch/functorch/issues/823", "state": "closed", "labels": [], "created_at": "2022-05-20T10:41:39Z", "updated_at": "2022-05-25T12:12:20Z", "comments": 5, "user": "zimmerrol" }, { "repo": "pytorch/data", "number": 432, "title": "The developer install instruction are outdated", "body": "https://github.com/pytorch/data/blob/6a8415b1ced33e5653f7a38c93f767ac8e1c7e79/CONTRIBUTING.md?plain=1#L49-L56\r\n\r\nWhile debugging #418 it took my quite a while to figure out that I need to set \r\n\r\nhttps://github.com/pytorch/data/blob/6a8415b1ced33e5653f7a38c93f767ac8e1c7e79/tools/setup_helpers/extension.py#L41\r\n\r\nfor the C++ code to be built.", "url": "https://github.com/meta-pytorch/data/issues/432", "state": "closed", "labels": [ "documentation" ], "created_at": "2022-05-20T08:35:01Z", "updated_at": "2022-06-10T20:04:08Z", "comments": 3, "user": "pmeier" }, { "repo": "pytorch/pytorch", "number": 77732, "title": "multiprocessing: how to put a model which copied from main thread in the shared_queue", "body": "### \ud83d\udc1b Describe the bug\r\n\r\n1. If I shared a model in cuda, it raises\r\n```RuntimeError: Attempted to send CUDA tensor received from another process; this is not currently supported. Consider cloning before sending.``` \r\nSpecifically, I accept a model from the main process and return a duplication create by using ```copy.deepcopy(model)```\r\n2. ```torch.multiprocessing.manager.queue.get``` taken a long time to finish. 
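Before the full repro script below (pytorch/pytorch#77732), a minimal sketch of the usual workaround for the "Attempted to send CUDA tensor received from another process" error: keep the producer's copy on CPU, send CPU tensors (here a cloned state dict) through a `torch.multiprocessing` queue rather than a `Manager().Queue()`, and move the data onto the GPU only in the consumer.

```python
import torch
import torch.multiprocessing as mp
from torchvision import models

def producer(queue):
    # Producer keeps its copy on CPU: CUDA tensors received from another
    # process cannot be re-sent, so only CPU state goes on the queue.
    pure_model = models.squeezenet1_1()
    while True:
        state = {k: v.clone() for k, v in pure_model.state_dict().items()}
        state[next(iter(state))] *= 2                  # the "corruption" step
        queue.put(state)                               # CPU tensors via shared memory

def main(iters=10):
    ctx = mp.get_context("spawn")
    queue = ctx.Queue(maxsize=2)                       # torch.multiprocessing queue
    worker = ctx.Process(target=producer, args=(queue,), daemon=True)
    worker.start()
    model = models.squeezenet1_1().cuda().eval()
    x = torch.randn(1, 3, 224, 224, device="cuda")
    with torch.no_grad():
        for _ in range(iters):
            model.load_state_dict(queue.get())         # one host-to-device copy
            model(x)
    worker.terminate()

if __name__ == "__main__":
    main()
```

If the producer must deep-copy a whole `nn.Module`, the same rule applies: keep the copy on CPU and call `.cuda()` only in the process that consumes it.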
If the queue just passed a file descriptor, I don't think it should take 1/3 of the total time, is there any faster way?\r\nHere's my script\r\nI opened a [thread](https://discuss.pytorch.org/t/how-sharing-memory-actually-worked-in-pytorch/151706) in pytorch'forum also\r\n\r\nI think this is related to #10375 #9996 and #7204\r\n\r\n```python\r\nimport torch\r\n\r\nimport torch.multiprocessing as mp\r\n\r\nfrom copy import deepcopy\r\n\r\nfrom functools import partial\r\n\r\nfrom time import *\r\n\r\nfrom torchvision import models\r\n\r\nimport numpy as np\r\n\r\nfrom tqdm import tqdm\r\n\r\ndef parallel_produce(\r\n\r\n queue: mp.Queue,\r\n\r\n model_method,\r\n\r\n i\r\n\r\n) -> None:\r\n\r\n pure_model: torch.nn.Module = model_method()\r\n\r\n # if you delete this line, model can be passed\r\n pure_model.to('cuda')\r\n\r\n pure_model.share_memory()\r\n\r\n while True:\r\n\r\n corrupt_model = deepcopy(pure_model)\r\n\r\n dic = corrupt_model.state_dict()\r\n\r\n dic[list(dic.keys())[0]]*=2\r\n\r\n corrupt_model.share_memory()\r\n\r\n queue.put(corrupt_model)\r\n\r\ndef parallel(\r\n\r\n valid,\r\n\r\n iteration: int = 1000,\r\n\r\n process_size: int=2,\r\n\r\n buffer_size: int=2\r\n\r\n):\r\n\r\n pool = mp.Pool(process_size)\r\n\r\n manager = mp.Manager()\r\n\r\n queue = manager.Queue(buffer_size)\r\n\r\n SeedSequence = np.random.SeedSequence()\r\n\r\n model_method = partial(models.squeezenet1_1,True)\r\n\r\n async_result = pool.map_async(\r\n\r\n partial(\r\n\r\n parallel_produce,\r\n\r\n queue,\r\n\r\n model_method,\r\n\r\n ),\r\n\r\n SeedSequence.spawn(process_size),\r\n\r\n )\r\n\r\n time = 0\r\n\r\n for iter_times in tqdm(range(iteration)):\r\n\r\n start = monotonic_ns()\r\n\r\n # this takes a long time\r\n\r\n corrupt_model: torch.nn.Module = queue.get()\r\n\r\n time += monotonic_ns() - start\r\n\r\n corrupt_model.to(\"cuda\")\r\n\r\n corrupt_result = corrupt_model(valid)\r\n\r\n del corrupt_model\r\n\r\n pool.terminate()\r\n\r\n print(time / 1e9)\r\n\r\nif __name__ == \"__main__\":\r\n\r\n valid = torch.randn(1,3,224,224).to('cuda')\r\n\r\n parallel(valid)\r\n\r\n```\r\n\r\n#total time of queue.get taken\r\n\r\n![image](https://user-images.githubusercontent.com/72636351/168984869-cc4e884d-1774-4b9f-81c8-701f4f02b7dc.png)\r\n\r\n### Versions\r\n\r\nCollecting environment information...\r\nPyTorch version: 1.10.0+cu113\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.3\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Microsoft Windows 11 Home\r\nGCC version: Could not collect\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: N/A\r\n\r\nPython version: 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)\r\nPython platform: Windows-10-10.0.22000-SP0\r\nIs CUDA available: True\r\nCUDA runtime version: 11.5.119\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Laptop GPU\r\nNvidia driver version: 512.77\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.21.5\r\n[pip3] pytorchfi==0.6.0\r\n[pip3] torch==1.10.0+cu113\r\n[pip3] torch-tb-profiler==0.3.1\r\n[pip3] torchaudio==0.10.0+cu113\r\n[pip3] torchei==0.0.4\r\n[pip3] torchinfo==1.5.4\r\n[pip3] torchstat==0.0.7\r\n[pip3] torchsummary==1.5.1\r\n[pip3] torchvision==0.11.1+cu113\r\n[conda] Could not collect\r\n\r\ncc @VitalyFedyunin", "url": 
"https://github.com/pytorch/pytorch/issues/77732", "state": "closed", "labels": [ "module: multiprocessing", "triaged" ], "created_at": "2022-05-18T07:41:34Z", "updated_at": "2022-06-29T08:18:00Z", "user": "Force1ess" }, { "repo": "pytorch/vision", "number": 6034, "title": "Question about center-ness branch in FCOS", "body": "Hi, thank you for your great work. I'm learning FCOS these days. I find some differences about position of center-ness between code and paper. In paper(https://arxiv.org/abs/1904.01355), the center-ness branch is put together with the classification branch. \r\n![image](https://user-images.githubusercontent.com/31005897/168759441-07ea8b54-3fe8-43aa-bd0e-8c05067b1547.png)\r\n\r\n\r\nBut in the code, the center-ness and regression branches are put together.\r\nhttps://github.com/pytorch/vision/blob/a1232c212d7cf84806189910ba83bc36bcea916c/torchvision/models/detection/fcos.py#L202-L233\r\n\r\nCould you tell me why? thanks.", "url": "https://github.com/pytorch/vision/issues/6034", "state": "closed", "labels": [ "question" ], "created_at": "2022-05-17T07:59:37Z", "updated_at": "2022-05-18T00:47:41Z", "user": "WZMIAOMIAO" }, { "repo": "pytorch/TensorRT", "number": 1070, "title": "\u2753 [Question] How to convert Torch-TensorRT module to TRT engine?", "body": "## \u2753 Question\r\n\r\nHow to convert Torch-TensorRT module (*.ts) to TRT engine? Is there any Python API to do that?\r\n## What you have already tried\r\n\r\nIn examples, I found\r\n```cpp\r\nauto engine = torch_tensorrt::ts::convert_method_to_trt_engine(mod, \"forward\", compile_spec);\r\n```\r\nin https://github.com/pytorch/TensorRT/blob/master/examples/int8/qat/main.cpp\r\n\r\nIf this is the correct way to do converting? If yes, is there any Python API?\r\n## Environment\r\n\r\namd64 + Linux\r\nall software is newest version\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1070", "state": "closed", "labels": [ "question" ], "created_at": "2022-05-17T07:36:45Z", "updated_at": "2022-05-23T16:16:13Z", "user": "lingffff" }, { "repo": "pytorch/pytorch", "number": 77589, "title": "How to handle __module__ attribute for Public API bindings", "body": "While working on the NN onboarding lab (with corresponding closed PR: #77425 ), after registering the functional version of new module in `torch/nn/functional.py` The following test would fail ` pytest test/test_public_bindings.py` with:\r\n```Bash\r\nFull list:\r\n# torch.nn.functional.bias:\r\n - Is public: it is an attribute that does not start with `_` on a module that does not have `__all__` defined\r\n - Does NOT look public: because its `__module__` attribute (`torch._C._nn`) is not within the torch library or does not start with the submodule where it is defined (`torch.nn.functional`)\r\n - You can do either of these two things to fix this problem:\r\n - To make it NOT public: either define a `__all__` for `torch.nn.functional` or add a `_` at the beginning of the name\r\n - To make it look public: make sure the `__module__` is properly set and points to a submodule of `torch.nn.functional`\r\n``` \r\nI defined the functional version analogously to the linear module:\r\n```Python\r\nbias = _add_docstr(\r\n torch._C._nn.bias,\r\n r\"\"\"\r\nbias(input, bias) -> Tensor\r\n\r\nAdds a bias vector the last dimension of input tensor\r\n\r\nShape:\r\n - Input: math:`(*, num\\_features)` where `*` means any number of\r\n additional dimensions, including none\r\n - Bias: :math:`(num\\_features)` or :math:`()`\r\n - Output: :math:`(*, num\\_features)` where `*` means any number of\r\n additional dimensions, including none, same shape as Input\r\n\"\"\")\r\n```\r\n\r\nI add this function 'bias' to the allowlist here: `test/allowlist_for_publicAPI.json` in the list for `\"torch.nn.functional\"`\r\n\r\nWhen reading the test function though it says that no new functions should be added to this list. If I def bias above and then implement `bias.__module__ = 'torch.nn.functional'` This does indeed work. \r\n\r\nIs that the correct solution?\r\n Would it be a nicer API if there was a function analogous to `_add_docstr` which also defined the `__module__` attribute when setting the doc string.\r\n\r\n\r\ncc @mruberry", "url": "https://github.com/pytorch/pytorch/issues/77589", "state": "open", "labels": [ "module: tests", "triaged" ], "created_at": "2022-05-16T20:40:52Z", "updated_at": "2022-05-17T14:37:45Z", "user": "drisspg" }, { "repo": "pytorch/vision", "number": 6011, "title": "Imagenet Version not documented?", "body": "### \ud83d\udcda The doc issue\n\nHello torchvision team,\r\n\r\nFirst, thanks for the epic work you are all putting into this tool! I would like to know the exact version of imagenet used at pertaining different models in torchvision, for research purposes regarding model inversion. All of them use the 2012 Imagenet Dataset version or maybe some newer version?\r\n\r\nThank you,\r\nTudor\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/vision/issues/6011", "state": "open", "labels": [ "question" ], "created_at": "2022-05-13T11:24:32Z", "updated_at": "2022-05-13T11:51:24Z", "user": "tudorcebere" }, { "repo": "pytorch/pytorch", "number": 77341, "title": "The input of the forward part of my model is a tuple, which cannot be converted to onnx format according to the existing methods. 
Can you tell me how to solve it", "body": "### \ud83d\udc1b Describe the bug\n\nimport torch\r\nimport torch.nn as nn\r\n\r\n\r\nclass Model(nn.Module):\r\n def __init__(self):\r\n super(Model, self).__init__()\r\n self.conv1 = nn.Linear(32, 16)\r\n self.relu1 = nn.ReLU(inplace=True)\r\n self.relu2 = nn.ReLU(inplace=True)\r\n self.fc = nn.Linear(32, 2)\r\n\r\n def forward(self, x):\r\n x1, x2 = x\r\n x1 = self.conv1(x1)\r\n x1 = self.relu1(x1)\r\n x2 = self.conv1(x2)\r\n x2 = self.relu1(x2)\r\n out = torch.cat((x1, x2), dim=-1)\r\n out = self.fc(out)\r\n return out\r\n\r\n\r\nmodel = Model()\r\nmodel.eval()\r\n\r\nx1 = torch.randn((2, 10, 32))\r\nx2 = torch.randn((2, 10, 32))\r\nx = (x1, x2)\r\n\r\ntorch.onnx.export(model,\r\n x,\r\n 'model.onnx',\r\n input_names=[\"input\"],\r\n output_names=[\"output\"],\r\n dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}}\r\n )\r\nprint(\"Done\")\r\n\n\n### Versions\n\nBe like title!", "url": "https://github.com/pytorch/pytorch/issues/77341", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2022-05-12T06:38:49Z", "updated_at": "2022-05-18T01:04:49Z", "user": "singaln" }, { "repo": "pytorch/extension-ffi", "number": 26, "title": "How to fix \"undefined symbol: state error\" once importing a c shared library? ", "body": "I'm trying to import the compiled c shared library \"_crop_and_resize.so\", but I am receiving below error! \r\n\r\npytorch version = 1.9.0+cu102\r\n\r\nTorchvision version = 0.9.1\r\n\r\npython version = 3.6.10\r\n\r\n```\r\n>>> import _crop_and_resize as _backend\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nImportError: /home/username/DeepFacade01/roialign/roi_align/_ext/crop_and_resize/_crop_and_resize.so: undefined symbol: state\r\n>>> \r\n```", "url": "https://github.com/pytorch/extension-ffi/issues/26", "state": "closed", "labels": [], "created_at": "2022-05-12T00:01:49Z", "updated_at": "2022-05-14T22:33:53Z", "user": "Abbsalehi" }, { "repo": "pytorch/examples", "number": 1004, "title": "error: the following arguments are required: DIR", "body": "Excuse me\uff0chow can I deal with this problem\uff1f\r\n<img width=\"1227\" alt=\"image\" src=\"https://user-images.githubusercontent.com/58496897/167763473-f5d2a189-3ac5-4e77-9451-c6817065d5ed.png\">", "url": "https://github.com/pytorch/examples/issues/1004", "state": "closed", "labels": [], "created_at": "2022-05-11T03:31:07Z", "updated_at": "2022-07-01T16:07:30Z", "comments": 1, "user": "Elijah123463" }, { "repo": "pytorch/pytorch", "number": 77228, "title": "How can i remove 'lib/libtorch_cuda.so' gracefully to make deploy more small. \u3010Questions and Help\u3011", "body": "i want import torch in my project . and i will not use 'cuda' clearly .\r\n\r\nhow can i to remove 'lib/libtorch_cuda.so' gracefully to make deploy package more smaller. (serverless deploy)\r\n\r\ni remove lib/libtorch_cuda.so ,then cmd 'python3 index.py' . 
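For the tuple-input ONNX question above (pytorch/pytorch#77341), a sketch of the usual workaround, reusing the `Model`, `x1`, and `x2` definitions from that snippet: `torch.onnx.export` unpacks the top-level `args` tuple into positional arguments, so a model whose single input is a tuple needs the tuple wrapped once more. The input names below are illustrative, since the tuple is flattened into two graph inputs.

```python
import torch

# model, x1, x2 as defined in the snippet above; note the extra parentheses
# around (x1, x2) so the whole tuple is passed as the single argument `x`.
torch.onnx.export(
    model,
    ((x1, x2),),
    "model.onnx",
    input_names=["input1", "input2"],
    output_names=["output"],
    dynamic_axes={"input1": {0: "batch"}, "input2": {0: "batch"}, "output": {0: "batch"}},
)
```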
the result show...\r\n\r\n**Traceback (most recent call last):\r\n File \"index.py\", line 7, in <module>\r\n import torch\r\n File \"/root/python/src/pic-linux_all/torch/__init__.py\", line 199, in <module>\r\n from torch._C import * # noqa: F403\r\nImportError: libtorch_cuda.so: cannot open shared object file: No such file or directory**\r\n\r\nwhat should I do.\r\n\r\n### torch : i use 'pip install' to install it\r\n\r\n\r\n### Versions\r\n\r\nPython version: 3.8.0 (default, May 11 2022, 08:57:48) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] (64-bit runtime)\r\nPython platform: Linux-3.10.0-514.26.2.el7.x86_64-x86_64-with-glibc2.17\r\nIs CUDA available: N/A\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] No relevant packages\r\n[conda] Could not collect\r\n", "url": "https://github.com/pytorch/pytorch/issues/77228", "state": "closed", "labels": [ "triaged" ], "created_at": "2022-05-11T03:27:31Z", "updated_at": "2022-05-12T00:26:04Z", "user": "wangping886" }, { "repo": "pytorch/TensorRT", "number": 1049, "title": "\u2753 [Question] How can I move the converted tensorRT model in a Jetson system?", "body": "## \u2753 Question\r\n\r\nI optimized a pytorch module with torch-TensorRT. How can I move the engine to a Jetson?\r\n\r\n## What you have already tried\r\n\r\n\r\nI tried torch.jit.load('trt_traced_model.ts') \r\n\r\nbut get **__torch__.torch.classes.tensorrt.Engine** error\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 10.0\r\n - OS (e.g., Linux): ARM Ubuntu 18\r\n - How you installed PyTorch : pip from offical Nvidia support\r\n - Python version: 3.6\r\n - CUDA version: 10.2\r\n - GPU models and configuration: Jetson NX\r\n\r\n## Additional context\r\nI have a Jetson NX system with jetpack 4.6, torch v0.10.0 and torchvision v0.11.0 where I want to deploy a tensorRT model.\r\n\r\nFor that in my main computer I installed this repository and converted my model to tensorRT successfully. I need to move it into the Jetson for production.\r\n\r\nThis is the code that I use to export to tensorRT (main computer)\r\n\r\n```\r\nmodel.cuda().eval()\r\nmodel = torch.jit.trace(model, [torch.rand(1, 3, 224, 224).cuda()])\r\ntrt_model_fp32 = torch_tensorrt.compile(model,\r\n inputs=[torch_tensorrt.Input((1, 3, 224, 224))],\r\n enabled_precisions=torch.float32, # Run with FP32\r\n )\r\ntorch.jit.save(trt_model_fp32, dir)\r\n```\r\n\r\nThis is in my Jetson\r\n\r\n`model = torch.jit.load(dir)`\r\n\r\nbut i get **__torch__.torch.classes.tensorrt.Engine** error\r\n\r\nJetson hasn't installed torch-tensorRT. How can I move the tensorRT model? 
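A small sketch of the loading-side answer that usually applies to this `__torch__.torch.classes.tensorrt.Engine` error (assuming the Torch-TensorRT runtime is also installed on the target machine): the saved TorchScript program references that class, which is only registered once `torch_tensorrt` has been imported, so import it before `torch.jit.load`.

```python
import torch
import torch_tensorrt  # noqa: F401  registers the tensorrt.Engine TorchScript class

model = torch.jit.load("trt_traced_model.ts").cuda().eval()
with torch.no_grad():
    out = model(torch.rand(1, 3, 224, 224, device="cuda"))
```

Note also that serialized TensorRT engines are generally tied to the GPU architecture and TensorRT version they were built with, so recompiling on the Jetson itself (rather than shipping an engine built on a desktop GPU) is usually required.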
Do I need to install this repo also in the Jetson?\r\n\r\nThanks!\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1049", "state": "closed", "labels": [ "question" ], "created_at": "2022-05-10T15:08:47Z", "updated_at": "2022-05-10T15:45:51Z", "user": "mjack3" }, { "repo": "pytorch/TensorRT", "number": 1047, "title": "can torch-tensorrt-1.1.0 support libtorch1.9 and cuda10.2?", "body": "## \u2753 Question\r\n\r\nI want to know if torch-tensorrt-1.1.0 can be compiled with libtorch1.9 and cuda-10.2 ?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.9.0):\r\n - CPU Architecture: x86\r\n - OS (e.g., Linux): linux\r\n - CUDA version:10.2\r\n - GPU models and configuration:T4\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1047", "state": "closed", "labels": [ "question" ], "created_at": "2022-05-10T11:54:58Z", "updated_at": "2022-05-11T07:27:45Z", "user": "f291400" }, { "repo": "pytorch/TensorRT", "number": 1045, "title": "\u2753 __torch__.torch.classes.tensorrt.Engine what does it mean?", "body": "Hello community and thanks for this repo.\r\n\r\n\r\n## \u2753 Question\r\n\r\nHow can I load a tensorRT model after using torch.jit.save?\r\n\r\n## What you have already tried\r\n\r\n```\r\nimport torch\r\nmodel = torch.jit.load('trt_model.torch-tensorrt') # give error __torch__.torch.classes.tensorrt.Engine\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.10\r\n - CPU Architecture: x64\r\n - OS (e.g., Linux): 20.04\r\n - How you installed PyTorch: conda\r\n - Python version: 3.8\r\n - CUDA version: 11.6\r\n - GPU models and configuration: Nvidia RTX3090\r\n - Information: torchvision installed by pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases\r\n\r\n\r\n## Additional context\r\n\r\nMy code is very simple:\r\n\r\n```\r\nimport torch\r\nimport torch_tensorrt\r\n\r\ntraced_model = torch.jit.trace(eager_model, [torch.rand(1, 3, 224, 224).to(device)])\r\ntrt_model = torch_tensorrt.compile(traced_model,\r\n inputs= [torch_tensorrt.Input((1, 3, 224, 224))],\r\n enabled_precisions={torch.float32})\r\ntorch.jit.save(trt_model, 'trt_model.torch-tensorrt')\r\nmodel = torch.jit.load('trt_model.torch-tensorrt') # give error __torch__.torch.classes.tensorrt.Engine\r\n```\r\nAt the end, I want to move the trt_model.torch-tensorrt file into an Jetson for load with torch.jit.load\r\n\r\nThanks\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1045", "state": "closed", "labels": [ "question" ], "created_at": "2022-05-10T09:56:05Z", "updated_at": "2022-09-03T02:25:25Z", "user": "mjack3" }, { "repo": "pytorch/data", "number": 391, "title": "Allow users to provide `auth` and other data to `HttpReader`", "body": "### \ud83d\ude80 The feature\n\nThis should extend the functionality of `HttpReader` to send more complicated POST request.\r\nFor authentication, users don't necessarily need to provide via `http://user:password@domain.com/`. 
They should be able to provide `auth` to the `HttpReader` and relay it to `request`.\r\n\r\nhttps://github.com/pytorch/data/blob/8b95954ce431ade5905448ebd9a2909e30566377/torchdata/datapipes/iter/load/online.py#L38-L43\n\n### Motivation, pitch\n\nVersatile `HttpReader`\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/meta-pytorch/data/issues/391", "state": "closed", "labels": [ "good first issue", "help wanted" ], "created_at": "2022-05-09T22:36:27Z", "updated_at": "2022-05-11T19:28:14Z", "comments": 3, "user": "ejguan" }, { "repo": "pytorch/TensorRT", "number": 1034, "title": "torch_tensorrt.compile dynamic input shape failed", "body": "## dynamic input shape failed\r\n\r\n![image](https://user-images.githubusercontent.com/13358476/167369566-faabca74-ba4d-453c-a7b8-ef55aa6fc500.png)\r\n\r\n\r\n![image](https://user-images.githubusercontent.com/13358476/167369373-9980b51f-330b-4905-a0c8-7e1ea529fbc7.png)\r\n\r\nif set min_shape=[1,3,h, h] and op_shape= [1,3, h, h] and max_shape = [1,3, h, h] , which h is 32 or 512 or 1024, it works. but if set\r\nmin_shape = [1, 3, 32, 32] and op_shape=[1,3,512,512] and max_shape = [1, 3, 1024, 1024], it is failed .\r\n\r\n## Environment\r\n![image](https://user-images.githubusercontent.com/13358476/167370248-ad9ff570-fdfa-4b6b-af2a-3b05d063820d.png)\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1034", "state": "closed", "labels": [ "question", "component: core", "No Activity" ], "created_at": "2022-05-09T08:25:50Z", "updated_at": "2022-08-21T00:02:41Z", "user": "f291400" }, { "repo": "pytorch/pytorch", "number": 77016, "title": "Where is fx2trt fx to tensorrt tool?", "body": "### \ud83d\udcda The doc issue\n\nI found there are some PR:\r\n\r\nhttps://github.com/jerryzh168/pytorch/tree/fb09fd4ab4ba618db148f9dfc035be589efb9355/torch/fx/experimental/fx2trt\r\n\r\nwhich persist of fx2trt tool, where does it goes in main stream pytorch code?\n\n### Suggest a potential alternative/fix\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/77016", "state": "open", "labels": [ "triaged", "module: fx" ], "created_at": "2022-05-07T08:43:04Z", "updated_at": "2022-07-20T21:25:20Z", "user": "lucasjinreal" }, { "repo": "pytorch/serve", "number": 1609, "title": "How to set model batch size with TS_ environmental var", "body": "## \ud83d\udcda Documentation\r\n\r\nHi, I can't seem to figure out how to set the batch size with an environmental parameter. 
\r\n\r\nMy `config.properties` looks like this:\r\n\r\n```\r\ninference_address=http://0.0.0.0:8080\r\nmanagement_address=http://0.0.0.0:8081\r\nnumber_of_netty_threads=32\r\nenable_envvars_config=true\r\njob_queue_size=1000\r\nmodel_store=/opt/ml/model\r\nload_models=all\r\nenable_metrics_api=false\r\nmodels={\\\r\n \"model\": {\\\r\n \"1.0\": {\\\r\n \"defaultVersion\": true,\\\r\n \"marName\": \"model.mar\",\\\r\n \"runtime\": \"python3\",\\\r\n \"minWorkers\": 1,\\\r\n \"maxWorkers\": 4,\\\r\n \"batchSize\": 16,\\\r\n \"maxBatchDelay\": 50,\\\r\n \"responseTimeout\": 120\\\r\n }\\\r\n }\\\r\n}\r\n```\r\n\r\nBut I would like to be able to override `batchSize` with an env variable so that load testing is more simple (just creating endpoints with different env params instead of needing to generate different config files)\r\n", "url": "https://github.com/pytorch/serve/issues/1609", "state": "closed", "labels": [], "created_at": "2022-05-05T14:25:43Z", "updated_at": "2022-05-09T21:52:41Z", "user": "austinmw" }, { "repo": "pytorch/vision", "number": 5945, "title": "Training recipe for these weights", "body": "https://github.com/pytorch/vision/blob/62740807c18e68bb0acd85895dca527f9a655bd5/torchvision/models/vision_transformer.py#L377\r\n\r\nDoes anyone know how these weights were generated. Where they training from scratch only on ImageNet 1k or was it pre-trained on ImageNet 21k? Looking at the original Vision transformer paper: https://arxiv.org/abs/2010.11929 I'm not quite sure where the accuracy numbers in these lines are coming from:\r\n\r\n```python\r\nclass ViT_B_32_Weights(WeightsEnum):\r\n IMAGENET1K_V1 = Weights(\r\n url=\"https://download.pytorch.org/models/vit_b_32-d86f8d99.pth\",\r\n transforms=partial(ImageClassification, crop_size=224),\r\n meta={\r\n **_COMMON_META,\r\n \"num_params\": 88224232,\r\n \"min_size\": (224, 224),\r\n \"recipe\": \"https://github.com/pytorch/vision/tree/main/references/classification#vit_b_32\",\r\n \"metrics\": {\r\n \"acc@1\": 75.912,\r\n \"acc@5\": 92.466,\r\n },\r\n },\r\n )\r\n DEFAULT = IMAGENET1K_V1\r\n```\r\n\r\nHere's the corresponding numbers presented in the original Vision Transformer paper, ViT-B/32 accuracy of 75.912 is not in either the ImageNet 1k or the ImageNet 21k columns:\r\n\r\n![image](https://user-images.githubusercontent.com/1216594/166825518-d0279f89-b604-4ec7-9b61-ca13da794a71.png)\r\n\r\n\n\ncc @datumbox", "url": "https://github.com/pytorch/vision/issues/5945", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2022-05-04T21:07:25Z", "updated_at": "2022-05-05T16:49:12Z", "user": "briancheung" }, { "repo": "pytorch/serve", "number": 1606, "title": "How to distribute multi models to each gpu?", "body": "I have two models: model0,model1 and two gpus: gpu0,gpu1. 
I want to pin model0 to gpu0 and model1 to gpu1, meaning that the work of model0 will always run on gpu0 and the work of model1 will always run on gpu1.\r\nHow can I make this happen?\r\nIs it possible to implement this via the serve configuration or handle.py?\r\nCould you help me? Thank you very much!", "url": "https://github.com/pytorch/serve/issues/1606", "state": "open", "labels": [ "enhancement" ], "created_at": "2022-05-04T16:08:39Z", "updated_at": "2022-05-12T01:56:17Z", "user": "dzcmingdi" }, { "repo": "pytorch/data", "number": 382, "title": "The protocol of fsspec can be a list of strings rather than a single string", "body": "### \ud83d\udc1b Describe the bug\n\nhttps://github.com/pytorch/data/blob/92d18b088eb43b9805bed5c90a0afca87292a338/torchdata/datapipes/iter/load/fsspec.py#L61-L62\r\nThe `fs.protocol` can be a list rather than a string. For example, for `s3` it will return the list `['s3', 's3a']`.\r\nThen, there will be an error due to `self.root.startswith(fs.protocol)`, since we can't run `startswith` with a list.\n\n### Versions\n\nmain", "url": "https://github.com/meta-pytorch/data/issues/382", "state": "closed", "labels": [ "good first issue" ], "created_at": "2022-05-03T21:47:05Z", "updated_at": "2022-05-04T16:50:16Z", "comments": 1, "user": "ejguan" }, { "repo": "pytorch/TensorRT", "number": 1019, "title": "Missing 3 input files: libnvinfer_plugin.so, libcudnn.so and libnvinfer.so", "body": "## \u2753 Question\r\nI've been looking at all the great progress made previously when it comes to using Torch-TensorRT on Windows.\r\nI made progress to the point that it seems like only one thing is missing: the 3 .so files mentioned above.\r\nHow are they supposed to be built? Am I missing something? Is there any fix that I missed?\r\n\r\n\r\n## What you have already tried\r\n\r\nI followed the guides from #856\r\n\r\n## Environment\r\n\r\nWindows 10, trying to build for Visual Studio usage\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.11.0\r\n - CPU Architecture: i9\r\n - OS (e.g., Linux): Windows 10\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): libtorch\r\n - Build command you used (if compiling from source): bazel\r\n - Are you using local sources or building from archives: building from archive\r\n - Python version: 3.9\r\n - CUDA version: 11.5\r\n - GPU models and configuration: RTX3090\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1019", "state": "closed", "labels": [ "question", "channel: windows" ], "created_at": "2022-05-03T01:10:39Z", "updated_at": "2022-08-01T16:01:45Z", "user": "fschvart" }, { "repo": "pytorch/TensorRT", "number": 1014, "title": "\u2753 [Question] Building torch_tensorrt.lib on Windows", "body": "## \u2753 Question\r\n\r\nI am wondering how to build the torch_tensorrt.lib on Windows.\r\n\r\n## What you have already tried\r\n\r\nI have followed #960 and #856 (with the same WORKSPACE as the latter) and managed to successfully build torch_tensorrt.dll. However, I need the .lib file in order to compile my Libtorch program. I tried linking to some of the .lib files that were created already (like bazel-out\x64_windows-opt\bin\cpp\torch_tensorrt.lo.lib), but that didn't work.
I expect it's a fairly simple bazel command, but I have no idea where to put it.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.10.0 (release)\r\n - CPU Architecture: x86-64\r\n - OS (e.g., Linux): Windows 10\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): libtorch from pytorch.org\r\n - Build command you used (if compiling from source): bazel build //:libtorchtrt --compilation_mode opt\r\n - CUDA version: 11.3\r\n - Any other relevant information: Using VS2019\r\n\r\n## Additional context\r\n\r\nMy libtorch program runs fine even if I include the torch-tensorrt headers, but throws the following errors as soon as I try to use torch_tensorrt::torchscript::CompileSpec and call torch_tensorrt::torchscript::compile:\r\nError\tLNK1120\t2 unresolved externals\tOmkar 1.10.0+cu113\tB:\\Programming\\_Current Projects\\HelloLibTorch\\x64\\Release\\HelloTorch.exe\t1\r\n\t\r\nError\tLNK2019\tunresolved external symbol \"public: __cdecl torch_tensorrt::torchscript::CompileSpec::CompileSpec(class std::vector<class std::vector<__int64,class std::allocator<__int64> >,class std::allocator<class std::vector<__int64,class std::allocator<__int64> > > >)\" (??0CompileSpec@torchscript@torch_tensorrt@@QEAA@V?$vector@V?$vector@_JV?$allocator@_J@std@@@std@@V?$allocator@V?$vector@_JV?$allocator@_J@std@@@std@@@2@@std@@@Z) referenced in function main\tOmkar 1.10.0+cu113\tB:\\Programming\\_Current Projects\\HelloLibTorch\\main.obj\t1\t\r\n\r\nError\tLNK2019\tunresolved external symbol \"struct torch::jit::Module __cdecl torch_tensorrt::torchscript::compile(struct torch::jit::Module const &,struct torch_tensorrt::torchscript::CompileSpec)\" (?compile@torchscript@torch_tensorrt@@YA?AUModule@jit@torch@@AEBU345@UCompileSpec@12@@Z) referenced in function main\tOmkar 1.10.0+cu113\tB:\\Programming\\_Current Projects\\HelloLibTorch\\main.obj\t1\t\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1014", "state": "closed", "labels": [ "question", "channel: windows" ], "created_at": "2022-04-29T14:24:59Z", "updated_at": "2022-09-02T18:09:26Z", "user": "jonahclarsen" }, { "repo": "pytorch/TensorRT", "number": 1006, "title": "[Question]Doesn't torch tensorrt support LSTM-based decoder optimization?? ", "body": "## \u2753 Question\r\nDoesn't torch tensorrt support LSTM-based decoder optimization? The reason for asking this question is that the model forward and model test structures learned in the seq2seq structure are different (beam search, sequence inference ..), and the optimized model cannot be used by inputting only training forward logic.\r\n\r\n## Environment\r\nTensorrt 22.03 docker image: \r\nhttps://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/rel_22-03.html#rel_22-03\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/1006", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2022-04-27T06:50:45Z", "updated_at": "2022-11-10T00:02:45Z", "user": "koliaok" }, { "repo": "pytorch/TensorRT", "number": 1001, "title": "\u2753 [Question] How to differentiate a Torch-TensorRT model from a pure TorchScript model? ", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nI'm developing a C++ inference server to deploy Torch-TensorRT models and TorchScript models. 
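One rough heuristic, sketched purely as an illustration (the `"tensorrt"` substring check is an assumption about how compiled engines show up in the serialized graph, not a documented contract), is to load the module with the torch_tensorrt runtime available and scan its graph:

```python
import torch
import torch_tensorrt  # needed so the custom engine class can be deserialized

def looks_like_torch_tensorrt(path: str) -> bool:
    """Heuristic only: scan a serialized TorchScript module for TensorRT references."""
    module = torch.jit.load(path)
    graph_text = str(module.graph)  # graph of forward(); assumes the module has a forward method
    return "tensorrt" in graph_text.lower()
```

This is only substring matching, though, so a supported way to tell the two apart would still be much better. 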
Since the Torch-TensorRT compilation process is done AOT, Is there a way to know wether the given .pt model file is a Torch-TensorRT model or a pure TorchScript model?\r\n\r\nThanks!", "url": "https://github.com/pytorch/TensorRT/issues/1001", "state": "closed", "labels": [ "question" ], "created_at": "2022-04-26T12:29:20Z", "updated_at": "2022-04-27T02:05:50Z", "user": "tiandi111" }, { "repo": "pytorch/vision", "number": 5872, "title": "Keypoint RCNN visibility flag for keypoints", "body": "### \ud83d\ude80 The feature\n\nHello All,\r\n\r\nThis is only my first day posting a request here so I apologize for any errors on my part. Also, sorry for the long post below.\r\n\r\nThe purpose of this post is to request an improvement/correction for the visibility flag behavior of Keypoint RCNN. Based on my results and those of other users I have encountered on different forums and sites, Keypoint RCNN always predicts a flag value of v=1 for all keypoints, no matter the training flag value for v>0 (even v=0), and predicts coordinates for them as well. In other words, the model does not appear to actually learn the flag value. My understanding is that the flag should be learned and is supposed to follow the COCO convention (v=0 \u2018not in image\u2019; v=1 \u2018occluded\u2019; v=2 \u2018visible\u2019) but does not do so.\r\n\r\n\r\n\r\n\r\n\n\n### Motivation, pitch\n\nGiven the usefulness of the visibility flags, being able to accurately predict them and use the information during inference to mark occluded vs. visible keypoints would be an important addition to the model capability. My understanding is that this is already supposed to be the case, but for some reason the documentation as well as the model behavior on this are lacking. I have found the performance of Keypoint RCNN overall to be very good and I have successfully fine-tuned it on my custom (multiclass) dataset with very good success in predicting the class, bbox, and keypoints. It would be very helpful to be able to distinguish between keypoints using visibility flag. \n\n### Alternatives\n\n_No response_\n\n### Additional context\n\nMy hope in writing here is to request and encourage updating of the model to address the issue/addition suggested. If not, then if I could please get some help in tracking down the source code where Keypoint RCNN is converting all flags to v=1 and handling/training flags so that I might be able to modify this behavior, as the model does not seem to learn the flag values presently. In my use case, what I want is for Keypoint RCNN to successfully predict the right flag (e.g. v=0) so that I can use it later on, or at least predict a coordinate of (0.0,0.0) (or some other fixed value) for keypoints with v=0. The need is to be able to distinguish between visible and occluded keypoints. Even just two learned flags that work as expected (v=0 and v=1) would be very useful to have. Any suggestions or guidance would be great. 
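One stop-gap that might help in the meantime (sketched below purely as an illustration; it does not make the model learn the flag, and the 0.0 threshold is an arbitrary assumption that would need tuning per dataset) is to derive a pseudo-visibility value from the `keypoints_scores` tensor that Keypoint R-CNN already returns at inference time:

```python
import torch

def add_pseudo_visibility(prediction, score_threshold=0.0):
    """prediction: one element of the list returned by a keypointrcnn model in eval mode."""
    keypoints = prediction["keypoints"]        # (N, K, 3): x, y, and the model's "vis" column
    scores = prediction["keypoints_scores"]    # (N, K) unbounded per-keypoint scores
    visibility = (scores > score_threshold).to(keypoints.dtype)
    # Overwrite the third column, which the model always fills with 1.
    prediction["keypoints"] = torch.cat([keypoints[..., :2], visibility.unsqueeze(-1)], dim=-1)
    return prediction
```

That at least lets downstream code separate confident keypoints from low-evidence ones, but it is not the learned COCO v=0/1/2 behavior this request is about. 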
Thanks for taking the time to reply.\n\ncc @datumbox @YosuaMichael", "url": "https://github.com/pytorch/vision/issues/5872", "state": "open", "labels": [ "question", "topic: object detection" ], "created_at": "2022-04-24T21:44:35Z", "updated_at": "2024-08-26T08:33:51Z", "user": "mbadal1996" }, { "repo": "pytorch/torchx", "number": 470, "title": "Improve torchx/resources README", "body": "## \ud83d\udcda Documentation\r\n\r\n## Link\r\nhttps://github.com/pytorch/torchx/tree/main/resources\r\n\r\n## What does it currently say?\r\n```\r\n**Creating EKS cluster**\r\neksctl create cluster -f torchx-dev-eks.yml\r\n\r\n**Creating KFP**\r\nkfctl apply -V -f torchx-dev-kfp.yml\r\n```\r\n\r\n## What should it say?\r\nFor the **Creating EKS Cluster** it should actually list out how to create `torchx-dev-eks.yml`. The instructions are in `torchx-dev-eks-template.yml`, so just pulling those out to the README would be good.\r\n\r\nFor **Creating KFP**, it is missing the steps to generate `torchx-dev-kfp.yml`. I'm assuming you do this by following the instructions on the aws eks kfp website (https://www.kubeflow.org/docs/distributions/aws/deploy/install-kubeflow/), but a quick look at those docs doesn't seem like its obvious.\r\n\r\n## Why?\r\nFollowing the README step by step doesn't work due to missing files.\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/470", "state": "closed", "labels": [ "documentation" ], "created_at": "2022-04-22T18:03:56Z", "updated_at": "2022-06-02T21:26:12Z", "comments": 1, "user": "kiukchung" }, { "repo": "pytorch/PiPPy", "number": 149, "title": "Figure out how to get `**kwargs` working with MetaTracer", "body": "https://github.com/pytorch/PiPPy/pull/138/files#diff-6d49246d94990874a38b3d05e50ea765d5c0a75270de5eec6dcda377f934976dR251\r\n\r\nMichael B from HF is also looking into this, maybe we'll figure something out together", "url": "https://github.com/pytorch/PiPPy/issues/149", "state": "closed", "labels": [], "created_at": "2022-04-21T16:34:00Z", "updated_at": "2022-06-10T18:19:27Z", "user": "jamesr66a" }, { "repo": "pytorch/vision", "number": 5845, "title": "about paste_mask_in_image question in mask rcnn", "body": "First of all, thanks for your great work.\r\nRecently, I was studying Mask R-CNN code in this repo. I have some questions, and I hope you could answer it when you are free.\r\n\r\n\r\nFirst question, Why do I need to expand the mask and box when mapping mask back to the original scale. I read the original paper of Mask R-CNN, which only said \"The m\u00d7m floating-number mask output is then resized to the RoI size, and binarized at a threshold of 0.5.\".\r\nhttps://github.com/pytorch/vision/blob/35d1d9d3f01016c65ac7f3d0700d2474929acdea/torchvision/models/detection/roi_heads.py#L474-L477\r\n\r\n\r\nSecond question, What is the function of TO_REMOVE here?\r\nhttps://github.com/pytorch/vision/blob/35d1d9d3f01016c65ac7f3d0700d2474929acdea/torchvision/models/detection/roi_heads.py#L403-L409\r\n\r\nLook forward to your reply. 
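For readers landing here, the core of what `paste_mask_in_image` does — leaving aside the expansion and `TO_REMOVE` details being asked about — can be sketched roughly as follows (a simplified illustration, not the actual torchvision code):

```python
import torch
import torch.nn.functional as F

def paste_mask_simplified(mask, box, im_h, im_w, threshold=0.5):
    """mask: (M, M) float mask from the mask head; box: (4,) tensor x1, y1, x2, y2."""
    x1, y1, x2, y2 = box.round().long().tolist()
    w, h = max(x2 - x1, 1), max(y2 - y1, 1)
    # Resize the fixed-size mask prediction to the box size, then binarize at the threshold.
    mask = F.interpolate(mask[None, None], size=(h, w), mode="bilinear", align_corners=False)[0, 0]
    mask = (mask >= threshold).to(torch.uint8)
    im_mask = torch.zeros(im_h, im_w, dtype=torch.uint8)
    im_mask[y1:y1 + h, x1:x1 + w] = mask  # assumes the box lies fully inside the image
    return im_mask
```

The expansion and `TO_REMOVE` handling in the real implementation are refinements on top of this basic resize-threshold-paste flow. 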
:laughing: \r\n\n\ncc @datumbox @YosuaMichael", "url": "https://github.com/pytorch/vision/issues/5845", "state": "closed", "labels": [ "question", "topic: object detection" ], "created_at": "2022-04-21T08:52:39Z", "updated_at": "2022-05-18T00:51:04Z", "user": "WZMIAOMIAO" }, { "repo": "pytorch/torchx", "number": 464, "title": "Volcano job scheduling issues due to bad upgrade", "body": "This is an after the fact issue to help anyone who stumbles upon it later resolve the issue.\r\n\r\n## Pod won't schedule due to CreateContainerConfigError\r\n\r\n```\r\nWarning Failed 12m (x12 over 15m) kubelet Error: couldn't find key VC_PYTHON-0_HOSTS in ConfigMap default/torchxcomponentspython-bwg4m0sktd9mwc-svc\r\n```\r\n\r\n```\r\n state:\r\n waiting:\r\n message: couldn't find key VC_PYTHON-0_HOSTS in ConfigMap default/torchxcomponentspython-bwg4m0sktd9mwc-svc\r\n reason: CreateContainerConfigError\r\n```\r\n\r\nThis is likely due to a Volcano version upgrade issue. Volcano 1.4 changed the ENV key format to correctly handle `-` characters. This means if a job was submitted under Volcano 1.3 and then is upgraded to Volcano 1.4 before running the job will fail to schedule. You just need to relaunch your job under the new version.\r\n\r\n## Partial Upgrade Issues\r\n\r\n```\r\nError creating pods: [failed to create pod pv5xp2lpf65vz-python-0-0, err: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"Internal error occurred: failed calling webhook \\\"mutatepod.volcano.sh\\\": the server could not find the requested resource\", Reason:\"InternalError\", Details:(*v1.StatusDetails)(0xc002a125a0), Code:500}}]\r\n```\r\n\r\nWhen you upgrade Volcano you need to completely delete the `volcano-system` namespace and all resources within it before running `kubectl apply .../development.yaml`. If you don't, some of the setup jobs resources will conflict and won't run for the new version leaving the cluster in a bad state.", "url": "https://github.com/meta-pytorch/torchx/issues/464", "state": "closed", "labels": [ "bug", "documentation", "kubernetes" ], "created_at": "2022-04-20T19:14:15Z", "updated_at": "2022-04-20T20:30:06Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/vision", "number": 5838, "title": "return_layers problem about fasterrcnn_mobilenet_v3_large_fpn", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nThere may be a problem with the setting of return_layers in fasterrcnn_mobilenet_v3_large_fpn. If the default setting is used, the resolution of collected feature map is the same. 
As a result, the effect of detecting small targets will become worse.\r\nhttps://github.com/pytorch/vision/blob/e8cb0bacd86c49e67a7e1a5f83c6da866bc451cf/torchvision/models/detection/backbone_utils.py#L225-L226\r\ntest code:\r\n```python\r\nimport torch\r\nfrom torchvision.models.detection import fasterrcnn_mobilenet_v3_large_fpn\r\nmodel = fasterrcnn_mobilenet_v3_large_fpn(pretrained_backbone=False)\r\nimg = torch.randn(1, 3, 224, 224)\r\noutputs = model.backbone(img)\r\n[print(f\"{k} shape: {v.shape}\") for k, v in outputs.items()]\r\n```\r\noutput:\r\n```\r\n0 shape: torch.Size([1, 256, 7, 7])\r\n1 shape: torch.Size([1, 256, 7, 7])\r\npool shape: torch.Size([1, 256, 4, 4])\r\n```\r\n`feauture map: 0` and `feature map: 1` have same resolution(`7x7`).\r\n\r\nmay need to change:\r\n```\r\nreturned_layers = [num_stages - 2, num_stages - 1]\r\n```\r\nto:\r\n```\r\nreturned_layers = [num_stages - 3, num_stages - 1]\r\n```\r\n\r\noutput:\r\n```\r\n0 shape: torch.Size([1, 256, 14, 14])\r\n1 shape: torch.Size([1, 256, 7, 7])\r\npool shape: torch.Size([1, 256, 4, 4])\r\n```\r\n\r\n### Versions\r\n\r\n```\r\nPyTorch version: 1.10.0+cpu\r\nIs debug build: False\r\nCUDA used to build PyTorch: Could not collect\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.6 LTS (x86_64)\r\nGCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0\r\nClang version: Could not collect\r\nCMake version: version 3.10.2\r\nLibc version: glibc-2.27\r\n\r\nPython version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.17\r\nIs CUDA available: False\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: GPU 0: Quadro P620\r\nNvidia driver version: 470.103.01\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.21.3\r\n[pip3] torch==1.10.0+cpu\r\n[pip3] torchaudio==0.10.0+cpu\r\n[pip3] torchvision==0.11.1+cpu\r\n[conda] numpy 1.21.3 pypi_0 pypi\r\n[conda] torch 1.10.0+cpu pypi_0 pypi\r\n[conda] torchaudio 0.10.0+cpu pypi_0 pypi\r\n[conda] torchvision 0.11.1+cpu pypi_0 pypi\r\n```\n\ncc @datumbox @YosuaMichael", "url": "https://github.com/pytorch/vision/issues/5838", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2022-04-20T04:32:20Z", "updated_at": "2022-04-21T07:47:05Z", "user": "WZMIAOMIAO" }, { "repo": "pytorch/data", "number": 364, "title": "Linter for DataPipe/DataLoader2 ", "body": "### \ud83d\ude80 The feature\r\n\r\nThis issue proposes the addition of a linter for DataPipes and DataLoader2. The linter can analyze the graph of DataPipes and input arguments to DataLoaderV, and inform the users if any errors may occur ahead of time. The incomplete list of issues that the linter may try to analyze and raise is below. 
Please feel free to edit the list directly to add more or comment below.\r\n\r\nEssential:\r\n- [ ] Multiple references to the same iterator/DataPipe\r\n - This can cause issue when serialized, suggest users to `fork`\r\n- [ ] Duplicate usage of shuffle/batch/collate\r\n- [ ] Shuffle/batch/collate are missing?\r\n- [ ] Warn if shuffling is not done?\r\n- [ ] Warn if sharding is not specificed for Distributed/Multiprocessing\r\n- [ ] Warn about shuffling before sharding (not mandatory because inputs may be pre-shuffled)\r\n- [ ] Multiprocess/distributed behavior related to sharding/shuffling\r\n- [ ] Warn if filter appears between on_disk_cache and end_caching sections.\r\n- [ ] Find unreachable children within graph and warns (because they might prevent buffers from being empty in `fork` and etc)\r\n- [ ] Warn about passing DataPipes that have already been partially read (invalid state), but are passed into DataLoader (and we might have to force `reset` the DataPipe in DataLoader)\r\n- [ ] Detect what external packages are not installed within DataPipe graph\r\n\r\nNice-to-have:\r\n- [ ] Check DataPipe object size and warn if it is too big (e.g. premature initialization of large structures)\r\n- [ ] Check if `fork` datapipe creates two or more copies of `StreamWrapper` or `IOBase` \r\n\r\n### Motivation, pitch\r\nHaving a linter will encourage best practices of DataPipe usages and reduces the number of unexpected bugs/behaviors in the data loading process during runtime. \r\n\r\n### Alternatives\r\nOnly raise exceptions during runtime.\r\n\r\n### Additional context\r\nThis linter is expected to work with DataPipes and DataLoaderV2. We should consider if it should work with the original DataLoader as well (and how).\r\n\r\ncc: @VitalyFedyunin @ejguan ", "url": "https://github.com/meta-pytorch/data/issues/364", "state": "open", "labels": [ "help wanted" ], "created_at": "2022-04-19T21:49:54Z", "updated_at": "2023-04-11T16:58:51Z", "comments": 5, "user": "NivekT" }, { "repo": "pytorch/TensorRT", "number": 987, "title": "\u2753 [Question] How do you add CUDA kernels used for implemented plugins ? ", "body": "## \u2753 Question\r\n\r\nHow do you add CUDA kernels used for implemented plugins ? I have developed my own implementation for several layers that are not supported yet by Torch-TensorRT. I'm not familiar with the bazel compilation flow and i would like to know how to compile .cu files in Torch-TensorRT. \r\n\r\nCurrent provided Torch-TensorRT plugins make calls to external libraries (cuDNN for example) but there is no example about how to add a custom plugins that call CUDA kernels.\r\n \r\n## Additional context\r\n\r\nIn addition it could be nice to have a clear way on how to get the PyTorch signature of the methods that we want to encapsulate.\r\n\r\nCheers\r\n\r\nDavid ", "url": "https://github.com/pytorch/TensorRT/issues/987", "state": "closed", "labels": [ "question", "No Activity", "component: plugins" ], "created_at": "2022-04-19T15:59:59Z", "updated_at": "2022-08-12T00:02:25Z", "user": "david-PHR" }, { "repo": "pytorch/pytorch", "number": 76023, "title": "How to disable check onnx in torch.onnx.export in pytorch1.11 version?", "body": "### \ud83d\udcda The doc issue\n\nOld params were removed, now how to disable check on onnx when export?\n\n### Suggest a potential alternative/fix\n\nAlso, why disable this feature? 
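If I understand the 1.11 change correctly (hedging here, since I have not traced the code), the `enable_onnx_checker` argument was removed and checker failures now surface as a catchable `torch.onnx.CheckerError`, with the .onnx file apparently already written by the time the error is raised. A sketch of that workaround, using a plain stand-in model for illustration:

```python
import torch

model = torch.nn.Linear(4, 2).eval()   # stand-in; imagine a model that uses a custom op
dummy = torch.randn(1, 4)

try:
    torch.onnx.export(model, dummy, "model.onnx", opset_version=13)
except torch.onnx.CheckerError:
    # Raised when the ONNX checker rejects the graph (e.g. an unrecognized custom op).
    # In my understanding the .onnx file has already been serialized at this point,
    # so it can still be consumed by a runtime that knows the custom op.
    print("ONNX checker failed; using model.onnx anyway")
```

That covers the "how", though it still leaves the question of why the explicit opt-out flag was removed. 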
Some onnx using customized op can not pass check.", "url": "https://github.com/pytorch/pytorch/issues/76023", "state": "closed", "labels": [ "module: onnx", "triaged", "onnx-needs-info" ], "created_at": "2022-04-19T08:26:42Z", "updated_at": "2022-05-05T04:57:24Z", "user": "lucasjinreal" }, { "repo": "pytorch/TensorRT", "number": 985, "title": "Error Code 1: Myelin (Compiled against cuBLASLt 10.2.2.0 but running against cuBLASLt 11.4.2.0.)", "body": "Hi I am using TensorRT for an images in python but getting this issue. \r\n**I am Yolort to infer image.**\r\n[https://github.com/zhiqwang/yolov5-rt-stack](url)\r\n```\r\nimport os\r\nimport torch\r\nimport cv2\r\nfrom yolort.utils import Visualizer\r\nos.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\r\ncuda_visible = \"0\"\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = cuda_visible\r\nfrom yolort.runtime import PredictorTRT\r\nassert torch.cuda.is_available()\r\ndevice = torch.device('cuda')\r\nengine_path = \"yolov5n6.engine\"\r\ny_runtime = PredictorTRT(engine_path, device=device)\r\nimg_path = r\"D:\\new_york.jpg\"\r\nimg_raw = cv2.imread(img_path)\r\nlabel_source = r\"D:\\coco.names\"\r\nlabel_path = label_source.split(\"/\")[-1]\r\ny_runtime.warmup()\r\npredictions_trt = y_runtime.predict(img_path)\r\nprint(predictions_trt)\r\n```\r\n**Here is my environment** \r\n\r\n```\r\n>python -m torch.utils.collect_env\r\nCollecting environment information...\r\nPyTorch version: 1.11.0+cu113\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.3\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Microsoft Windows 10 Home\r\nGCC version: Could not collect\r\nClang version: Could not collect\r\nCMake version: version 3.23.0\r\nLibc version: N/A\r\n\r\nPython version: 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)] (64-bit runtime)\r\nPython platform: Windows-10-10.0.19041-SP0\r\nIs CUDA available: True\r\nCUDA runtime version: 11.6.124\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU\r\nNvidia driver version: 511.65\r\ncuDNN version: C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6\\bin\\cudnn_ops_train64_8.dll\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.21.6\r\n[pip3] torch==1.11.0+cu113\r\n[pip3] torchaudio==0.11.0+cu113\r\n[pip3] torchvision==0.12.0+cu113\r\n[conda] blas 1.0 mkl\r\n[conda] cudatoolkit 11.3.1 h59b6b97_2\r\n[conda] libblas 3.9.0 12_win64_mkl conda-forge\r\n[conda] libcblas 3.9.0 12_win64_mkl conda-forge\r\n[conda] liblapack 3.9.0 12_win64_mkl conda-forge\r\n[conda] mkl 2021.4.0 h0e2418a_729 conda-forge\r\n[conda] mkl-service 2.4.0 py39h6b0492b_0 conda-forge\r\n[conda] mkl_fft 1.3.1 py39h0cb33c3_1 conda-forge\r\n[conda] mkl_random 1.2.2 py39h2e25243_0 conda-forge\r\n[conda] mypy_extensions 0.4.3 py39hcbf5309_5 conda-forge\r\n[conda] numpy 1.22.3 pypi_0 pypi\r\n[conda] numpy-base 1.20.3 py39hc2deb75_0\r\n[conda] numpydoc 1.2.1 pyhd8ed1ab_2 conda-forge\r\n[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8_0 pytorch\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] torchaudio 0.11.0 py39_cu113 pytorch\r\n[conda] torchvision 0.12.0 py39_cu113 pytorch\r\n```\r\n\r\n\r\n![image](https://user-images.githubusercontent.com/76849182/163939405-8de7b9e3-a15d-4563-9ede-af71a70bc1f8.png)\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/985", "state": "closed", "labels": [ "question" ], "created_at": "2022-04-19T06:40:10Z", "updated_at": "2022-04-20T10:02:08Z", "user": "IamNaQi" }, { "repo": 
"pytorch/TensorRT", "number": 977, "title": "\u2753 [Question] how to enable \"torch fallback\"", "body": "## \u2753 Question\r\n\r\nI was told that torch-trt was able to partially convert graph to tensorrt while keep the unsupported part running on torch-runtime.\r\nAnd I also hava Found some 'Torch Fallback' or 'torch_fallback' str at the source code.\r\n\r\nSo I generate a module containing `torch.argmax` , which is not supported by torch-tensorrt. And give it a shot, but it failed.\r\n\r\nI hava two question:\r\n1. Is the fallback feature really supported by torch-tensorrt or is going to be supported?\r\n2. If allready supported, is there any sample showing how to use it.\r\n\r\n## What you have already tried\r\n\r\ntake a look at this script:\r\n```python\r\nimport torch\r\nimport torch_tensorrt\r\nimport numpy as np\r\nfrom torchvision import models\r\n\r\nclass MyModel(torch.nn.Module):\r\n\r\n def __init__(self):\r\n super(MyModel, self).__init__()\r\n models_dict = {\r\n \"resnet50_v2\": models.resnet50,\r\n \"resnet101_v2\": models.resnet101,\r\n \"resnet152_v2\": models.resnet152,\r\n \"mobilenet_v2\": models.mobilenet_v2,\r\n \"shufflenet_v2\": models.shufflenet_v2_x1_0,\r\n \"densenet169\": models.densenet169\r\n }\r\n\r\n self.model = models_dict['resnet50_v2'](pretrained=False)\r\n\r\n def forward(self, x):\r\n x = self.model(x)\r\n return torch.argmax(x, -1)\r\n\r\ndef main():\r\n model = MyModel().eval().cuda() #.cuda()\r\n x = torch.from_numpy(np.random.randn(1,3,224,224).astype(np.float32)).cuda()\r\n scripted_model = torch.jit.script(model)\r\n\r\n compile_settings = {\r\n \"inputs\": [x],\r\n \"enabled_precisions\": {torch.float},\r\n \"torch_fallback\": { # also tryied with Torch Fallback\r\n \"enabled\": True\r\n \"min_block_size\": 1\r\n \"forced_fallback_operators\": [\r\n ]\r\n \"forced_fallback_modules\": [\r\n ]\r\n }\r\n }\r\n trt_ts_module = torch_tensorrt.ts.compile(scripted_model, **compile_settings)\r\n print(trt_ts_module)\r\n\r\n torch_tensorrt_out = trt_ts_module(x)\r\n print('torch_tensorrt_out shape: \\n', torch_tensorrt_out.shape, print(torch_tensorrt_out))\r\n\r\n pytorch_out = model(x)\r\n print('pytorch out shape: \\n', pytorch_out.shape, pytorch_out)\r\n\r\n# torch._C._jit_to_backend is buggy, spec will be transformed into wrong json structure.\r\ndef main2():\r\n model = MyModel().eval().cuda() #.cuda()\r\n x = torch.from_numpy(np.random.randn(1,3,224,224).astype(np.float32))\r\n scripted_model = torch.jit.script(model)\r\n\r\n spec = {\r\n \"forward\":\r\n torch_tensorrt.ts.TensorRTCompileSpec({\r\n \"inputs\": [torch_tensorrt.Input([1, 3, 224, 224], dtype=torch.float)],\r\n \"enabled_precisions\": {torch.float},\r\n \"refit\": False,\r\n \"debug\": False,\r\n \"device\": {\r\n \"device_type\": torch_tensorrt.DeviceType.GPU,\r\n \"gpu_id\": 0,\r\n \"dla_core\": 0,\r\n \"allow_gpu_fallback\": True\r\n },\r\n \"capability\": torch_tensorrt.EngineCapability.default,\r\n \"num_min_timing_iters\": 2,\r\n \"num_avg_timing_iters\": 1,\r\n })\r\n }\r\n\r\n trt_ts_module = torch._C._jit_to_backend(\"tensorrt\", script_model, spec)\r\n print(trt_ts_module)\r\n\r\n torch_tensorrt_out = trt_ts_module(x)\r\n print('torch_tensorrt_out shape: \\n', torch_tensorrt_out.shape, print(torch_tensorrt_out))\r\n\r\n pytorch_out = model(x)\r\n print('pytorch out shape: \\n', pytorch_out.shape, pytorch_out)\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nget output:\r\n```bash\r\nTraceback (most recent call last):\r\n File \"./torch_trt_custom.py\", line 
86, in <module>\r\n main()\r\n File \"./torch_trt_custom.py\", line 42, in main\r\n trt_ts_module = torch_tensorrt.ts.compile(scripted_model, **compile_settings)\r\nTypeError: compile() got an unexpected keyword argument 'torch_fallback'\r\n```\r\n## Environment\r\n\r\nngc pytorch 22.02\r\n\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/977", "state": "closed", "labels": [ "question" ], "created_at": "2022-04-15T08:25:12Z", "updated_at": "2022-04-15T09:28:54Z", "user": "WingEdge777" }, { "repo": "pytorch/pytorch", "number": 75723, "title": "[ONNX] How to export fx quantized model to onnx?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nFX is great! How to export fx quantized model to onnx?\n\n### Alternatives\n\nCurrently, I have traced the quantized int8 model to torchscript, it works OK.\n\n### Additional context\n\nI just wonder, If torch already supported export fx model to onnx, how to do it? I got error:\r\n```\r\nRuntimeError: Exporting the operator quantize_per_tensor to ONNX opset version 13 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.\r\n\r\n```\r\n\r\nIf not support, then, when will support? What's the obstacles behind it?\r\n\r\n**this is really needed, so that bring the gap between int8 quantize and other forward framework through onnx**\n\ncc @ezyang @SherlockNoMad", "url": "https://github.com/pytorch/pytorch/issues/75723", "state": "closed", "labels": [ "module: onnx", "triaged", "onnx-needs-info", "module: fx" ], "created_at": "2022-04-13T07:40:14Z", "updated_at": "2022-11-15T23:44:03Z", "user": "lucasjinreal" }, { "repo": "pytorch/examples", "number": 987, "title": "What accuracy should we expect when training Alexnet from scratch on ImageNet?", "body": "## \ud83d\udcda Documentation\r\n\r\nThe README https://github.com/pytorch/examples/blob/main/imagenet/README.md is very helpful when getting started with training AlexNet.\r\n\r\nWe are able to successfully train AlexNet to approximately 56% top-1 and 79% top-5 accuracy on the validation set. But this is still a fair bit below Krizhevsky's published results of circa 83% or 85% top-5 accuracy on these training sets. \r\n\r\nWe are training with the default recommendations for a single GPU in the README for AlexNet:\r\n```\r\npython main.py -a alexnet --lr 0.01 --gpu 0 /data/datasets/imagenet/\r\n```\r\n\r\nWhat out-of the box accuracy should we expect when training AlexNet on ImageNet with the default PyTorch implementation?\r\n\r\nWhat sort of hyperparameter changes do you recommend to duplicate Alex Krizhevsky's accuracies?", "url": "https://github.com/pytorch/examples/issues/987", "state": "open", "labels": [ "reproducibility" ], "created_at": "2022-04-11T20:56:15Z", "updated_at": "2023-01-12T03:26:38Z", "comments": 8, "user": "yoderj" }, { "repo": "pytorch/text", "number": 1677, "title": "what is currently the ideal effective torchtext pipeline for almost any nlp tasks ", "body": "## searching the ideal torchtext pipeline \r\n\r\n**Description**\r\nhey there, so ive been using the legacy version of torchtext for quite sometime as it provides easier ways to load custom dataset and custom pretrained word embeddings locally and i can semlessly implement it for seq2seq, text classification, pos tagging, language modeling etc. 
most importantly i could use Buckeriterator to sort samples based on their length and group batches based on similar length thus minimize padding. \r\nIve read that the torchdata has these functionalities implemented but couldnt find any tangible resources. \r\n\r\n**I have 3 requirements:**\r\n1. loading any custom dataset locally. \r\n2. loading any custom pre-trained embedding locally (fasttext, GLoVe) \r\n3. being able to implement sort and batch by length to get minimum padding \r\n", "url": "https://github.com/pytorch/text/issues/1677", "state": "open", "labels": [], "created_at": "2022-04-07T13:29:10Z", "updated_at": "2022-04-07T13:29:10Z", "user": "StephennFernandes" }, { "repo": "pytorch/data", "number": 352, "title": "DataLoader tutorial does not handle num_workers > 0", "body": "I just wanted to document an issue with the tutorials https://pytorch.org/data/beta/tutorial.html#working-with-dataloader\r\n\r\nThe code in the tutorial will not work when running multiple DataLoader processes as the datapipe will be duplicated across workers:\r\n\r\n```py\r\n dl = DataLoader(dataset=datapipe, batch_size=2, shuffle=True, num_workers=2)\r\n\r\n for i, e in enumerate(dl):\r\n print(e)\r\n```\r\ngives\r\n\r\n```\r\n{'label': tensor([7, 0], dtype=torch.int32), 'data': tensor([[0.5105, 0.7899],\r\n [0.0152, 0.5981]], dtype=torch.float64)}\r\n{'label': tensor([7, 0], dtype=torch.int32), 'data': tensor([[0.5105, 0.7899],\r\n [0.0152, 0.5981]], dtype=torch.float64)}\r\n{'label': tensor([4, 6], dtype=torch.int32), 'data': tensor([[0.9998, 0.5452],\r\n [0.8515, 0.8264]], dtype=torch.float64)}\r\n{'label': tensor([4, 6], dtype=torch.int32), 'data': tensor([[0.9998, 0.5452],\r\n [0.8515, 0.8264]], dtype=torch.float64)}\r\n{'label': tensor([1, 9], dtype=torch.int32), 'data': tensor([[0.8423, 0.3664],\r\n [0.6397, 0.6408]], dtype=torch.float64)}\r\n{'label': tensor([1, 9], dtype=torch.int32), 'data': tensor([[0.8423, 0.3664],\r\n [0.6397, 0.6408]], dtype=torch.float64)}\r\n...\r\n```\r\n\r\nEven though this is still beta, it may still be worth letting users know about such pitfalls.\r\n\r\nAlso, since there are various ways to achieve the sharding, it could be useful to settle on a definite canonical way of handling all this.", "url": "https://github.com/meta-pytorch/data/issues/352", "state": "closed", "labels": [ "documentation" ], "created_at": "2022-04-07T13:00:41Z", "updated_at": "2022-06-10T20:02:57Z", "comments": 3, "user": "NicolasHug" }, { "repo": "pytorch/TensorRT", "number": 960, "title": "\u2753 [Question] Problem with cudnn dependency when compiling plugins on windows? 
", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nI am trying to compile a windows dll for torch-tensorRT, however I get the following traceback:\r\n\r\nERROR: C:/users/48698/source/libraries/torch-tensorrt-1.0.0/core/plugins/BUILD:10:11: Compiling core/plugins/register_plugins.cpp failed: undeclared inclusion(s) in rule '//core/plugins:torch_tensorrt_plugins':\r\nthis rule is missing dependency declarations for the following files included by 'core/plugins/register_plugins.cpp':\r\n 'external/cuda/cudnn.h'\r\n 'external/cuda/cudnn_version.h'\r\n 'external/cuda/cudnn_ops_infer.h'\r\n 'external/cuda/cudnn_ops_train.h'\r\n 'external/cuda/cudnn_adv_infer.h'\r\n 'external/cuda/cudnn_adv_train.h'\r\n 'external/cuda/cudnn_cnn_infer.h'\r\n 'external/cuda/cudnn_cnn_train.h'\r\n 'external/cuda/cudnn_backend.h'\r\n \r\n which is weird cause I do have the cudnn included, and can find the files under the C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6 path\r\n\r\nI am new to Bazel, is there another way I could link those? \r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\nFollowed this guide: https://github.com/NVIDIA/Torch-TensorRT/issues/856 to a t. I think I am linking cudnn in a weird way?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.11.0\r\n - OS (e.g., Linux): windows\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): libtorch\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives: local\r\n - Python version: 3.9\r\n - CUDA version: 11.6\r\n - GPU models and configuration: 3070\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\nMy torch-tensorrt-1.0.0/core/plugins/BUILD is as follows: \r\n\r\n```package(default_visibility = [\"//visibility:public\"])\r\n\r\nconfig_setting(\r\n name = \"use_pre_cxx11_abi\",\r\n values = {\r\n \"define\": \"abi=pre_cxx11_abi\",\r\n }\r\n)\r\n\r\ncc_library(\r\n name = \"torch_tensorrt_plugins\",\r\n hdrs = [\r\n \"impl/interpolate_plugin.h\",\r\n \"impl/normalize_plugin.h\",\r\n \"plugins.h\",\r\n\r\n ],\r\n srcs = [\r\n \"impl/interpolate_plugin.cpp\",\r\n \"impl/normalize_plugin.cpp\",\r\n \"register_plugins.cpp\",\r\n ],\r\n deps = [\r\n \"@tensorrt//:nvinfer\",\r\n \"@tensorrt//:nvinferplugin\",\r\n \"//core/util:prelude\",\r\n ] + select({\r\n \":use_pre_cxx11_abi\": [\"@libtorch_pre_cxx11_abi//:libtorch\"],\r\n \"//conditions:default\": [\"@libtorch//:libtorch\"],\r\n }),\r\n alwayslink = True,\r\n copts = [\r\n \"-pthread\"\r\n ],\r\n linkopts = [\r\n \"-lpthread\",\r\n ]\r\n)\r\n\r\nload(\"@rules_pkg//:pkg.bzl\", \"pkg_tar\")\r\n\r\npkg_tar(\r\n name = \"include\",\r\n package_dir = \"core/plugins/\",\r\n srcs = [\"plugins.h\"],\r\n)\r\n\r\npkg_tar(\r\n name = \"impl_include\",\r\n package_dir = \"core/plugins/impl\",\r\n srcs = [\"impl/interpolate_plugin.h\",\r\n \"impl/normalize_plugin.h\"],\r\n)\r\n\r\n\r\nI could attach more build files if needed, but everything apart from the paths is the same as in the referenced issue.\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/960", "state": "closed", "labels": [ "question", "channel: windows" ], "created_at": "2022-04-04T00:38:36Z", "updated_at": "2022-09-02T17:51:14Z", "user": "pepinu" }, { "repo": "pytorch/TensorRT", "number": 947, "title": "hown to compile model for multi inputs?", "body": "1\uff09My model : out1, out2 = model(input1, input2)\r\n\r\n2\uff09How should i set compile settings, just like this:\r\n\r\ntrt_ts_module = torch_tensorrt.compile(torch_script_module,\r\n inputs = [example_tensor, # Provide example tensor for input shape or...\r\n torch_tensorrt.Input( # Specify input object with shape and dtype\r\n min_shape=[1, 3, 224, 224],\r\n opt_shape=[1, 3, 512, 512],\r\n max_shape=[1, 3, 1024, 1024],\r\n # For static size shape=[1, 3, 224, 224]\r\n dtype=torch.half) # Datatype of input tensor. Allowed options torch.(float|half|int8|int32|bool)\r\n ],\r\n enabled_precisions = {torch.half}, # Run with FP16)", "url": "https://github.com/pytorch/TensorRT/issues/947", "state": "closed", "labels": [ "question" ], "created_at": "2022-03-31T08:00:41Z", "updated_at": "2022-03-31T20:29:39Z", "user": "shuaizzZ" }, { "repo": "pytorch/data", "number": 339, "title": "Build the nightlies a little earlier", "body": "`torchdata` builds the nightlies at 15:00 UTC+0\r\n\r\nhttps://github.com/pytorch/data/blob/198cffe7e65a633509ca36ad744f7c3059ad1190/.github/workflows/nightly_release.yml#L6\r\n\r\nand publishes them roughly 30 minutes later. The `torchvision` nightlies are build at 11:00 UTC+0 and also published roughly 30 minutes later.\r\n\r\nThis creates a 4 hour window where the `torchvision` tests that pull in `torchdata` run on outdated nightlies. 
For example see [this CI run](https://app.circleci.com/pipelines/github/pytorch/vision/16169/workflows/652e06c3-c941-4520-b6ee-f69b2348dd57/jobs/1309833):\r\n\r\nIn the step \"Install PyTorch from the nightly releases\" we have\r\n\r\n```\r\nInstalling collected packages: typing-extensions, torch\r\nSuccessfully installed torch-1.12.0.dev20220329+cpu typing-extensions-4.1.1\r\n```\r\n\r\nTwo steps later in \"Install torchdata from nightly releases\" we have\r\n\r\n```\r\nInstalling collected packages: torch, torchdata\r\n Attempting uninstall: torch\r\n Found existing installation: torch 1.12.0.dev20220329+cpu\r\n Uninstalling torch-1.12.0.dev20220329+cpu:\r\n Successfully uninstalled torch-1.12.0.dev20220329+cpu\r\nSuccessfully installed torch-1.12.0.dev20220328+cpu torchdata-0.4.0.dev20220328\r\n```\r\n\r\nWas the release schedule deliberately chosen this way? If not can we maybe move it to four hours earlier?", "url": "https://github.com/meta-pytorch/data/issues/339", "state": "closed", "labels": [], "created_at": "2022-03-29T15:42:24Z", "updated_at": "2022-03-29T19:24:52Z", "comments": 5, "user": "pmeier" }, { "repo": "pytorch/torchx", "number": 441, "title": "[Req] LSF scheduler support ", "body": "## Description\r\nLSF scheduler support \r\nDoes torchx team have plan to support LSF scheduler? \r\nOr is there any guide for extension, I would make PR. \r\n\r\n## Motivation/Background\r\nThanks for torchx utils. We can target various scheduler by configure torchxconfig. \r\n\r\n## Detailed Proposal\r\nIt would be better to support LSF scheduler. \r\n\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/441", "state": "open", "labels": [ "enhancement", "module: runner", "scheduler-request" ], "created_at": "2022-03-29T04:47:30Z", "updated_at": "2022-10-10T22:27:47Z", "comments": 6, "user": "ckddls1321" }, { "repo": "pytorch/data", "number": 335, "title": "[BE] Unify `buffer_size` across datapipes", "body": "The `buffer_size` parameter is currently fairly inconsistent across datapipes:\r\n\r\n| name | default `buffer_size` | infinite `buffer_size` | warn on infinite |\r\n|--------------------|-------------------------|--------------------------|--------------------|\r\n| Demultiplexer | 1e3 | -1 | yes |\r\n| Forker | 1e3 | -1 | yes |\r\n| Grouper | 1e4 | N/A | N/A |\r\n| Shuffler | 1e4 | N/A | N/A |\r\n| MaxTokenBucketizer | 1e3 | N/A | N/A |\r\n| UnZipper | 1e3 | -1 | yes |\r\n| IterKeyZipper | 1e4 | None | no |\r\n\r\nHere are my suggestion on how to unify this:\r\n\r\n- Use the same default `buffer_size` everywhere. It makes little difference whether we use `1e3` or `1e4` given that it is tightly coupled with the data we know nothing about. Given today's hardware / datasets, I would go with 1e4, but no strong opinion.\r\n- Give every datapipe with buffer the ability for an infinite buffer. Otherwise users will just be annoyed and use a workaround. For example, `torchvision` simply uses [`INFINITE_BUFFER_SIZE = 1_000_000_000`](https://github.com/pytorch/vision/blob/1db8795733b91cd6dd62a0baa7ecbae6790542bc/torchvision/prototype/datasets/utils/_internal.py#L42-L43), which for all intents and purposes lives up to its name. Which sentinel we use, i.e. `-1` or `None`, again makes little difference. I personally would use `None` to have a clear separation, but again no strong opinion other than being consistent.\r\n- Do not warn on infinite buffer sizes. 
Especially since infinite buffer is not the default behavior, the user is expected to know what they are doing when setting `buffer_size=None`. I'm all for having a warning like this in the documentation, but I'm strongly against a runtime warning. For example, `torchvision` datasets need to use an infinite buffer everywhere. Thus, by using the infinite buffer sentinel, users would always get runtime warnings although neither them nor we did anything wrong. ", "url": "https://github.com/meta-pytorch/data/issues/335", "state": "open", "labels": [ "Better Engineering" ], "created_at": "2022-03-28T17:36:32Z", "updated_at": "2022-07-06T18:44:05Z", "comments": 8, "user": "pmeier" }, { "repo": "pytorch/vision", "number": 5686, "title": "Question on segmentation code", "body": "### \ud83d\ude80 The feature\r\n\r\nHello.\r\nI want to ask you a simple question.\r\nI'm not sure if it's right to post a question in this 'Feature request' category.\r\n\r\nIn train.py code in the reference/segmentation, the get_dataset function is set the coco dataset classes 21.\r\nWhy the number of classes is 21?\r\nIs it wrong to set the number of classes to 91 which is the number of classes in the coco dataset?\r\n\r\nHere is the reference code.\r\n```python\r\ndef get_dataset(dir_path, name, image_set, transform):\r\n def sbd(*args, **kwargs):\r\n return torchvision.datasets.SBDataset(*args, mode=\"segmentation\", **kwargs)\r\n\r\n paths = {\r\n \"voc\": (dir_path, torchvision.datasets.VOCSegmentation, 21),\r\n \"voc_aug\": (dir_path, sbd, 21),\r\n \"coco\": (dir_path, get_coco, 21),\r\n }\r\n p, ds_fn, num_classes = paths[name]\r\n\r\n ds = ds_fn(p, image_set=image_set, transforms=transform)\r\n return ds, num_classes\n\ncc @vfdev-5 @datumbox @YosuaMichael", "url": "https://github.com/pytorch/vision/issues/5686", "state": "closed", "labels": [ "question", "topic: semantic segmentation" ], "created_at": "2022-03-28T06:05:39Z", "updated_at": "2022-03-28T07:29:35Z", "user": "kcs6568" }, { "repo": "pytorch/torchx", "number": 435, "title": "[torchx/examples] Remove usages of custom components in app/pipeline examples", "body": "## \ud83d\udcda Documentation\r\n\r\nSince we are making TorchX focused on Job launching and less about authoring components and AppDefs, we need to adjust our app and pipeline examples to demonstrate running the applications with the builtin `dist.ddp` and `utils.python` components rather than showing how to author a component for the application.\r\n\r\nFor 90% of the launch patterns `dist.ddp` (multi-homogeneous node) and `utils.python` (single node) is sufficient.\r\n\r\nThere are a couple of things we need to do:\r\n\r\n1. Delete `torchx/example/apps/**/component.py`\r\n2. For each application example show how to run it with the existing `dist.ddp` or `utils.python` builtin\r\n3. Link a section on how to copy existing components and further customizing (e.g. `torchx builtins --print dist.ddp > custom.py`)\r\n4. Make adjustments to the integration tests to test the example applications using builtin components (as advertised)\r\n5. 
Do 1-4 for the pipeline examples too.", "url": "https://github.com/meta-pytorch/torchx/issues/435", "state": "closed", "labels": [ "documentation" ], "created_at": "2022-03-25T23:34:26Z", "updated_at": "2022-05-25T22:52:40Z", "comments": 0, "user": "kiukchung" }, { "repo": "pytorch/tutorials", "number": 1872, "title": "Transfer learning tutorial: Loss and Accuracy curves the wrong way", "body": "Hey,\r\n\r\nI have a question concerning the transfer learning tutorial (https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html).\r\n\r\nFor a few days, I've been trying to figure out why the validation and training curves are reversed there. By this, I mean that for general neural networks the training curves are always better than the validation curves (lower loss and higher accuracy). However, as in the tutorial itself, this is not the case (see also values in the tutorial). To make the whole thing clearer, I also ran the tutorial for 100 epochs and plotted the accuracy and loss for training and validation. The graph looks like this:\r\n\r\n![100_epochs_training](https://user-images.githubusercontent.com/60505803/160148385-fc4f6de0-d799-4059-8c9a-eb2ee212e8d1.png)\r\n\r\nUnfortunately, I haven't found a real reason for this yet.\r\nIt shouldn't be the dataset itself (I tried the same with other data). The only thing is the BatchNorm, which is different for training and validation. But I also suspect that this is not the reason for this big difference and the changing role. In past projects also on neural networks, with batch normalization at least I didn't have these reversed roles of validation and training.\r\n\r\nHas anybody an idea, why this happens here and why it has not that effect using other neural networks?\n\ncc @suraj813", "url": "https://github.com/pytorch/tutorials/issues/1872", "state": "closed", "labels": [ "question", "intro" ], "created_at": "2022-03-25T15:23:39Z", "updated_at": "2023-03-06T21:50:25Z", "user": "AlexanderGeng" }, { "repo": "pytorch/pytorch", "number": 74741, "title": "[FSDP] How to use fsdp in GPT model in Megatron-LM", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nAre there any examples similar to DeepSpeed \u200b\u200bthat can experience the fsdp function of pytorch. It would be nice to provide the GPT model in Megatron-LM.\n\n### Alternatives\n\nI hope to provide examples of benchmarking DeepSpeed \u200b\u200bto facilitate the in-depth use of the fsdp function.\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/74741", "state": "closed", "labels": [], "created_at": "2022-03-25T08:30:05Z", "updated_at": "2022-03-25T21:12:04Z", "user": "Baibaifan" }, { "repo": "pytorch/text", "number": 1662, "title": "How to install LTS (0.9.2)?", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n\r\nI've found that my PyTorch version is 1.8.2, so according to https://github.com/pytorch/text/#installation , the torchtext version is 0.9.2:\r\n![image](https://user-images.githubusercontent.com/48322321/160079803-6149cea6-ccfa-4b8f-a392-894c1a018216.png)\r\nBut as I use `conda install -c pytorch torchtext` to install, the version I installed defaultly is 0.6.0. 
So I wander, is this version also OK for me as the torchtext version 0.9.2 is the highest version I can install, or it's not OK as I can only install 0.9.2 version?", "url": "https://github.com/pytorch/text/issues/1662", "state": "closed", "labels": [], "created_at": "2022-03-25T08:12:03Z", "updated_at": "2024-03-11T00:55:30Z", "user": "PolarisRisingWar" }, { "repo": "pytorch/pytorch", "number": 74740, "title": "How to export onnx with dynamic batch size for models with multiple outputs?", "body": "## Issue description\r\n\r\nI want to export my model to onnx. Following is my code:\r\ntorch.onnx._export(\r\nmodel,\r\ndummy_input,\r\nargs.output_name,\r\ninput_names=[args.input],\r\noutput_names=args.output,\r\nopset_version=args.opset,\r\n)\r\n\r\nIt works well. But I want to export it with dynamic batch size. So I try this:\r\ntorch.onnx._export(\r\nmodel,\r\ndummy_input,\r\nargs.output_name,\r\ninput_names=[args.input],\r\noutput_names=args.output,\r\nopset_version=args.opset,\r\ndynamic_axes={'input_tensor' : {0 : 'batch_size'},\r\n'classes' : {0 : 'batch_size'},\r\n'boxes' : {0 : 'batch_size'},\r\n'scores' : {0 : 'batch_size'},}\r\n)\r\n\r\nIt crashed with following message:\r\n``2022-03-25 13:38:11.201 | ERROR | main::114 - An error has been caught in function '', process 'MainProcess' (1376540), thread 'MainThread' (139864366814016):\r\nTraceback (most recent call last):\r\n\r\nFile \"tools/export_onnx.py\", line 114, in\r\nmain()\r\n\u2514 <function main at 0x7f3434447f70>\r\n\r\nFile \"tools/export_onnx.py\", line 107, in main\r\nmodel_simp, check = simplify(onnx_model)\r\n\u2502 \u2514 ir_version: 7\r\n\u2502 producer_name: \"pytorch\"\r\n\u2502 producer_version: \"1.10\"\r\n\u2502 graph {\r\n\u2502 node {\r\n\u2502 output: \"607\"\r\n\u2502 name: \"Constant_0\"\r\n\u2502 ...\r\n\u2514 <function simplify at 0x7f3417604dc0>\r\n\r\nFile \"/home/xyz/anaconda3/envs/yolox/lib/python3.8/site-packages/onnxsim/onnx_simplifier.py\", line 483, in simplify\r\nmodel = fixed_point(model, infer_shapes_and_optimize, constant_folding)\r\n\u2502 \u2502 \u2502 \u2514 <function simplify..constant_folding at 0x7f34175d5f70>\r\n\u2502 \u2502 \u2514 <function simplify..infer_shapes_and_optimize at 0x7f342715c160>\r\n\u2502 \u2514 ir_version: 7\r\n\u2502 producer_name: \"pytorch\"\r\n\u2502 producer_version: \"1.10\"\r\n\u2502 graph {\r\n\u2502 node {\r\n\u2502 output: \"607\"\r\n\u2502 name: \"Constant_0\"\r\n\u2502 ...\r\n\u2514 <function fixed_point at 0x7f3417604d30>\r\nFile \"/home/xyz/anaconda3/envs/yolox/lib/python3.8/site-packages/onnxsim/onnx_simplifier.py\", line 384, in fixed_point\r\nx = func_b(x)\r\n\u2502 \u2514 ir_version: 7\r\n\u2502 producer_name: \"pytorch\"\r\n\u2502 producer_version: \"1.10\"\r\n\u2502 graph {\r\n\u2502 node {\r\n\u2502 input: \"input_tensor\"\r\n\u2502 input: \"608\"\r\n\u2502 ...\r\n\u2514 <function simplify..constant_folding at 0x7f34175d5f70>\r\nFile \"/home/xyz/anaconda3/envs/yolox/lib/python3.8/site-packages/onnxsim/onnx_simplifier.py\", line 473, in constant_folding\r\nres = forward_for_node_outputs(model,\r\n\u2502 \u2514 ir_version: 7\r\n\u2502 producer_name: \"pytorch\"\r\n\u2502 producer_version: \"1.10\"\r\n\u2502 graph {\r\n\u2502 node {\r\n\u2502 input: \"input_tensor\"\r\n\u2502 input: \"608\"\r\n\u2502 ...\r\n\u2514 <function forward_for_node_outputs at 0x7f34176048b0>\r\nFile \"/home/xyz/anaconda3/envs/yolox/lib/python3.8/site-packages/onnxsim/onnx_simplifier.py\", line 229, in forward_for_node_outputs\r\nres = forward(model,\r\n\u2502 \u2514 
ir_version: 7\r\n\u2502 producer_name: \"pytorch\"\r\n\u2502 producer_version: \"1.10\"\r\n\u2502 graph {\r\n\u2502 node {\r\n\u2502 input: \"input_tensor\"\r\n\u2502 input: \"608\"\r\n\u2502 ...\r\n\u2514 <function forward at 0x7f3417604820>\r\nFile \"/home/xyz/anaconda3/envs/yolox/lib/python3.8/site-packages/onnxsim/onnx_simplifier.py\", line 210, in forward\r\ninputs.update(generate_specific_rand_input(model, {name: shape}))\r\n\u2502 \u2502 \u2502 \u2502 \u2502 \u2514 [0, 3, 640, 640]\r\n\u2502 \u2502 \u2502 \u2502 \u2514 'input_tensor'\r\n\u2502 \u2502 \u2502 \u2514 ir_version: 7\r\n\u2502 \u2502 \u2502 producer_name: \"pytorch\"\r\n\u2502 \u2502 \u2502 producer_version: \"1.10\"\r\n\u2502 \u2502 \u2502 graph {\r\n\u2502 \u2502 \u2502 node {\r\n\u2502 \u2502 \u2502 input: \"input_tensor\"\r\n\u2502 \u2502 \u2502 input: \"608\"\r\n\u2502 \u2502 \u2502 ...\r\n\u2502 \u2502 \u2514 <function generate_specific_rand_input at 0x7f3417604550>\r\n\u2502 \u2514 <method 'update' of 'dict' objects>\r\n\u2514 {}\r\nFile \"/home/xyz/anaconda3/envs/yolox/lib/python3.8/site-packages/onnxsim/onnx_simplifier.py\", line 98, in generate_specific_rand_input\r\nraise RuntimeError(\r\n\r\nRuntimeError: The shape of input \"input_tensor\" has dynamic size \"[0, 3, 640, 640]\", please determine the input size manually by \"--dynamic-input-shape --input-shape xxx\" or \"--input-shape xxx\". Run \"python3 -m onnxsim -h\" for details\r\n``\r\nMy environments:\r\n`pip list\r\nPackage Version Editable project location\r\n------------------------- --------------------- ------------------------------------------------------------------\r\nabsl-py 1.0.0\r\nalbumentations 1.1.0\r\nanykeystore 0.2\r\napex 0.1\r\nappdirs 1.4.4\r\ncachetools 4.2.4\r\ncertifi 2021.10.8\r\ncharset-normalizer 2.0.9\r\ncryptacular 1.6.2\r\ncycler 0.11.0\r\nCython 0.29.25\r\ndefusedxml 0.7.1\r\nflatbuffers 2.0\r\nfonttools 4.28.3\r\ngoogle-auth 2.3.3\r\ngoogle-auth-oauthlib 0.4.6\r\ngreenlet 1.1.2\r\ngrpcio 1.42.0\r\nhupper 1.10.3\r\nidna 3.3\r\nimageio 2.13.3\r\nimgaug 0.4.0\r\nimportlib-metadata 4.8.2\r\njoblib 1.1.0\r\nkiwisolver 1.3.2\r\nloguru 0.5.3\r\nMako 1.1.6\r\nMarkdown 3.3.6\r\nMarkupSafe 2.0.1\r\nmatplotlib 3.5.1\r\nnetworkx 2.6.3\r\nninja 1.10.2.3\r\nnumpy 1.2", "url": "https://github.com/pytorch/pytorch/issues/74740", "state": "closed", "labels": [], "created_at": "2022-03-25T07:55:45Z", "updated_at": "2022-03-25T08:15:58Z", "user": "LLsmile" }, { "repo": "pytorch/pytorch", "number": 74616, "title": "__rpow__(self, other) OpInfo should not test the case where `other` is a Tensor", "body": "### \ud83d\udc1b Describe the bug\n\nAfter https://github.com/pytorch/pytorch/pull/74280 (cc @mruberry), the `__rpow__` OpInfo has a sample input where `other` is a Tensor. This cannot happen during normal execution: to get to `Tensor.__rpow__` a user does the following:\r\n\r\n```\r\n# self = some_tensor\r\n# other = not_a_tensor\r\nnot_a_tensor ** some_tensor\r\n```\r\nIf instead `not_a_tensor` is a Tensor, this ends up calling `__pow__` in Python which will then handle the case.\r\n\r\nAre there any legitimate cases where we do want this to happen?\r\n\r\n## Context\r\n\r\nThis caused some functorch tests to fail because we don't support the route where both `self` and `other` are Tensors. 
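To make the dispatch argument concrete, here is a tiny pure-Python illustration (independent of PyTorch) of why the reflected hook is only consulted when the left operand's `__pow__` bows out, which is what makes the Tensor ** Tensor case unreachable for `__rpow__` in normal execution:

```python
class Left:
    def __pow__(self, other):
        return "Left.__pow__ handled it"

class Right:
    def __rpow__(self, base):
        return "Right.__rpow__ handled it"

print(Left() ** Right())   # Left.__pow__ handled it   (reflected hook never runs)
print(3 ** Right())        # Right.__rpow__ handled it (int.__pow__ returns NotImplemented)
```

So with two Tensors, `Tensor.__pow__` always handles the expression (subclass corner cases aside), and `__rpow__` should only ever see a non-Tensor `other`. 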
pytorch/pytorch also has some cryptic warning in that route:\r\n![image](https://user-images.githubusercontent.com/5652049/159735534-1f19bbad-0596-4577-8ced-d3a61c6a8bfd.png)\r\n\r\nbut it's not clear to me if we want to support this or not.\n\n### Versions\n\npytorch main branch", "url": "https://github.com/pytorch/pytorch/issues/74616", "state": "open", "labels": [ "module: tests", "triaged" ], "created_at": "2022-03-23T15:28:17Z", "updated_at": "2022-04-18T02:34:55Z", "user": "zou3519" }, { "repo": "pytorch/TensorRT", "number": 936, "title": " \u2753[Question] RuntimeError: [Error thrown at core/conversion/converters/impl/select.cpp:236] Expected const_layer to be true but got false", "body": "## \u2753 Question\r\n\r\nwhen i convert jit model, got the error\r\nthis is my forward code: \r\ninput `x` shape is `(batch, 6, height, width)`, first step is to split `x` into two tensors, but failed\r\n```\r\n def forward(self, x):\r\n fg = x[:,0:3,:,:] ## this line got error\r\n bg = x[:,3:,:,:]\r\n \r\n fg = self.backbone(fg)\r\n bg = self.backbone(bg)\r\n out = self.heads(fg, bg)\r\n return out\r\n```\r\ncomplete traceback:\r\n```\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - 3: [network.cpp::addConstant::1052] Error Code 3: Internal Error (Parameter check failed at: optimizer/api/network.cpp::addConstant::1052, condition: !weights.values == !weights.count\r\n)\r\nTraceback (most recent call last):\r\n File \"model_converter.py\", line 263, in <module>\r\n engine = get_engine(model_info.trt_engine_path, calib, int8_mode=int8_mode, optimize_params=optimize_params)\r\n File \"model_converter.py\", line 173, in get_engine\r\n return build_engine(max_batch_size)\r\n File \"model_converter.py\", line 95, in build_engine\r\n return build_engine_from_jit(max_batch_size)\r\n File \"model_converter.py\", line 80, in build_engine_from_jit\r\n tensorrt_engine_model = torch_tensorrt.ts.convert_method_to_trt_engine(traced_model, \"forward\", **compile_settings)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch_tensorrt/ts/_compiler.py\", line 211, in convert_method_to_trt_engine\r\n return _C.convert_graph_to_trt_engine(module._c, method_name, _parse_compile_spec(compile_spec))\r\nRuntimeError: [Error thrown at core/conversion/converters/impl/select.cpp:236] Expected const_layer to be true but got false\r\nUnable to create constant layer from node: %575 : Tensor = aten::slice(%570, %13, %12, %14, %13) # /data/small_detection/centernet_pytorch_small_detection/models/low_freeze_comb_net.py:455:0\r\n```\r\n\r\n## What you have already tried\r\n\r\ntry use `fg, bg = x.split(int(x.shape[1] // 2), dim=1)` instead of `fg = x[:,0:3,:,:]` and `bg = x[:,3:,:,:]` but got convert error for op not support \r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.4.0\r\n - CPU Architecture: arm (nx)\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch: docker of nvidia l4t\r\n - Python version: 3.6.9\r\n - CUDA version: 10.2.300\r\n - Tensorrt version: 8.0.1.6\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/936", "state": "closed", "labels": [ "question", "component: converters", "No Activity" ], "created_at": "2022-03-22T02:40:39Z", "updated_at": "2023-02-10T00:13:18Z", "user": "pupumao" }, { "repo": "pytorch/text", "number": 1661, "title": "what's is the replacement of legacy?", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n\r\n<!-- Please send questions or ask for 
help here. -->\r\nIn torchtext 0.12.0 the legacy module has been removed, so how can I implement the same functionality as the class legacy.Field?\r\nThanks for your help.", "url": "https://github.com/pytorch/text/issues/1661", "state": "closed", "labels": [], "created_at": "2022-03-21T11:03:55Z", "updated_at": "2022-10-04T01:51:51Z", "user": "1152545264" }, { "repo": "pytorch/serve", "number": 1518, "title": "How to return a dict response, not a list", "body": "<!--\r\nThank you for suggesting an idea to improve torchserve model serving experience.\r\n\r\nPlease fill in as much of the template below as you're able.\r\n-->\r\n\r\n## Is your feature request related to a problem? Please describe.\r\n<!-- Please describe the problem you are trying to solve. -->\r\nWhen I return a dict value, TorchServe returns an error.\r\n\r\n## Describe the solution\r\n<!-- Please describe the desired behavior. -->\r\n\r\n## Describe alternatives solution\r\n<!-- Please describe alternative solutions or features you have considered. -->\r\n", "url": "https://github.com/pytorch/serve/issues/1518", "state": "closed", "labels": [], "created_at": "2022-03-20T10:30:09Z", "updated_at": "2022-03-25T20:14:17Z", "user": "liuhuiCNN" }, { "repo": "pytorch/data", "number": 310, "title": "MapDatapipe Mux/Demux Support", "body": "### \ud83d\ude80 The feature\r\n\r\nMapDatapipes are missing Mux and Demux pipes as noted in https://github.com/pytorch/pytorch/issues/57031\r\n\r\nTalked to @ejguan on https://discuss.pytorch.org/t/mapdatapipe-support-mux-demux/146305, I plan to do a PR with Mux/Demux added. However, I will add rough outlines / ideas here first. I plan to match the same test strategy as the Mux/Demux pipes already in IterDataPipes.\r\n\r\n### Motivation, pitch\r\n\r\nFor Demux: My basic test/goal is to download mnist, and split it into train/validation sets using map.\r\nFor Mux: Then attempt to mux them back together (not sure how to come up with a useful example of this). \r\n - Might try a scenario where I split train into k splits and rejoin them? \r\n\r\nNot sure when this should be converted to a pr. This would be my first pr into pytorch, so I want the pr to be as clean as possible. Putting code changes ideas here I feel could allow for more dramatic/messy changes/avoid a messy git diff/worry about formatting once code is finalized.\r\n\r\nNote: doc strings are removed to make code shorter and will be readded in pr. 
Not-super-useful comments will be removed in pr.\r\n\r\nNote: let me know if a draft pr would be better.\r\n\r\nDemux working code:\r\nDraft 1: https://github.com/josiahls/fastrl/blob/848f90d0ed5b0c2cd0dd3e134b0b922dd8a53d7c/fastrl/fastai/data/pipes.py\r\n\r\nDemux working code + Basic Test\r\nDraft 1: https://github.com/josiahls/fastrl/blob/848f90d0ed5b0c2cd0dd3e134b0b922dd8a53d7c/nbs/02c_fastai.data.pipes.ipynb\r\n\r\nMux working code:\r\nDraft 1: https://github.com/josiahls/fastrl/blob/30cd47766e9fb1bc75d32de877f54b8de9567c36/fastrl/fastai/data/pipes/mux.py\r\n\r\nBasic Test\r\nDraft 1: https://github.com/josiahls/fastrl/blob/30cd47766e9fb1bc75d32de877f54b8de9567c36/nbs/02c_fastai.data.pipes.mux.ipynb\r\n", "url": "https://github.com/meta-pytorch/data/issues/310", "state": "open", "labels": [], "created_at": "2022-03-19T19:31:49Z", "updated_at": "2022-03-27T03:31:32Z", "comments": 7, "user": "josiahls" }, { "repo": "pytorch/data", "number": 303, "title": "DataPipe for GCS (Google Cloud Storage)", "body": "### \ud83d\ude80 The feature\r\n\r\nBuild a DataPipe that allows users to connect to GCS (Google Cloud Storage). There is a chance that existing DataPipes may suffice, so we should examine the relevant APIs first.\r\n\r\n### Motivation, pitch\r\n\r\nGCS (Google Cloud Storage) is one of the commonly used cloud storage for storing data.\r\n\r\n### Alternatives\r\n\r\nExisting DataPipes are sufficient and we should provide an example of how that can be done instead.\r\n\r\n### Additional context\r\n\r\nFeel free to react or leave a comment if this feature is important for you or for any other suggestion.", "url": "https://github.com/meta-pytorch/data/issues/303", "state": "closed", "labels": [], "created_at": "2022-03-16T19:01:03Z", "updated_at": "2023-03-07T14:49:15Z", "comments": 2, "user": "NivekT" }, { "repo": "pytorch/data", "number": 302, "title": "Notes on shuffling, sharding, and batchsize", "body": "(I'm writing this down here to have a written trace, but I'm looking forward to discuss this with you all in our upcoming meetings :) )\r\n\r\nI spent some time porting the torchvision training recipes to use datapipes, and I noticed that the model I trained on ImageNet with DPs was much less accurate than the one with regular datasets. After **a lot** of digging I came to the following conclusion:\r\n\r\n1. the datapipe must be shuffled **before** it is sharded\r\n2. the DataLoader does not behave in the same way with a datapipe and with a regular indexable dataset, in particular when it comes to size of the last batches in an epoch. This has a **dramatic** effect on accuracy (probably because of batch-norm).\r\n\r\nDetails below. Note: for sharding, I used [this custom torchvision sharder](https://github.com/pytorch/vision/blob/eb6e39157cf1aaca184b52477cf1e9159bbcbd63/torchvision/prototype/datasets/utils/_internal.py#L120) which takes DDP and dataloader workers into account, + the TakerIterDataPipe below it.\r\n\r\n-----\r\n\r\n### Shuffle before shard\r\n\r\nFirst, some quick results (training a resnext50_32x4d for 5 epochs with 8 GPUs and 12 workers per GPU):\r\nShuffle before shard: Acc@1 = 47% -- this is on par with the regular indexable dataset version (phew!!)\r\nShuffle after shard: Acc@1 = 2%\r\n\r\nOne way to explain this is that if we shuffle after we shard, then only sub-parts of the dataset get shuffled. Namely, each of the 8 * 12 = 96 dataloader workers receive ~1/96th of the dataset, and each of these parts get shuffled. 
But that means that the shuffling is far from uniform and for datasets in which the layout is `all_samples_from_class1, all_samples_from_class2, ... all_samples_from_classN`, it's possible that some class i is **never** in the same batch as class j.\r\n\r\nSo it looks like we need to shuffle before we shard. Now, if we shuffle before sharding, we still need to make sure that all of the 96 workers shuffle the dataset with the same RNG. Otherwise we risk sampling a given sample in more than one worker, or not at all. For that to happen, one can set a random seed in `worker_init_fn`, but that causes a second problem: the random transformations of each worker will also be the same, and this will lead to slightly less accurate results; on top of that, all epochs will start with the same seed, so the shuffling is the same across all epochs. **I do not know how to solve this problem yet.**\r\n\r\nNote that TF shuffles the dataset before storing it. We might do something similar, but that would still not solve the issue for custom users datasets.\r\n\r\n\r\n----\r\n\r\n### Size of the batches at the end of an epoch\r\n\r\nSome quick results (same experiment as above):\r\n\r\nwith drop_last=True: Acc@1 = 47%\r\nwith drop_last=False: Acc@1 = 11%\r\n\r\nNear the end of the epoch, the dataloader with DP will produce a lot of batches with size 1 if drop_last is False. See the last batches of an epoch on indices from `[0, len(imagenet))` with a requested batch size of 32: https://pastebin.com/wjS7YC90. In contrast, this does not happen when using an indexable dataset: https://pastebin.com/Rje0U8Dx.\r\n\r\nI'm not too sure of why this has such a dramatic impact, but it's possible that this has to do with batch-norm, as @fmassa pointed out offline. Using `drop_last` will make sure that the 1-sized batches are eliminated, producing a much better accuracy.\r\n\r\nI guess the conclusion here is that it's worth unifying the behaviour of the DataLoader both DPs and regular indexable datasets regarding the batch size, because with indexable datasets and drop_last=False we still get ~47% acc.", "url": "https://github.com/meta-pytorch/data/issues/302", "state": "open", "labels": [], "created_at": "2022-03-16T18:08:41Z", "updated_at": "2022-05-24T12:55:18Z", "comments": 28, "user": "NicolasHug" }, { "repo": "pytorch/data", "number": 301, "title": "Add TorchArrow Nightly CI Test", "body": "### \ud83d\ude80 The feature\r\n\r\nTorchArrow nightly build is now [available for Linux](https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html) (other versions will be next).\r\n\r\nWe should add TorchArrow nightly CI tests for these [TorchArrow dataframe related unit tests](https://github.com/pytorch/data/blob/main/test/test_dataframe.py).\r\n\r\n### Motivation, pitch\r\n\r\nThis will ensure that our usages remain compatible with TA's APIs.\r\n\r\n### Additional context\r\nThis is a good first issue for people who want to understand how our CI works. 
Other [domain CI tests](https://github.com/pytorch/data/blob/main/.github/workflows/domain_ci.yml) (for Vision, Text) can serve as examples on how to set this up.", "url": "https://github.com/meta-pytorch/data/issues/301", "state": "closed", "labels": [ "good first issue" ], "created_at": "2022-03-16T17:28:27Z", "updated_at": "2022-05-09T15:38:31Z", "comments": 1, "user": "NivekT" }, { "repo": "pytorch/pytorch", "number": 74288, "title": "How to Minimize Rounding Error in torch.autograd.functional.jacobian?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nBefore I start, let me express my sincerest gratitude to issue #49171, in making it possible to take the jacobian wrt all model parameters! A great functionality indeed!\r\n\r\nI am raising an issue about the approximation error when the jacobian function goes to high dimensions. This is necessary when calculating the jacobian wrt parameters using batch inputs. In low dimensions, the following code work fine\r\n```\r\nimport torch\r\nfrom torch.autograd.functional import jacobian\r\nfrom torch.nn.utils import _stateless\r\nfrom torch import nn\r\nfrom torch.nn import functional as F\r\n```\r\n\r\n```\r\nmodel = nn.Conv2d(3,1,1)\r\ninput = torch.rand(1, 3, 32, 32)\r\ntwo_input = torch.cat([input, torch.rand(1, 3, 32, 32)], dim=0)\r\nnames = list(n for n, _ in model.named_parameters())\r\n\r\n# This is exactly the same code as in issue #49171\r\njac1 = jacobian(lambda *params: _stateless.functional_call(model, {n: p for n, p in zip(names, params)}, input), tuple(model.parameters()))\r\njac2 = jacobian(lambda *params: _stateless.functional_call(model, {n: p for n, p in zip(names, params)}, two_input), tuple(model.parameters()))\r\nassert torch.allclose(jac1[0][0], jac2[0][0])\r\n```\r\n\r\nHowever, when I make the model slightly larger the assertion breaks down, which seem like it's due to rounding errors\r\n```\r\nclass ResBasicBlock(nn.Module):\r\n def __init__(self, n_channels, n_inner_channels, kernel_size=3):\r\n super().__init__()\r\n\r\n self.conv1 = nn.Conv2d(n_channels, n_inner_channels, (kernel_size, kernel_size), padding=kernel_size // 2,\r\n bias=False)\r\n self.conv2 = nn.Conv2d(n_inner_channels, n_channels, (kernel_size, kernel_size), padding=kernel_size // 2,\r\n bias=False)\r\n self.norm1 = nn.BatchNorm2d(n_inner_channels)\r\n self.norm2 = nn.BatchNorm2d(n_channels)\r\n self.norm3 = nn.BatchNorm2d(n_channels)\r\n\r\n def forward(self, z, x=None):\r\n if x == None:\r\n x = torch.zeros_like(z)\r\n y = self.norm1(F.relu(self.conv1(z)))\r\n return self.norm3(F.relu(z + self.norm2(x + self.conv2(y))))\r\n\r\nmodel = ResBasicBlock(3, 1)\r\ninput = torch.rand(1, 3, 32, 32)\r\ntwo_input = torch.cat([input, torch.rand(1, 3, 32, 32)], dim=0)\r\nnames = list(n for n, _ in model.named_parameters())\r\n\r\n# This is exactly the same code as in issue #49171\r\njac1 = jacobian(lambda *params: _stateless.functional_call(model, {n: p for n, p in zip(names, params)}, input), tuple(model.parameters()))\r\njac2 = jacobian(lambda *params: _stateless.functional_call(model, {n: p for n, p in zip(names, params)}, two_input), tuple(model.parameters()))\r\nassert torch.allclose(jac1[0][0], jac2[0][0])\r\n```\r\n\r\n### Versions\r\n\r\n```\r\nCollecting environment information...\r\nPyTorch version: 1.11.0\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: macOS 12.3 (x86_64)\r\nGCC version: Could not collect\r\nClang version: 13.1.6 (clang-1316.0.21.2)\r\nCMake version: version 3.17.1\r\nLibc 
version: N/A\r\n\r\nPython version: 3.8.12 (default, Oct 12 2021, 06:23:56) [Clang 10.0.0 ] (64-bit runtime)\r\nPython platform: macOS-10.16-x86_64-i386-64bit\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] functorch==0.1.0\r\n[pip3] numpy==1.21.2\r\n[pip3] torch==1.11.0\r\n[pip3] torchaudio==0.11.0\r\n[pip3] torchvision==0.12.0\r\n[conda] blas 1.0 mkl defaults\r\n[conda] ffmpeg 4.3 h0a44026_0 pytorch\r\n[conda] functorch 0.1.0 pypi_0 pypi\r\n[conda] mkl 2021.4.0 hecd8cb5_637 defaults\r\n[conda] mkl-service 2.4.0 py38h9ed2024_0 defaults\r\n[conda] mkl_fft 1.3.1 py38h4ab4a9b_0 defaults\r\n[conda] mkl_random 1.2.2 py38hb2f4e1b_0 defaults\r\n[conda] numpy 1.21.2 py38h4b4dc7a_0 defaults\r\n[conda] numpy-base 1.21.2 py38he0bd621_0 defaults\r\n[conda] pytorch 1.11.0 py3.8_0 pytorch\r\n[conda] torchaudio 0.11.0 py38_cpu pytorch\r\n[conda] torchvision 0.12.0 py38_cpu pytorch\r\n```\n\ncc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7", "url": "https://github.com/pytorch/pytorch/issues/74288", "state": "closed", "labels": [ "module: numerical-stability", "module: autograd", "triaged" ], "created_at": "2022-03-16T09:25:18Z", "updated_at": "2022-03-17T14:17:29Z", "user": "QiyaoWei" }, { "repo": "pytorch/pytorch", "number": 74256, "title": "Create secure credential storage for metrics credentials and associated documentation on how to regenerate them if needed", "body": "cc @seemethere @malfet @pytorch/pytorch-dev-infra", "url": "https://github.com/pytorch/pytorch/issues/74256", "state": "open", "labels": [ "module: ci", "triaged" ], "created_at": "2022-03-15T20:21:20Z", "updated_at": "2022-03-16T17:30:02Z", "user": "seemethere" }, { "repo": "pytorch/torchx", "number": 422, "title": "kubernetes: add support for persistent volume claim volumes", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nAdd support for PersistentVolumeClaim mounts to Kubernetes scheduler.\r\n\r\n\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\n\r\nhttps://github.com/pytorch/torchx/pull/420 adds bindmounts to K8S, we want to add in persistent volume claims for Kubernetes which will let us support most of the other remote mounts. \r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\nAdd a new mount type to specs:\r\n\r\n```\r\nclass MountTypes(Enum):\r\n PERSISTENT_CLAIM = \"persistent-claim\"\r\n BIND = \"bind\"\r\n\r\nclass PersistentClaimMount(Mount):\r\n name: str\r\n dst_path: str\r\n read_only: bool = False\r\n\r\nclass Role:\r\n ...\r\n mounts: List[Union[BindMount,PersistentClaimMount]]\r\n```\r\n\r\nAdd a new format to `parse_mounts`:\r\n\r\n```\r\n--mounts bind=persistent-claim,name=foo,dst=/foo[,readonly]\r\n```\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\nUsers can already mount a volume on the host node and then bind mount it into kubernetes pod but this violates some isolation principles and can be an issue from a security perspective. It also is a worse experience for users since the mounts need to be mounted on ALL hosts.\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. 
-->\r\n\r\n* V1Volume https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Volume.md\r\n* V1PersistentVolume https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1PersistentVolumeClaimVolumeSource.md\r\n* FSx on EKS https://github.com/kubernetes-sigs/aws-fsx-csi-driver/blob/master/examples/kubernetes/static_provisioning/README.md\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/422", "state": "closed", "labels": [], "created_at": "2022-03-15T18:21:10Z", "updated_at": "2022-03-16T22:12:26Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/TensorRT", "number": 929, "title": "\u2753 [Question] Expected isITensor() to be true but got false Requested ITensor from Var, however Var type is c10::IValue", "body": "I try to use python trtorch==0.4.1 to compile my own pytorch jit traced model, and I find that it goes wrong with the following information:\r\n\r\n`\r\nTraceback (most recent call last):\r\n File \"./prerecall_server.py\", line 278, in <module>\r\n ModelServing(args),\r\n File \"./prerecall_server.py\",, line 133, in __init__\r\n self.model = trtorch.compile(self.model, compile_settings)\r\n File \"/usr/local/lib/python3.6/dist-packages/trtorch/_compiler.py\", line 73, in compile\r\n compiled_cpp_mod = trtorch._C.compile_graph(module._c, _parse_compile_spec(compile_spec))\r\nRuntimeError: [Error thrown at core/conversion/var/Var.cpp:149] Expected isITensor() to be true but got false\r\nRequested ITensor from Var, however Var type is c10::IValue\r\n`\r\n\r\nI make debug and find that the module contains the unknown operation.\r\n\r\n`\r\n\r\n class Causal_Norm_Classifier(nn.Module):\r\n\r\n def __init__(self, num_classes=1000, feat_dim=2048, use_effect=False, num_head=2, tau=16.0, alpha=1.0, gamma=0.03125, mu=0.9, *args):\r\n super(Causal_Norm_Classifier, self).__init__()\r\n # default alpha = 3.0\r\n #self.weight = nn.Parameter(torch.Tensor(num_classes, feat_dim).cuda(), requires_grad=True)\r\n self.scale = tau / num_head # 16.0 / num_head\r\n self.norm_scale = gamma # 1.0 / 32.0\r\n self.alpha = alpha # 3.0\r\n self.num_head = num_head\r\n self.feat_dim = feat_dim\r\n self.head_dim = feat_dim // num_head\r\n self.use_effect = use_effect\r\n self.relu = nn.ReLU(inplace=True)\r\n self.mu = mu\r\n\r\n self.register_parameter('weight', nn.Parameter(torch.Tensor(num_classes, feat_dim), requires_grad=True))\r\n\r\n self.reset_parameters(self.weight)\r\n \r\n def reset_parameters(self, weight):\r\n stdv = 1. 
/ math.sqrt(weight.size(1))\r\n weight.data.uniform_(-stdv, stdv)\r\n\r\n\r\n def forward(self, x, training=True, use_effect=True):\r\n # calculate capsule normalized feature vector and predict\r\n normed_w = self.multi_head_call(self.causal_norm, self.weight, weight=self.norm_scale)\r\n normed_x = self.multi_head_call(self.l2_norm, x)\r\n y = torch.mm(normed_x * self.scale, normed_w.t())\r\n\r\n return y\r\n\r\n def multi_head_call(self, func, x, weight=None):\r\n assert len(x.shape) == 2\r\n x_list = torch.split(x, self.head_dim, dim=1)\r\n if weight:\r\n y_list = [func(item, weight) for item in x_list]\r\n else:\r\n y_list = [func(item) for item in x_list]\r\n assert len(x_list) == self.num_head\r\n assert len(y_list) == self.num_head\r\n return torch.cat(y_list, dim=1)\r\n\r\n def l2_norm(self, x):\r\n normed_x = x / torch.norm(x, 2, 1, keepdim=True)\r\n return normed_x\r\n\r\n def causal_norm(self, x, weight):\r\n norm= torch.norm(x, 2, 1, keepdim=True)\r\n normed_x = x / (norm + weight)\r\n return normed_x\r\n`\r\n\r\n\r\nCan you help me with this?", "url": "https://github.com/pytorch/TensorRT/issues/929", "state": "closed", "labels": [ "question", "No Activity", "component: partitioning" ], "created_at": "2022-03-15T10:17:07Z", "updated_at": "2023-04-01T00:02:11Z", "user": "clks-wzz" }, { "repo": "pytorch/tutorials", "number": 1860, "title": "Where is the mnist_sample notebook?", "body": "In tutorial [WHAT IS TORCH.NN REALLY?](https://pytorch.org/tutorials/beginner/nn_tutorial.html#closing-thoughts), `Closing thoughts` part:\r\n\r\n```\r\nTo see how simple training a model can now be, take a look at the mnist_sample sample notebook.\r\n```\r\n\r\nDoes`mnist_sample notebook ` refer to https://github.com/pytorch/tutorials/blob/master/beginner_source/nn_tutorial.py and https://pytorch.org/tutorials/_downloads/5ddab57bb7482fbcc76722617dd47324/nn_tutorial.ipynb ?\r\n\r\nNote:\r\n\r\nhttps://github.com/pytorch/tutorials/blob/b1d8993adc3663f0f00d142ac67f6695baaf107a/beginner_source/nn_tutorial.py#L853", "url": "https://github.com/pytorch/tutorials/issues/1860", "state": "closed", "labels": [], "created_at": "2022-03-14T12:21:14Z", "updated_at": "2022-08-18T17:35:34Z", "user": "Yang-Xijie" }, { "repo": "pytorch/torchx", "number": 421, "title": "Document usage of .torchxconfig", "body": "## \ud83d\udcda Documentation\r\n\r\n## Link\r\nCurrent `.torchxconfig` docs (https://pytorch.org/torchx/main/runner.config.html) explain how it works and its APIs but does not provide any practical guidance on what configs can be put into it and why its useful.\r\n\r\n## What does it currently say?\r\nNothing wrong with what it currently says. \r\n\r\n## What should it say?\r\nShould add more practical user guide on what are the supported configs in `.torchxconfig` and under what circumstances it gets picked up with the `torchx` CLI. As well as:\r\n\r\n1. Examples\r\n2. Best Practices\r\n\r\n## Why?\r\nCurrent .torchxconfig docs is useful to the programmer but not for the user.\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/421", "state": "closed", "labels": [], "created_at": "2022-03-12T00:30:59Z", "updated_at": "2022-03-28T20:58:44Z", "comments": 1, "user": "kiukchung" }, { "repo": "pytorch/torchx", "number": 418, "title": "cli/colors: crash when importing if sys.stdout is closed", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. 
-->\r\n\r\nSometimes `sys.stdout` is closed and `isatty()` throws an error at https://github.com/pytorch/torchx/blob/main/torchx/cli/colors.py#L11\r\n\r\nSwitching to a variant that checks if it's closed should work:\r\n```\r\nnot sys.stdout.closed and sys.stdout.isatty()\r\n```\r\n\r\nModule (check all that applies):\r\n * [ ] `torchx.spec`\r\n * [ ] `torchx.component`\r\n * [ ] `torchx.apps`\r\n * [ ] `torchx.runtime`\r\n * [x] `torchx.cli`\r\n * [ ] `torchx.schedulers`\r\n * [ ] `torchx.pipelines`\r\n * [ ] `torchx.aws`\r\n * [ ] `torchx.examples`\r\n * [ ] `other`\r\n\r\n\r\n## To Reproduce\r\n\r\nI'm not sure how to repro this externally other than explicitly closing `sys.stdout`\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n```\r\nI/O operation on closed file\r\nStack trace:\r\n...\r\nfrom torchx.cli.cmd_log import get_logs\r\nFile: <\"/mnt/xarfuse/uid-27156/4adc7caa-seed-nspid4026533510_cgpid2017229-ns-4026533507/torchx/cli/cmd_log.py\">, line 20, in <module>\r\nfrom torchx.cli.colors import GREEN, ENDC\r\nFile: <\"/mnt/xarfuse/uid-27156/4adc7caa-seed-nspid4026533510_cgpid2017229-ns-4026533507/torchx/cli/colors.py\">, line 11, in <module>\r\nif sys.stdout.isatty():\r\n```\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\nDoesn't crash\r\n\r\n## Environment\r\n\r\n - torchx version (e.g. 0.1.0rc1): main\r\n - Python version:\r\n - OS (e.g., Linux):\r\n - How you installed torchx (`conda`, `pip`, source, `docker`):\r\n - Docker image and tag (if using docker):\r\n - Git commit (if installed from source):\r\n - Execution environment (on-prem, AWS, GCP, Azure etc):\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/418", "state": "closed", "labels": [ "bug", "cli" ], "created_at": "2022-03-11T19:24:44Z", "updated_at": "2022-03-11T23:32:30Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/extension-cpp", "number": 76, "title": "How to debug in cuda-pytorch env?", "body": "Hi! I am wondering how to debug in such environment? I have tried to insert a \"printf(\"hello wolrd\")\" sentence in .cu file, but it compiles failure! If I delete it, everything works fine..... So how you debug in such environment? Thank you!!!!", "url": "https://github.com/pytorch/extension-cpp/issues/76", "state": "open", "labels": [], "created_at": "2022-03-10T07:45:31Z", "updated_at": "2022-03-10T07:45:31Z", "user": "Arsmart123" }, { "repo": "pytorch/examples", "number": 969, "title": "DDP: why does every process allocate memory of GPU 0 and how to avoid it?", "body": "Run [this](https://github.com/pytorch/examples/tree/main/imagenet) example with 2 GPUs.\r\nprocess 2 will allocate some memory on GPU 0.\r\n```\r\npython main.py --multiprocessing-distributed --world-size 1 --rank 0\r\n```\r\n\r\n![image](https://user-images.githubusercontent.com/34199488/157247908-a2f6be5a-a2f2-46f0-b3da-4cdee956470d.png)\r\n\r\n\r\nI have carefully checked the sample code and there seems to be no obvious error that would cause process 2 to transfer data to GPU 0.\r\n\r\nSo: \r\n1. Why does process 2 allocate memory of GPU 0?\r\n2. Is this part of the data involved in the calculation? I think if this part of the data is involved in the calculation when the number of processes becomes large, it will cause GPU 0 to be seriously overloaded?\r\n3. 
Is there any way to avoid it?\r\n\r\nThanks in advance to partners in the PyTorch community for their hard work.", "url": "https://github.com/pytorch/examples/issues/969", "state": "open", "labels": [ "distributed" ], "created_at": "2022-03-08T13:41:16Z", "updated_at": "2024-09-22T11:41:26Z", "user": "siaimes" }, { "repo": "pytorch/TensorRT", "number": 912, "title": "\u2728[Feature] New Release for pip", "body": "Would it be possible to get a new release for use with pip?\r\n\r\nThere have been quite a few features and bug-fixes added since November, and it would be great to have an up to date version available.\r\n\r\nI know that docker containers are often recommended, but that's often not a viable option.\r\n\r\nThank you for all of the great work!!\r\n", "url": "https://github.com/pytorch/TensorRT/issues/912", "state": "closed", "labels": [ "question" ], "created_at": "2022-03-06T05:27:27Z", "updated_at": "2022-03-06T21:25:13Z", "user": "dignakov" }, { "repo": "pytorch/torchx", "number": 405, "title": "SLURM quality of life improvements", "body": "## Description\r\nMaking a couple of requests to improve QoL on SLURM \r\n\r\n## Detailed Proposal\r\nIt would be helpful to have -\r\n- [x] The ability to specify the output path. Currently, you need to cd to the right path for this, which generally needs a helper function to set up the directory, cd to it, and then launch via torchx. torchx can ideally handle it for us. #416\r\n- [x] Code isolation and reproducibility. While doing research, we make a change, launch an experiment, and repeat. To make sure each experiment uses the same consistent code, we copy the code to the experiment directory (which also helps with reproducibility). #416\r\n- [ ] Verification of the passed launch script. If I launch from a wrong directory for instance, I would still queue up the job, wait for a few minutes / hours only to crash because of a wrong path (i.e. the launch script does not exist).\r\n- [x] Being able to specify a job name - SLURM shows job details when running the `squeue` command including the job name. If our jobs are all run via torchx, every job will be named `train_app-{i}` which makes it hard to identify which experiment / project the job is from.\r\n- [x] The `time` argument doesn't say what the unit is - maybe we just follow the SLURM API, but it would be nice if we clarified that.\r\n- [ ] torchx submits jobs in [heterogeneous mode](https://slurm.schedmd.com/heterogeneous_jobs.html). This is something FAIR users don't have familiarity with - I'm guessing in terms of execution and command support there should be feature and scheduling speed parity (not sure about the latter)? The `squeue` logs show every node as a separate line - so a 32 node job would take 32 lines instead of 1. This just makes it harder to monitor jobs - not a technical issue, just a QoL one :)\r\n- [x] The job logs are created in `slurm-{job-id}-train_app-{node-id}.out` files (per node) and a single `slurm-{job-id}.out`. Normally, our jobs instead have logs of the form `{job-id}-{node-id}.out` and `{job-id}-{node-id}.err` (per node) - the separation between `stderr` and `stdout` helps find which machine actually crashed more easily. And I'm not sure what `slurm-{job-id}.out` corresponds to - maybe it's a consequence of the heterogeneous jobs? 
With torchelastic, it becomes harder to debug which node crashed since every node logs a crash (so grepping for `Traceback` will return each log file instead of just the node which originally crashed) - maybe there is a way to figure this out and I just don't know what to look for?\r\n- [ ] The `global_rank` is not equal to `local_rank + node_id * gpus_per_node`, i.e. the global rank 0 can be on node 3.\r\n- [ ] automatically set nomem on pcluster", "url": "https://github.com/meta-pytorch/torchx/issues/405", "state": "open", "labels": [ "slurm" ], "created_at": "2022-03-04T17:42:08Z", "updated_at": "2022-04-14T21:42:21Z", "comments": 5, "user": "mannatsingh" }, { "repo": "pytorch/serve", "number": 1487, "title": "how to get model.py file ?", "body": "`https://github.com/pytorch/serve/blob/master/docker/README.md#create-torch-model-archiver-from-container` in \r\nthe 4 step ,how to get model.py file\uff1f\r\n\r\nI followed the doc step by step \uff0cbut in step 4 \r\n`torch-model-archiver --model-name densenet161 --version 1.0 --model-file /home/model-server/examples/image_classifier/densenet_161/model.py --serialized-file /home/model-server/examples/image_classifier/densenet161-8d451a50.pth --export-path /home/model-server/model-store --extra-files /home/model-server/examples/image_classifier/index_to_name.json --handler image_classifier`\r\n\r\nerror because no model.py file.\r\nwhere to get this model.py file", "url": "https://github.com/pytorch/serve/issues/1487", "state": "closed", "labels": [], "created_at": "2022-03-04T01:41:59Z", "updated_at": "2022-03-04T20:03:41Z", "user": "jaffe-fly" }, { "repo": "pytorch/pytorch", "number": 73699, "title": "How to get tolerance override in OpInfo-based test?", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nThe documentation appears to be wrong, it suggests to use self.rtol and self.precision:\r\nhttps://github.com/pytorch/pytorch/blob/4168c87ed3ba044c9941447579487a2f37eb7973/torch/testing/_internal/common_device_type.py#L1000\r\n\r\nself.tol doesn't seem to exist in my tests.\r\nI did find a self.rel_tol, is that the right flag?\r\n\r\n### Versions\r\n\r\nmain\n\ncc @brianjo @mruberry", "url": "https://github.com/pytorch/pytorch/issues/73699", "state": "open", "labels": [ "module: docs", "triaged", "module: testing" ], "created_at": "2022-03-02T22:48:11Z", "updated_at": "2022-03-07T14:42:39Z", "user": "zou3519" }, { "repo": "pytorch/vision", "number": 5510, "title": "[RFC] How do we want to deal with images that include alpha channels?", "body": "This discussion started in https://github.com/pytorch/vision/pull/5500#discussion_r816503203 and @vfdev-5 and I continued offline.\r\n\r\nPIL as well as our image reading functions support RGBA images\r\n\r\nhttps://github.com/pytorch/vision/blob/95d418970e6dbf2e4d928a204c4e620da7bccdc0/torchvision/io/image.py#L16-L31\r\n\r\nbut our color transformations currently only support RGB images ignoring an extra alpha channel. This leads to wrong results. One thing that we agreed upon is that these transforms should fail if anything but 3 channels is detected.\r\n\r\n\r\nStill, some datasets include non-RGB images so we need to deal with this for a smooth UX. Previously we implicitly converted every image to RGB before returning it from a dataset\r\n\r\nhttps://github.com/pytorch/vision/blob/f9fbc104c02f277f9485d9f8727f3d99a1cf5f0b/torchvision/datasets/folder.py#L245-L249\r\n\r\nSince we no longer decode images in the datasets, we need to provide a solution for the users here. 
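\r\n\r\n(For concreteness, a rough sketch of the alpha-compositing conversion described in option 2 below; this is only an illustration assuming float tensors with values in [0, 1] and a caller-supplied background, not an existing torchvision API:)\r\n\r\n```\r\nimport torch\r\n\r\ndef rgba_to_rgb(image: torch.Tensor, background: float = 1.0) -> torch.Tensor:\r\n    # image: float tensor of shape (..., 4, H, W) with values in [0, 1]\r\n    rgb, alpha = image[..., :3, :, :], image[..., 3:, :, :]\r\n    # pixel_new = (1 - alpha) * background + alpha * pixel_old, applied per channel\r\n    return (1 - alpha) * background + alpha * rgb\r\n```\r\n\r\n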
I currently see two possible options:\r\n\r\n1. We could deal with this on a per-image basis within the dataset. For example, the train split of ImageNet contains a single RGBA image. We could simply perform an appropriate conversion for irregular image modes in the dataset so this issue is abstracted away from the user. `tensorflow-datasets` uses this approach: https://github.com/tensorflow/datasets/blob/a1caff379ed3164849fdefd147473f72a22d3fa7/tensorflow_datasets/image_classification/imagenet.py#L105-L131\r\n2. The most common non-RGB image in datasets are grayscale images. For example, the train split of ImageNet contains 19970 grayscale images. Thus, the users will need a `transforms.ConvertImageColorSpace(\"rgb\")` in most cases anyway. If that would support RGBA to RGB conversions the problem would also be solved. The conversion happens with this formula:\r\n\r\n ```\r\n pixel_new = (1 - alpha) * background + alpha * pixel_old\r\n ```\r\n \r\n where `pixel_{old|new}` is a single value from a color channel. Since we don't know `background` we need to either make assumptions or require the user to provide a value for it. I'd wager a guess that in 99% of the cases the background is white. i.e. `background == 1`, but we can't be sure about that.\r\n \r\n Another issue with this is that the user has no option to set the background on a per-image basis in the transforms pipeline if that is needed.\r\n\r\n In special case for `alpha == 1` everywhere, the equation above simplifies to\r\n\r\n ```\r\n pixel_new = pixel_old\r\n ```\r\n\r\n which is equivalent to stripping the alpha channel. We could check for that and only perform the RGBA to RGB transform if the condition holds or the user supplies a background color.\r\n\r\n\r\n\n\ncc @pmeier @vfdev-5 @datumbox @bjuncek", "url": "https://github.com/pytorch/vision/issues/5510", "state": "closed", "labels": [ "module: datasets", "module: transforms", "prototype" ], "created_at": "2022-03-02T09:43:42Z", "updated_at": "2023-03-28T13:01:09Z", "user": "pmeier" }, { "repo": "pytorch/pytorch", "number": 73600, "title": "Add a section in DDP tutorial to explain why DDP sometimes is slower than local training and how to improve it", "body": "### \ud83d\udcda The doc issue\n\nAdd a section in DDP tutorial to explain why DDP sometimes is slower than local training and how to improve it\n\n### Suggest a potential alternative/fix\n\n_No response_\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang", "url": "https://github.com/pytorch/pytorch/issues/73600", "state": "open", "labels": [ "oncall: distributed", "triaged", "module: ddp" ], "created_at": "2022-03-01T20:34:58Z", "updated_at": "2022-03-08T22:03:17Z", "user": "zhaojuanmao" }, { "repo": "pytorch/tensorpipe", "number": 431, "title": "How to enable CudaGdrChannel registration in tensorpipeAgent when using pytorch's rpc", "body": "Can we just enable it by define some environment variables or we need to recompile pytorch? 
Thx!", "url": "https://github.com/pytorch/tensorpipe/issues/431", "state": "closed", "labels": [], "created_at": "2022-03-01T08:14:17Z", "updated_at": "2022-03-01T12:09:53Z", "user": "eedalong" }, { "repo": "pytorch/tutorials", "number": 1839, "title": "Missing 'img/teapot.jpg', 'img/trilobite.jpg' for `MODEL UNDERSTANDING WITH CAPTUM` tutorial.", "body": "Running this tutorial: https://pytorch.org/tutorials/beginner/introyt/captumyt.html\r\nCould not found 'img/teapot.jpg', 'img/trilobite.jpg' under _static folder.\r\n\r\nCould anyone help to provide?\r\nThanks!", "url": "https://github.com/pytorch/tutorials/issues/1839", "state": "closed", "labels": [ "question" ], "created_at": "2022-02-26T10:32:52Z", "updated_at": "2022-10-17T16:24:06Z", "user": "MonkandMonkey" }, { "repo": "pytorch/data", "number": 256, "title": "Support `keep_key` in `Grouper`?", "body": "`IterKeyZipper` has an option to keep the key that was zipped on:\r\n\r\nhttps://github.com/pytorch/data/blob/2cf1f208e76301f3e013b7569df0d75275f1aaee/torchdata/datapipes/iter/util/combining.py#L53\r\n\r\nIs this something we want to support going forward? If yes, it would be nice to have this also on `Grouper` and possibly other similar datapipes. That would come in handy in situations if the key is used multiple times for example if we have a `IterKeyZipper` after an `Grouper`.\r\n\r\n### Additional Context for New Contributors\r\n\r\nSee comment below", "url": "https://github.com/meta-pytorch/data/issues/256", "state": "closed", "labels": [ "good first issue" ], "created_at": "2022-02-25T08:39:53Z", "updated_at": "2023-01-27T19:03:08Z", "comments": 15, "user": "pmeier" }, { "repo": "pytorch/TensorRT", "number": 894, "title": "\u2753 [Question] Can you convert model that operates on custom classes?", "body": "## \u2753 Question\r\n\r\nI have a torch module that creates objects of custom classes that have tensors as fields. It can be torch.jit.scripted but torch.jit.trace can be problematic. 
When I torch.jit.script module and then torch_tensorrt.compile it I get the following error: `Unable to get schema for Node %317 : __torch__.src.MyClass = prim::CreateObject() (conversion.VerifyCoverterSupportForBlock)`\r\n\r\n## What you have already tried\r\n\r\ntorch.jit.trace avoids the problem but introduces problems with loops in module.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.10.2\r\n - CPU Architecture: intel\r\n - OS (e.g., Linux): linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives: from archives\r\n - Python version: 3.8\r\n - CUDA version: 11.3\r\n - GPU models and configuration: rtx 3090\r\n - Any other relevant information:\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/894", "state": "closed", "labels": [ "question" ], "created_at": "2022-02-24T09:51:13Z", "updated_at": "2022-05-18T21:21:05Z", "user": "MarekPokropinski" }, { "repo": "pytorch/xla", "number": 3391, "title": "I want to Multi-Node Multi GPU training, how should I configure the environment", "body": "## \u2753 Questions and Help\r\nRunning XLA MultiGPU MultiNode\uff0cI know that I need to set XRT_SHARD_WORLD_SIZE and XRT_WORKERS, but I don't know how to configure the variable value of XRT_WORKERS.\r\nAre there some examples that exist for me to refer to?", "url": "https://github.com/pytorch/xla/issues/3391", "state": "closed", "labels": [ "stale", "xla:gpu" ], "created_at": "2022-02-23T06:52:01Z", "updated_at": "2022-04-28T00:10:36Z", "user": "ZhongYFeng" }, { "repo": "pytorch/TensorRT", "number": 881, "title": "\u2753 [Question] How do you convert part of the model to TRT? ", "body": "## \u2753 Question\r\n\r\nIs it possible to convert only part of the model to TRT. I have model that cannot be directly converted to trt because it uses custom classes. I wanted to convert only modules that can be converted but as I tried it torch cannot save it.\r\n\r\n## What you have already tried\r\n\r\nI tried the following:\r\n\r\n```\r\nimport torch.nn\r\nimport torch_tensorrt\r\n\r\n\r\nclass MySubmodule(torch.nn.Module):\r\n def __init__(self):\r\n super(MySubmodule, self).__init__()\r\n self.layer = torch.nn.Linear(10, 10)\r\n\r\n def forward(self, x):\r\n return self.layer(x)\r\n\r\n\r\nclass MyMod(torch.nn.Module):\r\n def __init__(self):\r\n super(MyMod, self).__init__()\r\n self.submod = MySubmodule()\r\n self.submod = torch_tensorrt.compile(self.submod, inputs=[\r\n torch_tensorrt.Input(shape=(1, 10))\r\n ])\r\n\r\n def forward(self, x):\r\n return self.submod(x)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n model = MyMod()\r\n scripted = torch.jit.script(model)\r\n scripted(torch.zeros(1, 10).cuda())\r\n scripted.save(\"test.pt\")\r\n\r\n```\r\nBut it raises exception: `RuntimeError: method.qualname() == QualifiedName(selfClass->name()->qualifiedName(), methodName)INTERNAL ASSERT FAILED at \"../torch/csrc/jit/serialization/python_print.cpp\":1105, please report a bug to PyTorch. 
\r\n`\r\n## Environment\r\n - PyTorch Version (e.g., 1.0): 1.10.2\r\n - CPU Architecture: intel\r\n - OS (e.g., Linux): linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives: from archives\r\n - Python version: 3.8\r\n - CUDA version: 11.3\r\n - GPU models and configuration: rtx 3090\r\n - Any other relevant information:\r\n", "url": "https://github.com/pytorch/TensorRT/issues/881", "state": "closed", "labels": [ "question" ], "created_at": "2022-02-18T09:00:43Z", "updated_at": "2022-02-19T23:57:17Z", "user": "MarekPokropinski" }, { "repo": "pytorch/TensorRT", "number": 880, "title": "\u2753 [Question] What is the difference between docker built on PyTorch NGC Container and PyTorch NGC Container?", "body": "## \u2753 Question\r\n\r\nSince PyTorch NGC 21.11+ already includes Torch-TensorRT, is it possible to use Torch-TensorRT directly in PyTorch NGC Container?\r\n\r\n## What you have already tried\r\n\r\nI read the README and tried to build docker according to it, but it keeps failing.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):Do I need to install PyTorch locally?\r\n - CPU Architecture:AMD64/x64\r\n - OS (e.g., Linux):Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source)\uff1anot installed\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/880", "state": "closed", "labels": [ "question" ], "created_at": "2022-02-18T08:40:22Z", "updated_at": "2022-02-19T23:56:30Z", "user": "Guangyun-Xu" }, { "repo": "pytorch/serve", "number": 1440, "title": "[Discussion]: How to extend the base handler", "body": "Recently we've realized that an easy place for new contributors to improve torchserve is to either\r\n1. Add a reference example in `examples`\r\n2. Make an improvement to the base handler\r\n\r\n1 is easiest but makes means that users that want to benefit from that example, need to go through source code and adapt it to their examples\r\n\r\n2 is a bit less easy but still OK because the benefits are to anyone using torchserve. 
Unfortunately it's slowly making the base handler unmaintainable as it now includes code for model optimization, profiling, model interpretability https://github.com/pytorch/serve/blob/master/ts/torch_handler/base_handler.py\r\n\r\nThis problem will continue getting worse as we need to runtime exports, profiling techniques and other useful workflows for model serving all of which will be gated by slow error handling code that will encourage users to pip install missing dependencies.\r\n\r\n> So how we can continue making improvements to the base handler while keeping it simple and modular?\r\n\r\n## Option 1: Inheritance\r\n\r\nInstead of adding features to the base handler\r\n\r\nwe can instead create a new handler like\r\n\r\n```\r\nclass ExtendedHandler(BaseHandler):\r\n```\r\n\r\nBenefit is code remains modular but con is that to use profiling and a runtime users would need to resort to multiple inheritance which can be hard to debug\r\n\r\n## Option 2: Generic interfaces\r\nInstead of having a line that looks like this in our code \r\n\r\n`self.model = ipex.optimize(self.model)`\r\n\r\n\r\nWe can add a generic `optimize` in the base handler which specializes for a particular implementation depending on what's in the `config.properties`\r\n\r\nBenefit is this very modular but requires more work to create a future proof interface and needs users to change Java code to support their usecase \r\n\r\n## Option 3: Dynamic runtime loads\r\nInstead of having code in the base handler we can load it at runtime\r\n\r\n```\r\nclass BaseHandler:\r\n...\r\n\r\ndef optimize(self):\r\n print(\"self.v =\", self.v)\r\n\r\nsetattr(BaseHandler, 'optimize', optimize)\r\n\r\nBaseHandler().optimize\r\n```\r\n\r\nBenefit is this is very modular, doesn't require any changes to base handler code but given that torchserve is used via a CLI tool and not just running a python file it's tricky to figure out where this change needs to be\r\n\r\n## Option 4: Utility functions\r\nAnother simple approach is to move helpful utility functions to a different file called `handler_utils.py`\r\n\r\nA good candidate is moving a function like https://github.com/pytorch/serve/blob/master/ts/torch_handler/base_handler.py#L229\r\n\r\n` def _infer_with_profiler(self, data):`\r\n\r\nThat said this approach isn't perfect since even if modularized, profiling would need a footprint like https://github.com/pytorch/serve/blob/master/ts/torch_handler/base_handler.py#L213\r\n\r\n`if is_profiler_enabled:`\r\n\r\n## Option 5: Python decorators\r\n\r\nNot a silver bullet but python decorator like could make code more maintainable\r\n```\r\n@optimize\r\n@profile\r\n@metrics\r\n```\r\n\r\nFor example `@metrics` would be a decorator to keep track of a function start and end time. This works well for `@metrics` and maybe `@profile` but for `@optimize` would require passing the right argument as in the model which is not a parameter in `inference` but a property of the handler class. 
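\r\n\r\nA rough sketch of what something like `@metrics` could look like (purely illustrative, not existing torchserve code; it just times a handler method):\r\n\r\n```\r\nimport functools\r\nimport time\r\n\r\ndef metrics(method):\r\n    # record wall-clock time around a handler method such as inference()\r\n    @functools.wraps(method)\r\n    def wrapper(self, *args, **kwargs):\r\n        start = time.time()\r\n        result = method(self, *args, **kwargs)\r\n        print(f\"{method.__name__} took {(time.time() - start) * 1000:.2f} ms\")\r\n        return result\r\n    return wrapper\r\n```\r\n\r\n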
Maybe there's a larger discussion here in that handlers need to hold less state \r\n\r\nRelated we could use Python `contextmanager` to allocate a runtime so users can say something like\r\n`with ipex|tensorrt|etc..` and not have to worry about changes to the base handler.\r\n\r\n## Option 6: ?\r\n\r\nThere may be other options but I think this is an important problem to figure out to make it simpler for new contributors to add their changes\r\n\r\ncc: @HamidShojanazeri @chauhang @lxning @maaquib @nskool @min-jean-cho", "url": "https://github.com/pytorch/serve/issues/1440", "state": "closed", "labels": [ "enhancement" ], "created_at": "2022-02-17T16:15:09Z", "updated_at": "2022-05-04T03:57:34Z", "user": "msaroufim" }, { "repo": "pytorch/TensorRT", "number": 876, "title": "\u2753 [Question] How to Enable the Torch-TensorRT Partition Feature ? ", "body": "## \u2753 Question\r\nHello\uff0c\r\n\r\nI want to use TensorRT to run VectorNet from https://github.com/xk-huang/yet-another-vectornet \r\n\r\nHowever\uff0c when I try to convert torchscript using torchtrtc\uff0c it terminates by showing an unsupported op\uff1atorch_scatter::scatter_max \r\n\r\n```\r\nterminate called after throwing an instance of 'torch::jit::ErrorReport'\r\n what():\r\nUnknown builtin op: torch_scatter::scatter_max.\r\nCould not find any similar ops to torch_scatter::scatter_max. This op may not exist or may not be currently supported in TorchScript.\r\n:\r\n/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch_scatter/scatter.py(72): scatter_max\r\n/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch_scatter/scatter.py(160): scatter\r\n/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch_geometric/nn/conv/message_passing.py(426): aggregate\r\n/tmp/tom.hx_pyg/tmpjesxc50s.py(168): propagate\r\n/tmp/tom.hx_pyg/tmpjesxc50s.py(188): forward\r\n/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1090): _slow_forward\r\n/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1102): _call_impl\r\n/Data0/Users/tom.hx/work/ai-compiler/tvm/vectornet_test/modeling/subgraph.py(50): forward\r\n/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1090): _slow_forward\r\n/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1102): _call_impl\r\n/Data0/Users/tom.hx/work/ai-compiler/tvm/vectornet_test/modeling/vectornet.py(52): forward\r\n/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1090): _slow_forward\r\n/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1102): _call_impl\r\n/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/jit/_trace.py(965): trace_module\r\n/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/jit/_trace.py(750): trace\r\nprofile.py(156): <module>\r\nSerialized File \"code/__torch__/GraphLayerPropJittable_4074db.py\", line 15\r\n src = torch.index_select(_0, -2, index)\r\n index0 = torch.select(edge_index, 0, 1)\r\n aggr_out, _1 = ops.torch_scatter.scatter_max(src, index0, -2, None, 225)\r\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n return torch.cat([_0, aggr_out], 1)\r\n\r\nAborted\r\n\r\n```\r\n\r\nI have been noticed that Torch-TensorRT can fallback to native PyTorch when TensorRT does not support the model subgraphs.\r\n\r\nThe question is, why does not this function work, and how to enable it?\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/876", "state": 
"closed", "labels": [ "question" ], "created_at": "2022-02-16T08:01:25Z", "updated_at": "2022-02-19T23:57:32Z", "user": "huangxiao2008" }, { "repo": "pytorch/text", "number": 1615, "title": "How to build pytorch text with system third_party libraries?", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n\r\nThree packages are under [pytorch text third_party](https://github.com/pytorch/text/tree/main/third_party). However, I personally prefer using system installed packages, \r\n- libre2-dev\r\n- libdouble-conversion-dev\r\n- libsentencepiece-dev\r\n\r\nIn addition, isn't there a **CMakeLists.txt** for **pytorch text**??\r\n\r\nCheers\r\n\r\n\r\n", "url": "https://github.com/pytorch/text/issues/1615", "state": "open", "labels": [], "created_at": "2022-02-16T03:03:31Z", "updated_at": "2023-04-18T06:07:10Z", "user": "jiapei100" }, { "repo": "pytorch/torchx", "number": 388, "title": "RFC: Improve OCI Image Python Tooling", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nQuite a few of the cloud services / cluster tools for running ML jobs use OCI/Docker containers so I've been looking into how to make dealing with these easier.\r\n\r\nContainer based services:\r\n* Kubernetes / Volcano scheduler\r\n* AWS EKS / Batch\r\n* Google AI Platform training\r\n* Recent versions of slurm https://slurm.schedmd.com/containers.html\r\n\r\nTorchX currently supports patches on top of existing images to make it fast to iterate and then launch a training job. These patches are just overlaying files from the local directory on top of a base image. Our current patching implementation relies on having a local docker daemon to build a patch layer and push it: https://github.com/pytorch/torchx/blob/main/torchx/schedulers/docker_scheduler.py#L437-L493\r\n\r\nIdeally we could build a patch layer and push it in pure Python without requiring any local docker instances since that's an extra burden on ML researchers/users. Building a patch should be fairly straightforward since it's just appending to a layer and pushing will require some ability to talk to the registry to download/upload containers.\r\n\r\nIt seems like OCI containers are a logical choice to use for packaging ML training jobs/apps but the current Python tooling is fairly lacking as far as I can see. Making it easier to work with this will likely help with the cloud story.\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\nCreate a library for Python to manipulate OCI images with the following subset of features:\r\n\r\n* download/upload images to OCI repos\r\n* append layers to OCI images\r\n\r\nNon-goals:\r\n\r\n* Execute containers\r\n* Dockerfiles\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n\r\nThere is an existing oci-python library but it's fairly early. 
May be able to build upon it to enable this.\r\n\r\nI opened an issue there as well: https://github.com/vsoch/oci-python/issues/15\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/388", "state": "open", "labels": [ "enhancement", "RFC", "kubernetes", "slurm" ], "created_at": "2022-02-11T04:47:27Z", "updated_at": "2023-01-23T14:54:10Z", "comments": 1, "user": "d4l3k" }, { "repo": "pytorch/TensorRT", "number": 862, "title": "\u2753 [Question] Running a same torchscript using the same input producing different results.", "body": "## \u2753 Question\r\n\r\nI'm trying to run a pretrained resnet50 model from torch.torchvision.models. enabled_precisions is set to torch.half.\r\nEach time I load the same resnet50 torchscript, using the same input\uff08which is set to zero using np.zeros\uff09. But after running serveral times I've found the output is not stable.\r\n\r\n## What you have already tried\r\n\r\nI've tried two ways:\r\n\r\n1. Load the same resetnet50 torchscript and compile it, the do the inference. The output is not stable.\r\n2. Save the compiled script, load it each time and to the inference. The output is stable.\r\n\r\nI wonder whether there's some random behaviors in `torch_tensorrt.compile()` when enabled_precisions is set to torch.half.\r\n\r\n## Environment\r\n\r\n - PyTorch Version : 1.10\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): installed via pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.6.9\r\n - CUDA version: 11.4\r\n - GPU models and configuration: pretrained resnet50 model from torch.torchvision.models\r\n - Any other relevant information: Torch-TensorRT version: v1.0\r\n\r\n## Additional context\r\n\r\nThe python code producing unstable result is as below:\r\n\r\n```python\r\nfrom torchvision import models\r\nimport numpy as np\r\nimport torch\r\nimport torch_tensorrt\r\nimport time\r\n\r\ninput = np.zeros((1, 3, 224, 224)).astype(np.float32)\r\ninput = torch.from_numpy(input).cuda()\r\n\r\ntorch_script_module = torch.jit.load('torch_script_module.ts')\r\n\r\ntrt_ts_module = torch_tensorrt.compile(torch_script_module,\r\n inputs=[\r\n torch_tensorrt.Input( # Specify input object with shape and dtype\r\n min_shape=[1, 3, 224, 224],\r\n opt_shape=[1, 3, 224, 224],\r\n max_shape=[1, 3, 224, 224],\r\n # For static size shape=[1, 3, 224, 224]\r\n dtype=torch.float32) # Datatype of input tensor. 
Allowed options torch.(float|half|int8|int32|bool)\r\n ],\r\n enabled_precisions={torch.half},) # Run with FP16)\r\n\r\nresult=trt_ts_module(input) # run inference\r\n\r\nt1 = time.time()\r\nfor i in range(1000):\r\n result=trt_ts_module(input) # run inference\r\nt2 = time.time()\r\nprint('result', result[0][0])\r\nprint(\"Cost: \", round(t2-t1, 4))\r\n```\r\nTwo iterations produce different outputs:\r\nIteration 1:\r\n```\r\nWARNING: [Torch-TensorRT] - Dilation not used in Max pooling converter\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.4.2\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.2.0\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - Detected invalid timing cache, setup a local cache instead\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.4.2\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.2.0\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.4.2\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.2.0\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.4.2\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.2.0\r\nresult tensor(-0.4390, device='cuda:0')\r\nCost: 1.3429\r\n```\r\nIteration 2:\r\n```\r\nWARNING: [Torch-TensorRT] - Dilation not used in Max pooling converter\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.4.2\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.2.0\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - Detected invalid timing cache, setup a local cache instead\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.4.2\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.2.0\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.4.2\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.2.0\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.4.2\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.2.0\r\nresult tensor(-0.4463, device='cuda:0')\r\nCost: 1.3206\r\n```\r\n", "url": "https://github.com/pytorch/TensorRT/issues/862", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2022-02-10T12:18:34Z", "updated_at": "2022-09-10T00:02:32Z", "user": "SeTriones" }, { "repo": "pytorch/TensorRT", "number": 858, "title": "\u2753 [Question] ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory", "body": "## \u2753 Question\r\n\r\nAs I can't install `torch-tensorrt` for some reason in this method:`pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases\r\n`\r\nI download `torch-tensorrt` from here 
`https://github.com/NVIDIA/Torch-TensorRT/releases/tag/v1.0.0`\r\nusing `pip install torch_tensorrt-1.0.0-cp36-cp36m-linux_x86_64.whl`\r\n\r\nHowever, when I run `import torch_tensorrt`\r\nI get the error `ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory`\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):1.10.2+cu113\r\n - CPU Architecture:\r\n - OS (e.g., Linux):linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):pip\r\n - Build command you used (if compiling from source): pip install \r\n - Are you using local sources or building from archives:\r\n - Python version:3.6.2\r\n - CUDA version:11.3\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\ntensorrt==8.2.1.8\r\n", "url": "https://github.com/pytorch/TensorRT/issues/858", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2022-02-09T03:27:44Z", "updated_at": "2022-06-19T12:55:25Z", "user": "Biaocsu" }, { "repo": "pytorch/TensorRT", "number": 856, "title": "\u2753 [Question] Is it possible to use a model optimized through TorchTensorRT in LibTorch under Windows?", "body": "## \u2753 Question\r\n\r\nI would need to optimize an already trained segmentation model through TorchTensorRT. The idea would be to optimize the model by running the [newest PyTorch NGC docker image](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_22-01.html#rel_22-01) under WSL2, exporting the model, and then loading it in a C++ application that uses LibTorch, e.g.\r\n```\r\n#include <torch/script.h>\r\n// ...\r\ntorch::jit::script::Module module;\r\ntry {\r\n // Deserialize the ScriptModule from a file using torch::jit::load().\r\n module = torch::jit::load(argv[1]);\r\n}\r\n```\r\nWould this be the right approach?\r\n## What you have already tried\r\nAt the moment I have only tried to optimize the model through TorchTensorRT, and something weird happens. Here I'll show the results for the Python script below that I obtained on two different devices:\r\n- an Ubuntu desktop with a GTX1080Ti (that I use for development)\r\n- a Windows PC with a RTX3080 (that is my target device)\r\n\r\nAs you can see, **the optimization process under WSL gives me a lot of GPU errors**, while on Ubuntu it seems to work fine. 
Why does this happen?\r\n\r\nMy script:\r\n```\r\nimport torch_tensorrt\r\nimport yaml\r\nimport torch\r\nimport os\r\nimport time\r\nimport numpy as np\r\nimport torch.backends.cudnn as cudnn\r\nimport argparse\r\nimport segmentation_models_pytorch as smp\r\nimport pytorch_lightning as pl\r\ncudnn.benchmark = True\r\n\r\ndef benchmark(model, input_shape=(1, 3, 512, 512), dtype=torch.float, nwarmup=50, nruns=1000):\r\n input_data = torch.randn(input_shape)\r\n input_data = input_data.to(\"cuda\")\r\n if dtype==torch.half:\r\n input_data = input_data.half()\r\n \r\n print(\"Warm up ...\")\r\n with torch.no_grad():\r\n for _ in range(nwarmup):\r\n features = model(input_data)\r\n torch.cuda.synchronize()\r\n print(\"Start timing ...\")\r\n timings = []\r\n with torch.no_grad():\r\n for i in range(1, nruns+1):\r\n start_time = time.time()\r\n features = model(input_data)\r\n torch.cuda.synchronize()\r\n end_time = time.time()\r\n timings.append(end_time - start_time)\r\n if i%100==0:\r\n print('Iteration %d/%d, ave batch time %.2f ms'%(i, nruns, np.mean(timings)*1000))\r\n\r\n print(\"Input shape:\", input_data.size())\r\n print(\"Output features size:\", features.size())\r\n \r\n print('Average batch time: %.2f ms'%(np.mean(timings)*1000))\r\n \r\ndef load_config(config_path: str):\r\n with open(config_path) as f:\r\n config = yaml.load(f, Loader=yaml.FullLoader)\r\n return config\r\n \r\n \r\n \r\ndef main():\r\n # Load target model\r\n parser = argparse.ArgumentParser()\r\n parser.add_argument(\"weights_path\")\r\n parser.add_argument(\"config_path\")\r\n args = parser.parse_args()\r\n config = load_config(args.config_path)\r\n model_dict = config[\"model\"]\r\n model_dict[\"activation\"] = \"softmax2d\"\r\n model = smp.create_model(**model_dict)\r\n state_dict = torch.load(args.weights_path)[\"state_dict\"]\r\n model.load_state_dict(state_dict)\r\n model.to(\"cuda\")\r\n model.eval()\r\n # Create dummy data for tracing and benchmarking purposes.\r\n dtype = torch.float32\r\n shape = (1, 3, 512, 512)\r\n input_data = torch.randn(shape).to(\"cuda\")\r\n \r\n # Convert model to script module\r\n print(\"Tracing PyTorch model...\")\r\n traced_script_module = torch.jit.trace(model, input_data)\r\n # torch_script_module = torch.jit.load(model_path).cuda()\r\n print(\"Script Module generated.\")\r\n print(\"\\nBenchmarking Script Module...\")\r\n # First benchmark <===================================\r\n benchmark(traced_script_module, shape, dtype)\r\n \r\n \r\n # Convert to TRT Module...\r\n output_path = args.config_path.split(os.path.sep)[-1] + \"_trt_.pt\"\r\n print(\"Creating TRT module...\")\r\n trt_ts_module = torch_tensorrt.compile(\r\n traced_script_module,\r\n inputs = [\r\n torch_tensorrt.Input( # Specify input object with shape and dtype\r\n shape=shape,\r\n dtype=dtype) # Datatype of input tensor. Allowed options torch.(float|half|int8|int32|bool)\r\n ],\r\n enabled_precisions = {dtype},\r\n )\r\n print(\"TRT Module created\")\r\n print(\"\\nBenchmarking TRT Module...\")\r\n benchmark(trt_ts_module, shape, dtype)\r\n torch.jit.save(trt_ts_module, os.path.join(\"models\",output_path)) # save the TRT embedded Torchscript\r\n \r\nif __name__ == \"__main__\":\r\n main()\r\n \r\n```\r\n\r\n### Ubuntu desktop\r\n```\r\nroot@ca10ddc496a3:/DockerStuff# python script.py path/to/checkout.tar path/to/config.yaml\r\nNo pretrained weights exist for this model. 
Using random initialization.\r\nTracing PyTorch model...\r\n/opt/conda/lib/python3.8/site-packages/segmentation_models_pytorch/base/model.py:16: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that", "url": "https://github.com/pytorch/TensorRT/issues/856", "state": "closed", "labels": [ "question", "No Activity", "channel: windows" ], "created_at": "2022-02-08T10:22:57Z", "updated_at": "2022-08-27T00:03:53Z", "user": "andreabonvini" }, { "repo": "pytorch/TensorRT", "number": 852, "title": "How to set custom GCC path when compiling the source code", "body": "## \u2753 Question\r\nHow to set the GCC path when compiling the source code\r\n\r\n## What you have already tried\r\nI try to build Torch-TensorRT using locally installed cuDNN & TensorRT, But the following error occurred\r\n![image](https://user-images.githubusercontent.com/61401199/152751388-e243228f-84f9-4761-8011-3f650adc91d0.png)\r\nI found that this maybe a problem with the GCC version and needs to be upgraded, but the default /usr/bin/gcc requires root permission to change, I can't do anything about this path. So, I want to install a higher version of GCC in another path and specify the path of GCC when compiling Torch-TensorRT, but I don't know where to set the path of GCC.\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/852", "state": "closed", "labels": [ "question" ], "created_at": "2022-02-07T08:33:12Z", "updated_at": "2022-04-25T17:01:14Z", "user": "yuezhuang1387" }, { "repo": "pytorch/TensorRT", "number": 851, "title": "docker build failed", "body": "```\r\ngit clone https://github.com/NVIDIA/Torch-TensorRT\r\ncd Torch-TensorRT\r\n```\r\n\r\n`docker build --build-arg BASE=21.11 -f docker/Dockerfile -t torch_tensorrt:latest .`\r\n\r\n```\r\ngets the error like this:\r\nSending build context to Docker daemon 29.61MB\r\nStep 1/33 : ARG BASE=21.10\r\nStep 2/33 : ARG BASE_IMG=nvcr.io/nvidia/pytorch:${BASE}-py3\r\nStep 3/33 : FROM ${BASE_IMG} as base\r\n ---> 6eae00e8ee65\r\nStep 4/33 : FROM base as torch-tensorrt-builder-base\r\n ---> 6eae00e8ee65\r\nStep 5/33 : RUN rm -rf /opt/torch-tensorrt /usr/bin/bazel\r\n ---> Using cache\r\n ---> 407b606a69ba\r\nStep 6/33 : ARG ARCH=\"x86_64\"\r\n ---> Using cache\r\n ---> a47c16d2137b\r\nStep 7/33 : ARG TARGETARCH=\"amd64\"\r\n ---> Using cache\r\n ---> 2aa5a3eab761\r\nStep 8/33 : ARG BAZEL_VERSION=4.2.1\r\n ---> Using cache\r\n ---> f21f368cf46b\r\nStep 9/33 : RUN git config --global url.\"https://github.com.cnpmjs.org/\".insteadOf https://github.com/\r\n ---> Using cache\r\n ---> 8b689f617bb2\r\nStep 10/33 : RUN [[ \"$TARGETARCH\" == \"amd64\" ]] && ARCH=\"x86_64\" || ARCH=\"${TARGETARCH}\" && wget -q https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-linux-${ARCH} -O /usr/bin/bazel && chmod a+x /usr/bin/bazel\r\n ---> Using cache\r\n ---> a3c8f7522040\r\nStep 11/33 : RUN touch /usr/lib/$HOSTTYPE-linux-gnu/libnvinfer_static.a\r\n ---> Using cache\r\n ---> d21a2d4dff51\r\nStep 12/33 : RUN rm -rf /usr/local/cuda/lib* /usr/local/cuda/include && ln -sf /usr/local/cuda/targets/$HOSTTYPE-linux/lib /usr/local/cuda/lib64 && ln -sf /usr/local/cuda/targets/$HOSTTYPE-linux/include /usr/local/cuda/include\r\n ---> Using cache\r\n ---> 39ee2cf4915f\r\nStep 13/33 : RUN apt-get update && apt-get install -y --no-install-recommends locales ninja-build && rm -rf /var/lib/apt/lists/* && 
locale-gen en_US.UTF-8\r\n ---> Using cache\r\n ---> 711e012e97fd\r\nStep 14/33 : FROM torch-tensorrt-builder-base as torch-tensorrt-builder\r\n ---> 711e012e97fd\r\nStep 15/33 : COPY . /workspace/torch_tensorrt/src\r\n ---> Using cache\r\n ---> 2ea5a90787b7\r\nStep 16/33 : WORKDIR /workspace/torch_tensorrt/src\r\n ---> Using cache\r\n ---> b8e79eb37534\r\nStep 17/33 : RUN cp ./docker/WORKSPACE.docker WORKSPACE\r\n ---> Using cache\r\n ---> 7a90e4a378d4\r\nStep 18/33 : RUN ./docker/dist-build.sh\r\n ---> Running in 669eeb348f7c\r\nrunning bdist_wheel\r\nExtracting Bazel installation...\r\nStarting local Bazel server and connecting to it...\r\nLoading:\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nAnalyzing: target //:libtorchtrt (1 packages loaded, 0 targets configured)\r\nINFO: Analyzed target //:libtorchtrt (43 packages loaded, 2965 targets configured).\r\nINFO: Found 1 target...\r\n[0 / 10] [Prepa] Creating source manifest for @rules_pkg//:build_tar\r\n[1,111 / 1,235] Compiling core/lowering/passes/remove_bn_dim_check.cpp; 3s processwrapper-sandbox ... (3 actions running)\r\n[1,112 / 1,235] Compiling core/lowering/passes/remove_bn_dim_check.cpp; 7s processwrapper-sandbox ... (4 actions, 3 running)\r\n[1,115 / 1,235] Compiling core/lowering/passes/linear_to_addmm.cpp; 8s processwrapper-sandbox ... (4 actions running)\r\n[1,118 / 1,235] Compiling core/lowering/passes/exception_elimination.cpp; 6s processwrapper-sandbox ... (4 actions running)\r\n[1,121 / 1,235] Compiling core/conversion/converters/impl/squeeze.cpp; 10s processwrapper-sandbox ... (4 actions running)\r\n[1,122 / 1,235] Compiling core/conversion/converters/impl/interpolate.cpp; 13s processwrapper-sandbox ... (4 actions running)\r\n[1,125 / 1,235] Compiling core/conversion/converters/impl/lstm_cell.cpp; 11s processwrapper-sandbox ... (4 actions, 3 running)\r\n[1,129 / 1,235] Compiling cpp/bin/torchtrtc/main.cpp; 8s processwrapper-sandbox ... (4 actions, 3 running)\r\n[1,133 / 1,235] Compiling cpp/bin/torchtrtc/main.cpp; 21s processwrapper-sandbox ... (4 actions, 3 running)\r\n[1,142 / 1,235] Compiling core/conversion/converters/Weights.cpp; 7s processwrapper-sandbox ... (4 actions, 3 running)\r\n[1,147 / 1,235] Compiling core/conversion/converters/impl/topk.cpp; 12s processwrapper-sandbox ... (4 actions, 3 running)\r\n[1,155 / 1,235] Compiling core/conversion/converters/impl/cast.cpp; 16s processwrapper-sandbox ... (4 actions, 3 running)\r\n[1,163 / 1,235] Compiling core/conversion/converters/impl/layer_norm.cpp; 15s processwrapper-sandbox ... (4 actions, 3 running)\r\n[1,176 / 1,235] Compiling cpp/src/ptq.cpp; 8s processwrapper-sandbox ... (4 actions, 3 running)\r\n[1,187 / 1,235] Compiling core/conversion/evaluators/aten.cpp; 17s processwrapper-sandbox ... 
(4 actions running)\r\nERROR: /workspace/torch_tensorrt/src/core/conversion/evaluators/BUILD:10:11: Compiling core/conversion/evaluators/eval_util.cpp failed: (Exit 1): gcc failed: error executing command /usr/", "url": "https://github.com/pytorch/TensorRT/issues/851", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2022-02-07T07:02:01Z", "updated_at": "2022-05-20T00:02:07Z", "user": "Biaocsu" }, { "repo": "pytorch/pytorch", "number": 72365, "title": "How is Tensor.type supposed to work with strings?", "body": "### \ud83d\udc1b Describe the bug\n\nI was looking for a functionality to convert Tensor dtype in-place by passing a string instead of the relevant `torch.dtype`.\r\n\r\n`Tensor.type`, according to the docs, is supposed to work with `dtype`s and `str`s:\r\n```python\r\ndef type(self: T, dst_type: Union[dtype, str]) -> T:\r\n r\"\"\"Casts all parameters and buffers to :attr:`dst_type`.\r\n\r\n .. note::\r\n This method modifies the module in-place.\r\n\r\n Args:\r\n dst_type (type or string): the desired type\r\n\r\n Returns:\r\n Module: self\r\n \"\"\"\r\n```\r\n\r\nHowever, it seems not to work if `dst_type` is passed as a string.\r\nI would expect it to work the same way as NumPy's `astype(...)`\r\nI did not find usage examples around.\r\n\r\nExample code:\r\n```python\r\nimport torch\r\nimport numpy as np\r\n\r\nx = torch.rand(5,5)\r\ny = np.random.rand(5,5)\r\n\r\n# conversion using the relevant dtype works\r\nx.type(torch.float16)\r\ny.astype(np.float16)\r\n\r\n# np supports also dtype passed as strings\r\ny.astype(\"float16\")\r\n\r\n# however, torch does not\r\nx.type(\"float16\")\r\n\r\n# also this does not work\r\nx.type(\"torch.float16\")\r\n```\r\n\r\n#### Error stack\r\nFirst example:\r\n```\r\n>>> x.type(\"float16\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nValueError: invalid type: 'float16'\r\n```\r\nSecond example:\r\n```\r\n>>> x.type(\"torch.float16\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nValueError: invalid type: 'torch.float16'\r\n```\n\n### Versions\n\nCollecting environment information...\r\nPyTorch version: 1.10.0\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 21.04 (x86_64)\r\nGCC version: (Ubuntu 10.3.0-1ubuntu1) 10.3.0\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\nLibc version: glibc-2.33\r\n\r\nPython version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-5.11.0-46-generic-x86_64-with-glibc2.17\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: GPU 0: GeForce RTX 2080\r\nNvidia driver version: 460.91.03\r\ncuDNN version: Probably one of the following:\r\n/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8\r\n/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8\r\n/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8\r\n/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8\r\n/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8\r\n/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8\r\n/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.19.2\r\n[pip3] pytorch-ranger==0.1.1\r\n[pip3] torch==1.10.0\r\n[pip3] 
torch-optimizer==0.1.0\r\n[pip3] torch-pruning==0.2.7\r\n[pip3] torch-summary==1.4.5\r\n[pip3] torchattacks==3.1.0\r\n[pip3] torchaudio==0.10.0\r\n[pip3] torchinfo==0.0.9\r\n[pip3] torchvision==0.11.1\r\n[conda] blas 1.0 mkl \r\n[conda] cudatoolkit 10.2.89 hfd86e86_1 \r\n[conda] ffmpeg 4.3 hf484d3e_0 pytorch\r\n[conda] mkl 2020.2 256 \r\n[conda] mkl-service 2.3.0 py38he904b0f_0 \r\n[conda] mkl_fft 1.3.0 py38h54f3939_0 \r\n[conda] mkl_random 1.1.1 py38h0573a6f_0 \r\n[conda] numpy 1.19.2 py38h54aff64_0 \r\n[conda] numpy-base 1.19.2 py38hfa32c7d_0 \r\n[conda] pytorch 1.10.0 py3.8_cuda10.2_cudnn7.6.5_0 pytorch\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] pytorch-ranger 0.1.1 pypi_0 pypi\r\n[conda] torch-optimizer 0.1.0 pypi_0 pypi\r\n[conda] torch-pruning 0.2.7 pypi_0 pypi\r\n[conda] torch-summary 1.4.5 pypi_0 pypi\r\n[conda] torchattacks 3.1.0 pypi_0 pypi\r\n[conda] torchaudio 0.10.0 py38_cu102 pytorch\r\n[conda] torchinfo 0.0.9 pypi_0 pypi\r\n[conda] torchvision 0.11.1 py38_cu102 pytorch\n\ncc @mruberry @rgommers", "url": "https://github.com/pytorch/pytorch/issues/72365", "state": "closed", "labels": [ "triaged", "module: numpy", "module: ux" ], "created_at": "2022-02-04T21:44:27Z", "updated_at": "2023-05-13T06:07:10Z", "user": "marcozullich" }, { "repo": "pytorch/text", "number": 1581, "title": "Specified Field dtype <torchtext.legacy.data.pipeline.Pipeline object at ...> can not be used with use_vocab=False because we do not know how to numericalize it.", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n<!-- Please send questions or ask for help here. -->\r\nI am trying to implement a sequence (multi-output) regression task using `torchtext`, but I am getting the error in the title. \r\n\r\ntorch version: 1.10.1\r\ntorchtext version: 0.11.1\r\n\r\nHere's how I proceed: \r\n\r\n**Given.** sequential data (own data) of the form:\r\n```\r\n text label\r\n 'w1' '[0.1, 0.3, 0.1]' \r\n 'w2' '[0.74, 0.4, 0.65]' \r\n 'w3' '[0.21, 0.56, 0.23]' \r\n<empty line denoting the beginning of a new sentence>\r\n ... 
...\r\n```\r\n**TorchText Fields to read this data.** (works perfectly)\r\n\r\n```\r\nimport torchtext\r\nfrom torchtext.legacy import data\r\nfrom torchtext.legacy import datasets\r\n\r\n\r\nTEXT = data.Field(use_vocab=True, # use torchtext.vocab, and later on, numericalization based on pre-trained vectors\r\n lower=True)\r\n\r\nLABEL = data.Field(is_target=True,\r\n use_vocab=False, # I don't think that I need a vocab for my task, because the output is a list of doubles \r\n unk_token=None,\r\n preprocessing=data.Pipeline(\r\n lambda x: torch.tensor(list(map(float, removeBracets(x).split(' '))),\r\n dtype=torch.double)), # I implement this Pipeline to transform labels from string(list(doubles)) to torch.Tensor(doubles)\r\n dtype=torch.DoubleTensor) # the label is a tensor of doubles\r\n\r\nfields = [(\"text\",TEXT) , (\"label\",LABEL)]\r\n```\r\n\r\nSince I have sequential data, I used `datasets.SequenceTaggingDataset` to split the data into training, validation and testing sets.\r\n\r\n```\r\ntrain, valid, test = datasets.SequenceTaggingDataset.splits(path='./data/',\r\n train = train_path,\r\n validation = validate_path,\r\n test = test_path,\r\n fields=fields)\r\n```\r\nThen, I use a pre-trained embedding to build the vocab for the `TEXT` `Field`, e.g.\r\n\r\n``` \r\nTEXT.build_vocab(train, vectors=\"glove.840B.300d\")\r\n```\r\n\r\nAfter that, I use `BucketIterator` to create batches of the training data efficiently.\r\n\r\n```\r\ntrain_iterator, valid_iterator = data.BucketIterator.splits(\r\n (train, valid),\r\n device=DEVICE,\r\n batch_size=BATCH_SIZE,\r\n sort_key=lambda x: len(x.text),\r\n repeat=False,\r\n sort=True) # for validation/testing, better set it to False\r\n``` \r\nEverything works perfectly till now. However, when I try to iterate over train_iterator,\r\n\r\n```\r\nbatch = next(iter(train_iterator))\r\nprint(\"text\", batch.text)\r\nprint(\"label\", batch.label)\r\n```\r\n\r\n I get the following error:\r\n\r\n```\r\n 229 \"\"\"\r\n 230 padded = self.pad(batch)\r\n--> 231 tensor = self.numericalize(padded, device=device)\r\n 232 return tensor\r\n 233 \r\n\r\nPATH_TO\\torchtext\\legacy\\data\\field.py in numericalize(self, arr, device)\r\n 340 \"use_vocab=False because we do not know how to numericalize it. \"\r\n 341 \"Please raise an issue at \"\r\n--> 342 \"https://github.com/pytorch/text/issues\".format(self.dtype))\r\n 343 numericalization_func = self.dtypes[self.dtype]\r\n 344 # It doesn't make sense to explicitly coerce to a numeric type if\r\n\r\nValueError: Specified Field dtype <torchtext.legacy.data.pipeline.Pipeline object at 0x0XXXXXXXX> can not be used with use_vocab=False because we do not know how to numericalize it. Please raise an issue at https://github.com/pytorch/text/issues\r\n```\r\nI looked into the question #609. Unlike this issue, I need to find a numericalization for the labels, which are of the form list(torch.DoubleTensor). Do you have any suggestion? ", "url": "https://github.com/pytorch/text/issues/1581", "state": "open", "labels": [ "legacy" ], "created_at": "2022-02-04T16:25:50Z", "updated_at": "2022-04-17T08:46:36Z", "user": "MSiba" }, { "repo": "pytorch/data", "number": 195, "title": "Documentation Improvements Tracker", "body": "Here are some improvements that we should make to the documentation. 
Some of these likely should be completed before beta release.\r\n\r\nCrucial:\r\n- [x] Add docstrings for the class `IterDataPipe` and `MapDataPipe`\r\n https://github.com/pytorch/pytorch/pull/72618\r\n- [x] Review the categorization of `IterDataPipe` in `torchdata.datapipes.iter.rst`\r\n https://github.com/pytorch/data/pull/219\r\n- [x] Edit first sentence of each DataPipe docstring to be a concise summary of functionality (also include functional name when it exists)\r\n https://github.com/pytorch/pytorch/pull/72476\r\n https://github.com/pytorch/pytorch/pull/72475\r\n https://github.com/pytorch/data/pull/209\r\n- [x] Add usage examples to each DataPipe docstring\r\n https://github.com/pytorch/pytorch/pull/73033\r\n https://github.com/pytorch/pytorch/pull/73250\r\n https://github.com/pytorch/data/pull/249\r\n- [x] Add tutorial (how to use DataPipe, how to write one, how to use it with DataLoader)\r\n https://github.com/pytorch/data/pull/212\r\n- [x] Add domain usage examples (links to files)\r\n https://github.com/pytorch/data/pull/216\r\n- [x] Decide what utility functions to include\r\n https://github.com/pytorch/data/pull/205\r\n- [x] Link to relevant DataLoader documentation\r\n https://github.com/pytorch/data/pull/205\r\n- [x] Turn on 'gh-pages' in this repo's setting\r\n It is enabled.\r\n- [x] Clear labelling of prototype vs beta phase\r\n https://github.com/pytorch/data/pull/252\r\n- [x] Add a link under the 'Docs' tab on pytorch.org\r\n\r\nNice-to-have:\r\n- [x] Update issue form for documentation related issues\r\n https://github.com/pytorch/data/pull/215\r\n- [ ] Add links to domain usage examples onto individual DataPipe pages (see how TorchVision does this)\r\n- [x] Remove tutorial from README.md and link it to the documentation tutorial\r\n- [ ] Make a functional equivalent table in documentation (in a separate page?)\r\n\r\ncc: @VitalyFedyunin @ejguan @wenleix @dongreenberg @NivekT ", "url": "https://github.com/meta-pytorch/data/issues/195", "state": "open", "labels": [ "todo" ], "created_at": "2022-02-03T19:39:09Z", "updated_at": "2022-06-02T15:18:39Z", "comments": 3, "user": "NivekT" }, { "repo": "pytorch/TensorRT", "number": 843, "title": "\u2753 [Question] Trying to find compatible versions between two different environments", "body": "## \u2753 Question\r\n\r\nI'm trying to save a serialized tensorRT optimized model using torch_tensorrt from one environment and then load it in another environment (different GPUs. one has Quadro M1000M, and another has Tesla P100.\r\n\r\nIn both environments I don't have full sudo control where I can install whatever I want (i.e. can't change nvidia driver), but I am able to install different cuda toolkits locally, same with pip installs with wheels.\r\n\r\n## What you have already tried\r\n\r\nI have tried (ones marked with @ are ones I can't change):\r\nenv #1 = \r\n@1. Tesla P100\r\n@2. Nvidia driver 460\r\n3. CUDA 11.3 (checked via torch.version.cuda). nvidia-smi shows 11.2. has many cuda versions installed from 10.2 to 11.4\r\n4. CuDNN 8.2.1.32\r\n5. TensorRT 8.2.1.8\r\n6. Torch_TensorRT 1.0.0\r\n7. Pytorch 1.10.1+cu113 (conda installed)\r\n\r\nenv #2 =\r\n@1. Quadro M1000M\r\n@2. Nvidia driver 455\r\n3. CUDA 11.3(checked via torch.version.cuda, backwards compatibilty mode I believe, but technically 11.3 requires 460+ nvidia driver according to the compatibility table). nvidia-smi shows 11.1. has 10.2 version available aside from 11.3 I installed.\r\n4. CuDNN 8.2.1.32\r\n5. TensorRT 8.2.1.8\r\n6. Torch_TensorRT 1.0.0\r\n7. 
Pytorch 1.10.1+cu113 (pip installed)\r\n\r\nSo as you can see the only difference is really the GPU and the NVIDIA driver (455 vs 460).\r\nIs this supposed to work?\r\nOn env#1, I can torch_tensorrt compile any models\r\nOn env#2, I run into issues if I try to compile any slightly complex models (i.e. resnet34) where it says:\r\nWARNING: [Torch-TensorRT] - Dilation not used in Max pooling converter\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuBLAS/cuBLAS LT 11.6.3 but loaded cuBLAS/cuBLAS LT 11.5.1\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - 1: [wrapper.cpp::plainGemm::197] Error Code 1: Cublas (CUBLAS_STATUS_NOT_SUPPORTED)\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - 2: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )\r\n\r\nIf I try to \"torch.jit.load\" any model made in env #1 (even the simplest ones like a model with 1 conv2d layer) on env #2, I get the following error msg:\r\n~/.local/lib/python3.6/site-packages/torch/jit/_serialization.py in load(f, map_location, _extra_files)\r\n 159 cu = torch._C.CompilationUnit()\r\n 160 if isinstance(f, str) or isinstance(f, pathlib.Path):\r\n--> 161 cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)\r\n 162 else:\r\n 163 cpp_module = torch._C.import_ir_module_from_buffer(\r\n\r\nRuntimeError: [Error thrown at core/runtime/TRTEngine.cpp:44] Expected most_compatible_device to be true but got false\r\nNo compatible device was found for instantiating TensorRT engine\r\n\r\n\r\n## Environment\r\nExplained above\r\n", "url": "https://github.com/pytorch/TensorRT/issues/843", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2022-02-01T19:33:31Z", "updated_at": "2022-05-20T00:02:07Z", "user": "hanbrianlee" }, { "repo": "pytorch/functorch", "number": 433, "title": "Determine how to mitigate the challenge of pytorch/pytorch changes breaking functorch", "body": "We get broken by pytorch/pytorch on an almost daily basis. Some of these changes are easy to resolve, some are not easy to resolve. This has cost me 10s of hours so far and going forward will cost even more. We should come up with some way to mitigate this.\r\n\r\nThere are at least two axes for the proposals. On one axis is development velocity for functorch, on the other axis is how much time it takes for us to get notified of a change in pytorch/pytorch that is problematic. These generally get traded off in the proposals.\r\n\r\nSome proposals that we've heard so far:\r\n- Follow what pytorch/xla did. That is, have a test in pytorch/pytorch that builds functorch main and signals if there's a problem. The tradeoff here is that functorch main must now be green most of the time (e.g. no more committing directly to main) and we need our CI to run off of pytorch main, not the pytorch nightlies.\r\n- The pytorch/xla idea, except, the test always reports green but emails someone if there is a problem.\r\n- Just merge functorch into pytorch/pytorch. 
This gives us the fastest signal to a problematic change (in fact, the problematic change won't get merged if they break a functorch test), but it trades off our development velocity completely.\r\n- put functorch as a submodule on pytorch/pytorch, package the two libraries together", "url": "https://github.com/pytorch/functorch/issues/433", "state": "closed", "labels": [ "actionable", "needs design" ], "created_at": "2022-02-01T15:54:27Z", "updated_at": "2022-10-17T19:55:44Z", "user": "zou3519" }, { "repo": "pytorch/pytorch", "number": 71991, "title": "How to make an LSTM Bidirectional?", "body": "### \ud83d\udc1b Describe the bug\n\nGoal: make LSTM `self.classifier()` learn from bidirectional layers. \r\n\r\n`# !` = code lines of interest\r\n\r\n**Question:**\r\nWhat changes to `LSTMClassifier` do I need to make, in order to have this LSTM work bidirectionally?\r\n\r\n---\r\n\r\nI *think* the problem is in `forward()`. It learns from the **last state** of LSTM neural network, by slicing:\r\n```python\r\ntag_space = self.classifier(lstm_out[:,-1,:])\r\n``` \r\n\r\nHowever, bidirectional changes the architecture and thus the output shape. \r\n\r\nDo I need to sum up or concatenate the values of the 2 layers/ directions?\r\n\r\n---\r\n\r\nInstalls:\r\n```\r\n!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl\r\n!pip -q install pytorch-lightning==1.2.7 torchmetrics awscli mlflow boto3 pycm\r\n!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl\r\n!pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchtext==0.10.0 -f https://download.pytorch.org/whl/cu111/torch_stable.html\r\n```\r\n\r\nWorking Code:\r\n```python\r\nfrom argparse import ArgumentParser\r\n\r\nimport torchmetrics\r\nimport pytorch_lightning as pl\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\n\r\nclass LSTMClassifier(nn.Module):\r\n\r\n def __init__(self, \r\n num_classes, \r\n batch_size=10,\r\n embedding_dim=100, \r\n hidden_dim=50, \r\n vocab_size=128):\r\n\r\n super(LSTMClassifier, self).__init__()\r\n\r\n initrange = 0.1\r\n\r\n self.num_labels = num_classes\r\n n = len(self.num_labels)\r\n self.hidden_dim = hidden_dim\r\n self.batch_size = batch_size\r\n\r\n self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)\r\n self.word_embeddings.weight.data.uniform_(-initrange, initrange)\r\n self.lstm = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_dim, batch_first=True, bidirectional=True) # !\r\n \r\n print(\"# !\")\r\n \r\n bi_grus = torch.nn.GRU(input_size=embedding_dim, hidden_size=hidden_dim, batch_first=True, bidirectional=True)\r\n reverse_gru = torch.nn.GRU(input_size=embedding_dim, hidden_size=hidden_dim, batch_first=True, bidirectional=False)\r\n \r\n self.lstm.weight_ih_l0_reverse = bi_grus.weight_ih_l0_reverse\r\n self.lstm.weight_hh_l0_reverse = bi_grus.weight_hh_l0_reverse\r\n self.lstm.bias_ih_l0_reverse = bi_grus.bias_ih_l0_reverse\r\n self.lstm.bias_hh_l0_reverse = bi_grus.bias_hh_l0_reverse\r\n \r\n bi_output, bi_hidden = bi_grus()\r\n reverse_output, reverse_hidden = reverse_gru()\r\n \r\n print(\"# !\")\r\n\r\n # self.classifier = nn.Linear(hidden_dim, self.num_labels[0])\r\n self.classifier = nn.Linear(2 * hidden_dim, self.num_labels[0]) # !\r\n\r\n\r\n def repackage_hidden(h):\r\n \"\"\"Wraps hidden states in new Tensors, to detach them from their history.\"\"\"\r\n\r\n if isinstance(h, 
torch.Tensor):\r\n return h.detach()\r\n else:\r\n return tuple(repackage_hidden(v) for v in h)\r\n\r\n\r\n def forward(self, sentence, labels=None):\r\n embeds = self.word_embeddings(sentence)\r\n lstm_out, _ = self.lstm(embeds) # lstm_out - 2 tensors, _ - hidden layer\r\n print(lstm_out[:,-1,:])\r\n tag_space = self.classifier(lstm_out[:,-1,:] + lstm_out[:,-1,:]) # ! # lstm_out[:,-1,:] - 1 tensor\r\n logits = F.log_softmax(tag_space, dim=1)\r\n loss = None\r\n if labels:\r\n loss = F.cross_entropy(logits.view(-1, self.num_labels[0]), labels[0].view(-1))\r\n return loss, logits\r\n\r\n\r\nclass LSTMTaggerModel(pl.LightningModule):\r\n def __init__(\r\n self,\r\n num_classes,\r\n class_map,\r\n from_checkpoint=False,\r\n model_name='last.ckpt',\r\n learning_rate=3e-6,\r\n **kwargs,\r\n ):\r\n\r\n super().__init__()\r\n self.save_hyperparameters()\r\n self.learning_rate = learning_rate\r\n self.model = LSTMClassifier(num_classes=num_classes)\r\n self.model.load_state_dict(torch.load(model_name), strict=False) # !\r\n self.class_map = class_map\r\n self.num_classes = num_classes\r\n self.valid_acc = torchmetrics.Accuracy()\r\n self.valid_f1 = torchmetrics.F1()\r\n\r\n\r\n def forward(self, *input, **kwargs):\r\n return self.model(*input, **kwargs)\r\n\r\n def training_step(self, batch, batch_idx):\r\n x, y_true = batch\r\n loss, _ = self(x, labels=y_true)\r\n self.log('train_loss', loss)\r\n return loss\r\n\r\n def validation_step(self, batch, batch_idx):\r\n x, y_true = batch\r\n _, y_pred = self(x, labels=y_true)\r\n preds = torch.argmax(y_pred, axis=1)\r\n self.valid_acc(preds, y_true[0])\r\n self.log('val_acc', self.valid_acc, prog_bar=True)\r\n self.valid_f1(preds, y_true[0])\r\n self.log('f1', self.valid_f1, prog_bar=True) \r\n\r\n def configure_optimizers(self):\r\n 'Pre", "url": "https://github.com/pytorch/pytorch/issues/71991", "state": "closed", "labels": [], "created_at": "2022-01-28T16:03:23Z", "updated_at": "2022-01-31T09:59:27Z", "user": "danielbellhv" }, { "repo": "pytorch/TensorRT", "number": 830, "title": "\u2753 [Question] Why BERT Base is slower w/ Torch-TensorRT than native PyTorch? ", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nI'm trying to optimize hugging face's BERT Base uncased model using Torch-TensorRT, the code works after disabling full compilation (`require_full_compilation=False`), and the avg latency is ~10ms on T4. However, it it slower than native PyTorch implementation (~6ms on T4). In contrast, running the same model with `trtexec` only takes ~4ms. So, for BERT Base, it's 2.5x slower than TensorRT. I wonder if this is expected?\r\n\r\nHere's the full code:\r\n```\r\nfrom transformers import BertModel, BertTokenizer, BertConfig\r\nimport torch\r\nimport time\r\n\r\nenc = BertTokenizer.from_pretrained(\"./bert-base-uncased\")\r\n\r\n# Tokenizing input text\r\ntext = \"[CLS] Who was Jim Henson ? 
[SEP] Jim Henson was a puppeteer [SEP]\"\r\ntokenized_text = enc.tokenize(text)\r\n\r\n# Masking one of the input tokens\r\nmasked_index = 8\r\ntokenized_text[masked_index] = '[MASK]'\r\nindexed_tokens = enc.convert_tokens_to_ids(tokenized_text)\r\nsegments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]\r\n\r\n# Creating a dummy input\r\ntokens_tensor = torch.tensor([indexed_tokens]).to(torch.int32).cuda()\r\nsegments_tensors = torch.tensor([segments_ids]).to(torch.int32).cuda()\r\n\r\ndummy_input = [tokens_tensor, segments_tensors]\r\ndummy_input_shapes = [list(v.size()) for v in dummy_input]\r\n\r\n# Initializing the model with the torchscript flag\r\n# Flag set to True even though it is not necessary as this model does not have an LM Head.\r\nconfig = BertConfig(vocab_size_or_config_json_file=32000, hidden_size=768,\r\n num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, torchscript=True)\r\n\r\n# Instantiating the model\r\nmodel = BertModel(config)\r\n\r\n# The model needs to be in evaluation mode\r\nmodel.eval()\r\n\r\n# If you are instantiating the model with `from_pretrained` you can also easily set the TorchScript flag\r\nmodel = BertModel.from_pretrained(\"./bert-base-uncased\", torchscript=True)\r\n\r\nmodel = model.eval().cuda()\r\n\r\n# Creating the trace\r\ntraced_model = torch.jit.trace(model, dummy_input)\r\n\r\nimport torch_tensorrt\r\ncompile_settings = {\r\n \"require_full_compilation\": False,\r\n \"truncate_long_and_double\": True,\r\n \"torch_executed_ops\": [\"aten::Int\"]\r\n}\r\noptimized_model = torch_tensorrt.compile(traced_model, inputs=dummy_input, **compile_settings)\r\n\r\ndef benchmark(model, input):\r\n # Warming up\r\n for _ in range(10):\r\n model(*input)\r\n\r\n inference_count = 1000\r\n # inference test\r\n start = time.time()\r\n for _ in range(inference_count):\r\n model(*input)\r\n end = time.time()\r\n print(f\"use {(end-start)/inference_count*1000} ms each inference\")\r\n print(f\"{inference_count/(end-start)} step/s\")\r\n\r\nprint(\"before compile\")\r\nbenchmark(traced_model, dummy_input)\r\n\r\nprint(\"after compile\")\r\nbenchmark(optimized_model, dummy_input)\r\n```\r\n\r\nSo, my question is why it is slower than native PyTorch, and how do I fine-tune it?\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\nI've checked out the log from Torch-TensorRT, looks like the model is partitioned into 3 parts, separated by `at::Int` op, and looks like Int op is [hard to implement](https://github.com/NVIDIA/Torch-TensorRT/issues/513).\r\n\r\nNext, I profiled the inference process with Nsight System, here's the screenshot:\r\n![CleanShot 2022-01-26 at 18 44 38](https://user-images.githubusercontent.com/552990/151149720-d707afcb-0fb0-467d-a468-b1b35eb9330a.png)\r\n\r\nIt is expected to see 3 divided segments, however, there are 2 things that caught my attention:\r\n1. Why segment 0 is slower than pure TensorRT? Is it due to over complicated conversion?\r\n2. Why the `cudaMemcpyAsync` took so long? 
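To rule out host-side staging of the outputs, one extra check I plan to run is printing where the returned tensors actually live (a small sketch reusing `optimized_model` and `dummy_input` from the script above):\r\n```\r\nout = optimized_model(*dummy_input)\r\nif isinstance(out, (tuple, list)):\r\n    print([t.device for t in out])  # expect every entry to be cuda:0\r\nelse:\r\n    print(out.device)\r\n```\r\n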
Shouldn't it only return the `last_hidden_state` tensor?\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.10\r\n - CPU Architecture:\r\n - OS (e.g., Linux): Ubuntu 18.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): python setup.py develop\r\n - Are you using local sources or building from archives: local sources\r\n - Python version: 3.6.9\r\n - CUDA version: 10.2\r\n - GPU models and configuration: T4\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/830", "state": "closed", "labels": [ "question", "No Activity", "performance" ], "created_at": "2022-01-26T10:55:56Z", "updated_at": "2023-11-09T09:13:15Z", "user": "void-main" }, { "repo": "pytorch/torchx", "number": 375, "title": "[torchx/config] Generate docs on the available configuration options in .torchxconfig", "body": "## \ud83d\udcda Documentation\r\n\r\nNote: not a request for correction of documentation!\r\n\r\n## Link\r\nhttps://pytorch.org/torchx/latest/experimental/runner.config.html\r\n\r\n## What does it currently say?\r\nNothing wrong with the current docs, but would be nice to have a list of the options that are \"set-able\" via .torchxconfig\r\n\r\n## What should it say?\r\nAdd a section that lists out the possible options and section names. Note that some options (e.g. the types of schedulers available and their respective runopts) are different between Meta-internal and OSS. Having a contextual `TorchXConfig` DAO-like object with placeholders and generating a docs page by dumping that object would make it possible to capture these differences.\r\n\r\n## Why?\r\nCurrently it is not clear what options can/cannot be set via .torchxconfig, we need a glossary of all the available options along with a help string on what they do and default values (if any)\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/375", "state": "open", "labels": [], "created_at": "2022-01-25T23:49:07Z", "updated_at": "2022-04-08T18:23:57Z", "comments": 2, "user": "kiukchung" }, { "repo": "pytorch/TensorRT", "number": 824, "title": "\u2753 [Question] How to use FP16 precision in C++", "body": "## \u2753 Question\r\n\r\nI am trying run inference on an FP16-Engine in C++. `engine->getBindingDataType(i)` correctly returns '1' (kHALF) for all Bindings. However, when I am using the following lines to get the output, the compiler is obviously interpreting it as normal floats (=FP32)\r\n```\r\n\r\nstd::vector<float> cpu_output(getSizeByDim(output_dims[0]) * 1);\r\ncudaMemcpy(cpu_output.data(), buffers[outputIndex], cpu_output.size() * sizeof(float), cudaMemcpyDeviceToHost);\r\n```\r\n\r\nHow can I make sure that the contents are correctly converted to float, or what datatype can I use to interpret them as halfs? Right now, the `cpu_output` vector somehow casts the halfs so that the output floats are way too large (estimated ~100 times larger than they should be). 
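What I am considering instead is copying the raw binding into a `__half` buffer and converting on the host. A sketch reusing the names from the snippet above (it assumes the output binding really is kHALF and that `cuda_fp16.h` is available):\r\n```\r\n#include <cuda_fp16.h>\r\n\r\nstd::vector<__half> half_output(getSizeByDim(output_dims[0]) * 1);\r\ncudaMemcpy(half_output.data(), buffers[outputIndex], half_output.size() * sizeof(__half), cudaMemcpyDeviceToHost);\r\n\r\nstd::vector<float> cpu_output(half_output.size());\r\nfor (size_t i = 0; i < half_output.size(); ++i)\r\n    cpu_output[i] = __half2float(half_output[i]); // widen FP16 to FP32 on the host\r\n```\r\nIs something along those lines the intended approach? Or, alternatively: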
Can I just do something like \"`cpu_output[i] = cpu_output[i]<<8`\"?\r\n", "url": "https://github.com/pytorch/TensorRT/issues/824", "state": "closed", "labels": [ "question" ], "created_at": "2022-01-25T09:52:19Z", "updated_at": "2022-01-25T10:01:40Z", "user": "DavidBaldsiefen" }, { "repo": "pytorch/text", "number": 1537, "title": "[META] how do we want to handle stale issues/PRs?", "body": "## \u2753 Questions and Help\r\n\r\nThere are many issues and PRs in the repo either related to long-gone legacy APIs or have been overcome by events. How do we want to track/manage these potentially stale issues?\r\n\r\nOptions:\r\n- A bot\r\n - I don't like this option because it can permit false positives which makes it hard for users to find real issues\r\n- Manual inspection\r\n - This can take a bit of time, but it's more precise\r\n- Others?\r\n", "url": "https://github.com/pytorch/text/issues/1537", "state": "closed", "labels": [], "created_at": "2022-01-24T17:24:28Z", "updated_at": "2022-03-07T22:52:11Z", "user": "erip" }, { "repo": "pytorch/TensorRT", "number": 823, "title": "\u2753 [Question] How do you override or remove evaluators ", "body": "## \u2753 Question\r\n\r\nI am trying to use YOLOv5 with Torch-TensorRT. When I load the model, I get the following error message (among others):\r\n```\r\n\r\nERROR: [Torch-TensorRT TorchScript Conversion Context] - 4: [layers.cpp::validate::2385] Error Code 4: Internal Error (%3264 : Tensor = aten::mul(%3263, %3257) # /home/.../yolov5/models/yolo.py:66:0: operation PROD has incompatible input types Float and Int32)\r\n```\r\n\r\nThus I wanted to try to overload the `aten::mul` operator to support `float*int` and `int*float` operations, which fails (see below)\r\n\r\n**(How) Is it possible to override or remove existing evaluators?**\r\n\r\n## What you have already tried\r\n\r\nI am using the following code:\r\n```\r\n\r\nstatic auto atenmul_evaluator =\r\n torch_tensorrt::core::conversion::evaluators::RegisterNodeEvaluators().evaluator(\r\n {c10::Symbol::fromQualString(\"aten::mul\"),\r\n [](const torch::jit::Node *n, torch_tensorrt::core::conversion::evaluators::kwargs &args)\r\n -> c10::optional<torch::jit::IValue> {\r\n ROS_INFO(\"Custom Evaluator is being accessed!\");\r\n\r\n if (args.at(n->input(0)).IValue()->isInt() && args.at(n->input(1)).IValue()->isInt()) {\r\n auto a = args.at(n->input(0)).unwrapToInt();\r\n auto b = args.at(n->input(1)).unwrapToInt();\r\n return a * b;\r\n } else if (args.at(n->input(0)).IValue()->isDouble() &&\r\n args.at(n->input(1)).IValue()->isDouble()) {\r\n auto a = args.at(n->input(0)).unwrapToDouble();\r\n auto b = args.at(n->input(1)).unwrapToDouble();\r\n return a * b;\r\n } else if (args.at(n->input(0)).IValue()->isInt() &&\r\n args.at(n->input(1)).IValue()->isDouble()) {\r\n auto a = args.at(n->input(0)).unwrapToInt();\r\n auto b = args.at(n->input(1)).unwrapToDouble();\r\n return a * b;\r\n } else if (args.at(n->input(0)).IValue()->isDouble() &&\r\n args.at(n->input(1)).IValue()->isInt()) {\r\n auto a = args.at(n->input(0)).unwrapToDouble();\r\n auto b = args.at(n->input(1)).unwrapToInt();\r\n return a * b;\r\n } else {\r\n TORCHTRT_THROW_ERROR(\"Unimplemented data type for aten::mul evaluator: \"\r\n << args.at(n->input(0)).IValue()->type()->str());\r\n return {};\r\n }\r\n },\r\n torch_tensorrt::core::conversion::evaluators::EvalOptions().validSchemas(\r\n {\"aten::mul.int(int a, int b) -> (float)\",\r\n \"aten::mul.float(float a, float b) -> (float)\",\r\n \"aten::mul.int_float(int a, float b) -> 
(float)\",\r\n \"aten::mul.float_int(float a, int b) -> (float)\"})});\r\n```\r\n\r\nBut then I get the errormessage `Attempting to override already registered evaluator aten::mul, merge implementations instead`. Thus I want to try and find a way to override or remove the evaluator without recompiling Torch-TensorRT.\r\n\r\nWhen I implemented the above only for `int_float` and `float_int` seperately, the output returned to the orignal errormessage from above, indicating that the new evaluator wasn't used.\r\n", "url": "https://github.com/pytorch/TensorRT/issues/823", "state": "closed", "labels": [ "question", "component: converters", "No Activity" ], "created_at": "2022-01-24T08:43:17Z", "updated_at": "2022-11-21T16:12:05Z", "user": "DavidBaldsiefen" }, { "repo": "pytorch/TensorRT", "number": 820, "title": "\u2753 [Question] Have anyone encounter this: RuntimeError: expected type comment but found 'eof' here", "body": "## \u2753 Question\r\n\r\nwhen I run compile command like this:\r\n```python\r\ntrt_ts_module = torch_tensorrt.compile(model,\r\n inputs=[torch_tensorrt.Input((1, 3, 128, 128), dtype=torch.float32),\r\n torch_tensorrt.Input((1, 3, 320, 320), dtype=torch.float32)],\r\n enabled_precisions = {torch.float, torch.half})\r\n```\r\nI encounter this error:\r\n```\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/jit/frontend.py\", line 310, in build_def\r\n type_comment_decl = torch._C.parse_type_comment(type_line)\r\nRuntimeError: expected type comment but found 'eof' here:\r\n# # type: (List[Tensor], Tensor) -> Tensor\r\n```\r\n\r\n## What you have already tried\r\n\r\nNo other attempts.\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.11.0a0+b6df043\r\n - CPU Architecture: amd64\r\n - OS (e.g., Linux): ubuntu18.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.8\r\n - CUDA version: 11.5\r\n - GPU models and configuration: GTX 1660ti\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\nI just use docker recommended by tutorial at [https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch)\r\n", "url": "https://github.com/pytorch/TensorRT/issues/820", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2022-01-20T13:57:51Z", "updated_at": "2022-05-05T00:02:27Z", "user": "laisimiao" }, { "repo": "pytorch/data", "number": 175, "title": "Refactor test suite to be more readable?", "body": "While working on #174, I also worked on the test suite. In there we have the ginormous tests that are hard to parse, because they do so many things at the same time:\r\n\r\nhttps://github.com/pytorch/data/blob/c06066ae360fc6054fb826ae041b1cb0c09b2f3b/test/test_datapipe.py#L382-L426\r\n\r\nI was wondering if there is a reason for that. Can't we split this into multiple smaller ones? 
Utilizing `pytest`, placing the following class in the test module is equivalent to the test above:\r\n\r\n```python\r\nclass TestLineReader:\r\n @pytest.fixture\r\n def text1(self):\r\n return \"Line1\\nLine2\"\r\n\r\n @pytest.fixture\r\n def text2(self):\r\n return \"Line2,1\\nLine2,2\\nLine2,3\"\r\n\r\n def test_functional_read_lines_correctly(self, text1, text2):\r\n source_dp = IterableWrapper([(\"file1\", io.StringIO(text1)), (\"file2\", io.StringIO(text2))])\r\n line_reader_dp = source_dp.readlines()\r\n expected_result = [(\"file1\", line) for line in text1.split(\"\\n\")] + [\r\n (\"file2\", line) for line in text2.split(\"\\n\")\r\n ]\r\n assert expected_result == list(line_reader_dp)\r\n\r\n def test_functional_strip_new_lines_for_bytes(self, text1, text2):\r\n source_dp = IterableWrapper(\r\n [(\"file1\", io.BytesIO(text1.encode(\"utf-8\"))), (\"file2\", io.BytesIO(text2.encode(\"utf-8\")))]\r\n )\r\n line_reader_dp = source_dp.readlines()\r\n expected_result_bytes = [(\"file1\", line.encode(\"utf-8\")) for line in text1.split(\"\\n\")] + [\r\n (\"file2\", line.encode(\"utf-8\")) for line in text2.split(\"\\n\")\r\n ]\r\n assert expected_result_bytes == list(line_reader_dp)\r\n\r\n def test_functional_do_not_strip_newlines(self, text1, text2):\r\n source_dp = IterableWrapper([(\"file1\", io.StringIO(text1)), (\"file2\", io.StringIO(text2))])\r\n line_reader_dp = source_dp.readlines(strip_newline=False)\r\n expected_result = [\r\n (\"file1\", \"Line1\\n\"),\r\n (\"file1\", \"Line2\"),\r\n (\"file2\", \"Line2,1\\n\"),\r\n (\"file2\", \"Line2,2\\n\"),\r\n (\"file2\", \"Line2,3\"),\r\n ]\r\n assert expected_result == list(line_reader_dp)\r\n\r\n def test_reset(self, text1, text2):\r\n source_dp = IterableWrapper([(\"file1\", io.StringIO(text1)), (\"file2\", io.StringIO(text2))])\r\n line_reader_dp = LineReader(source_dp, strip_newline=False)\r\n expected_result = [\r\n (\"file1\", \"Line1\\n\"),\r\n (\"file1\", \"Line2\"),\r\n (\"file2\", \"Line2,1\\n\"),\r\n (\"file2\", \"Line2,2\\n\"),\r\n (\"file2\", \"Line2,3\"),\r\n ]\r\n\r\n n_elements_before_reset = 2\r\n res_before_reset, res_after_reset = reset_after_n_next_calls(line_reader_dp, n_elements_before_reset)\r\n assert expected_result[:n_elements_before_reset] == res_before_reset\r\n assert expected_result == res_after_reset\r\n\r\n def test_len(self, text1, text2):\r\n source_dp = IterableWrapper([(\"file1\", io.StringIO(text1)), (\"file2\", io.StringIO(text2))])\r\n line_reader_dp = LineReader(source_dp, strip_newline=False)\r\n\r\n with pytest.raises(TypeError, match=\"has no len\"):\r\n len(line_reader_dp)\r\n```\r\n\r\nThis is a lot more readable, since we now actually have 5 separate test cases that can individually fail. Plus, while writing this I also found that `test_reset` and `test_len` were somewhat dependent on `test_functional_do_not_strip_newlines` since they don't neither define `line_reader_dp` nor `expected_result` themselves.", "url": "https://github.com/meta-pytorch/data/issues/175", "state": "open", "labels": [ "Better Engineering" ], "created_at": "2022-01-20T09:52:17Z", "updated_at": "2023-04-11T16:59:28Z", "comments": 6, "user": "pmeier" }, { "repo": "pytorch/functorch", "number": 400, "title": "how to get related commits of pytorch/pytorch and pytorch/functorch ?", "body": "For some reason, i need to install newest **pytorch/functorch** from sources. but i don't know the related **pytorch/pytorch** newest source. if the pytorch/pytorch and pytorch/functorch is not compatible, functorch will not work. 
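The only workaround I can think of is to pick a common date and check out the last commit of each repo before that date. A rough sketch (the date is a placeholder, and the branch names are my assumption: pytorch/pytorch uses `master`, pytorch/functorch uses `main`):\r\n```\r\nDATE=\"2022-01-19\"\r\ngit -C pytorch checkout \"$(git -C pytorch rev-list -1 --before=\"$DATE\" master)\"\r\ngit -C functorch checkout \"$(git -C functorch rev-list -1 --before=\"$DATE\" main)\"\r\n```\r\nbut matching by date is only a guess, so: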
how i get a newest relative pair of pytorch/pytorch commit and pytorch/functorch commit ?\r\n\r\ndoes pytorch/functorch only match the released or nightly version of pytorch/pytorch?", "url": "https://github.com/pytorch/functorch/issues/400", "state": "open", "labels": [], "created_at": "2022-01-20T03:25:26Z", "updated_at": "2022-01-20T15:43:40Z", "user": "GipsonLeo" }, { "repo": "pytorch/TensorRT", "number": 819, "title": "Build torch-trt failed in Ubuntu18.04", "body": "I try to build the project from source according to the guide in. https://nvidia.github.io/Torch-TensorRT/tutorials/installation.html with bazel but failed.\r\n\r\nMy environment:\r\n```\r\nos: Ubuntu18.04\r\ngcc: 7.5.0\r\ng++: 7.5.0\r\ncuda: 11.3\r\ncudnn: 8.2\r\ntensorRT: 8.2\r\ntorch-trt branch: ngc-21.12\r\nbazel: 4.2.1 (installed in conde env through: `conda install -c conda-forge bazel=4.2.1`)\r\n```\r\n\r\nBuild command:\r\n```\r\n$ export TEST_TMPDIR=/tmp/cache_bazel\r\n$ export BAZEL_USER_ROOT=/tmp/trt/ltp\r\n$ export LD_LIBRARY_PATH=/usr/local/cuda-11.3/lib64:$LD_LIBRARY_PATH\r\n\r\n$ bazel --output_user_root=${BAZEL_USER_ROOT} \\\r\n build //:libtorchtrt -c opt \\\r\n --distdir third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]\r\n```\r\n\r\nError: `cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'k8'`\r\nDetail log: \r\n```\r\n$TEST_TMPDIR defined: output root default is '/tmp/cache_bazel' and max_idle_secs default is '15'.\r\nStarting local Bazel server and connecting to it...\r\nLoading:\r\nLoading: 0 packages loaded\r\nAnalyzing: target //:libtorchtrt (1 packages loaded, 0 targets configured)\r\nINFO: non-existent distdir /home/tianping/Torch-TensorRT/third_party/dist_dir/[x86_64-linux-gnu\r\nINFO: non-existent distdir /home/tianping/Torch-TensorRT/third_party/dist_dir/[x86_64-linux-gnu\r\nERROR: /tmp/trt/ltp/a7833d9e16b047b679ab8ac389d55fc8/external/local_config_cc/BUILD:47:19: in cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'k8'\r\nINFO: Repository tensorrt instantiated at:\r\n /home/tianping/Torch-TensorRT/WORKSPACE:89:13: in <toplevel>\r\nRepository rule http_archive defined at:\r\n /tmp/trt/ltp/a7833d9e16b047b679ab8ac389d55fc8/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in <toplevel>\r\nAnalyzing: target //:libtorchtrt (39 packages loaded, 155 targets configured)\r\nINFO: Repository libtorch instantiated at:\r\n /home/tianping/Torch-TensorRT/WORKSPACE:56:13: in <toplevel>\r\nRepository rule http_archive defined at:\r\n /tmp/trt/ltp/a7833d9e16b047b679ab8ac389d55fc8/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in <toplevel>\r\nERROR: Analysis of target '//:libtorchtrt' failed; build aborted: Analysis of target '@local_config_cc//:toolchain' failed\r\nINFO: Elapsed time: 3.881s\r\nINFO: 0 processes.\r\nFAILED: Build did NOT complete successfully (39 packages loaded, 155 targets configured)\r\nFAILED: Build did NOT complete successfully (39 packages loaded, 155 targets configured)\r\n```\r\n\r\ncould you help solve this problem, thanks a lot.", "url": "https://github.com/pytorch/TensorRT/issues/819", "state": "closed", "labels": [ "question" ], "created_at": "2022-01-19T11:24:27Z", "updated_at": "2022-01-20T01:42:26Z", "user": "Mookel" }, { "repo": "pytorch/xla", "number": 3305, "title": "how to get relative commits of pytorch/pytorch and pytorch/xla ?", "body": "## \u2753 Questions and Help\r\nFor some reason, i need to install 
newest torch XLA from sources. but i don't know the related pytorch/pytorch newest source. if the pytorch/pytorch and pytorch/xla is not compatible, xla will not work. how i get a newest relative pair of pytorch/pytorch commit and pytorch/xla commit ?\r\n\r\nFor example, an old pair is as flow, but too old:\r\npytorch/pytorch - HEAD git hash is a95abc46\r\npytorch/xla - HEAD git hash is 9c2f91e\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/3305", "state": "closed", "labels": [], "created_at": "2022-01-19T08:38:55Z", "updated_at": "2022-02-19T00:30:08Z", "user": "GipsonLeo" }, { "repo": "pytorch/pytorch", "number": 71272, "title": "UserWarning: Seems like `optimizer.step()` has been overridden after learning rate scheduler initialization. Please, make sure to call `optimizer.step()` before `lr_scheduler.step()`. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn(\"Seems like `optimizer.step()` has been overridden after learning rate scheduler", "body": "### \ud83d\udc1b Describe the bug\n\nI am following the same way that is provided [here ](https://pytorch.org/docs/1.10.1/generated/torch.optim.lr_scheduler.StepLR.html#torch.optim.lr_scheduler.StepLR) for using `StepLR`:\r\n```python \r\n\r\nscheduler = StepLR(optimizer, step_size=30, gamma=0.1)\r\nfor epoch in range(100):\r\n train(...)\r\n validate(...)\r\n scheduler.step()\r\n```\r\nbut I keep getting the following warning which is very annoying\r\n\r\n``` python\r\nUserWarning: Seems like `optimizer.step()` has been overridden after learning rate scheduler initialization. Please, make sure to call `optimizer.step()` before `lr_scheduler.step()`. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\r\n warnings.warn(\"Seems like `optimizer.step()` has been overridden after learning rate scheduler\r\n```\r\n\r\n\r\nalso the output of `collect_env` is:\r\n\r\n```\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.21.4\r\n[pip3] torch==1.10.0\r\n[pip3] torchaudio==0.10.0\r\n[pip3] torcheck==1.0.1\r\n[pip3] torchinfo==1.5.4\r\n[pip3] torchvision==0.11.1\r\n[conda] blas 1.0 mkl defaults\r\n[conda] cudatoolkit 11.3.1 h2bc3f7f_2 defaults\r\n[conda] mkl 2021.4.0 h06a4308_640 defaults\r\n[conda] mkl-service 2.4.0 py39h7e14d7c_0 conda-forge\r\n[conda] mkl_fft 1.3.1 py39h0c7bc48_1 conda-forge\r\n[conda] mkl_random 1.2.2 py39hde0f152_0 conda-forge\r\n[conda] numpy 1.21.2 py39h20f2e39_0 defaults\r\n[conda] numpy-base 1.21.2 py39h79a1101_0 defaults\r\n[conda] pytorch 1.10.0 py3.9_cuda11.3_cudnn8.2.0_0 pytorch\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] torchaudio 0.10.0 py39_cu113 pytorch\r\n[conda] torchinfo 1.5.4 pyhd8ed1ab_0 conda-forge\r\n[conda] torchvision 0.11.1 py39_cu113 pytorch\r\n```\n\n### Versions\n\nalso the output of `collect_env` is:\r\n\r\n```\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.21.4\r\n[pip3] torch==1.10.0\r\n[pip3] torchaudio==0.10.0\r\n[pip3] torcheck==1.0.1\r\n[pip3] torchinfo==1.5.4\r\n[pip3] torchvision==0.11.1\r\n[conda] blas 1.0 mkl defaults\r\n[conda] cudatoolkit 11.3.1 h2bc3f7f_2 defaults\r\n[conda] mkl 2021.4.0 h06a4308_640 defaults\r\n[conda] mkl-service 2.4.0 py39h7e14d7c_0 conda-forge\r\n[conda] mkl_fft 1.3.1 py39h0c7bc48_1 conda-forge\r\n[conda] mkl_random 1.2.2 py39hde0f152_0 conda-forge\r\n[conda] numpy 1.21.2 py39h20f2e39_0 defaults\r\n[conda] numpy-base 1.21.2 py39h79a1101_0 defaults\r\n[conda] pytorch 1.10.0 py3.9_cuda11.3_cudnn8.2.0_0 pytorch\r\n[conda] pytorch-mutex 1.0 cuda 
pytorch\r\n[conda] torchaudio 0.10.0 py39_cu113 pytorch\r\n[conda] torchinfo 1.5.4 pyhd8ed1ab_0 conda-forge\r\n[conda] torchvision 0.11.1 py39_cu113 pytorch\r\n```\n\ncc @vincentqb @jbschlosser @albanD", "url": "https://github.com/pytorch/pytorch/issues/71272", "state": "open", "labels": [ "needs reproduction", "module: optimizer", "triaged", "module: LrScheduler" ], "created_at": "2022-01-13T19:03:46Z", "updated_at": "2022-01-20T16:33:17Z", "user": "seyeeet" }, { "repo": "pytorch/xla", "number": 3283, "title": "How to benchmark the JIT / XLA?", "body": "## \u2753 Questions and Help\r\n\r\nDear JAX developers,\r\n\r\nI am trying to better understand the performance of JAX and its underlying just-in-time compilation architecture, but am puzzled how to get access to this information. For example, it would be helpful to distinguish how much time is spent tracing in Python, doing HLO optimizations within XLA, and time spent further downstream in LLVM->PTX and PTX->SASS compilation steps.\r\n\r\nSurely these are useful metrics to JAX developers as well, but I could not find any information on how to access them.\r\n\r\nSearching online brings me to a [PyTorch/XLA troubleshoooting guide](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md) with promising-looking interfaces like\r\n\r\n```\r\nimport torch_xla.debug.metrics as met\r\n\r\nprint(met.metrics_report())\r\n```\r\n\r\nThis page also mentions a `XLA_METRICS_FILE` and other environment variables that can be used to extract metrics information --- however, it seems that all of these are 100% PyTorch specific.\r\n\r\nAny suggestions would be greatly appreciated!\r\n\r\nThanks,\r\nWenzel", "url": "https://github.com/pytorch/xla/issues/3283", "state": "closed", "labels": [], "created_at": "2022-01-08T16:31:55Z", "updated_at": "2022-01-10T08:26:40Z", "user": "wjakob" }, { "repo": "pytorch/pytorch", "number": 71058, "title": "`torch.Tensor.where` cannot work when `y` is float", "body": "### \ud83d\udc1b Describe the bug\n\nBased on the [documentation](https://pytorch.org/docs/stable/generated/torch.Tensor.where.html?highlight=where#torch.Tensor.where) of `torch.Tensor.where`, `self.where(condition, y)` is equivalent to `torch.where(condition, self, y)`. However, `torch.where` will succeed when `y` is a float but `Tensor.where` will raise an error.\r\n\r\n```python\r\nimport torch\r\ncondition= torch.randint(0,2,[2, 2], dtype=torch.bool)\r\nx= torch.rand([2, 2], dtype=torch.float64)\r\ny = 0.0\r\nprint( torch.where(condition, x, y) )\r\n# tensor([[0.0000, 0.6290],\r\n# [0.0000, 0.0000]], dtype=torch.float64)\r\nprint( x.where(condition, y) )\r\n# TypeError: where(): argument 'other' (position 2) must be Tensor, not float\r\n```\n\n### Versions\n\npytorch: 1.10.1\n\ncc @nairbv @mruberry", "url": "https://github.com/pytorch/pytorch/issues/71058", "state": "open", "labels": [ "triaged", "module: type promotion" ], "created_at": "2022-01-08T15:18:11Z", "updated_at": "2022-01-11T15:36:54Z", "user": "TestSomething22" }, { "repo": "pytorch/pytorch", "number": 70923, "title": "type promotion is broken in `torch.where`", "body": "The [array API specification stipulates](https://data-apis.org/array-api/latest/API_specification/searching_functions.html?highlight=where#id7) that the return value of `torch.where` should undergo regular type promotion. 
Currently we do not support different dtypes for `x` and `y`:\r\n\r\n```python\r\nimport torch\r\n\r\ncondition = torch.tensor([False, True])\r\nx = torch.ones(2, dtype=torch.float32)\r\ny = torch.zeros(2, dtype=torch.float64)\r\n\r\ntorch.where(condition, x, y)\r\n```\r\n\r\n```\r\nRuntimeError: expected scalar type float but found double\r\n```\r\n\r\nNote that the error message is also misleading since we deal with 1d tensors here. \n\ncc @nairbv @mruberry @rgommers @pmeier @asmeurer @leofang @AnirudhDagar @asi1024 @emcastillo @kmaehashi", "url": "https://github.com/pytorch/pytorch/issues/70923", "state": "closed", "labels": [ "triaged", "module: type promotion", "module: python array api" ], "created_at": "2022-01-06T14:39:05Z", "updated_at": "2022-01-07T07:50:40Z", "user": "pmeier" }, { "repo": "pytorch/serve", "number": 1389, "title": "how to determine number of workers and batch size to obtain best performance?", "body": "I have one model and 3 gpus. I register my model with the command:\r\ncurl -X POST \"localhost:8444/models?url=yoyo_ai.mar&**batch_size=8**&max_batch_delay=8000&**initial_workers=8**\"\r\n\r\nIn this setup, gpu:0 is assigned 2 workers and others are assigned 3 workers. (2 + 3 + 3)\r\nI make requests with the following code where data_batch is a list holding 64 images (i assume each worker to handle 8 images):\r\n\r\nasync def do_post(session, url, image):\r\n async with session.post(url, data=image) as response:\r\n return await response.text()\r\n\r\nasync def make_predictions(data_stack, model_url):\r\n async with aiohttp.ClientSession() as session:\r\n post_tasks = []\r\n # prepare the coroutines that post\r\n for img in data_stack:\r\n post_tasks.append(do_post(session, model_url, img))\r\n # now execute them all at once\r\n responses = await asyncio.gather(*post_tasks)\r\n return responses\r\n\r\ndef get_predictions(data_batch, model_url):\r\n loop = asyncio.get_event_loop()\r\n predictions = None\r\n try:\r\n predictions = loop.run_until_complete(make_predictions(data_batch, model_url))\r\n finally:\r\n return predictions\r\n\r\nWhile making requests in an endless loop this is the memory usage i get:\r\n![Screenshot from 2022-01-06 10-51-02](https://user-images.githubusercontent.com/45604971/148348602-5ed1ff68-3f71-416d-a31a-e482c5f3bb55.png)\r\n\r\nIf i further increase the batch size to 12 because of high memory usage of gpu:0 torchserve throws exception. Same happens if i keep batch size as 8 but increase number of workers (e.g. 9). This time each gpu gets 3 workers and gpu:0 fails to handle it. On the other hand, if i set the number of workers to 6 and keep batch size as 8, total processing time not become worse compared to 8/8 setup. Meanwhile, either 8/6 or 8/8 setup don't use memory at full capacity. As a final note, gpu utilization keeps going back and forth between %0 and %100 during inference (not at 100% or %80/%90 all time).\r\n\r\nIs there a way to use gpus at full capacity? I wonder how should i register my model with the best batch size and number of workers combination to use gpus optimally. 
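One knob that can be turned without re-registering the model is the scale-workers call of the TorchServe management API. A minimal sketch, assuming the management port 8444 and the model name yoyo_ai from the report above (both taken from that report, not verified here):

```python
import requests

# Scale the existing registration to 6 workers (2 per GPU) without re-registering;
# synchronous=true makes the call return only once the workers are actually up.
requests.put(
    "http://localhost:8444/models/yoyo_ai",
    params={"min_worker": 6, "max_worker": 6, "synchronous": "true"},
)

# Inspect how the workers were spread across the GPUs and which batch size is in effect.
print(requests.get("http://localhost:8444/models/yoyo_ai").json())
```

Keeping the number of in-flight requests close to batch_size × workers, and lowering max_batch_delay, is usually what keeps the GPUs from idling between batches; the right combination still has to be found by measurement.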
Or do i have a problem at making requests?\r\n\r\nThank you very much for any help", "url": "https://github.com/pytorch/serve/issues/1389", "state": "closed", "labels": [ "help wanted" ], "created_at": "2022-01-06T08:14:27Z", "updated_at": "2022-02-03T22:27:03Z", "user": "orkunozturk" }, { "repo": "pytorch/tutorials", "number": 1781, "title": "tutorials/advanced_source/super_resolution_with_onnxruntime.py is maybe outdated?", "body": "I am working at the moment trough the [tutorial](https://github.com/pytorch/tutorials/blob/master/advanced_source/super_resolution_with_onnxruntime.py) and realized, that the entry notes are not up-to-date. \r\n\r\n- line 19 says, onnx is available/compatible between 3.5 to 3.7:\r\n - I tested installation in a venv with 3.9 without problems\r\n- line 21-22 says, says that the main/master branch is needed:\r\n - I tested the standard imports from line 26 to 32 and all imports worked without a problem.\r\n\r\nI am running a ubuntu 20.04 with torch stable 1.10.1 installed via pip for cuda 10.2.\r\n\r\nI did not finished the tutorial yet and will append further informations while continuing.\r\n\r\nEDIT:\r\nI can confirm: works without any issues", "url": "https://github.com/pytorch/tutorials/issues/1781", "state": "closed", "labels": [ "content", "docathon-h1-2023", "easy" ], "created_at": "2022-01-05T15:29:57Z", "updated_at": "2023-06-02T22:24:09Z", "comments": 2, "user": "MaKaNu" }, { "repo": "pytorch/serve", "number": 1385, "title": "How to decode response after post process?", "body": "Hello. I'm using custom bert model on my custom handler using Korean.\r\n\r\nWhen I request input text, handler encodes it and process like this.\r\n\r\n``` {'body': bytearray(b'[\\n\\t\\t\\t[\"\\xec\\x9a\\x94\\xec\\xa6\\x98 \\xeb\\xb6\\x80\\xeb\\xaa\\xa8\\xeb\\x8b\\x98\\xea\\xb3\\xbc \\xeb\\xa7\\x8e\\xec\\x9d\\xb4 \\xeb\\xb6\\x80\\xeb\\x94\\xaa\\xed\\x98\\x80.\",\\n\\t\\t\\t \"\\xec\\x96\\xb4\\xeb\\x96\\xa4 \\xec\\x9d\\xbc\\xeb\\xa1\\x9c ... ```\r\n\r\n But results in custom model came out with Korean.\r\n\r\nProblem is response.\r\n\r\nAlthought my custom model gives Korean Results, \r\nTorch serve's response is encoded again.\r\n\r\nHow can I fix this?\r\n\r\nThank you.", "url": "https://github.com/pytorch/serve/issues/1385", "state": "closed", "labels": [ "help wanted" ], "created_at": "2022-01-04T01:33:02Z", "updated_at": "2022-01-07T17:32:22Z", "user": "MinsuKim3095" }, { "repo": "pytorch/text", "number": 1476, "title": "How to get all tokens in a Vocab using text", "body": "## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\n\r\n**Motivation**\r\nHi,\r\n\r\nWhen I load a vocab or have built a vocab using torchtext.vocab, I can not print its all token in the Vocab\r\n\r\n\r\n", "url": "https://github.com/pytorch/text/issues/1476", "state": "closed", "labels": [], "created_at": "2022-01-01T06:53:51Z", "updated_at": "2022-01-01T14:07:08Z", "user": "yipliu" }, { "repo": "pytorch/xla", "number": 3271, "title": "How to specify compute capability when building from soruce to support GPU?", "body": "Hello, when I finish building from soruce to support GPU, and run the test script test_train_mp_imagenet.py, a warning is shown:\r\n\r\nTensorFlow was not built with CUDA kernel binaries compatible with compute capability 7.5. 
CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.\r\n\r\nI am wondering how to specify the compute capability when building xla ?\r\n\r\nThanks very much!", "url": "https://github.com/pytorch/xla/issues/3271", "state": "closed", "labels": [ "xla:gpu" ], "created_at": "2021-12-28T06:05:07Z", "updated_at": "2022-02-19T00:36:41Z", "user": "yxd886" }, { "repo": "pytorch/pytorch", "number": 70413, "title": "PyTorch crashes without an error message, when running this code snippet with torch.tensor subclassing & forward hooks (Not sure what the exact cause is, but the code snippet reliably causes it)", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nWhile working a project for PyTorch's [Captum](https://github.com/pytorch/captum) library, I came across a bug that I've been struggling to narrow down the cause of. I've done my best to simplify what is happening in the Captum code, and the snippet of code below should reliably reproduce the crash, though I apologies for not being able to narrow it down to smaller snippet of code.\r\n\r\nThe example code uses torch.tensor subclassing and forward hooks, and they appear to be important for causing the crash.\r\n\r\nI have no idea if there should be an error message when running the code, or if there should be no issue at all.\r\n\r\n```\r\nimport torch\r\nfrom torchvision import models\r\nmodel = models.resnet18()\r\n\r\nfrom typing import Type, Callable, List, Tuple, Union\r\nimport numpy as np\r\nfrom types import MethodType\r\n\r\n\r\nclass TestTensor(torch.Tensor):\r\n @staticmethod\r\n def __new__(\r\n cls: Type[\"TestTensor\"],\r\n x: Union[List, np.ndarray, torch.Tensor] = [],\r\n *args,\r\n **kwargs,\r\n ) -> torch.Tensor:\r\n if isinstance(x, torch.Tensor) and x.is_cuda:\r\n x.show = MethodType(cls.show, x)\r\n x.export = MethodType(cls.export, x)\r\n return x\r\n else:\r\n return super().__new__(cls, x, *args, **kwargs)\r\n\r\n @classmethod\r\n def __torch_function__(\r\n cls: Type[\"TestTensor\"],\r\n func: Callable,\r\n types: List[Type[torch.Tensor]],\r\n args: Tuple = (),\r\n kwargs: dict = None,\r\n ) -> torch.Tensor:\r\n if kwargs is None:\r\n kwargs = {}\r\n return super().__torch_function__(func, types, args, kwargs)\r\n\r\n\r\nclass TestTensor2(torch.nn.Module):\r\n\r\n def __init__(self):\r\n super().__init__()\r\n self.test_tensor = torch.randn(3,3,224,224).clamp(0,1)\r\n\r\n def forward(self):\r\n x = self.test_tensor\r\n return TestTensor(x)\r\n\r\n\r\ndef test_hook(target):\r\n\r\n def forward_hook(self, input, output) -> None:\r\n pass\r\n\r\n test_hooks = target.register_forward_hook(forward_hook)\r\n test_hooks.remove()\r\n return image().detach(), torch.randn(5)\r\n\r\n\r\nclass CaptumModuleOutputsHook:\r\n def __init__(self, target_modules) -> None:\r\n self.outputs = dict.fromkeys(target_modules, None)\r\n self.hooks = [\r\n module.register_forward_hook(self._forward_hook())\r\n for module in target_modules\r\n ]\r\n\r\n def _forward_hook(self) -> Callable:\r\n def forward_hook(\r\n module: torch.nn.Module, input: Tuple[torch.Tensor], output: torch.Tensor\r\n ) -> None:\r\n assert module in self.outputs.keys()\r\n self.outputs[module] = output\r\n\r\n return forward_hook\r\n\r\n def consume_outputs(self):\r\n outputs = self.outputs\r\n self.outputs = dict.fromkeys(self.outputs.keys(), None)\r\n return outputs\r\n\r\n def remove_hooks(self) -> None:\r\n for hook in self.hooks:\r\n hook.remove()\r\n\r\n\r\ndef collect_activations(model, target, input_tensor):\r\n layers = 
CaptumModuleOutputsHook(target)\r\n try:\r\n model(input_tensor)\r\n activations_dict = layers.consume_outputs()\r\n finally:\r\n layers.remove_hooks()\r\n return activations_dict[target[0]]\r\n\r\n\r\ndef trigger_crash(\r\n model,\r\n image,\r\n target,\r\n):\r\n attempts, attempt_losses = [], []\r\n\r\n # Removing this loop somehow prevents the crash from happening\r\n for a in range(1):\r\n imgs, losses = test_hook(target)\r\n attempts.append(imgs.detach()); attempt_losses.append(losses)\r\n final_image, final_losses = torch.cat(attempts, 0), torch.stack(attempt_losses)\r\n\r\n activ = collect_activations(model, [target], final_image) # Crash happens on this line\r\n\r\n # Commenting out these lines of code somehow prevents the crash from happening\r\n comparison_losses = torch.stack([activ.mean()]*3)\r\n sorted_idx = torch.sort(comparison_losses)[1]\r\n best_image = final_image[sorted_idx[0:3]]\r\n best_losses = final_losses[sorted_idx[0:3]]\r\n return best_image, best_losses\r\n\r\n\r\nimage = TestTensor2()\r\ntrigger_crash(model, image, model.layer1)\r\n```\r\n\r\n### Versions\r\n\r\n```\r\nPyTorch version: 1.10.0+cu111\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.5 LTS (x86_64)\r\nGCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0\r\nClang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)\r\nCMake version: version 3.12.0\r\nLibc version: glibc-2.26\r\n\r\nPython version: 3.7.12 (default, Sep 10 2021, 00:21:48) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic\r\nIs CUDA available: False\r\nCUDA runtime version: 11.1.105\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5\r\n/usr/lib/", "url": "https://github.com/pytorch/pytorch/issues/70413", "state": "open", "labels": [ "triaged", "Stale", "tensor subclass" ], "created_at": "2021-12-26T18:33:55Z", "updated_at": "2022-02-26T21:02:46Z", "user": "ProGamerGov" }, { "repo": "pytorch/pytorch", "number": 70411, "title": "How to use custom dataset with SSD", "body": "I am trying to use SSD and retinanet from torchvision on my own dataset. However I cant find any reference on how to use my own dataset and what format requuired. 
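For torchvision's detection models (SSD, RetinaNet), the expected format is a list of image tensors plus a list of target dicts with "boxes" in (xmin, ymin, xmax, ymax) form and integer "labels". A minimal sketch of a custom dataset in that shape — the sample storage here is a made-up placeholder, not part of torchvision:

```python
import torch
from torch.utils.data import Dataset

class MyDetectionDataset(Dataset):
    def __init__(self, samples):
        # samples: list of (image_tensor[C,H,W] in [0,1], boxes, labels) prepared elsewhere
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image, boxes, labels = self.samples[idx]
        target = {
            "boxes": torch.as_tensor(boxes, dtype=torch.float32),  # [N, 4] in xyxy
            "labels": torch.as_tensor(labels, dtype=torch.int64),  # [N]
        }
        return image, target

def collate_fn(batch):
    # detection models take lists of images/targets rather than stacked batches
    return tuple(zip(*batch))
```

In training mode the model is then called as `model(images, targets)` with both arguments as lists and returns a dict of losses; in eval mode it takes only the image list and returns per-image detections.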
Could any one please advice me \r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/70411", "state": "closed", "labels": [], "created_at": "2021-12-26T12:37:21Z", "updated_at": "2021-12-28T16:19:14Z", "user": "myasser63" }, { "repo": "pytorch/tutorials", "number": 1778, "title": "[Help Wanted] Why take the log function and then apply exp?", "body": "In [line of code](https://github.com/pytorch/tutorials/blob/master/beginner_source/transformer_tutorial.py#L113), you calculate positional encoding for Transformers by taking the log first and then apply the exponential function.\r\n\r\nWould you please elaborate on why you do this instead of directly doing the calculation?\r\n\r\nI'm aware that log transformation can make multiplication become addition, but it seems that this is not the case here.\n\ncc @suraj813", "url": "https://github.com/pytorch/tutorials/issues/1778", "state": "closed", "labels": [ "question", "intro", "docathon-h1-2023", "easy" ], "created_at": "2021-12-24T17:09:56Z", "updated_at": "2024-05-24T18:34:43Z", "user": "Superhzf" }, { "repo": "pytorch/TensorRT", "number": 788, "title": "\u2753 [Question] How do you ....? ", "body": "## \u2753 Question\r\n\r\nHi, could you please explain how this is better than pytorch to Onnx to TensorRT export path?\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/788", "state": "closed", "labels": [ "question" ], "created_at": "2021-12-22T17:36:54Z", "updated_at": "2022-01-04T23:56:04Z", "user": "andrei-pokrovsky" }, { "repo": "pytorch/TensorRT", "number": 786, "title": "\u2753 [Question] How do you ....? ", "body": "## \u2753 Question\r\n\r\nHow can do you use [OpenAI's CLIP](https://github.com/openai/CLIP) \r\n\r\n## What you have already tried\r\n\r\n```\r\nimport clip \r\nfrom torchvision import transforms\r\nimport torch_tensorrt\r\nimport torch\r\n\r\n\r\ndevice = \"cuda:0\"\r\n\r\nbatch_size = 4\r\n\r\nclip_model_name = \"ViT-B/32\"\r\n\r\nscripted_model , preprocess = clip.load(clip_model_name, device, jit=True)\r\n\r\nscripted_model = scripted_model.visual.to(device)\r\n\r\npreprocess = transforms.Compose([\r\n preprocess,\r\n lambda x: x.half()\r\n ])\r\n\r\ntrt_ts_module = torch_tensorrt.compile(scripted_model,\r\n inputs = [\r\n torch_tensorrt.Input( # Specify input object with shape and dtype\r\n shape=[batch_size, 3, 224, 224],\r\n dtype=torch.half) # Datatype of input tensor. Allowed options torch.(float|half|int8|int32|bool)\r\n ])\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\nI have build my docker image using base image `21.10` with nvidia driver `470.86`.\r\n\r\n```\r\ndocker build --build-arg BASE=21.10 -f docker/Dockerfile -t torch_tensorrt:latest .\r\n```\r\n\r\nWith the following libraries installed. 
\r\n\r\n```\r\nnvidia-dlprof-pytorch-nvtx @ file:///nvidia/opt/dlprof/bin/nvidia_dlprof_pytorch_nvtx-1.6.0-py3-none-any.whl\r\nonnx @ file:///opt/pytorch/pytorch/third_party/onnx\r\npytorch-quantization==2.1.0\r\ntorch==1.10.0a0+0aef44c\r\ntorch-tensorrt @ file:///workspace/torch_tensorrt-1.1.0a0%2B733a4b1c-cp38-cp38-linux_x86_64.whl\r\ntorchtext @ file:///opt/pytorch/text\r\ntorchvision @ file:///opt/pytorch/vision\r\nclip @ git+https://github.com/openai/CLIP.git@573315e83f07b53a61ff5098757e8fc885f1703e\r\n```\r\n\r\n## Additional context\r\n\r\nThe error I am getting is: \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"benchmark.py\", line 155, in <module>\r\n trt_ts_module = torch_tensorrt.compile(scripted_model,\r\n File \"/opt/conda/lib/python3.8/site-packages/torch_tensorrt/_compile.py\", line 97, in compile\r\n return torch_tensorrt.ts.compile(ts_mod, inputs=inputs, enabled_precisions=enabled_precisions, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch_tensorrt/ts/_compiler.py\", line 119, in compile\r\n compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))\r\nRuntimeError: The following operation failed in the TorchScript interpreter.\r\nTraceback of TorchScript, serialized code (most recent call last):\r\n File \"code/__torch__/multimodal/model/multimodal_transformer.py\", line 34, in forward\r\n x2 = torch.add(x1, torch.to(_4, 5, False, False, None), alpha=1)\r\n x3 = torch.permute((_3).forward(x2, ), [1, 0, 2])\r\n x4 = torch.permute((_2).forward(x3, ), [1, 0, 2])\r\n ~~~~~~~~~~~ <--- HERE\r\n _15 = torch.slice(x4, 0, 0, 9223372036854775807, 1)\r\n x5 = torch.slice(torch.select(_15, 1, 0), 1, 0, 9223372036854775807, 1)\r\n File \"code/__torch__/multimodal/model/multimodal_transformer/___torch_mangle_9477.py\", line 8, in forward\r\n def forward(self: __torch__.multimodal.model.multimodal_transformer.___torch_mangle_9477.Transformer,\r\n x: Tensor) -> Tensor:\r\n return (self.resblocks).forward(x, )\r\n ~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n def forward1(self: __torch__.multimodal.model.multimodal_transformer.___torch_mangle_9477.Transformer,\r\n x: Tensor) -> Tensor:\r\n File \"code/__torch__/torch/nn/modules/container/___torch_mangle_9476.py\", line 29, in forward\r\n _8 = getattr(self, \"3\")\r\n _9 = getattr(self, \"2\")\r\n _10 = (getattr(self, \"1\")).forward((getattr(self, \"0\")).forward(x, ), )\r\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n _11 = (_7).forward((_8).forward((_9).forward(_10, ), ), )\r\n _12 = (_4).forward((_5).forward((_6).forward(_11, ), ), )\r\n File \"code/__torch__/multimodal/model/multimodal_transformer/___torch_mangle_9376.py\", line 13, in forward\r\n _0 = self.mlp\r\n _1 = self.ln_2\r\n _2 = (self.attn).forward((self.ln_1).forward(x, ), )\r\n ~~~~~~~~~~~~~~~~~~ <--- HERE\r\n x0 = torch.add(x, _2, alpha=1)\r\n x1 = torch.add(x0, (_0).forward((_1).forward(x0, ), ), alpha=1)\r\n File \"code/__torch__/torch/nn/modules/activation/___torch_mangle_9369.py\", line 34, in forward\r\n _13 = torch.contiguous(k, memory_format=0)\r\n _14 = [-1, int(torch.mul(bsz, CONSTANTS.c0)), _9]\r\n k0 = torch.transpose(torch.view(_13, _14), 0, 1)\r\n ~~~~~~~~~~ <--- HERE\r\n _15 = torch.contiguous(v, memory_format=0)\r\n _16 = [-1, int(torch.mul(bsz, CONSTANTS.c0)), _8]\r\n\r\nTraceback of TorchScript, original code (most recent call last):\r\n/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py(4265): multi_head_attention_forward\r\n/opt/conda/lib/python3.7/site-packages/torch/nn/modules/activation.py(985): 
forward\r\n/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward\r\n/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl\r\n/root/workspace/multimod", "url": "https://github.com/pytorch/TensorRT/issues/786", "state": "closed", "labels": [ "question" ], "created_at": "2021-12-22T09:10:31Z", "updated_at": "2022-01-25T10:01:54Z", "user": "hfawaz" }, { "repo": "pytorch/pytorch", "number": 70280, "title": "How to create build-in buffers which is writable during onnx inference?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nFirst, I'm sorry that this question may not be strictly relative to a feature request, but it has been posted on discuss.pytorch.org without any replies for one week.\r\n\r\nHi, I try to create a first-in-first-out queue as a pytorch model, export it to onnx and infer with onnxruntime. The queue, with a limited size, updates every time when a new input comes, and returns the updated queue. Codes are very simple:\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\n\r\nclass WavBuffer(nn.Module):\r\n def __init__(self, size=10):\r\n super().__init__()\r\n self.size = size\r\n wavbuf = torch.zeros(size)\r\n self.register_buffer('wavbuf', wavbuf)\r\n\r\n def forward(self, x):\r\n self.wavbuf = torch.cat([self.wavbuf, x])[-self.size:]\r\n return self.wavbuf\r\n\r\nmodel = WavBuffer(10)\r\nx = torch.ones(5)\r\nfor i in range(2):\r\n wavbuf = model(x)\r\n print(wavbuf)\r\n```\r\nAs expected, the outputs are:\r\n```\r\ntensor([0., 0., 0., 0., 0., 1., 1., 1., 1., 1.])\r\ntensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])\r\n```\r\nThen I export the model to onnx format and infer with onnxruntime:\r\n```\r\ntorch.onnx.export(\r\n model, torch.zeros(5), 'model.onnx', verbose=False, input_names=['wav'],\r\n output_names=['wavbuf'], opset_version=11\r\n)\r\n\r\nimport numpy as np\r\nimport onnxruntime\r\n\r\nmodel = onnxruntime.InferenceSession('model.onnx')\r\nx = np.ones(5, dtype=np.float32)\r\ninputs = {model.get_inputs()[0].name: x}\r\nfor i in range(2):\r\n outputs = model.run(None, inputs)\r\n wavbuf = outputs[0]\r\n print(wavbuf)\r\n```\r\nHowever, now the outputs are:\r\n```\r\n[0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]\r\n[0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]\r\n```\r\nI guess that weights in onnx models are not changeable, but is there any solution to create writable build-in buffers during model design and change the buffers in onnx inference? An available example is LSTM, where the hidden states update for each time step. However, it is too difficult for me to its implementation.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/70280", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2021-12-22T02:26:28Z", "updated_at": "2022-01-05T01:46:44Z", "user": "lawlict" }, { "repo": "pytorch/TensorRT", "number": 783, "title": "\u2753 [Question] Is there a way to visualize the TRT model? ", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nI'm wondering if there is a way to get the TRT model after compilation and visualize it. I trying to compare a PTQ model to a QAT model. I know I might have to do some further optimization just trying to visualize the graphs and see what is going on . Currently using DenseNet169 \r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. 
-->\r\nI can visualize an ONNX graph, but I am unsure how to do the same for TRT\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.10.0\r\n - CPU Architecture: x86 (Intel Skylake)\r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): N/A\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.8.10\r\n - CUDA version: 11.4\r\n - GPU models and configuration: Tesla T4\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/783", "state": "closed", "labels": [ "question" ], "created_at": "2021-12-21T16:39:50Z", "updated_at": "2022-05-18T20:34:06Z", "user": "jessicarcassidy" }, { "repo": "pytorch/pytorch", "number": 70244, "title": "[feature request] how to merge many models into one model with a shared backbone using only code, without writing a new model", "body": "I train several models on different data, and some of their parameters are shared.\r\nAt inference time I need to merge these models into a single model. I know which ops are shared, so I want to merge the shared ops into one op with separate heads, for inference only (not for training).\r\nI don't want to write a new model by hand. Could I write a function that merges the ops with the same state-dict names and builds the new inference model automatically?\r\nAny other approach would also be fine.\r\nThank you!\r\n", "url": "https://github.com/pytorch/pytorch/issues/70244", "state": "closed", "labels": [], "created_at": "2021-12-21T13:11:35Z", "updated_at": "2021-12-23T16:55:09Z", "user": "designerZhou" }, { "repo": "pytorch/android-demo-app", "number": 222, "title": "how to go from 640*640 to 320*320", "body": "I have a model with 640*640 input and want to use 320*320 input instead. I changed the relevant parameters and the app crashed. 
How do I change it to 320*320 input", "url": "https://github.com/pytorch/android-demo-app/issues/222", "state": "closed", "labels": [], "created_at": "2021-12-21T02:41:41Z", "updated_at": "2021-12-21T05:58:05Z", "user": "mozeqiu" }, { "repo": "pytorch/TensorRT", "number": 779, "title": "\u2753 [Question] Failed to compile trtorch use pre cxx11 abi", "body": "## \u2753 Question\r\n\r\nI'm trying to build trtorch v0.2.0 with pre cxx11 abi\r\nBut I always get the error like below\r\n\r\nINFO: Analyzed target //:libtrtorch (40 packages loaded, 2667 targets configured).\r\nINFO: Found 1 target...\r\nERROR: /root/git_source/Torch-TensorRT-0.2.0/cpp/trtorchc/BUILD:10:10: Linking cpp/trtorchc/trtorchc failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc @bazel-out/k8-opt/bin/cpp/trtorchc/trtorchc-2.params\r\n\r\nUse --sandbox_debug to see verbose messages from the sandbox gcc failed: error executing command /usr/bin/gcc @bazel-out/k8-opt/bin/cpp/trtorchc/trtorchc-2.params\r\n\r\nUse --sandbox_debug to see verbose messages from the sandbox\r\nbazel-out/k8-opt/bin/cpp/trtorchc/_objs/trtorchc/main.o:main.cpp:function c10::Device::validate(): error: undefined reference to 'c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'\r\nbazel-out/k8-opt/bin/cpp/trtorchc/_objs/trtorchc/main.o:main.cpp:function c10::Device::validate(): error: undefined reference to 'c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'\r\nbazel-out/k8-opt/bin/cpp/trtorchc/_objs/trtorchc/main.o:main.cpp:function c10::IValue::toTuple() const &: error: undefined reference to 'c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'\r\nbazel-out/k8-opt/bin/cpp/trtorchc/_objs/trtorchc/main.o:main.cpp:function c10::IValue::toTensor() const &: error: undefined reference to 'c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'\r\nbazel-out/k8-opt/bin/cpp/trtorchc/_objs/trtorchc/main.o:main.cpp:function torch::jit::Module::forward(std::vector<c10::IValue, std::allocator<c10::IValue> >): error: undefined reference to 'torch::jit::Object::find_method(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const'\r\nbazel-out/k8-opt/bin/cpp/trtorchc/_objs/trtorchc/main.o:main.cpp:function torch::jit::Module::forward(std::vector<c10::IValue, std::allocator<c10::IValue> >): error: undefined reference to 'torch::jit::Method::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&)'\r\nbazel-out/k8-opt/bin/cpp/trtorchc/_objs/trtorchc/main.o:main.cpp:function main: error: undefined reference to 'torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, 
std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&)'\r\nbazel-out/k8-opt/bin/cpp/trtorchc/_objs/trtorchc/main.o:main.cpp:function main: error: undefined reference to 'torch::jit::Module::save(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&) const'\r\nbazel-out/k8-opt/bin/cpp/api/_objs/trtorch/trtorch.o:trtorch.cpp:function trtorch::get_build_info[abi:cxx11](): error: undefined reference to 'at::show_config[abi:cxx11]()'\r\nbazel-out/k8-opt/bin/core/_objs/core/compiler.o:compiler.cpp:function c10::ClassType::addOrCheckAttribute(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::shared_ptr<c10::Type>, bool, bool): error: undefined reference to 'c10::ClassType::addAttribute(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::shared_ptr<c10::Type> const&, bool, bool)'\r\n......\r\n\r\n------------------------", "url": "https://github.com/pytorch/TensorRT/issues/779", "state": "closed", "labels": [ "question" ], "created_at": "2021-12-21T02:13:39Z", "updated_at": "2021-12-21T03:03:08Z", "user": "Fans0014" }, { "repo": "pytorch/tensorpipe", "number": 420, "title": "[Question]How to detect pipe(obtained from ctx->connect()) is writable?", "body": "Hi,\r\n\r\nwhen I get a pipe via `ctx->context(address)`, how do I know the pipe is ready for write or read? A return from `ctx->connect()` does not mean the connection has been built, right? If I call `pipe->write()` immediately, such write could fail as the underlying connection has not built yet.", "url": "https://github.com/pytorch/tensorpipe/issues/420", "state": "open", "labels": [], "created_at": "2021-12-19T02:14:39Z", "updated_at": "2022-02-16T01:51:04Z", "user": "Rhett-Ying" }, { "repo": "pytorch/data", "number": 144, "title": "Multiprocessing with any DataPipe writing to local file", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nWe need to take extra care all DataPipe that would write to file system when DataLoader2 triggered multiprocessing. 
If the file name on the local file system is the same across multiple processes, it becomes a race condition.\r\nThis was found when the TorchText team was using `on_disk_cache` to cache files.\r\nThe DataLoader needs to know that such a DataPipe must be sharded across processes, or enforce that it runs in a single process.\r\n\r\nAs a workaround, users have to download the file to the local file system themselves to avoid writing from within a DataPipe.\r\n\r\n### Versions\r\n\r\nmain branch", "url": "https://github.com/meta-pytorch/data/issues/144", "state": "closed", "labels": [ "bug", "good first issue", "help wanted", "high priority" ], "created_at": "2021-12-18T03:40:43Z", "updated_at": "2022-05-19T03:59:34Z", "comments": 13, "user": "ejguan" }, { "repo": "pytorch/pytorch", "number": 70099, "title": "Question: what is \"Parameter indices\"?", "body": "I am getting the error below. I know there are some variables which do not contribute to the loss. How can I find these parameters' names? I don't know whether the \"Parameter indices\" can help me or not.\r\n\r\n> Parameter indices which did not receive grad for rank 7: 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380\r\n\r\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @mruberry @jbschlosser @walterddr @kshitij12345", "url": "https://github.com/pytorch/pytorch/issues/70099", "state": "open", "labels": [ "oncall: distributed", "Stale" ], "created_at": "2021-12-17T09:34:29Z", "updated_at": "2022-02-15T15:02:44Z", "user": "shoutOutYangJie" }, { "repo": "pytorch/TensorRT", "number": 776, "title": "could not support gelu\uff1f", "body": "I used the docker image ( nvcr.io/nvidia/pytorch:21.11-py3 ) you suggested to test torch-tensorrt, but I cannot convert the pytorch model to a torchscript model. It seems like gelu is not supported, whereas with the docker image (pytorch-20.12-py3) converting the pytorch model to a torchscript model works fine.\r\n\r\nFile \"/opt/conda/lib/python3.8/site-packages/torch/jit/_serialization.py\", line 161, in load\r\n cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)\r\nRuntimeError: \r\nArguments for call are not valid.\r\nThe following variants are available:\r\n \r\n aten::gelu(Tensor self, bool approximate) -> (Tensor):\r\n Argument approximate not provided.\r\n \r\n aten::gelu.out(Tensor self, bool approximate, *, Tensor(a!) out) -> (Tensor(a!)):\r\n Argument approximate not provided.\r\n\r\nThe original call is:\r\n\r\ntools/pytorch2torchscript.py(123): pytorch2libtorch\r\ntools/pytorch2torchscript.py(186): <module>\r\nSerialized File \"code/__torch__/torch/nn/modules/activation.py\", line 27\r\n def forward(self: __torch__.torch.nn.modules.activation.GELU,\r\n argument_1: Tensor) -> Tensor:\r\n return torch.gelu(argument_1)\r\n ~~~~~~~~~~ <--- HERE", "url": "https://github.com/pytorch/TensorRT/issues/776", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-12-17T08:38:37Z", "updated_at": "2022-04-01T00:02:17Z", "user": "daeing" }, { "repo": "pytorch/pytorch", "number": 70094, "title": "how to get the pre operator of current operator in PyTorch\uff1f", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI want to get the operator that precedes the current operator in forward. 
Can pytorch support this now?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_", "url": "https://github.com/pytorch/pytorch/issues/70094", "state": "closed", "labels": [], "created_at": "2021-12-17T07:01:52Z", "updated_at": "2021-12-17T14:36:30Z", "user": "kevinVegBird" }, { "repo": "pytorch/data", "number": 140, "title": "Installing torchdata installs `example` folder as well", "body": "### \ud83d\udc1b Describe the bug\n\nLooks like installing torchdata also installs `examples`. This should probably be removed from `setup.py` so that only the `torchdata` folder gets installed.\r\n\r\nExample of what happens when trying to uninstall torchdata\r\n```\r\nfmassa@devfair0163:~/work/vision_datasets$ pip uninstall torchdata\r\nFound existing installation: torchdata 0.3.0a0+6bad0e5\r\nUninstalling torchdata-0.3.0a0+6bad0e5:\r\n Would remove:\r\n /private/home/fmassa/.conda/envs/xformers/lib/python3.8/site-packages/examples/vision/*\r\n /private/home/fmassa/.conda/envs/xformers/lib/python3.8/site-packages/torchdata-0.3.0a0+6bad0e5.dist-info/*\r\n /private/home/fmassa/.conda/envs/xformers/lib/python3.8/site-packages/torchdata/*\r\nProceed (y/n)?\r\n```\n\n### Versions\n\nLasted one from master", "url": "https://github.com/meta-pytorch/data/issues/140", "state": "closed", "labels": [ "bug" ], "created_at": "2021-12-15T14:09:24Z", "updated_at": "2021-12-16T17:05:20Z", "comments": 0, "user": "fmassa" }, { "repo": "pytorch/TensorRT", "number": 772, "title": "\u2753 [Question] Is there support for optional arguments in model's `forward()`?", "body": "## \u2753 Question\r\n\r\nIs there support for optional arguments in model's `forward()`? For example, I have the following: `def forward(self, x, y: Optional[Tensor] = None):` where `y` is an optional tensor. The return result is `x + y` if `y` is provided, otherwise just `x`.\r\n\r\n## What you have already tried\r\nI added a second `torch_tensorrt.Input()` in the input spec, then at inference time got the error:\r\n`Expected dimension specifications for all input tensors, but found 1 input tensors and 2 dimension specs`\r\n\r\nI then removed the `Optional` annotation and just pass in `None` or the actual tensor for `y`. When `None` is passed in, I got the error: `RuntimeError: forward() Expected a value of type 'Tensor' for argument 'input_1' but instead found type 'NoneType'.`\r\n\r\nI also tried passing in just 1 argument for `x`, and got:\r\n`RuntimeError: forward() is missing value for argument 'input_1'` \r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.10.0+cu113\r\n - CPU Architecture: \r\n - OS (e.g., Linux): Ubuntu 18.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): `pip`\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.7.11\r\n - CUDA version: 11.1\r\n - GPU models and configuration: Tesla V100 with 32GB memory\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/772", "state": "closed", "labels": [ "question", "component: core", "No Activity" ], "created_at": "2021-12-14T22:14:55Z", "updated_at": "2023-02-27T00:02:28Z", "user": "lhai37" }, { "repo": "pytorch/data", "number": 132, "title": "[TODO] can this also have a timeout?", "body": "\nThis issue is generated from the TODO line\nhttps://github.com/pytorch/data/blob/f102d25f9f444de3380c6d49bf7aaf52c213bb1f/build/lib/torchdata/datapipes/iter/load/online.py#L113\n\n ", "url": "https://github.com/meta-pytorch/data/issues/132", "state": "closed", "labels": [ "todo" ], "created_at": "2021-12-10T20:09:55Z", "updated_at": "2022-01-07T21:29:12Z", "comments": 0, "user": "VitalyFedyunin" }, { "repo": "pytorch/tensorpipe", "number": 417, "title": "how to install pytensorpipe", "body": "I built tensorpipe with ninja and try to build python package running `python setup.py`, it tells me:\r\n```\r\nmake: *** No rule to make target 'pytensorpipe'. Stop.\r\n```", "url": "https://github.com/pytorch/tensorpipe/issues/417", "state": "closed", "labels": [], "created_at": "2021-12-10T14:21:30Z", "updated_at": "2021-12-10T14:27:30Z", "user": "eedalong" }, { "repo": "pytorch/TensorRT", "number": 771, "title": "\u2753 [Question] Get no indications on the exact code that cause errors?", "body": "## \u2753 Question\r\nHi, thanks for making this amazing tool! I met some errors when converting my model. However, for some of the errors, there is only information about unsupported operators without any indication of the exact code that causes the errors.\r\n\r\nWhy does this happen and are there any potential solutions?\r\n\r\n\r\n![Screen Shot 2021-12-10 at 9 16 32 PM](https://user-images.githubusercontent.com/25219214/145579919-24e5a34a-1287-48f3-8a54-2e9696ec2478.png)\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/771", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-12-10T13:22:59Z", "updated_at": "2022-04-01T00:02:18Z", "user": "DeriZSY" }, { "repo": "pytorch/TensorRT", "number": 767, "title": "\u2753 [Question] Handling non-tensor input of module", "body": "## \u2753 Question\r\nCan `torch_tensorrt.compile` handle non-tensor input of the module (for example boolean and integer)? How should I do it?", "url": "https://github.com/pytorch/TensorRT/issues/767", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-12-09T09:46:12Z", "updated_at": "2022-04-01T00:02:18Z", "user": "DeriZSY" }, { "repo": "pytorch/pytorch", "number": 69610, "title": "[Question] How to extract/expose the complete PyTorch computation graph (forward and backward)?", "body": "How to extract the complete computation graph PyTorch generates?\r\n\r\nHere is my understanding:\r\n1. The forward graph can be generated by `jit.trace` or `jit.script`\r\n2. The backward graph is created from scratch each time `loss.backward()` is invoked in the training loop.\r\n\r\nI am attempting to lower the computation graph generated by PyTorch into GLOW manually for some custom downstream optimization. 
I am not able to extract the complete computation graph at the framework level (forward AND backward).\r\n\r\nAny help or guidance in this regard is greatly appreciated.\n\ncc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7", "url": "https://github.com/pytorch/pytorch/issues/69610", "state": "open", "labels": [ "module: autograd", "triaged", "oncall: visualization" ], "created_at": "2021-12-08T14:37:00Z", "updated_at": "2025-12-24T06:43:52Z", "user": "anubane" }, { "repo": "pytorch/TensorRT", "number": 765, "title": "\u2753 [Question] Sometimes inference time is too slow.. ", "body": "## \u2753 Question\r\n\r\nThank you for this nice project, I successfully converted [my model](https://github.com/sejong-rcv/MLPD-Multi-Label-Pedestrian-Detection), which feeds multispectral images, using Torch-TensorRT as below. \r\n\r\n```\r\n model = torch.load(model_path)['model']\r\n model = model.to(device)\r\n model.eval()\r\n\r\n \r\n scripted_model = torch.jit.script(model)\r\n \r\n # For static size shape=[1, 3, 224, 224]\r\n \r\n compile_settings = {\r\n \"inputs\": [torch_tensorrt.Input(\r\n min_shape=[1, 3, 512, 640],\r\n opt_shape=[1, 3, 512, 640],\r\n max_shape=[1, 3, 512, 640],\r\n dtype=torch.half),\r\n torch_tensorrt.Input(\r\n min_shape=[1, 1, 512, 640],\r\n opt_shape=[1, 1, 512, 640],\r\n max_shape=[1, 1, 512, 640],\r\n dtype=torch.half\r\n )],\r\n \"enabled_precisions\": {torch.half} # Run with FP16\r\n }\r\n \r\n trt_ts_module = torch_tensorrt.ts.compile(scripted_model, **compile_settings)\r\n \r\n fake_vis_fp16 = torch.ones((1, 3, 512, 640)).half().cuda()\r\n fake_lwir_fp16 = torch.ones((1, 1, 512, 640)).half().cuda()\r\n \r\n fake_vis_fp32 = torch.ones((1, 3, 512, 640)).float().cuda()\r\n fake_lwir_fp32 = torch.ones((1, 1, 512, 640)).float().cuda()\r\n \r\n torch.jit.save(trt_ts_module, \"MLPD_trt_torchscript_module.ts\") # save the TRT embedded Torchscript\r\n```\r\n\r\nThen, I tested the inference time of the model. I found that sometimes it is too slow as below.\r\n\r\n![image](https://user-images.githubusercontent.com/44772344/145186650-619a5883-07d7-4134-b33b-b735ee4f80cd.png)\r\n\r\nHow can i solve this problem..? Performance(Miss-rate) of converted model is the same as performance of original model.\r\n\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 3.7\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda\r\n - Python version: 3.8.12\r\n - CPU Architecture:\r\n - CUDA version: 11.4\r\n - GPU models and configuration: 2080Ti\r\n - Any other relevant information: I used docker image\r\n\r\n## Additional context\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/765", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-12-08T09:57:32Z", "updated_at": "2022-04-01T00:02:19Z", "user": "socome" }, { "repo": "pytorch/vision", "number": 5045, "title": "[Discussion] How do we want to handle `torchvision.prototype.features.Feature`'s?", "body": "This issue should spark a discussion about how we want to handle `Feature`'s in the future. There are a lot of open questions I'm trying to summarize. I'll give my opinion to each of them. You can find the current implementation under `torchvision.prototype.features`.\r\n\r\n## What are `Feature`'s?\r\n\r\n`Feature`'s are subclasses of `torch.Tensor` and their purpose is threefold:\r\n\r\n1. With their type, e.g. 
`Image`, they information about the data they carry. The prototype transformations (`torchvision.prototype.transforms`) use this information to automatically dispatch an input to the correct kernel.\r\n2. They can optionally carry additional meta data that might be needed for transforming the feature. For example, most geometric transformations can only be performed on bounding boxes if the size of the corresponding image is known.\r\n3. They provide a convenient interface for feature specific functionality, for example transforming the format of a bounding box.\r\n\r\nThere are currently three `Feature`'s implemented\r\n\r\n- `Image`,\r\n- `BoundingBox`, and\r\n- `Label`,\r\n\r\nbut in the future we should add at least three more:\r\n\r\n- `SemanticSegmentationMask`,\r\n- `InstanceSegementationMask`, and\r\n- `Video`.\r\n\r\n## What is the policy of adding new `Feature`'s?\r\n\r\nWe could allow subclassing of `Feature`'s. On the one hand, this would make it easier for datasets to conveniently bundle meta data. For example, the COCO dataset could return a `CocoLabel`, which in addition to the default `Label.category` could also have the `super_category` field. On the other hand, this would also mean that the transforms need to handle subclasses of features well, for example a `CocoLabel` could be treated the same as a `Label`.\r\n\r\nI see two downsides with that:\r\n\r\n1. What if a transform needs the additional meta data carried by a feature subclass? Imagine I've added a special transformation that needs `CocoLabel.super_category`. Although from the surface this now supports plain `Label`'s this will fail at runtime.\r\n2. Documentation custom features is more complicated than documenting a separate field in the sample dictionary of a dataset.\r\n\r\nThus, I'm leaning towards only having a few base classes.\r\n\r\n## From what data should a `Feature` be instantiable?\r\n\r\nSome of the features like `Image` or `Video` have non-tensor objects that carry the data. Should these features know how to handle them? For example should something like `Image(PIL.Image.open(...))` work?\r\n\r\nMy vote is out for yes. IMO this is very convenient and also not an unexpected semantic compared to passing the data directly, e.g. `Image(torch.rand(3, 256, 256))`\r\n\r\n## Should `Feature`'s have a fixed shape?\r\n\r\nConsider the following table:\r\n\r\n| `Feature` | `.shape` |\r\n|-----------------------------|-------------------------------|\r\n| `Image` | `(*, C, H, W)` |\r\n| `Label` | `(*)` |\r\n| `BoundingBox` | `(*, 4)` |\r\n| `SemanticSegmentationMask` | `(*, H, W)` or `(*, C, H, W)` |\r\n| `InstanceSegementationMask` | `(*, N, H, W)` |\r\n| `Video` | `(*, T, C, H, W)` |\r\n\r\n(For `SemanticSegmentationMask` I'm not sure about the shape yet. Having an extra channel dimension makes the tensor unnecessarily large, but it aligns well with segmentation image files, which are usually stored as RGB)\r\n\r\nShould we fix the shape to a single feature, i.e. remove the `*` from the table above, or should we only care about the shape in the last dimensions to be correct?\r\n\r\nMy vote is out for having a flexible shape, since otherwise batching is not possible. For example, if we fix bounding boxes to shape `(4,)` a transformation would need to transform `N` bounding boxes individually, while for shape `(N, 4)` it could make use of parallelism.\r\n\r\nOn the same note, if we go for the flexible shape, do we keep the singular name of the feature? 
For example, do we regard a batch of images with shape `(B, C, H, W)` still as `Image` or should we go for the plural `Images` in general? My vote is out for always keeping the singular, since I've often seen something like:\r\n\r\n```python\r\nfor image, target in data_loader(dataset, batch_size=4):\r\n ...\r\n```\r\n\r\n## Should `Feature`'s have a fixed dtype?\r\n\r\nThis makes sense for `InstanceSegementationMask` which should always be `torch.bool`. For all the other features I'm unsure. My gut says to use a default dtype, but also allow other dtypes.\r\n\r\n## What meta data should `Feature`'s carry?\r\n\r\nIMO, this really depends on the decision above about the fixed / flexible shapes. If we go for fixed shapes, it can basically carry any information. If we go for flexible shapes instead, we should only have meta data, which is the same for batched features. For example, `BoundingBox.image_size` is fine, but `Label.category` is not.\r\n\r\n## What methods should `Feature`'s provide?\r\n\r\nFor now I've only in", "url": "https://github.com/pytorch/vision/issues/5045", "state": "open", "labels": [ "needs discussion", "prototype" ], "created_at": "2021-12-07T13:17:58Z", "updated_at": "2022-02-11T11:42:36Z", "user": "pmeier" }, { "repo": "pytorch/data", "number": 113, "title": "datapipe serialization support / cloudpickle / parallel support", "body": "I've been looking at how we might go about supporting torchdata within TorchX and with components. I was wondering what the serialization options were for transforms and what that might look like.\r\n\r\nThere's a couple of common patterns that would be nice to support:\r\n\r\n* general data transforms (with potentially distributed preprocessing via torch elastic/ddp)\r\n* data splitting into train/validation sets\r\n* summary statistic computation\r\n\r\nFor the general transforms and handling arbitrary user data we were wondering how we might go about serializing the data pipes and transforms for use in a pipeline with TorchX. \r\n\r\nThere's a couple of options here:\r\n\r\n1. add serialization support to the transforms so you can serialize them (lambdas?)\r\n1. generate a .py file from a provided user function\r\n1. pickle the transform using something like cloudpickle/torch.package and load it in a trainer app\r\n1. ask the user to write a .py file that uses the datapipes as the transform and create a TorchX component (what we currently have)\r\n\r\nHas there been any thought about how to support this well? Is there extra work that should be done here to make this better?\r\n\r\nAre DataPipes guaranteed to be pickle safe and is there anything that needs to be done to support that?\r\n\r\nI was also wondering if there's multiprocessing based datapipes and how that works since this seems comparable. I did see https://github.com/pytorch/pytorch/blob/master/torch/utils/data/distributed.py but didn't see any examples on how to use that to achieve a traditional PyTorch dataloader style workers.\r\n\r\nP.S. should this be on the pytorch discussion forums instead? 
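On the pickling question: DataPipes generally round-trip through `pickle` as long as the callables attached to them are importable module-level functions; lambdas and closures need `cloudpickle`/`dill` instead. A small check, assuming the torchdata `IterableWrapper`/`map` pipes:

```python
import pickle
from torchdata.datapipes.iter import IterableWrapper

def to_upper(x):
    # module-level function, so the whole pipeline stays picklable
    return x.upper()

pipe = IterableWrapper(["a", "b", "c"]).map(to_upper)

blob = pickle.dumps(pipe)      # serialize the full pipeline definition
restored = pickle.loads(blob)
print(list(restored))          # ['A', 'B', 'C']
```

This serializes the pipeline definition, not the data flowing through it, so it fits the "ship the transform to a trainer app" use case rather than checkpointing an in-progress iteration.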
it's half feature request half questions so wasn't sure where best to put it\r\n\r\ncc @kiukchung ", "url": "https://github.com/meta-pytorch/data/issues/113", "state": "open", "labels": [], "created_at": "2021-12-04T00:46:36Z", "updated_at": "2022-12-09T15:34:39Z", "comments": 7, "user": "d4l3k" }, { "repo": "pytorch/TensorRT", "number": 761, "title": "can i server my model with triton inference server", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/761", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-12-03T14:10:51Z", "updated_at": "2024-09-12T16:27:05Z", "user": "leo-XUKANG" }, { "repo": "pytorch/pytorch", "number": 69352, "title": "I want to know how to read the LMDB file once when using DDP", "body": "Hi, I meet a question. I have an LMDB dataset of about 50G. My machine has 100G memory and 8 V100 GPUs of 32GB.\r\nthe format of My dataset is like:\r\n\r\n```\r\nclass MyDataset(Dataset):\r\n def __init__(self, img_lmdb_dir) -> None:\r\n super().__init__()\r\n self.env = lmdb.open( # open LMDB dataset\r\n img_lmdb_dir, readonly=True,\r\n create=False) \r\n self.txn = self.env.begin(buffers=True)\r\n def __len__(self) -> int:\r\n raise NotImplemented\r\n def __getitem__(self, index: int):\r\n ...\r\n return ...\r\n```\r\n\r\nAs you can see, I open an LMDB dataset at the \"init\" method. However, if I use 8 GPUs. The each process will build this dataset and open LMDB dataset 8 times. One LMDB need 50G, so 8 LMDB needs 400G, which is more than my machine's memory.\r\n\r\nSo, I want to know how to use the LMDB file to accelerate to load training data and meanwhile total memory cost is lower than in my environment.\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang", "url": "https://github.com/pytorch/pytorch/issues/69352", "state": "open", "labels": [ "oncall: distributed", "module: dataloader" ], "created_at": "2021-12-03T07:34:16Z", "updated_at": "2022-12-29T14:32:17Z", "user": "shoutOutYangJie" }, { "repo": "pytorch/pytorch", "number": 69283, "title": "how to get required arguments name in forward", "body": "I want to get the required arguments name in different model's forward, removing optional arguments. I used python inspect, but got all inputs' name. I have no idea to deal it. please help", "url": "https://github.com/pytorch/pytorch/issues/69283", "state": "closed", "labels": [], "created_at": "2021-12-02T08:14:49Z", "updated_at": "2021-12-02T17:47:53Z", "user": "TXacs" }, { "repo": "pytorch/pytorch", "number": 69204, "title": "How to assign tensor to tensor", "body": "I have a 3D tensor J, and I want to assign values to it. 
Below is my code\r\n```\r\nimport torch\r\nJ = torch.eye(2).unsqueeze(0).expand(5, 2, 2)\r\nfor i in range(2):\r\n J[:, i, :] = torch.randn([5, 2])\r\n```\r\nThen there is an error: unsupported operation: more than one element of the written-to tensor refers to a single memory location. Please clone() the tensor before performing the operation.", "url": "https://github.com/pytorch/pytorch/issues/69204", "state": "closed", "labels": [], "created_at": "2021-12-01T10:45:06Z", "updated_at": "2021-12-01T20:33:43Z", "user": "LeZhengThu" }, { "repo": "pytorch/pytorch", "number": 69070, "title": "how to compute the real Jacobian matrix using autograd tool", "body": "I want to compute the real Jacobian matrix instead of the vector-Jacobian product. For example, I have \r\n```f=(f1, f2, f3), f1=x1^2+2*x2+x3, f2=x1+x2^3+x3^2, f3=2*x1+x2^2+x3^3```\r\nThen the Jacobian is ```J=[2*x1, 2, 1; 1, 3*x2^2, 2*x3; 2, 2*x2, 3*x3^2] ```\r\nBut backward() or grad() only gives the vector-Jacobian product. The following is my test code\r\n```\r\nimport torch\r\nimport numpy as np\r\nx = torch.tensor(np.array([[1,2,3]]), requires_grad=True, dtype=torch.float)\r\ny = torch.randn(3) \r\ny[0] = x[0][0]**2+2*x[0][1]+x[0][2]\r\ny[1] = x[0][0]+x[0][1]**3+x[0][2]**2\r\ny[2] = 2*x[0][0]+x[0][1]**2+x[0][2]**3\r\ntorch.autograd.grad(y, x, torch.ones_like(y))\r\n```\r\nThe result is tensor([[5,18,34]]). This is the result of J*[1;1;1] since I put torch.ones_like(y) in the code. Of course, I can use [1,0,0] to get each element of J, but that is too slow. Do we have any faster way to achieve this?\r\n\r\nBTW, when I try to replace torch.ones_like(y) with torch.eye(y.shape[0]), an error occurs: Mismatch in shape: grad_output[0] has a shape of torch.Size([3, 3]) and output[0] has a shape of torch.Size([3]).\r\n\r\n\r\n\r\n\r\n\r\n\n\ncc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7", "url": "https://github.com/pytorch/pytorch/issues/69070", "state": "closed", "labels": [ "module: autograd", "triaged" ], "created_at": "2021-11-30T10:05:01Z", "updated_at": "2021-12-01T19:44:55Z", "user": "LeZhengThu" }, { "repo": "pytorch/pytorch", "number": 69068, "title": "how to build libtorch without mkl?", "body": "## \u2753 Questions and Help\r\n\r\nI download the libtorch(CPU) 1.3.0 from [pytorch](https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.3.0%2Bcpu.zip)\uff0c The dependency library is follow:\r\n\r\n![image](https://user-images.githubusercontent.com/20365125/144023701-6631a8b0-f547-476a-b9ce-a8322c9de41c.png)\r\n\r\nBut the library is not available in my environment due to the gcc version is 4.8.5\uff0c so I builded it from pytorch source and the dependency library is follow:\r\n \r\n![image](https://user-images.githubusercontent.com/20365125/144024436-eba078c3-a7de-4367-a17e-217278e417ed.png)\r\n\r\n```\r\nexport BLAS=Eigen\r\nexport USE_CUDA=False\r\nexport BUILD_TEST=False\r\nexport USE_NINJA=OFF\r\nexport BUILD_CAFFE2_MOBILE=OFF\r\nexport BUILD_CAFFE2_OPS=OFF\r\nexport USE_MKL=OFF\r\nexport USE_MKLDNN=OFF\r\n```\r\n\r\nFor some reason, I want not to depended on mkl, so how to build libtorch without mkl?\r\n\r\n\r\n\n\ncc @malfet @seemethere", "url": "https://github.com/pytorch/pytorch/issues/69068", "state": "closed", "labels": [ "module: build", "triaged" ], "created_at": "2021-11-30T09:52:16Z", "updated_at": "2021-12-01T02:26:05Z", "user": "zhoujinhai" }, { "repo": "pytorch/serve", "number": 1347, "title": "how to use body in json format to predict", "body": "## 
\ud83d\udcda Documentation\r\n\r\ni have a model named greedy as a demo, and use baseHander \r\n\r\ni can't find the doc to deal with input of predictions api\r\n\r\nexamples all use the file to predict, can i use body for application/json format to predict\r\n\r\nand this is the curl \r\n```bash\r\ncurl --location --request POST 'http://localhost:6080/predictions/greedy' \\\r\n--header 'Content-Type: application/json' \\\r\n--data-raw '{\r\n \"model_name\": \"greedy\",\r\n \"model_version \": 1.0,\r\n \"input\": {\r\n \"data\": [\r\n 1,\r\n 2,\r\n 3\r\n ]\r\n }\r\n}'\r\n```", "url": "https://github.com/pytorch/serve/issues/1347", "state": "closed", "labels": [], "created_at": "2021-11-26T14:36:43Z", "updated_at": "2021-11-26T16:00:39Z", "user": "SpringTY" }, { "repo": "pytorch/TensorRT", "number": 744, "title": "how to convert the pytorch quantized model to trt model ?", "body": "I use the pytorch qat method to train the model and save the quantized model ( int8 ) .\r\nBut when I use torch_tensorrt.ts.compile interface to convert the int8 model to trt , errors happen, such as \"ERROR: [Torch-TensorRT] - **Unsupported operator: quantized::linear**\" , \"**Unsupported operator: quantized::conv2d.new**\" , and so on.\r\n\r\n Dose the torch_tensorrt.ts.compile support pytorch quantized model? how to solve the problem?\r\n", "url": "https://github.com/pytorch/TensorRT/issues/744", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-11-26T11:11:00Z", "updated_at": "2022-03-13T00:02:19Z", "user": "jiinhui" }, { "repo": "pytorch/pytorch", "number": 68925, "title": "How to implement `bucket_by_sequence_length` with IterableDataset and DataLoader", "body": "## How to implement `bucket_by_sequence_length` with IterableDataset and DataLoader?\r\n\r\nI have a custom **IterableDataset** for question answering, which reads training data from a huge file. And I want to bucket the tranining exampels by their sequence length, like `tf.data.Dataset.bucket_by_sequence_length`.\r\n\r\nAny documents or tutorials about this?\r\n\n\ncc @SsnL @VitalyFedyunin @ejguan @NivekT", "url": "https://github.com/pytorch/pytorch/issues/68925", "state": "open", "labels": [ "module: dataloader", "triaged", "module: data" ], "created_at": "2021-11-26T03:06:17Z", "updated_at": "2021-11-30T15:32:48Z", "user": "luozhouyang" }, { "repo": "pytorch/TensorRT", "number": 740, "title": "What's the difference compared to native tensort sdk?", "body": "I used to convert a pytorch model to onnx format,and try to run it using native TensorRT SDK,but I failed for some operators in model is not supported by trt sdk; So if I use Torch-TensorRT to run the model, will I still have the same problem? 
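On the `bucket_by_sequence_length` question above (pytorch/pytorch#68925), there is no drop-in PyTorch equivalent, but a small wrapper around an `IterableDataset` can collect examples into length buckets and emit a padded batch whenever a bucket fills. A rough sketch; the bucket boundaries, padding behaviour, and the assumption that the inner dataset yields 1-D token-id tensors are illustrative choices:

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import IterableDataset

class BucketBySequenceLength(IterableDataset):
    def __init__(self, dataset, boundaries=(16, 32, 64, 128), batch_size=32):
        self.dataset = dataset          # assumed to yield 1-D LongTensors of token ids
        self.boundaries = boundaries
        self.batch_size = batch_size

    def _bucket_id(self, length):
        for i, b in enumerate(self.boundaries):
            if length <= b:
                return i
        return len(self.boundaries)

    def __iter__(self):
        buckets = {}
        for example in self.dataset:
            b = self._bucket_id(example.size(0))
            buckets.setdefault(b, []).append(example)
            if len(buckets[b]) == self.batch_size:
                yield pad_sequence(buckets.pop(b), batch_first=True)
        for leftover in buckets.values():   # flush partially filled buckets at the end
            if leftover:
                yield pad_sequence(leftover, batch_first=True)
```

Passing the wrapped dataset to `DataLoader(..., batch_size=None)` keeps the DataLoader from batching a second time.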
Is there any more operators added compared to the native trt sdk?\r\n \r\n", "url": "https://github.com/pytorch/TensorRT/issues/740", "state": "closed", "labels": [ "question" ], "created_at": "2021-11-23T08:09:14Z", "updated_at": "2021-11-29T20:14:45Z", "user": "pango99" }, { "repo": "pytorch/android-demo-app", "number": 213, "title": "how to converto torchscript_int8@tracing file to pt file?", "body": "i have a custom model file,ie model.jit,how can i convert to d2go.pt?", "url": "https://github.com/pytorch/android-demo-app/issues/213", "state": "closed", "labels": [], "created_at": "2021-11-23T03:34:57Z", "updated_at": "2022-06-29T08:42:41Z", "user": "cloveropen" }, { "repo": "pytorch/pytorch", "number": 68729, "title": "How to specify the backends when running on CPU", "body": "## \u2753 Questions and Help\r\n\r\n### How to specify the backends when running on CPU.\r\n\r\nHi, I noticed that there are multiple backends available on CPU in pytorch: mkl, mkldnn, openmp. \r\nHow do I know which backend pytorch is using in current model and can I specify the backend?\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/68729", "state": "closed", "labels": [], "created_at": "2021-11-22T13:54:35Z", "updated_at": "2021-11-22T18:53:55Z", "user": "zheng-ningxin" }, { "repo": "pytorch/xla", "number": 3221, "title": "[Question] How to do deterministic training on GPUs.", "body": "## \u2753 Questions and Help\r\nHi, I'm testing torch xla on GPU. The script used is based on the test_train_mp_mnist.py. I changed the data input to be consistent for all workers (use the same dataset, no distributed sampler, turn off shuffle), don't adjust the learning rate, adding deterministic functions, andd adding logic to run with torch native.\r\n\r\nThe loss trends of the standalone torch xla and torch native are mostly aligned, although not bitwise, but given the nature of floating point calculations, the results are mostly satisfactory.\r\n\r\nHowever, I noticed a rather strange behavior, as the torch native results are consistent for each round of the run with two cards, even with the single card bitwise. However, the loss of torch xla with two cards kept changing, and with XLA_SYNC_WAIT on, the consistent results with torch xla standalone was gotten (I'm not sure if it's always that).\r\n\r\nI would like to know why, is there something wrong with my code? 
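On the CPU-backend question above (pytorch/pytorch#68729), PyTorch exposes runtime checks and flags for MKL and oneDNN (mkldnn), and the build configuration dump shows what the installed wheel was compiled against. A short sketch:

```python
import torch

print(torch.backends.mkl.is_available())     # BLAS/LAPACK via MKL compiled in?
print(torch.backends.mkldnn.is_available())  # oneDNN (mkldnn) kernels available?
print(torch.__config__.show())               # full build config, incl. OpenMP/BLAS choices
print(torch.get_num_threads())               # intra-op (OpenMP) thread pool size

# oneDNN can be disabled for a region, e.g. to compare against the default path:
with torch.backends.mkldnn.flags(enabled=False):
    y = torch.nn.Conv2d(3, 8, 3)(torch.randn(1, 3, 32, 32))
```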
Thanks!\r\n\r\n## code\r\n\r\n```\r\nimport args_parse\r\n\r\nFLAGS = args_parse.parse_common_options(\r\n datadir=\"/tmp/mnist-data\",\r\n batch_size=128,\r\n momentum=0.5,\r\n lr=0.01,\r\n target_accuracy=98.0,\r\n num_epochs=18,\r\n)\r\n\r\nimport os\r\nimport shutil\r\nimport sys\r\nimport numpy as np\r\nimport torch\r\nimport random\r\nimport numpy as np\r\nfrom torch.nn.parallel import DistributedDataParallel as DDP\r\nimport torch.distributed as dist\r\n\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nimport torch.optim as optim\r\nfrom torchvision import datasets, transforms\r\nimport torch_xla\r\nimport torch_xla.debug.metrics as met\r\nimport torch_xla.distributed.parallel_loader as pl\r\nimport torch_xla.utils.utils as xu\r\nimport torch_xla.core.xla_model as xm\r\nimport torch_xla.distributed.xla_multiprocessing as xmp\r\nimport torch_xla.test.test_utils as test_utils\r\n\r\ndef set_deterministic(seed=101):\r\n torch.manual_seed(seed)\r\n random.seed(seed)\r\n np.random.seed(seed)\r\n torch.use_deterministic_algorithms(True)\r\n torch.backends.cudnn.deterministic = True\r\n torch.backends.cudnn.benchmark = False\r\n os.environ['CUBLAS_WORKSPACE_CONFIG']=':4096:8'\r\n os.environ['TF_CUDNN_DETERMINISTIC']='1'\r\n os.environ['TF_DETERMINISTIC_OPS']='1'\r\n xm.set_rng_state(seed)\r\n torch_xla._XLAC._xla_set_use_full_mat_mul_precision(\r\n use_full_mat_mul_precision=True)\r\n\r\n\r\nclass MNIST(nn.Module):\r\n def __init__(self):\r\n super(MNIST, self).__init__()\r\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\r\n self.bn1 = nn.BatchNorm2d(10)\r\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\r\n self.bn2 = nn.BatchNorm2d(20)\r\n self.fc1 = nn.Linear(320, 50)\r\n self.fc2 = nn.Linear(50, 10)\r\n\r\n def forward(self, x):\r\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\r\n x = self.bn1(x)\r\n x = F.relu(F.max_pool2d(self.conv2(x), 2))\r\n x = self.bn2(x)\r\n x = torch.flatten(x, 1)\r\n x = F.relu(self.fc1(x))\r\n x = self.fc2(x)\r\n return F.log_softmax(x, dim=1)\r\n\r\n\r\ndef _train_update(device, x, loss, tracker, writer):\r\n test_utils.print_training_update(\r\n device, x, loss.item(), tracker.rate(), tracker.global_rate(), summary_writer=writer\r\n )\r\n\r\n\r\ndef train_mnist(flags, **kwargs):\r\n\r\n if flags.fake_data:\r\n train_loader = xu.SampleGenerator(\r\n data=(\r\n torch.zeros(flags.batch_size, 1, 28, 28),\r\n torch.zeros(flags.batch_size, dtype=torch.int64),\r\n ),\r\n sample_count=60000 // flags.batch_size // xm.xrt_world_size(),\r\n )\r\n test_loader = xu.SampleGenerator(\r\n data=(\r\n torch.zeros(flags.batch_size, 1, 28, 28),\r\n torch.zeros(flags.batch_size, dtype=torch.int64),\r\n ),\r\n sample_count=10000 // flags.batch_size // xm.xrt_world_size(),\r\n )\r\n else:\r\n train_dataset = datasets.MNIST(\r\n os.path.join(flags.datadir, str(xm.get_ordinal())),\r\n train=True,\r\n download=True,\r\n transform=transforms.Compose(\r\n [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\r\n ),\r\n )\r\n test_dataset = datasets.MNIST(\r\n os.path.join(flags.datadir, str(xm.get_ordinal())),\r\n train=False,\r\n download=True,\r\n transform=transforms.Compose(\r\n [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\r\n ),\r\n )\r\n train_sampler = None\r\n\r\n train_loader = torch.utils.data.DataLoader(\r\n train_dataset,\r\n batch_size=flags.batch_size,\r\n sampler=train_sampler,\r\n drop_last=flags.drop_last,\r\n shuffle=False,\r\n num_workers=flags.num_workers,\r\n )\r\n\r\n test_loader = torch.utils.data.DataLoader(\r\n 
test_dataset,\r\n batch_size=flags.batch_size,\r\n drop_last=flags.drop_last,\r\n shuffle=False", "url": "https://github.com/pytorch/xla/issues/3221", "state": "closed", "labels": [ "stale", "xla:gpu" ], "created_at": "2021-11-22T09:16:11Z", "updated_at": "2022-04-28T00:10:33Z", "user": "cicirori" }, { "repo": "pytorch/hub", "number": 254, "title": "How to use hub if don't have network?", "body": "Downloading: \"https://github.com/ultralytics/yolov5/archive/master.zip\" to /root/.cache/torch/hub/master.zip\r\nI always stop in last line.\r\nIs there anyway to use hub offline.", "url": "https://github.com/pytorch/hub/issues/254", "state": "closed", "labels": [], "created_at": "2021-11-22T08:44:47Z", "updated_at": "2021-11-22T09:40:01Z", "user": "Skypow2012" }, { "repo": "pytorch/tutorials", "number": 1747, "title": "Is it possible to perform partial conversion of a pytorch model to ONNX?", "body": "I have the following VAE model in pytorch that I would like to convert to ONNX (and eventually to TensorFlow):\r\nhttps://github.com/jlalvis/VAE_SGD/blob/master/VAE/autoencoder_in.py\r\n\r\nI am only interested in the decoding part of the model. Is it possible to convert only the decoder to ONNX?\r\n\r\nThanks in advance :)\n\ncc @BowenBao", "url": "https://github.com/pytorch/tutorials/issues/1747", "state": "closed", "labels": [ "question", "onnx" ], "created_at": "2021-11-19T10:49:56Z", "updated_at": "2023-03-07T17:36:34Z", "user": "ShiLevy" }, { "repo": "pytorch/functorch", "number": 280, "title": "How to update the original model parameters after calling make_functional?", "body": "As per the title, I find that updating the tensors pointed by the `params` returned by `make_functional` does not update the real parameters in the original model.\r\nIs there a way to do this? 
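One possible workaround for the `make_functional` question above, sketched under the assumption that the returned parameters are ordered like `model.parameters()`: copy the updated values back into the module with `Tensor.copy_` under `torch.no_grad()`.

```python
import torch

def write_back(model, new_params):
    # Assumes new_params is ordered like model.parameters(), matching how
    # make_functional extracted them in the first place.
    with torch.no_grad():
        for p, new_p in zip(model.parameters(), new_params):
            p.copy_(new_p)
```

In the script further below, calling `write_back(model, params)` after `optstep` would make the final `criterion(model(x), targets)` reflect the update.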
I find that it would be extremely useful to implement optimization algorithms in a way that is more similar to their mathematical description.\r\n\r\nTo provide more context I add an example script of what standard Gradient Descent should look like in this way:\r\n```python\r\nimport torch\r\nfrom torch import nn\r\nfrom functorch import make_functional\r\n\r\nlearning_rate = 0.1\r\n\r\ndef optstep(params, jacobians):\r\n with torch.no_grad():\r\n for i, param in enumerate(params):\r\n param.add_(jacobians[i], alpha=-learning_rate)\r\n\r\nif __name__ == '__main__':\r\n model = nn.Linear(3, 5)\r\n x, targets = torch.randn(2, 3), torch.randn(2, 5)\r\n criterion = nn.MSELoss()\r\n\r\n print(\"INITIAL LOSS:\", criterion(model(x), targets).item())\r\n # Render the model functional and compute the jacobian \r\n func_model, params = make_functional(model)\r\n def f(*params):\r\n out = func_model(params, x)\r\n return criterion(out, targets)\r\n jacobian = torch.autograd.functional.jacobian(f, params)\r\n\r\n # Ideally would train on the current input \r\n optstep(params, jacobian)\r\n # Now compute the new loss \r\n print(\"NEW LOSS:\", criterion(model(x), targets).item())\r\n```\r\n\r\nExecuting the script shows that the parameters are not updated since the loss doesn't change\r\n```\r\nINITIAL LOSS: 1.2894147634506226\r\nNEW LOSS: 1.2894147634506226\r\n```", "url": "https://github.com/pytorch/functorch/issues/280", "state": "open", "labels": [ "actionable" ], "created_at": "2021-11-19T08:54:25Z", "updated_at": "2022-04-13T22:32:19Z", "user": "trenta3" }, { "repo": "pytorch/TensorRT", "number": 733, "title": "Unable to use any Torch-TensorRT methods", "body": "I'm facing this error: \r\n\r\n> AttributeError: module 'torch_tensorrt' has no attribute 'compile'\r\n\r\nI also get this error when I try to use any other method like Input().\r\n\r\nThis is how I installed Torch-TensorRT: \r\n`pip install torch-tensorrt -f github.com/NVIDIA/Torch-TensorRT/releases`\r\n\r\nCode (from official documentation):\r\n```\r\nimport torch_tensorrt\r\n\r\nmodel = model.eval()\r\ncompile_settings = {\r\n \"input_shapes\": [\r\n {\r\n \"min\": [1, 1, 16, 16],\r\n \"opt\": [1, 1, 32, 32],\r\n \"max\": [1, 1, 64, 64]\r\n },\r\n ],\r\n \"op_precision\": torch.half # Run with fp16\r\n}\r\nenabled_precisions = {torch.float, torch.half}\r\n\r\ntrt_ts_module = torch_tensorrt.compile(model, inputs=compile_settings, enabled_precisions=enabled_precisions) \r\n```\r\n\r\nStack Trace:\r\n```\r\nAttributeError Traceback (most recent call last)\r\n<command-3167120371910218> in <module>\r\n 14 enabled_precisions = {torch.float, torch.half}\r\n 15 \r\n---> 16 trt_ts_module = torch_tensorrt.compile(model, inputs=compile_settings, enabled_precisions=enabled_precisions)\r\n\r\nAttributeError: module 'torch_tensorrt' has no attribute 'compile'\r\n```\r\n\r\nPlease let me know how I can fix this issue.", "url": "https://github.com/pytorch/TensorRT/issues/733", "state": "closed", "labels": [ "question", "No Activity", "channel: windows" ], "created_at": "2021-11-18T20:40:58Z", "updated_at": "2022-10-27T13:01:48Z", "user": "Arjunp24" }, { "repo": "pytorch/TensorRT", "number": 732, "title": "\u2753 [Question] More average batch time for torch-tensorrt compiled model than torchscript model (fp32 mode).", "body": "## \u2753 Question\r\nI am comparing the performances of the torchscript model and the torch-tensorrt compiled model, when I am running in float32 mode, the average batch time is more for torch-tensorrt model. Is this expected?1. 
I am running the below code to compare torchscript model and torch-tensorrt compiled models, \r\n\r\n```\r\nclass LeNetFeatExtractor(nn.Module):\r\n def __init__(self):\r\n super(LeNetFeatExtractor, self).__init__()\r\n self.conv1 = nn.Conv2d(1, 128, 3)\r\n self.conv2 = nn.Conv2d(128, 16, 3)\r\n\r\n def forward(self, x):\r\n x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))\r\n x = F.max_pool2d(F.relu(self.conv2(x)), 2)\r\n return x\r\n\r\nclass LeNetClassifier(nn.Module):\r\n def __init__(self):\r\n super(LeNetClassifier, self).__init__()\r\n self.fc1 = nn.Linear(16 * 6 * 6, 120)\r\n self.fc2 = nn.Linear(120, 84)\r\n self.fc3 = nn.Linear(84, 10)\r\n\r\n def forward(self, x):\r\n x = torch.flatten(x,1)\r\n x = F.relu(self.fc1(x))\r\n x = F.relu(self.fc2(x))\r\n x = self.fc3(x)\r\n return x\r\n\r\nclass LeNet(nn.Module):\r\n def __init__(self):\r\n super(LeNet, self).__init__()\r\n self.feat = LeNetFeatExtractor()\r\n self.classifer = LeNetClassifier()\r\n\r\n def forward(self, x):\r\n x = self.feat(x)\r\n x = self.classifer(x)\r\n return x\r\n\r\ndef benchmark(model, input_shape=(1024, 1, 32, 32), dtype='fp32', nwarmup=50, nruns=100):\r\n input_data = torch.randn(input_shape)\r\n input_data = input_data.to(\"cuda\")\r\n if dtype=='fp16':\r\n input_data = input_data.half()\r\n \r\n print(\"Warm up ...\")\r\n with torch.no_grad():\r\n for _ in range(nwarmup):\r\n features = model(input_data)\r\n torch.cuda.synchronize()\r\n print(\"Start timing ...\")\r\n timings = []\r\n with torch.no_grad():\r\n for i in range(1, nruns+1):\r\n start_time = time.time()\r\n features = model(input_data)\r\n torch.cuda.synchronize()\r\n end_time = time.time()\r\n timings.append(end_time - start_time)\r\n if i%100==0:\r\n print('Iteration %d/%d, ave batch time %.2f ms'%(i, nruns, np.mean(timings)*1000))\r\n\r\n print(\"Input shape:\", input_data.size())\r\n print(\"Output features size:\", features.size())\r\n \r\n print('Average batch time: %.2f ms'%(np.mean(timings)*1000))\r\n \r\nmodel = LeNet()\r\nmodel.to(\"cuda\").eval()\r\nbenchmark(model, dtype=\"fp32\")\r\ninpt = torch.empty([1,1,32,32]).to(\"cuda\")\r\ntraced_model = torch.jit.trace(model, inpt)\r\nbenchmark(traced_model, dtype=\"fp32\")\r\nscript_model = torch.jit.script(model)\r\nbenchmark(script_model, dtype=\"fp32\")\r\n\r\ncompile_settings = {\r\n \"inputs\": [torch_tensorrt.Input(\r\n min_shape=[1024, 1, 32, 32],\r\n opt_shape=[1024, 1, 33, 33],\r\n max_shape=[1024, 1, 34, 34],\r\n dtype=torch.float\r\n )],\r\n \"enabled_precisions\": {torch.float} # Run with FP16\r\n}\r\n\r\ntrt_ts_module = torch_tensorrt.compile(traced_model, **compile_settings)\r\nbenchmark(trt_ts_module, input_shape=(1024, 1, 32, 32), dtype=\"fp32\")\r\n```\r\n2. 
Check below my performance comparison results:\r\n```\r\nWarm up ...\r\nStart timing ...\r\nIteration 100/100, ave batch time 39.72 ms\r\nInput shape: torch.Size([1024, 1, 32, 32])\r\nOutput features size: torch.Size([1024, 10])\r\nAverage batch time: 39.72 ms\r\nWarm up ...\r\nStart timing ...\r\nIteration 100/100, ave batch time 39.74 ms\r\nInput shape: torch.Size([1024, 1, 32, 32])\r\nOutput features size: torch.Size([1024, 10])\r\nAverage batch time: 39.74 ms\r\nWarm up ...\r\nStart timing ...\r\nIteration 100/100, ave batch time 39.77 ms\r\nInput shape: torch.Size([1024, 1, 32, 32])\r\nOutput features size: torch.Size([1024, 10])\r\nAverage batch time: 39.77 ms\r\nWARNING: [Torch-TensorRT] - Dilation not used in Max pooling converter\r\nWARNING: [Torch-TensorRT] - Dilation not used in Max pooling converter\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 10.2.2\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - Detected invalid timing cache, setup a local cache instead\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - Max value of this profile is not valid\r\nWARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 10.2.2\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 10.2.2\r\nWARNING: [Torch-TensorRT] - TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 10.2.2\r\nWarm up ...\r\nStart timing ...\r\nIteration 100/100, ave batch time 57.29 ms\r\nInput shape: torch.Size([1024, 1, 32, 32])\r\nOutput features size: torch.Size([1024, 10])\r\nAverage batch time: 57.29 ms\r\n\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about Torch-TensorRT can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.10\r\n", "url": "https://github.com/pytorch/TensorRT/issues/732", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-11-18T17:58:10Z", "updated_at": "2022-02-28T17:49:23Z", "user": "harishkool" }, { "repo": "pytorch/TensorRT", "number": 730, "title": "Convert YoloV5 models ", "body": "It is my understanding that the new stable release should be able to convert any PyTorch model with fallback to PyTorch when operations cannot be directly converted to TensorRT. I am trying to convert \r\nI am trying to convert YoloV5s6 to TensorRT using the code that you can find below. I believe that it would be great to be able to convert this particular model given its popularity.\r\nDuring the conversion I am encountering some errors. Is this because the model cannot be converted to TorchScript? I also noticed that the model is composed of classes which extends `nn.Module`.\r\nOf course, YoloV5 code can be found here: https://github.com/ultralytics/yolov5\r\nThank you!\r\n\r\n## To Reproduce\r\n\r\n```\r\nimport torch\r\nimport torch_tensorrt\r\nmodel = torch.hub.load(\"ultralytics/yolov5\", \"yolov5s6\")\r\nmodel.eval()\r\ncompile_settings = {\r\n \"inputs\": [torch_tensorrt.Input(\r\n # For static size\r\n shape=[1, 3, 640, 640], # TODO: depends on the model size\r\n # For dynamic size\r\n # min_shape=[1, 3, 224, 224],\r\n # opt_shape=[1, 3, 512, 512],\r\n # max_shape=[1, 3, 1024, 1024],\r\n dtype=torch.half, # Datatype of input tensor. 
Allowed options torch.(float|half|int8|int32|bool)\r\n )],\r\n # \"require_full_compilation\": False,\r\n \"enabled_precisions\": {torch.half}, # Run with FP16\r\n \"torch_fallback\": {\r\n \"enabled\": True, # Turn on or turn off falling back to PyTorch if operations are not supported in TensorRT\r\n }\r\n}\r\ntrt_ts_module = torch_tensorrt.compile(model, **compile_settings)\r\n```\r\n\r\nOutput:\r\n```\r\nUsing cache found in /home/ubuntu/.cache/torch/hub/ultralytics_yolov5_master\r\nYOLOv5 \ud83d\ude80 2021-11-17 torch 1.10.0+cu113 CUDA:0 (Tesla T4, 15110MiB)\r\nFusing layers... \r\nModel Summary: 280 layers, 12612508 parameters, 0 gradients\r\nAdding AutoShape... \r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/code.py\", line 91, in runcode\r\n exec(code, self.locals)\r\n File \"<input>\", line 21, in <module>\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch_tensorrt/_compile.py\", line 96, in compile\r\n ts_mod = torch.jit.script(module)\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/_script.py\", line 1258, in script\r\n obj, torch.jit._recursive.infer_methods_to_compile\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/_recursive.py\", line 451, in create_script_module\r\n return create_script_module_impl(nn_module, concrete_type, stubs_fn)\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/_recursive.py\", line 513, in create_script_module_impl\r\n script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/_script.py\", line 587, in _construct\r\n init_fn(script_module)\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/_recursive.py\", line 491, in init_fn\r\n scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/_recursive.py\", line 517, in create_script_module_impl\r\n create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/_recursive.py\", line 368, in create_methods_and_properties_from_stubs\r\n concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/_script.py\", line 1433, in _recursive_compile_class\r\n return _compile_and_register_class(obj, rcb, _qual_name)\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/_recursive.py\", line 42, in _compile_and_register_class\r\n ast = get_jit_class_def(obj, obj.__name__)\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/frontend.py\", line 201, in get_jit_class_def\r\n is_classmethod=is_classmethod(obj)) for (name, obj) in methods]\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/frontend.py\", line 201, in <listcomp>\r\n is_classmethod=is_classmethod(obj)) for (name, obj) in methods]\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/frontend.py\", line 264, in get_jit_def\r\n return build_def(parsed_def.ctx, fn_def, type_line, def_name, self_name=self_name, pdt_arg_types=pdt_arg_types)\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/frontend.py\", line 302, in build_def\r\n param_list = build_param_list(ctx, py_def.args, self_name, 
pdt_arg_types)\r\n File \"/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/jit/frontend.py\", line 330, in build_param_list\r\n raise NotSupportedError(ctx_range, _vararg_kwarg_err)\r\ntorch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:\r\n File \"/usr", "url": "https://github.com/pytorch/TensorRT/issues/730", "state": "closed", "labels": [ "question", "No Activity", "component: partitioning" ], "created_at": "2021-11-17T18:37:44Z", "updated_at": "2023-02-27T00:02:29Z", "user": "mfoglio" }, { "repo": "pytorch/TensorRT", "number": 727, "title": "I trained a model by libtorch,how to convert it to tensorrt?", "body": "by libtorch,not by pytorch.\r\nhow to convert the model to tensorrt?", "url": "https://github.com/pytorch/TensorRT/issues/727", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-11-17T07:35:55Z", "updated_at": "2022-02-26T00:01:58Z", "user": "henbucuoshanghai" }, { "repo": "pytorch/vision", "number": 4949, "title": "GPU usage keeps increasing marginally with each inference request", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nI have been trying to deploy the RetinaNet pre-trained model available in torchvision. However, after every inference request with exactly same image, the gpu usage keeps increasing marginally (by roughly 10 MiB, as visible in nvidia-smi). (Same behavior is noticed if I try the same with other Object Detection models like fasterrcnn_resnet50_fpn, fasterrcnn_mobilenet_v3_large_fpn.)\r\n\r\nThe brief code used for inference\r\n```python\r\n\r\n# load the model\r\nself.Model = retinanet_resnet50_fpn(pretrained=True)\r\nself.Model.to(torch.device('cuda'))\r\nself.Model.eval()\r\n\r\n# inference fn\r\n@torch.no_grad()\r\ndef DetectObjects(self, image, threshold=0.4)->dict:\r\n image = convert_image_dtype(torch.stack([image]), dtype=torch.float)\r\n image = image.to(torch.device('cuda'))\r\n results = self.Model(image)[0]\r\n gt = results['scores']>threshold\r\n labels, boxes, scores = results['labels'][gt].cpu().tolist(), results['boxes'][gt].cpu().tolist(), results['scores'][gt].cpu().tolist()\r\n torch.cuda.empty_cache()\r\n return boxes, labels, scores\r\n```\r\n\r\n\r\n### Versions\r\n```\r\nCollecting environment information...\r\nPyTorch version: 1.10.0+cu102\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.6 LTS (x86_64)\r\nGCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0\r\nClang version: Could not collect\r\nCMake version: version 3.10.2\r\nLibc version: glibc-2.17\r\n\r\nPython version: 3.7.10 (default, Feb 20 2021, 21:21:24) [GCC 5.4.0 20160609] (64-bit runtime)\r\nPython platform: Linux-4.15.0-162-generic-x86_64-with-Ubuntu-18.04-bionic\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: GPU 0: NVIDIA GeForce GTX 1660 SUPER\r\nNvidia driver version: 465.19.01\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4\r\nHIP runtime version: N/A\r\nMIOpen runtime version: 
N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.21.4\r\n[pip3] torch==1.10.0\r\n[pip3] torchvision==0.11.1\r\n[conda] Could not collect\r\n```\r\n\n\ncc @datumbox", "url": "https://github.com/pytorch/vision/issues/4949", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2021-11-16T19:43:49Z", "updated_at": "2024-02-28T15:01:40Z", "user": "shv07" }, { "repo": "pytorch/android-demo-app", "number": 209, "title": "How to add language model in ASR demo", "body": "The wav2vec2 used in the SpeechRecognition example does not have a language model. How to add language model in the demo app\uff1f", "url": "https://github.com/pytorch/android-demo-app/issues/209", "state": "open", "labels": [], "created_at": "2021-11-16T10:55:43Z", "updated_at": "2021-12-10T11:36:07Z", "user": "guijuzhejiang" }, { "repo": "pytorch/pytorch", "number": 68414, "title": "I used libtorch train a model,how to convert it to onnx?", "body": "libtorch trained a model", "url": "https://github.com/pytorch/pytorch/issues/68414", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2021-11-16T07:30:36Z", "updated_at": "2021-11-17T00:03:41Z", "user": "henbucuoshanghai" }, { "repo": "pytorch/torchx", "number": 345, "title": "slurm_scheduler: handle OCI images", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nAdd support for running TorchX components via the Slurm OCI interface.\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\n\r\nSlurm 21.08+ has support for running OCI containers as the environment. This matches well with our other docker/k8s images that we use by default. With workspaces + OCI we can support slurm like the docker based environments.\r\n\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\nThe new slurm container support doesn't handle the image finding the same way docker/podman does. This means that the images need to be placed on disk in the same way a virutalenv would be supported which would have to be a user configurable path.\r\n\r\nThis also means that we have to interact with docker/buildah to download the images and export them to an OCI image on disk. There's some extra questions about image management to avoid disk space issues etc.\r\n\r\nThe cluster would have to be configured with `nvidia-container-runtime` for use with GPUs.\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n\r\nhttps://slurm.schedmd.com/containers.html\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/345", "state": "open", "labels": [ "enhancement", "module: runner", "slurm" ], "created_at": "2021-11-15T23:25:21Z", "updated_at": "2021-11-15T23:25:21Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/torchx", "number": 344, "title": "workspace notebook UX", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nWe should add some notebook specific integrations to make working with workspace and launching remote jobs first class. This builds upon the workspace support tracked by #333.\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\n\r\nCurrently there's no specific TorchX integrations for running from within notebooks. 
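On the growing-GPU-usage report above (pytorch/vision#4949), note that nvidia-smi reports the caching allocator's reserved pool, which only shrinks on `torch.cuda.empty_cache()`. A small helper for separating live-tensor growth from allocator caching; the MiB formatting is only for readability:

```python
import torch

def report_cuda_memory(tag=""):
    # memory_allocated: tensors currently alive; memory_reserved: what the caching
    # allocator holds (this is roughly what nvidia-smi attributes to the process).
    print(f"{tag} allocated={torch.cuda.memory_allocated() / 2**20:.1f} MiB "
          f"reserved={torch.cuda.memory_reserved() / 2**20:.1f} MiB "
          f"max_allocated={torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")
```

If `memory_allocated` stays flat across identical requests while the reserved number creeps up, the growth is allocator behaviour rather than a leak of live tensors.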
It's possible but it's not as fleshed out as it could be.\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\n### Jupyter Custom Magics\r\n\r\nWe want to add a custom magic to allow adding files to the workspace.\r\n\r\nhttps://ipython.readthedocs.io/en/stable/config/custommagics.html#defining-custom-magics\r\n\r\n```py\r\nfrom torchx.notebook import register_magics, get_workspace\r\n\r\nregister_magics()\r\n```\r\n\r\n```py\r\n%%workspacefile train.py\r\n\r\nprint(\"train Hello world!\")\r\n```\r\n\r\n```py\r\nfrom torchx.components.utils import python\r\nfrom torchx.runner import get_runner\r\n\r\napp = python(m=\"train\")\r\napp_id = runner.run(app, scheduler=\"local_docker\", workspace=get_workspace())\r\nprint(app_id)\r\nstatus = runner.wait(app_id)\r\nprint(status)\r\n```\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\nThis can already be accomplished by writing out a file and directly calling docker build etc. That's a lot more work on the user and requires having an on disk project so the notebook isn't fully self contained.\r\n\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n\r\nWorkspace/canary tracking #333", "url": "https://github.com/meta-pytorch/torchx/issues/344", "state": "open", "labels": [ "enhancement", "module: runner" ], "created_at": "2021-11-15T22:50:59Z", "updated_at": "2021-11-15T22:50:59Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/android-demo-app", "number": 208, "title": "How to run model with grayscale input?", "body": "", "url": "https://github.com/pytorch/android-demo-app/issues/208", "state": "open", "labels": [], "created_at": "2021-11-15T09:59:11Z", "updated_at": "2021-11-15T09:59:11Z", "user": "bartproo" }, { "repo": "pytorch/torchx", "number": 340, "title": "Advanced Pipeline Example Errors on KFP", "body": "## \ud83d\udcda Documentation\r\n\r\n## Link\r\nhttps://pytorch.org/torchx/main/examples_pipelines/kfp/advanced_pipeline.html#sphx-glr-examples-pipelines-kfp-advanced-pipeline-py\r\n\r\n## What does it currently say?\r\nThe pipeline.yaml can be generated and run on Kubeflow\r\n\r\n## What should it say?\r\nUnknown, I believe it is a race condition in the code where the pipeline begins execution before the download of the data is complete.\r\n\r\n## Why?\r\n<img width=\"804\" alt=\"Screen Shot 2021-11-13 at 1 36 46 AM\" src=\"https://user-images.githubusercontent.com/43734688/141608949-e41c0d23-4131-4eb8-82c6-e86aeb579e8e.png\">\r\n\r\n\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/340", "state": "closed", "labels": [], "created_at": "2021-11-13T06:40:22Z", "updated_at": "2022-01-04T05:06:42Z", "comments": 2, "user": "sam-h-bean" }, { "repo": "pytorch/torchx", "number": 339, "title": "separate .torchxconfig for fb/ and oss", "body": "## Description\r\nWe want to have a FB internal .torchxconfig file to specify scheduler_args for internal cluster and a OSS .torchxconfig file to run on public clusters\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\n\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. 
-->\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/339", "state": "closed", "labels": [], "created_at": "2021-11-12T19:34:39Z", "updated_at": "2021-11-16T00:31:48Z", "comments": 1, "user": "colin2328" }, { "repo": "pytorch/xla", "number": 3212, "title": "How to enable oneDNN optimization\uff1f", "body": "## \u2753 Questions and Help\r\nI am doing training and inference on XLA_CPU, but I find that the training speed is particularly slow. Compared with pytorch, the training speed is about 10 times slower.\r\nAccording to the log, I found that mklcnn acceleration is enabled by default during pytorch training, but when I use xla training, mklcnn is not enabled.\r\n\r\n2021-11-12 16:12:44.683410: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n\r\nHow can I enable mkldnn on xla to get training acceleration?", "url": "https://github.com/pytorch/xla/issues/3212", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2021-11-12T08:26:47Z", "updated_at": "2022-04-16T13:44:03Z", "user": "ZhongYFeng" }, { "repo": "pytorch/TensorRT", "number": 708, "title": "ImportError: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory", "body": "I installed torch-tensorrt via pip: `pip3 install torch-tensorrt -f github.com/NVIDIA/Torch-TensorRT/releases`. And when I try to import it the _ImportError_ raises:\r\n_ImportError: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory_\r\nThe full error:\r\n\r\n```\r\nImportError Traceback (most recent call last)\r\n<ipython-input-13-82536c89b207> in <module>\r\n 13 from vision.utils.misc import str2bool, Timer, freeze_net_layers, store_labels\r\n 14 \r\n---> 15 import torch_tensorrt as torchtrt\r\n 16 \r\n 17 # import pytorch_quantization\r\n\r\n~/.local/lib/python3.6/site-packages/torch_tensorrt/__init__.py in <module>\r\n 9 \r\n 10 from torch_tensorrt._version import __version__\r\n---> 11 from torch_tensorrt._compile import *\r\n 12 from torch_tensorrt._util import *\r\n 13 from torch_tensorrt import ts\r\n\r\n~/.local/lib/python3.6/site-packages/torch_tensorrt/_compile.py in <module>\r\n 1 from typing import List, Dict, Any\r\n----> 2 from torch_tensorrt import _enums\r\n 3 import torch_tensorrt.ts\r\n 4 from torch_tensorrt import logging\r\n 5 import torch\r\n\r\n~/.local/lib/python3.6/site-packages/torch_tensorrt/_enums.py in <module>\r\n----> 1 from torch_tensorrt._C import dtype, DeviceType, EngineCapability, TensorFormat\r\n\r\nImportError: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory\r\n```\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): 1.10.0\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Ubuntu 18.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip3 install torch-tensorrt -f github.com/NVIDIA/Torch-TensorRT/releases\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.6\r\n - CUDA version: 10.2\r\n - GPU models and configuration: GeForce RTX 2080 Ti\r\n - Any other relevant information:\r\n", "url": "https://github.com/pytorch/TensorRT/issues/708", "state": "closed", "labels": [ "question" ], "created_at": 
"2021-11-11T09:04:07Z", "updated_at": "2022-07-29T15:02:57Z", "user": "anvarganiev" }, { "repo": "pytorch/TensorRT", "number": 697, "title": "why https://github.com/NVIDIA/Torch-TensorRT/releases/torch_tensorrt-1.0.0-cp36-cp36m-linux_x86_64.whl depends on cuda 10.2 library", "body": "\r\nwhy https://github.com/NVIDIA/Torch-TensorRT/releases/torch_tensorrt-1.0.0-cp36-cp36m-linux_x86_64.whl depends on cuda 10.2 library\r\n<!-- Your question -->\r\n\r\nwhen I try to install torch-tensorrt and import torch_tensorrt ,It was reported ImportError:libcudart.so.10.2: cannot open shared object file: No such file or directory\r\n\r\n## Environment\r\n\r\n> ImportError Traceback (most recent call last)\r\n><ipython-input-1-291a947ced8e> in <module>\r\n>----> 1 import torch_tensorrt\r\n>\r\n>/usr/local/python3/lib/python3.6/site-packages/torch_tensorrt/__init__.py in <module>\r\n> 9\r\n> 10 from torch_tensorrt._version import __version__\r\n>---> 11 from torch_tensorrt._compile import *\r\n> 12 from torch_tensorrt._util import *\r\n> 13 from torch_tensorrt import ts\r\n>\r\n>/usr/local/python3/lib/python3.6/site-packages/torch_tensorrt/_compile.py in <module>\r\n> 1 from typing import List, Dict, Any\r\n>----> 2 from torch_tensorrt import _enums\r\n> 3 import torch_tensorrt.ts\r\n> 4 from torch_tensorrt import logging\r\n> 5 import torch\r\n>\r\n>/usr/local/python3/lib/python3.6/site-packages/torch_tensorrt/_enums.py in <module>\r\n>----> 1 from torch_tensorrt._C import dtype, DeviceType, EngineCapability, TensorFormat\r\n>\r\n>ImportError: libcudart.so.10.2: cannot open shared object file: No such file or directory\r\n\r\n - PyTorch Version (e.g., 1.0):torch_1.10+cu113\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Centos 7\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip3\r\n - Build command you used (if compiling from source): \r\n - Are you using local sources or building from archives:\r\n - Python version: python3.6\r\n - CUDA version: cuda-11.3\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/697", "state": "closed", "labels": [ "question" ], "created_at": "2021-11-10T12:48:15Z", "updated_at": "2021-11-10T17:02:41Z", "user": "ylz1104" }, { "repo": "pytorch/torchx", "number": 336, "title": "[docs] add context/intro to each docs page", "body": "## \ud83d\udcda Documentation\r\n\r\n## Link\r\nEx: https://pytorch.org/torchx/main/basics.html\r\n\r\nand some other pages\r\n\r\n## What does it currently say?\r\ndoesn't currently have an intro about the page and how it fits in context, just jumps right into the documentation\r\n\r\n## What should it say?\r\n<!-- the proposed new documentation -->\r\n\r\n## Why?\r\n<!-- (if not clear from the proposal) why is the new proposed documentation more correct/improvement over the existing one? -->\r\n\r\nWe got some good feedback from the documentation folks about adding context to each page so if someone gets linked to it they're not totally lost. 
This matches some of the user feedback we've received so would be good to update this.\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/336", "state": "closed", "labels": [ "documentation" ], "created_at": "2021-11-08T23:26:47Z", "updated_at": "2021-11-11T18:33:18Z", "comments": 1, "user": "d4l3k" }, { "repo": "pytorch/pytorch", "number": 67965, "title": "how to set the quantized data type in QAT", "body": "When I use the qat and extract the intermedia layer's output I find it's quint8\r\n![MicrosoftTeams-image](https://user-images.githubusercontent.com/32367611/140634280-ce018cf8-2d53-4ee1-98b3-97cf44f9e54f.png)\r\n.\r\nThis datatype will be the input to the next layer I think.\r\nBut the weights are qint8 type. multiple a qint8 with quin8. Does this make sense?\r\n\r\n\r\nCurrently I use the default way to do the qat prepration:\r\n`model_new.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')`\r\n\r\nIt seems the observer controls the data type. is there a way to set the data type and do QAT preparation?\n\ncc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo", "url": "https://github.com/pytorch/pytorch/issues/67965", "state": "closed", "labels": [ "oncall: quantization" ], "created_at": "2021-11-07T06:03:27Z", "updated_at": "2021-11-09T13:45:28Z", "user": "mathmax12" }, { "repo": "pytorch/tutorials", "number": 1742, "title": "ddp_pipeline", "body": "I ran the code as is on the cluster. Gives\r\n\r\n`RuntimeError: unsupported operation: some elements of the input tensor and the written-to tensor refer to a single memory location. Please clone() the tensor before performing the operation.`\r\n\r\nWhat could be wrong? Also, is there any way to run this code in Jupyter? By the way, the Colab [notebook](https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/8976a0b7cba4d8c4bc2a28205b91a7da/ddp_pipeline.ipynb) doesn't run and gives and error\r\n\r\n`Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/r/roman-koshkin/miniconda3/envs/tranformer/lib/python3.8/multiprocessing/spawn.py\", line 116, in spawn_main\r\n exitcode = _main(fd, parent_sentinel)\r\n File \"/home/r/roman-koshkin/miniconda3/envs/tranformer/lib/python3.8/multiprocessing/spawn.py\", line 126, in _main\r\n self = reduction.pickle.load(from_parent)\r\nAttributeError: Can't get attribute 'run_worker' on <module '__main__' (built-in)>\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/r/roman-koshkin/miniconda3/envs/tranformer/lib/python3.8/multiprocessing/spawn.py\", line 116, in spawn_main\r\n exitcode = _main(fd, parent_sentinel)\r\n File \"/home/r/roman-koshkin/miniconda3/envs/tranformer/lib/python3.8/multiprocessing/spawn.py\", line 126, in _main\r\n self = reduction.pickle.load(from_parent)\r\nAttributeError: Can't get attribute 'run_worker' on <module '__main__' (built-in)>`", "url": "https://github.com/pytorch/tutorials/issues/1742", "state": "closed", "labels": [], "created_at": "2021-11-06T01:13:50Z", "updated_at": "2022-09-28T15:11:42Z", "comments": 2, "user": "RomanKoshkin" }, { "repo": "pytorch/pytorch", "number": 67757, "title": "How to build libtorch on aarch64 machine?", "body": "I used command lines below to build libtorch on aarch64 machine.\r\n```shell\r\ngit clone https://github.com/pytorch/pytorch --recursive\r\ncd pytorch\r\npip3 install pyyaml # 
\u7f3a\u5931\u76f8\u5173\u4f9d\u8d56\uff0c\u8fdb\u884c\u5b89\u88c5\uff0c\u5982\u6709\u5176\u4ed6\u7f3a\u5931\uff0c\u4f9d\u6b21\u5b89\u88c5\u5373\u53ef\r\nexport USE_CUDA=False # \u4f7f\u7528cpu\r\nexport BUILD_TEST=False # \u4e0d\u7f16\u8bd1\u6d4b\u8bd5\u90e8\u5206\r\npython3 ../tools/build_libtorch.py # \u4f1a\u81ea\u52a8\u521b\u5efabuild\u6587\u4ef6\u5939\uff0c\u5e76\u8fdb\u884c\u76f8\u5173\u7f16\u8bd1\r\n```\r\nBut when I test it like [example](https://pytorch.org/cppdocs/installing.html),it shows some error.\r\n![image](https://user-images.githubusercontent.com/7894966/140027833-0e27c9b0-578d-4dde-bb8d-1dd88e2516b7.png)\r\nI see that size of my libtorch_cpu.so is only 100Mb,far less than the official one in cpu. But I dont konw whats wrong with it.", "url": "https://github.com/pytorch/pytorch/issues/67757", "state": "closed", "labels": [], "created_at": "2021-11-03T08:19:34Z", "updated_at": "2021-11-04T13:43:36Z", "user": "zihaoliao" }, { "repo": "pytorch/pytorch", "number": 67596, "title": "How to upgrade the NCCL version of pytorch 1.7.1 from 2.7.8 to 2.11.4? ", "body": "\r\n I have installed version 2.11.4 in wsl2 and can pass the nccl-tests. However, when training the model, pytorch 1.7.1 still calls NCCL 2.7.8. In addition to rebuilding, is there a way for pytorch 1.7.1 to call NCCL 2.11.4 in the system instead of calling the compiled version NCCL 2.7.8\uff1f", "url": "https://github.com/pytorch/pytorch/issues/67596", "state": "closed", "labels": [], "created_at": "2021-10-31T09:01:45Z", "updated_at": "2021-11-02T11:41:14Z", "user": "cascgu" }, { "repo": "pytorch/android-demo-app", "number": 198, "title": "How to reduce the size of pt file ", "body": "Thanks for your Image Segmentation deepLab v3. I have used it to implement an android file, but the size is about 150MB. Can you enlighten me how can I reduce the size. Thanks. ", "url": "https://github.com/pytorch/android-demo-app/issues/198", "state": "open", "labels": [], "created_at": "2021-10-31T07:36:36Z", "updated_at": "2021-10-31T07:36:36Z", "user": "jjlchui" }, { "repo": "pytorch/vision", "number": 4802, "title": "How to monitor and when to retrain the object detection model in production?", "body": "I recently moved regression model to production and I\u2019m monitoring the model drift and data drift using statistical tests, based on their distributions i retrain the model. \n\nCould you please tell me how to monitoring the object detection model and detect drifts ? \n\n\nDo you use statistical test to detect drifts? If yes, how do you do that? And which value you as an input to detect?\n\n\nPlease guide me? Even if you could provide any relevant article also would help me\n\nThanks in advance\ud83d\ude0a", "url": "https://github.com/pytorch/vision/issues/4802", "state": "closed", "labels": [], "created_at": "2021-10-30T12:45:47Z", "updated_at": "2021-10-31T14:33:09Z", "user": "IamExperimenting" }, { "repo": "pytorch/vision", "number": 4795, "title": "[docs] Pretrained model docs should explain how to specify cache dir and norm_layer", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nhttps://pytorch.org/vision/stable/models.html?highlight=resnet18#torchvision.models.resnet18\r\n\r\nshould document:\r\n- how to set cache dir for downloaded models. many university systems have tight quota for home dir that prohibits clogging it with weights. 
it is explained at the very top of very long document (`TORCH_MODEL_ZOO`) but it would be nice to duplicate it / link to this from every pretrained method\r\n\r\n- how to set `norm_layer = torchvision.ops.misc.FrozenBatchNorm2d` since this is a very frequent need for fine-tuning\r\n\r\n- how to replace stride with dilation for ResNet and to what layers it applies and what it can help achieving\r\n\r\nCurrently docs just specify `**kwargs: Any` which isn't very helpful\r\n\r\n### Versions\r\n\r\nN/A", "url": "https://github.com/pytorch/vision/issues/4795", "state": "open", "labels": [], "created_at": "2021-10-29T11:50:17Z", "updated_at": "2021-11-13T21:46:45Z", "user": "vadimkantorov" }, { "repo": "pytorch/torchx", "number": 316, "title": "[torchx/cli] Implement a torchx \"template\" subcommand that copies the given builtin", "body": "Torchx cli maintains a list of builtin components that are available via `torchx builtin` cmd. The builtin components are the patterns that are configured to execute one or another use-case. Users can use these components without the need to manage their own, e.g. \r\n\r\n```\r\ntorchx run -s local_cwd dist.ddp --script main.py\r\n```\r\n\r\nwould run user `main.py` script in a distributed manner.\r\n\r\nIt is better for users to own their own components for production use-cases. \r\nTorch copy command enables users to create initial templetized version of their components from the existing builtin components. Users then can modify the code however they want.\r\n\r\nExample of usage\r\n\r\n```\r\n# torchx/components/dist.py\r\n\r\ndef ddp(..., nnodes=1):\r\n return AppDef(..., roles=[Role(name=\"worker\", num_replicas=nnodes)])\r\n\r\ntorchx copy dist.ddp \r\n\r\n# Output:\r\n\r\n\r\ndef ddp(..., nnodes=1):\r\n return AppDef(..., roles=[Role(name=\"worker\", num_replicas=nnodes)])\r\n\r\n```\r\n\r\nTorchx copy will print the corresponding component to the stdout, so users can inspect the source code and copy it via:\r\n\r\n```\r\ntorchx copy dist.ddp > my_component.py\r\n```", "url": "https://github.com/meta-pytorch/torchx/issues/316", "state": "closed", "labels": [ "enhancement", "cli" ], "created_at": "2021-10-28T20:28:07Z", "updated_at": "2021-11-03T21:27:12Z", "comments": 0, "user": "aivanou" }, { "repo": "pytorch/tutorials", "number": 1735, "title": "Missing tutorial on using the transformer decoder layer?", "body": "Hi, i'm new with transformers.\r\nFor research purpose, with a colleague, I'm trying to implement a transformer for anomaly detection in human pose.\r\nThe transformer setting we need is very similar to an autoencoder, where the encoder generates a sort of latent representation and the decoder output is just a model attempt to reconstruct the input.\r\n\r\nWe were looking for transformer tutorials where both nn.TransformerEncoder and nn.TransformerDecoder are used, but we couldn't find anyone. 
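A minimal sketch of the encoder-plus-decoder setup described above, wiring `nn.TransformerEncoder` and `nn.TransformerDecoder` together in an autoencoder-like way; the dimensions and the choice to feed the input sequence back in as the decoder query are illustrative assumptions, not a prescribed recipe:

```python
import torch
import torch.nn as nn

class PoseTransformerAE(nn.Module):
    # The decoder attends to the encoder memory and tries to reconstruct the input.
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, src):
        memory = self.encoder(src)           # (seq_len, batch, d_model)
        recon = self.decoder(src, memory)    # input sequence used as the decoder query
        return self.out(recon)

x = torch.randn(10, 2, 64)   # (seq_len, batch, d_model)
model = PoseTransformerAE()
print(model(x).shape)        # torch.Size([10, 2, 64])
```

Training against a reconstruction loss such as `nn.MSELoss()(model(x), x)` gives the autoencoder-style objective described in the question.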
Are we missing something or pytorch literally didn't provide any tutorial except the ones with just the using of the encoder?", "url": "https://github.com/pytorch/tutorials/issues/1735", "state": "closed", "labels": [], "created_at": "2021-10-28T16:27:27Z", "updated_at": "2022-03-17T16:15:07Z", "comments": 0, "user": "AndreaLombax" }, { "repo": "pytorch/pytorch", "number": 67438, "title": "how to use torch.jit.script with toch.nn.DataParallel", "body": "### \ud83d\udc1b Describe the bug\n\nnet = torch.nn.DataParallel(net)\r\nnet.load_state_dict(state1,False)\r\nwith torch.jit.optimized_execution(True):\r\n net_jit = torch.jit.script(net)\r\n\r\n\r\ntorch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:\n\n### Versions\n\ntorch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:", "url": "https://github.com/pytorch/pytorch/issues/67438", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2021-10-28T12:13:51Z", "updated_at": "2022-11-22T11:58:22Z", "user": "anliyuan" }, { "repo": "pytorch/TensorRT", "number": 685, "title": "\u2753 [Question] TRtorch v0.1.0 does support aten::divonly?", "body": "## \u2753 Question\r\nI am trying to compile my model.\r\nHowever compiler stops owing to a error.\r\nDoss TRTorch v0.1.0 support `aten::divonly`?\r\nAlso, does the newer TRTorch support `aten::divonly`?\r\n\r\n## What you have already tried\r\nI searched the error messages at the internet.\r\n\r\n## Environment\r\npytorch 1.6\r\nTRTorch 0.1.0\r\n\r\n## Additional context\r\nThe error message is this one.\r\n```\r\n> compiled_cpp_mod = trtorch._C.compile_graph(module._c, _parse_compile_spec(compile_spec))\r\nE RuntimeError: [enforce fail at core/conversion/evaluators/NodeEvaluatorRegistry.cpp:56] Expected schema to be true but got false\r\nE Evaluator for aten::divonly runs on certain schemas, but schema for node is not retrievable\r\n```", "url": "https://github.com/pytorch/TensorRT/issues/685", "state": "closed", "labels": [ "feature request", "question", "No Activity" ], "created_at": "2021-10-27T16:34:18Z", "updated_at": "2022-02-15T00:01:49Z", "user": "yoshida-ryuhei" }, { "repo": "pytorch/pytorch", "number": 67338, "title": "how to get the rank list in a new group", "body": "## \u2753 Questions and Help\r\nHow to get the rank list in a new group? I just find the `distributed.get_rank` and `distributed.get_world_size()` but not `get_rank_list` API.\r\nThanks :)\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang", "url": "https://github.com/pytorch/pytorch/issues/67338", "state": "closed", "labels": [ "oncall: distributed" ], "created_at": "2021-10-27T16:30:02Z", "updated_at": "2021-11-05T02:45:16Z", "user": "hclearner" }, { "repo": "pytorch/TensorRT", "number": 682, "title": "\u2753 [Question] is there in8 quantization support with python?", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\nI did quantize to FP16 by using python. 
but I didn't find a way to do that with int8.\r\nPlease let me know if there is support.", "url": "https://github.com/pytorch/TensorRT/issues/682", "state": "closed", "labels": [ "question" ], "created_at": "2021-10-26T21:30:44Z", "updated_at": "2021-10-26T21:51:26Z", "user": "yokosyun" }, { "repo": "pytorch/tutorials", "number": 1727, "title": "your \"numpy_extensions_tutorial.py \" example", "body": "Hello,\r\nI would like to use the example in your `numpy_extensions_tutorial.py ` code, but it appears it computes on a single channel.\r\nDo you happen to know how I can compute it on several channels?\r\nThanks!", "url": "https://github.com/pytorch/tutorials/issues/1727", "state": "closed", "labels": [ "question" ], "created_at": "2021-10-25T12:09:33Z", "updated_at": "2023-03-06T22:59:39Z", "user": "lovodkin93" }, { "repo": "pytorch/pytorch", "number": 67157, "title": "What is the replacement in PyTorch>=1.8 for `torch.rfft` in PyTorch <=1.6?", "body": "Hi,\r\n\r\nI have been working on a project since last year and at that time, the PyTorch version was 1.6. I was using `f1 = torch.rfft(input, signal_ndim=3)` in that version. However, after PyTorch 1.8, `torch.rfft` has been removed. I was trying to use `f2=torch.fft.rfftn(input)` as the replacement, but both the real and imaginary parts of the output f2 are totally different from the output from `torch.rfft`. \r\n\r\nI am wondering what is the correct replacement now in PyTorch>=1.8 for torch.rfft in PyTorch 1.6?\r\n\r\nThanks in advance!\r\n\r\nBest,\r\nSongyou", "url": "https://github.com/pytorch/pytorch/issues/67157", "state": "closed", "labels": [], "created_at": "2021-10-24T14:49:29Z", "updated_at": "2021-10-24T15:23:27Z", "user": "pengsongyou" }, { "repo": "pytorch/pytorch", "number": 67156, "title": "[ONNX] How to gather on a tensor with two-dim indexing?", "body": "Hi,\r\n\r\nHow can I perform the following **without** getting a Gather node in my onnx graph? As the Gather node gives me an error in TensorRT 7.\r\n\r\n```\r\nx = data[:, x_indices, y_indices]\r\n```\r\ndata is a tensor of size [32, 64, 1024]\r\nx_indices is a tensor of size [50000,] -> range of indices 0 to 31\r\ny_indices is a tensor of size [50000,] -> range of indices 0 to 1023\r\n\r\nThe size of tensor x would be [32, 50000]\r\n\r\nThanks in advance.", "url": "https://github.com/pytorch/pytorch/issues/67156", "state": "closed", "labels": [ "module: onnx" ], "created_at": "2021-10-24T13:06:58Z", "updated_at": "2021-10-26T15:33:37Z", "user": "yasser-h-khalil" }, { "repo": "pytorch/data", "number": 81, "title": "Improve debuggability", "body": "## \ud83d\ude80 Feature\r\nCurrently, when iteration on a DataPipe starts and an error is raised, the traceback reports each `__iter__` method, pointing to the DataPipe class file.\r\nIt's hard to figure out which part of the DataPipe is broken, especially when multiple calls to the same DataPipe exist in the pipeline.\r\n\r\nSince a developer would normally iterate over the sequence of DataPipes for debugging, we can't rely on DataLoader to handle this case.\r\n\r\nI am not sure how to reference the `self` object from each `Iterator` instance. 
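On the `torch.rfft` question above (pytorch/pytorch#67157), one migration that reproduces the old output layout is to combine the new complex-valued FFTs with `torch.view_as_real`, which restores the trailing real/imaginary dimension; a sketch (results should match the old calls up to numerical precision):

```python
import torch

x = torch.randn(4, 8, 8, 8)

# old (<=1.6):  torch.rfft(x, signal_ndim=3)                 -> one-sided, last dim = 2
new_onesided = torch.view_as_real(torch.fft.rfftn(x, dim=(-3, -2, -1)))

# old (<=1.6):  torch.rfft(x, signal_ndim=3, onesided=False) -> full spectrum, last dim = 2
new_twosided = torch.view_as_real(torch.fft.fftn(x, dim=(-3, -2, -1)))
```

Comparing `torch.fft.rfftn(input)` directly against the old output differs because the new API returns a complex tensor rather than a real tensor with a trailing size-2 dimension.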
https://docs.python.org/3/reference/expressions.html?highlight=generator#generator-iterator-methods\r\n\r\n(I guess this is also one thing we need to think about singleton iterator should be able to reference back to the object)", "url": "https://github.com/meta-pytorch/data/issues/81", "state": "closed", "labels": [], "created_at": "2021-10-22T18:12:36Z", "updated_at": "2022-03-16T19:42:11Z", "comments": 0, "user": "ejguan" }, { "repo": "pytorch/pytorch", "number": 67013, "title": "How to use torch.distributions.multivariate_normal.MultivariateNormal in multi-gpu mode", "body": "## \u2753 Questions and Help\r\n\r\nIn single gpu mode,MultivariateNormal can run correctly, but when i switch to multi-gpu mode, always get the error:\r\n\r\nG = torch.exp(m.log_prob(Delta))\r\n File \"xxxxx\", line 210, in log_prob\r\n M = _batch_mahalanobis(self._unbroadcasted_scale_tril, diff)\r\n File \"xxxxx\", line 57, in _batch_mahalanobis\r\n M_swap = torch.triangular_solve(flat_x_swap, flat_L, upper=False)[0].pow(2).sum(-2) # shape = b x c\r\nRuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasStrsmBatched( handle, side, uplo, trans, diag, m, n, alpha, A, lda, B, ldb, batchCount)\r\n\r\nthe code is:\r\nmean = torch.zeros(2).to(x.device)\r\ncov = torch.eye(2).to(x.device)\r\nm = MultivariateNormal(mean, cov * self.sigma**2)\r\n\r\nI would be very grateful if you could give some suggestions\r\n\n\ncc @ngimel @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano @fritzo @neerajprad @alicanb", "url": "https://github.com/pytorch/pytorch/issues/67013", "state": "closed", "labels": [ "module: cuda", "triaged", "module: linear algebra" ], "created_at": "2021-10-21T10:36:41Z", "updated_at": "2023-11-30T13:45:58Z", "user": "SkylerHuang" }, { "repo": "pytorch/android-demo-app", "number": 195, "title": "The Performance of the Deployed Model on Android is Far from What on the PC", "body": "Hi, \r\n I trained one model to detect the steel rebar base on yolov5x model. The testing result is good on PC. And I followed the guide (https://github.com/pytorch/android-demo-app/pull/185) to convert the model to torchscript model (ptl) and integrate it to the demo app. Then the demo app could work and output the result, but there is huge gap between the results on PC and app, see below pic for comparison. \r\n\r\nResult on PC (confidence thresh is 0.25)\r\n![image](https://user-images.githubusercontent.com/15626897/138205624-c55fc0c9-65d9-47fe-b9ad-2e41fb467f5b.png)\r\nResult on App(confidence thresh is 0.2)\r\n![image](https://user-images.githubusercontent.com/15626897/138205741-346258cf-ce09-4d41-b62f-eb6afba8d0b0.png)\r\nI also tuned the --optimze when export the model\r\npython3 /workspace/src/github/yolov5/export.py --weight runs/train/exp32/weights/best.pt --include torchscript --optimize\r\nBut there is no significant difference after the tuning. 
So far have no more clue to figure out ...\r\n\r\nAny tips or suggestion is appreciated, thanks!\r\n", "url": "https://github.com/pytorch/android-demo-app/issues/195", "state": "closed", "labels": [], "created_at": "2021-10-21T03:21:12Z", "updated_at": "2021-12-10T07:20:54Z", "user": "joeshow79" }, { "repo": "pytorch/pytorch", "number": 66916, "title": "how to install torch version1.8.0 with cuda 11.2", "body": "## \u2753 Questions and Help\r\n\r\nhow to install torch version1.8.0 with cuda 11.2\r\n", "url": "https://github.com/pytorch/pytorch/issues/66916", "state": "closed", "labels": [], "created_at": "2021-10-20T01:09:16Z", "updated_at": "2021-10-21T15:16:54Z", "user": "ZTurboX" }, { "repo": "pytorch/pytorch", "number": 66873, "title": "Add documentation for how to work with PyTorch in Windows SSH", "body": "Our Windows machines require all dependencies to be installed before you could do anything with PyTorch (like run tests).\r\n\r\nWe should document how someone could get to a stage where they can work with PyTorch, or provide a script to automate this process.\r\n\r\nMoreover, our Windows scripts need cleaning up in general, but that's more tracked with https://github.com/pytorch/pytorch/issues/65718\n\ncc @brianjo @mruberry", "url": "https://github.com/pytorch/pytorch/issues/66873", "state": "closed", "labels": [ "module: docs", "triaged", "better-engineering" ], "created_at": "2021-10-19T15:25:18Z", "updated_at": "2022-02-28T20:45:59Z", "user": "janeyx99" }, { "repo": "pytorch/torchx", "number": 277, "title": "Improve docs page toctree index", "body": "## \ud83d\udcda Documentation\r\n\r\n## Link\r\nhttps://pytorch.org/torchx\r\n\r\n## What does it currently say?\r\nNo issues with the documentation. This calls for a revamped indexing of the toctree in the torchx docs page\r\n\r\n## What should it say?\r\n\r\nMake the toctree be:\r\n1. Usage:\r\n - Basic Concepts\r\n - Installation\r\n - 10 Min Tutorial (Hello World should be renamed to this)\r\n2. Examples:\r\n - Application\r\n - Component -> (links to) list of builtins (4. below)\r\n - Pipelines\r\n\r\n3. Best Practices:\r\n - Application\r\n - Component\r\n\r\n3. Application (Runtime)\r\n - Overview\r\n - HPO\r\n - Tracking\r\n\r\n4. Components\r\n - Train\r\n - Distributed\r\n - ...\r\n\r\n6. Runner (Schedulers)\r\n - Localhost\r\n - Kubernetes\r\n - Slurm\r\n\r\n7. Pipelines\r\n - Kubeflow\r\n\r\n8. API\r\n - torchx.specs\r\n - torchx.runner\r\n - torchx.schedulers\r\n - torchx.pipelines\r\n \r\n 9. Experimental\r\n - (beta) torchx.config\r\n\r\n## Why?\r\nThe proposed toctree is a better organization compared to the one we have today. It better organizes parallels between app, component, and piplines. 
Logically lays out sections so that it reads better top to bottom\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/277", "state": "closed", "labels": [], "created_at": "2021-10-18T22:30:22Z", "updated_at": "2021-10-20T22:13:50Z", "comments": 0, "user": "kiukchung" }, { "repo": "pytorch/torchx", "number": 250, "title": "[torchx/configs] Make runopts, Runopt, RunConfig, scheduler_args more consistent", "body": "## Description\r\nConsolidate redundant names, classes, and arguments that represent scheduler `RunConfig`.\r\n\r\n## Motivation/Background\r\nCurrently there are different names for what essentially ends up being the additional runtime options for the [`torchx.scheduler`](https://pytorch.org/torchx/latest/schedulers.html) (see [`dryrun(..., cfg: RunConfig))`](https://pytorch.org/torchx/latest/schedulers.html).\r\n\r\nThis runconfig is:\r\n\r\n1. has class type `torchx.specs.api.RunConfig` (dataclass)\r\n2. function argument name `cfg` or `runcfg` in most places in the scheduler and runner source code\r\n3. passed from the `torchx run` cli as `--scheduler_args`\r\n\r\nAdditionally each scheduler has what is called a `runopts`, which are the runconfig options that the scheduler advertises and takes (see runopts for [local_scheduler](https://github.com/pytorch/torchx/blob/main/torchx/schedulers/local_scheduler.py#L543)).\r\n\r\nThe difference between `RunConfig` and `runopts` is that the `RunConfig` object is simply a holder for the user-provided config key-value pairs while `runopts` is the schema (type, default, is_required, help string) of the configs that it takes. Think of `runopts` being the `argparse.ArgumentParser` of the Scheduler if it were a cli tool, and `RunConfig` the `sys.argv[1:]` (but instead of an array it is a map).\r\n\r\n## Detailed Proposal\r\nThe proposal is to clean up the nomenclature as follows:\r\n\r\n1. Deprecate `--scheduler_args` option in torchx cli and instead call it `--cfg` (consistent with the parameter names in the Scheduler API).\r\n2. Change the section name In the runner INI config files from `[$profile.scheduler_args.$sched_name]` to `[$profile.$scheduler_name.cfg]` (e.g. `[default.scheduler_args.local_cwd]` would become `[default.local_cwd.cfg]`)\r\n3. Rename [`Runopt`](https://github.com/pytorch/torchx/blame/431c0e2131bfae738eb17a00afa83c36a932cec6/torchx/specs/api.py#L512) to `runopt` (to be consistant with `runopts` which is a holder for runopt by name)\r\n\r\n## Alternatives\r\n(not really an alternative but other deeper cleanups considered)\r\n\r\n1. changing the `cfg` parameter name in Scheduler and Runner interfaces to be `runconfig` (consistent with `RunConfig`) or alternatively changing `RunConfig` to `RunCfg`. This is going to be a huge codemod, hence I've decided to live with it and change the rest of the settings to match `cfg`.\r\n2. `RunConfig` is simply a wrapper around a regular python `Dict[str, ConfigValue]` (`ConfigValue` is a type alias not an actual class) and does not provide any additional functionality on top of the dict other than a prettyprint `__repr__()`. 
Considering just dropping the `RunConfig` dataclass and using `Dict[str, ConfigValue]` directly (also requires a huge codemod)\r\n\r\n## Additional context/links\r\nSee hyperlinks above.\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/250", "state": "closed", "labels": [], "created_at": "2021-10-14T15:03:56Z", "updated_at": "2021-10-14T19:14:18Z", "comments": 0, "user": "kiukchung" }, { "repo": "pytorch/pytorch", "number": 66511, "title": "Add a config to PRs where we assume there is only 1 GPU available", "body": "We recently had a gap in PR coverage where we did not catch when a test case attempted to access an invalid GPU from this PR https://github.com/pytorch/pytorch/pull/65914. We should capture that in PR testing somehow to catch these early next time.\r\n\r\nAction:\r\nMake our tests run on only one \"available\" GPU.\r\nWe could take advantage of this and split up our tests to run on separate GPUs when they're available!\n\ncc @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry", "url": "https://github.com/pytorch/pytorch/issues/66511", "state": "closed", "labels": [ "module: ci", "module: tests", "triaged" ], "created_at": "2021-10-12T21:42:19Z", "updated_at": "2021-11-15T22:37:13Z", "user": "janeyx99" }, { "repo": "pytorch/pytorch", "number": 66418, "title": "How to implement dynamic sampling of training data?", "body": "Hi, thank you for your work.\r\n\r\nNow I have multiple train datasets including real data and synthetic data. When sampling data during training, It is necessary to ensure that the ratio of real data samples to synthetic data samples in a batch is 1:1~1:3. How to achieve this operation?\r\n\r\nLooking forward to your answer, thank you.\n\ncc @SsnL @VitalyFedyunin @ejguan @NivekT", "url": "https://github.com/pytorch/pytorch/issues/66418", "state": "closed", "labels": [ "module: dataloader", "triaged" ], "created_at": "2021-10-11T12:10:16Z", "updated_at": "2021-10-13T02:38:14Z", "user": "Danee-wawawa" }, { "repo": "pytorch/tutorials", "number": 1705, "title": "StopIteration Error in torch.fx tutorial with TransformerEncoderLayer", "body": "I\u2019m trying to run the fx profiling tutorial in tutorials/fx_profiling_tutorial.py at master \u00b7 pytorch/tutorials \u00b7 GitHub 1 on a single nn.TransformerEncoderLayer as opposed to the resnet in the example and I keep running into a StopIteration error. Why is this happening? All I did was replace the resnet with a transformer encoder layer. \n\ncc @eellison @suo @gmagogsfm @jamesr66a @msaroufim @SherlockNoMad @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/1705", "state": "closed", "labels": [ "question", "fx", "easy", "docathon-h2-2023" ], "created_at": "2021-10-09T06:28:59Z", "updated_at": "2023-11-07T00:41:23Z", "user": "lkp411" }, { "repo": "pytorch/functorch", "number": 192, "title": "Figure out how to market functorch.vmap over torch.vmap that is in PyTorch nightly binaries", "body": "Motivation:\r\n- Many folks are using torch.vmap instead of functorch.vmap and basing their initial impressions off of it. We'd like them to use functorch.vmap instead, especially if we do a beta release of functorch out-of-tree.\r\n\r\nConstraints:\r\n- Features that rely on it (torch.autograd.functional.jacobian, torch.autograd.grad(batched_grad=True) should probably continue to work.\r\n\r\nPotential solutions:\r\n- Have the torch.vmap API in the nightly binaries error out and tell users to use functorch.vmap...\r\n\r\nThoughts? 
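For concreteness, the "error out" option could be as small as a stub along these lines (purely illustrative, not an actual patch; internal callers such as `torch.autograd.functional.jacobian` would keep using the private implementation):

```python
# sketch of what the public torch.vmap entry point could do in nightlies
def vmap(func, in_dims=0, out_dims=0):
    raise RuntimeError(
        "torch.vmap is being replaced by functorch.vmap; please install functorch "
        "(https://github.com/pytorch/functorch) and use functorch.vmap instead."
    )
```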
cc @Chillee @albanD @soulitzer", "url": "https://github.com/pytorch/functorch/issues/192", "state": "closed", "labels": [], "created_at": "2021-10-08T13:55:19Z", "updated_at": "2022-02-03T14:55:34Z", "user": "zou3519" }, { "repo": "pytorch/android-demo-app", "number": 189, "title": "how to load a Module by absolute path", "body": "Hello, I use the object detection app and it runs normally. There are some new requirements. The **.pt file is relatively large**. I don't want to include it in the app, but want to **load it directly from local storage**.\r\nI use PyTorch Android version 1.8. I found that the pytorch_android classes in its jar package do not implement a method for loading from a local path, **but the NativePeer class contains\r\nNativePeer(String moduleAbsolutePath, Device device) {}**\r\nMy idea is to write a method in pytorch_android and call the above function.\r\nHowever, I **failed to replace the jar package's class** with my local PyTorchAndroid class. It always gets restored, and if I delete it locally, it is downloaded again. I don't know what I should do now. Thanks\r\n", "url": "https://github.com/pytorch/android-demo-app/issues/189", "state": "open", "labels": [], "created_at": "2021-10-08T12:09:07Z", "updated_at": "2021-10-08T12:09:07Z", "user": "dota2mhxy" }, { "repo": "pytorch/pytorch", "number": 66309, "title": "How to find the source kernel code of cumsum (gpu)", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nHi,\r\n As the title says, how could I find the source kernel code of cumsum (gpu)? I can only find the cumsum (cpu) one. Thanks.\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/66309", "state": "closed", "labels": [], "created_at": "2021-10-08T09:53:03Z", "updated_at": "2021-10-08T17:47:39Z", "user": "foreveronehundred" }, { "repo": "pytorch/audio", "number": 1837, "title": "ERROR: Could not find a version that satisfies the requirement torchaudio (from versions: none) ERROR: No matching distribution found for torchaudio", "body": "### \ud83d\udc1b Describe the bug\n\nERROR: Could not find a version that satisfies the requirement torchaudio>=0.5.0 (from asteroid) (from versions: none)\r\nERROR: No matching distribution found for torchaudio>=0.5.0 (from asteroid)\r\n21:40:18-root@Desktop:/pr/Neural/voicefixer_main# pip3 install torchaudio\r\n\n\n### Versions\n\npython collect_env.py \r\nTraceback (most recent call last):\r\n File \"/media/sd/Projects/Neural/voicefixer_main/collect_env.py\", line 16, in <module>\r\n import torch\r\n File \"/home/plab/.local/lib/python3.9/site-packages/torch/__init__.py\", line 196, in <module>\r\n from torch._C import *\r\nRuntimeError: module compiled against API version 0xe but this version of numpy is 0xd\r\n\r\n21:45:45-root@Desktop:/pr/Neural/voicefixer_main# pip3 install numpy\r\nRequirement already satisfied: numpy in /usr/lib/python3/dist-packages (1.19.5)\r\n\r\nPlease fix torchaudio to install. 
Thanks!", "url": "https://github.com/pytorch/audio/issues/1837", "state": "closed", "labels": [ "question" ], "created_at": "2021-10-07T21:17:48Z", "updated_at": "2023-07-31T18:37:00Z", "user": "clort81" }, { "repo": "pytorch/data", "number": 44, "title": "KeyZipper improvement", "body": "Currently multiple stacked `KeyZipper` would create a recursive data structure:\r\n```py\r\ndp = KeyZipper(dp, ref_dp1, lambda x: x)\r\ndp = KeyZipper(dp, ref_dp2, lambda x: x[0])\r\ndp = KeyZipper(dp, ref_dp3, lambda x: x[0][0])\r\n```\r\nThis is super annoying if we are using same key for each `KeyZipper`. At the end, it yields `(((dp, ref_dp1), ref_dp2), ref_dp3)\r\n\r\nWe should either accept multiple reference DataPipe for KeyZipper to preserve same key, or have some expand or collate function to convert result to `(dp, (ref_dp1, ref_dp2, ref_dp3))`\r\n\r\n\r\n- If we take multiple reference DataPipe and ref_key_fn, we need to figure out how to ensure buffer not blown up.\r\n\r\n\r\ncc: @VitalyFedyunin @NivekT ", "url": "https://github.com/meta-pytorch/data/issues/44", "state": "closed", "labels": [], "created_at": "2021-10-05T17:02:00Z", "updated_at": "2021-10-22T14:45:38Z", "comments": 1, "user": "ejguan" }, { "repo": "pytorch/pytorch", "number": 65992, "title": "How to use `MASTER_ADDR` in a distributed training script", "body": "## \u2753 Questions and Help\r\n\r\nhttps://pytorch.org/docs/stable/elastic/run.html\r\n\r\n> `MASTER_ADDR` - The FQDN of the host that is running worker with rank 0; used to initialize the Torch Distributed backend.\r\n\r\nThe document says `MASTER_ADDR` is the hostname of the master node. But the hostname may not be resolved by other nodes. What's the use case of `MASTER_ADDR`?\r\n\r\nFor example, in AWS, the hostname of the master node is `ip-172-30-2-12` which is not recognized by other nodes.\r\n\r\nOn \"ip-172-30-2-12\":\r\n```sh\r\n(dev) \u279c ~ hostname\r\nip-172-30-2-12\r\n```\r\nOn another machine:\r\n```sh\r\n(dev) \u279c ~ getent hosts ip-172-30-2-12\r\n\r\n(dev) \u279c ~\r\n```\r\n\r\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang", "url": "https://github.com/pytorch/pytorch/issues/65992", "state": "closed", "labels": [ "oncall: distributed", "module: elastic" ], "created_at": "2021-10-01T09:19:31Z", "updated_at": "2025-02-04T08:17:51Z", "user": "jasperzhong" }, { "repo": "pytorch/serve", "number": 1262, "title": "What is the Proper Model Save Method?", "body": "The example given in the documentation shows downloading and archiving a pre-existing model from Pytorch. But if serving a custom-built model, what is the correct save method?\r\n\r\nFor example, on the Save/Loading Documentation, there are several save methods:\r\n\r\nhttps://pytorch.org/tutorials/beginner/saving_loading_models.html\r\n\r\nShould the model artifacts be saved as simply torch.save(model, PATH) or should it be saved as torch.save(model.state_dict(), PATH)?\r\n\r\nThank you.", "url": "https://github.com/pytorch/serve/issues/1262", "state": "closed", "labels": [], "created_at": "2021-10-01T06:02:40Z", "updated_at": "2021-10-01T06:42:16Z", "user": "CerebralSeed" }, { "repo": "pytorch/pytorch", "number": 65915, "title": "I am getting undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE error. This I am getting when I am trying to \"import torch from nemo.collections import nlp\". I am trying to use pytorch ngc container 21.05. 
I tried to import torch before the nemo extension. Please suggest how I can resolve this.", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/65915", "state": "open", "labels": [ "oncall: jit" ], "created_at": "2021-09-30T12:08:43Z", "updated_at": "2021-09-30T12:17:54Z", "user": "gangadharsingh056" }, { "repo": "pytorch/pytorch", "number": 65816, "title": "How to install PyTorch on ppc64le with pip?", "body": "I am going to build a virtual environment (python -m venv) and install PyTorch on a ppc64le machine. But there is no package in pip to install PyTorch, however it is available in conda. But I wanted not to use conda because I need to install some specific packages and versions. So, how can I install PyTorch on a ppc64le(IBM Power8) machine?", "url": "https://github.com/pytorch/pytorch/issues/65816", "state": "closed", "labels": [], "created_at": "2021-09-29T13:02:10Z", "updated_at": "2021-10-01T18:00:20Z", "user": "M-Amrollahi" }, { "repo": "pytorch/pytorch", "number": 65689, "title": "Questions in use pack_padded_sequence: how to pack Multiple tensor?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n- I have some problem in use pack_padded_sequence. I have some 2-dimensional data in varible length,but i can not use pack_padded_sequence to dealwith it.\r\n- \r\n```python\r\n#mydata seems like this\r\nimg=torch.from_numpy(np.array([[[1,2,3,4,5],[1,2,3,4,5]],[[1,2,3,4,0],[1,2,3,4,0]],[[1,2,3,0,0],[1,2,3,0,0]],[[1,2,0,0,0],[1,2,0,0,0]]]))\r\na=[[1,2,3,4,5],[1,2,3,4,5]]\r\nb=[[1,2,3,4],[1,2,3,4]]\r\nc=[[1,2,3],[1,2,3]]\r\n#so i deal with it by padding \r\nimg=torch.from_numpy(np.array([[[1,2,3,4,5],[1,2,3,4,5]],[[1,2,3,4,0],[1,2,3,4,0]],[[1,2,3,0,0],[1,2,3,0,0]],[[1,2,0,0,0],[1,2,0,0,0]]]))\r\nlabel=torch.from_numpy(np.array([[1],[2],[3],[3]]))\r\nlenght=torch.from_numpy(np.array([4,4,3,2]))\r\n\r\ntrain_ids=Data.TensorDataset(img,label,lenght)\r\ntrain_loader = Data.DataLoader(dataset=train_ids, batch_size=1, shuffle=False)\r\nfor step,(b_x, b_y,b_len) in enumerate(train_loader):\r\n X= torch.nn.utils.rnn.pack_padded_sequence(b_x,b_len, batch_first=True,enforce_sorted=False)\r\n #it canot be like [[1,2,3,4],[1,2,3,4]] what could i do\r\n\r\n print(step,X)\r\n X, _ =nn.utils.rnn.pad_packed_sequence(X, batch_first=True)\r\n```\r\n", "url": "https://github.com/pytorch/pytorch/issues/65689", "state": "closed", "labels": [], "created_at": "2021-09-27T13:02:55Z", "updated_at": "2021-09-27T23:31:06Z", "user": "jingxingzhi" }, { "repo": "pytorch/pytorch", "number": 65682, "title": "How to export split to ONNX with dynamic split_size?", "body": "## \u2753 Questions and Help\r\nI need to implement dynamic tensor split op in work. But when I want to export this split op to ONNX with dynamic split_size, it seems not work. \r\n\r\nI am new to ONNX. Anyone can help me? 
Thanks a lot.\r\n\r\n## To Reproduce\r\n\r\n```python\r\nimport torch\r\n\r\ndummy_input = (torch.tensor([1, 4, 2, 7, 3]), torch.tensor([1, 2, 2]))\r\n\r\nclass Split(torch.nn.Module):\r\n def forward(self, x, l):\r\n return x.split(l.cpu().numpy().tolist(), dim=-1)\r\n \r\nmodel = Split()\r\n\r\nwith torch.no_grad():\r\n torch.onnx.export(\r\n model, dummy_input, 'split.onnx', verbose=False, opset_version=13,\r\n input_names=['a', 'b'],\r\n output_names=['c'],\r\n dynamic_axes={'a': [0], 'b': [0], 'c': [0]}\r\n )\r\n```\r\nwhen I use the onnx model, it seems not to work. I get this error:\r\n[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:b\r\n\r\n```python\r\nimport onnxruntime as ort\r\n\r\nmodel_path = './split.onnx'\r\nsess = ort.InferenceSession(model_path)\r\n\r\na = torch.tensor([4, 2, 3, 4])\r\nb = torch.tensor([1, 3])\r\nsess.run(['c'], {'a':a.numpy(), 'b':b.numpy()})\r\n```\r\nTensor b seems can not be used as an input, but I do need a parameter to represent the dynamic split_size.\r\n\r\n## Environment\r\n- PyTorch Version (e.g., 1.0): 1.8.1+cu111\r\n- Python version: 3.8\r\n\n\ncc @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/65682", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2021-09-27T09:27:15Z", "updated_at": "2022-10-27T20:57:22Z", "user": "Wwwwei" }, { "repo": "pytorch/torchx", "number": 199, "title": "Installation from source examples fail", "body": "## \ud83d\udcda Documentation\r\n\r\n## Link\r\n<!-- link to the problematic documentation -->\r\nhttps://github.com/pytorch/torchx#source\r\n\r\n## What does it currently say?\r\n<!-- copy paste the section that is wrong -->\r\n```bash\r\n# install torchx sdk and CLI from source\r\n$ pip install -e git+https://github.com/pytorch/torchx.git\r\n```\r\n\r\n## What should it say?\r\n<!-- the proposed new documentation -->\r\nNo idea.\r\n\r\n## Why?\r\n<!-- (if not clear from the proposal) why is the new proposed documentation more correct/improvement over the existing one? -->\r\nOn Linux:\r\n```\r\n(venv) sbyan % pip --version\r\npip 21.2.4 from /mnt/shared_ad2_mt1/sbyan/git/PrivateFederatedLearning/venv/lib64/python3.6/site-packages/pip (python 3.6)\r\n(venv) sbyan % pip install -e git+https://github.com/pytorch/torchx.git\r\nERROR: Could not detect requirement name for 'git+https://github.com/pytorch/torchx.git', please specify one with #egg=your_package_name\r\n```\r\n\r\n\r\nOn MacOS:\r\n```\r\n(venv) smb % pip --version\r\npip 21.2.4 from /Users/smb/Work/GIT/PrivateFederatedLearning/venv/lib/python3.9/site-packages/pip (python 3.9)\r\n(venv) smb % pip install -e git+https://github.com/pytorch/torchx.git\r\nERROR: Could not detect requirement name for 'git+https://github.com/pytorch/torchx.git', please specify one with #egg=your_package_name\r\n```\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/199", "state": "closed", "labels": [], "created_at": "2021-09-24T16:07:12Z", "updated_at": "2021-10-01T02:28:58Z", "comments": 2, "user": "stevebyan" }, { "repo": "pytorch/torchx", "number": 197, "title": "Documentation feedback", "body": "## \ud83d\udcda Documentation\r\n\r\nAt a high level the repo really needs a glossary of terms in a single page otherwise easy to forget what they mean when you get to a new page. 
Lots of content can be deleted specifically the example application notebooks don't add anything relative to the pipeline examples.\r\n\r\nGeneral list of feedback - not sure how to fix yet so opening issue instead of PR\r\n\r\n### https://pytorch.org/torchx/latest/quickstart.html\r\n\r\n* Add a link to where built in are defined in the code\r\n\r\n\r\n### https://pytorch.org/torchx/latest/cli.html\r\n\r\n* App bundle is never defined - is it the app ID? in the docs `echo_c944ffb2`?\r\n\r\n### https://pytorch.org/torchx/latest/configure.html\r\n\r\n* One thing wasn't too clear is a resource basically number of CPUs and GPUs? Would be helpful to add some helper enum which includes something higher level like a V100 machine for provisioning\r\n\r\n### https://pytorch.org/torchx/latest/examples_apps/datapreproc/component.html#sphx-glr-examples-apps-datapreproc-component-py\r\n\r\n* Example has no main function\r\n\r\n### https://pytorch.org/torchx/latest/examples_apps/datapreproc/datapreproc.html#sphx-glr-examples-apps-datapreproc-datapreproc-py\r\n\r\nI'm not sure what the entire Application Examples notebooks do? May be best to refactor together with pipeline examples? Let me know I can send a PR to delete\r\n\r\n\r\n### https://pytorch.org/torchx/latest/examples_pipelines/kfp/intro_pipeline.html#sphx-glr-examples-pipelines-kfp-intro-pipeline-py\r\n\r\n* Make it clearer that pipeline.yaml is generated and not an input - I kept looking for it in the source directory\r\n\r\n### https://pytorch.org/torchx/latest/components/base.html\r\n\r\nWhy is torch.elastic mentioned here? Whole repo feels like it needs a glossary. For example by image do you mean docker image?\r\n\r\n### https://pytorch.org/torchx/latest/components/hpo.html\r\n\r\n* Just sat TBD - delete for now?\r\n\r\n### https://pytorch.org/torchx/latest/components/utils.html\r\n\r\n* Have a link to supported utils in the codebase\r\n\r\n### https://pytorch.org/torchx/latest/schedulers/kubernetes.html\r\n\r\n* This says coming soon but looks like feature is available?\r\n\r\n### https://pytorch.org/torchx/latest/beta.html\r\n\r\nAlso empty just remove\r\n\r\n\r\n\r\n\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/197", "state": "closed", "labels": [], "created_at": "2021-09-23T18:01:21Z", "updated_at": "2021-09-27T17:37:49Z", "comments": 4, "user": "msaroufim" }, { "repo": "pytorch/tutorials", "number": 1692, "title": "UserWarning during Datasets & DataLoaders Tutorial", "body": "Hi,\r\n\r\nI am following the 'Introduction to PyTorch' tutorial. During [Datasets & DataLoaders](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html) I copied the following:\r\n\r\n```import torch\r\nfrom torch.utils.data import Dataset\r\nfrom torchvision import datasets\r\nfrom torchvision.transforms import ToTensor\r\nimport matplotlib.pyplot as plt\r\n\r\ntraining_data = datasets.FashionMNIST(\r\n root=\"data\",\r\n train=True,\r\n download=True,\r\n transform=ToTensor()\r\n)\r\ntest_data = datasets.FashionMNIST(\r\n root=\"data\",\r\n train=False,\r\n download=True,\r\n transform=ToTensor()\r\n)\r\n```\r\n\r\nWhich led me to the following warning:\r\n\r\n```/home/jelle/PycharmProjects/pytoch_learning/venv/lib/python3.9/site-packages/torchvision/datasets/mnist.py:498:\r\n UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means\r\n you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. 
You may want to copy the\r\n array to protect its data or make it writeable before converting it to a tensor. This type of warning will be \r\n suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:180.)\r\n return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)\r\n ```\r\n\r\nIt seems this issue has been solved before? \r\n[47160](https://github.com/pytorch/pytorch/issues/47160)\r\n\r\nI am using\r\n\r\n- Python 3.9\r\n- torch 1.9.1\r\n- torchvision 0.10.1\r\n- numpy 1.21.2\r\n- Pycharm 2021.2\r\n- Ubuntu 21.4\r\n\r\nI installed using a virtual environment and the guide on https://pytorch.org/get-started/locally/\r\n`pip3 install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html`\r\n\r\nThis warning was also given during the 'quickstart' tutorial. The warning seems to have no further effect on the tutorial.\r\nWhy is the warning not mentioned in the tutorial?\n\ncc @suraj813", "url": "https://github.com/pytorch/tutorials/issues/1692", "state": "closed", "labels": [ "intro", "docathon-h1-2023", "easy" ], "created_at": "2021-09-23T06:07:54Z", "updated_at": "2023-06-08T17:07:12Z", "comments": 12, "user": "Jelle-Bijlsma" }, { "repo": "pytorch/TensorRT", "number": 632, "title": "\u2753 [Question] Unknown type name '__torch__.torch.classes.tensorrt.Engine'", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\nwhen I tried to load trt module which saved with python\r\n```\r\ntorch::jit::load(trtorch_path);\r\n```\r\nI got this error \r\n\r\n```\r\nterminate called after throwing an instance of 'torch::jit::ErrorReport'\r\n what(): \r\nUnknown type name '__torch__.torch.classes.tensorrt.Engine':\r\nSerialized File \"code/__torch__/models/backbone.py\", line 4\r\n __parameters__ = []\r\n __buffers__ = []\r\n __torch___models_backbone_Backbone_trt_engine_ : __torch__.torch.classes.tensorrt.Engine\r\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n def forward(self_1: __torch__.models.backbone.Backbone_trt,\r\n input_0: Tensor) -> Tensor:\r\n```\r\n\r\nI linked libtrtorch.so as bellow\r\n```\r\ntarget_link_libraries(\r\n ${ProjectTargetLibName}\r\n PRIVATE\r\n ${TORCH_LIBRARIES}\r\n /opt/trtorch/lib/libtrtorch.so\r\n)\r\n```\r\nmaybe I need to compile trt_module in c++ instead of python??\r\n\r\n## What you have already tried\r\n\r\nI can save trt_module and load and run with python without any problem\r\n\r\n\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n\r\n## Environment\r\n - PyTorch 1.9\r\n - Libtorch 1.9\r\n - OS (Linux):\r\n - CUDA 11.0:\r\n - TensorRT 8.0\r\n - TRTorch 0.4(libtrtorch-v0.4.0-cudnn8.2-tensorrt8.0-cuda11.1-libtorch-1.9.0.tar.gz)\r\n", "url": "https://github.com/pytorch/TensorRT/issues/632", "state": "closed", "labels": [ "question" ], "created_at": "2021-09-22T08:45:45Z", "updated_at": "2021-09-27T14:38:21Z", "user": "yokosyun" }, { "repo": "pytorch/pytorch", "number": 65446, "title": "libtorch compile problem. How to get the correct protobuf version? what PROTOBUF_VERSION <3011000 and 3011004 <PROTOBUF_MIN_PROTOC_VERSION?", "body": "How to get the correct protobuf version?\r\n\r\nWhen using libtorch to compile, using PROTOBUF_VERSION <3011000 and 3011004 <PROTOBUF_MIN_PROTOC_VERSION will report an error, which version should be used? 
3011000 also did not show any content, thank you\r\n\r\nlibtorch ==libtorch-cxx11-abi-shared-with-deps-1.9.0+cu102.zip\r\ntorchvision = v0.10.0\r\nCompiled protobuf version = 3.18.0\r\n\r\nerror info:\r\n![image](https://user-images.githubusercontent.com/22077027/134283505-374d684d-7b62-4c57-a97b-bd57a8ce50d5.png)\r\n\r\n\r\n![image](https://user-images.githubusercontent.com/22077027/134283532-aa08517a-a43a-48e0-b16a-be76a109b7b6.png)\r\n\r\n\r\n\r\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1.\r\n1.\r\n1.\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)\r\n(or fill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, source):\r\n - Build command you used (if compiling from source):\r\n - Python version:\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n\ncc @malfet @seemethere", "url": "https://github.com/pytorch/pytorch/issues/65446", "state": "open", "labels": [ "module: build", "module: protobuf", "triaged" ], "created_at": "2021-09-22T04:28:27Z", "updated_at": "2021-09-22T14:27:18Z", "user": "ahong007007" }, { "repo": "pytorch/pytorch", "number": 65312, "title": "How to get the same RandomResizedCrop result of img and gt", "body": "my code snippet is below\r\n![TIM\u56fe\u724720210919214656](https://user-images.githubusercontent.com/54461374/133929974-1baa0031-3562-45ea-95b9-68555206a1eb.png)\r\n![TIM\u56fe\u724720210919214702](https://user-images.githubusercontent.com/54461374/133929964-acf0cb30-dbeb-4d6e-934e-b5af92b00d5d.png)\r\n\r\nNow ,the img ,gt do different RandomResizedCrop. But I want they do the same, because the img and gt must be correspond.\r\nHow should I modify the code?", "url": "https://github.com/pytorch/pytorch/issues/65312", "state": "closed", "labels": [], "created_at": "2021-09-19T13:54:59Z", "updated_at": "2021-09-21T03:17:23Z", "user": "HaoRan-hash" }, { "repo": "pytorch/vision", "number": 4446, "title": "Question: FFmpeg dependency", "body": "Sorry, I am little bit of a newbie on this subject. I notice at Torchvision 0.9+ that ffmpeg >= 4.2 is a hard dependency for Linux conda distributions. At Torchvision <= 0.8.2, this was not a dependency. We all know ffmpeg licensing and the other dependencies it pulls in is problematic in some scenarios.\r\n\r\nQuestion 1: Is it possible to build Torchvision for Linux without ffmpeg? 
What breaks?\r\nQuestion 2: If I manually alter the .bz2 file from Anaconda.org to remove the ffmpeg dependency, what breaks?\r\n\r\nAny other suggestions?", "url": "https://github.com/pytorch/vision/issues/4446", "state": "closed", "labels": [ "question", "topic: build" ], "created_at": "2021-09-19T11:21:21Z", "updated_at": "2021-09-24T19:36:12Z", "user": "rwmajor2" }, { "repo": "pytorch/vision", "number": 4445, "title": "How to use TenCrop and FiveCrop on video", "body": "Hi everyone,\r\nI am wanting to use TenCrop and FiveCrop on video but I have no idea how to do this.\r\nCan you tell me how to do it?\r\nSorry, I am new to this field.\r\nThank you very much!", "url": "https://github.com/pytorch/vision/issues/4445", "state": "open", "labels": [ "question", "module: video" ], "created_at": "2021-09-18T16:08:03Z", "updated_at": "2021-09-19T12:33:10Z", "user": "DungVo1507" }, { "repo": "pytorch/pytorch", "number": 65199, "title": "How to register a Module as one custom OP when export to onnx", "body": "## \u2753 How to register a Module as one custom OP when export to onnx\r\n\r\nThe custom modules may be split to multiple OPs when using `torch.onnx.export`. In many cases, we can manually optimize these OPs into a custom OP(with a custom node in onnx), and handle it by a plugin in TensorRT. Is there any way to register a Module as one custom OP when export to onnx?\n\ncc @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/65199", "state": "closed", "labels": [ "module: onnx" ], "created_at": "2021-09-17T06:01:28Z", "updated_at": "2024-02-01T02:38:25Z", "user": "OYCN" }, { "repo": "pytorch/torchx", "number": 184, "title": "Update builtin components to use best practices + documentation", "body": "Before stable release we want to do some general cleanups on the current built in components.\r\n\r\n- [ ] all components should default to docker images (no /tmp)\r\n- [ ] all components should use `python -m` entrypoints to make it easier to support all environments by using python's resolution system\r\n- [ ] update the component best practice documentation to indicate above\r\n\r\nSlurm image handling will be revisited later to make it easier to deal with virtualenvs and the local paths.", "url": "https://github.com/meta-pytorch/torchx/issues/184", "state": "closed", "labels": [ "documentation", "enhancement", "module: components" ], "created_at": "2021-09-16T23:16:52Z", "updated_at": "2021-09-21T20:59:19Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/TensorRT", "number": 626, "title": "\u2753 [Question] How to install trtorchc? ", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\nHow to install trtorchc?\r\n## What you have already tried\r\nI use Dockerfile.21.07 build the docker. I found the trtorchc can't be used.\r\nSo I run `bazel build //cpp/bin/trtorchc --cxxopt=\"-DNDEBUG` to build the trtorchc.\r\nHowever, it doesn't work.\r\n<!-- A clear and concise description of what you have already done. 
-->\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/626", "state": "closed", "labels": [ "question" ], "created_at": "2021-09-16T10:32:47Z", "updated_at": "2021-09-16T15:54:00Z", "user": "shiyongming" }, { "repo": "pytorch/pytorch", "number": 65132, "title": "How to reference a tensor variable from a superclass of `torch.Tensor`?", "body": "Consider I have the following code where I subclass `torch.Tensor`. I'd like to avoid using `self.t_` and instead access the tensor variable in the superclass. Though, when looking at the PyTorch code, I don't seem to identify how that can be done. Your help is appreciated.\r\n\r\n```\r\nclass XLATensor(torch.Tensor):\r\n def __init__(self, data, **kwargs):\r\n self.t_ = torch.as_tensor(data, dtype=torch.float32, **kwargs)\r\n```\r\n\r\nThis [`document`](https://docs.google.com/document/d/1u5kJ18HKnoJ-i8shymt__wRrnw-eYII3c-AHUlnky-s/edit?resourcekey=0-THFZXxHHehVBA-oBsLU0Jw#heading=h.9iwsbbqufptx) contains more details regarding my question.\r\n\r\nCC @albanD, @wconstab, @ezyang \r\n\n\ncc @bdhirsh", "url": "https://github.com/pytorch/pytorch/issues/65132", "state": "open", "labels": [ "triaged", "module: xla" ], "created_at": "2021-09-16T07:31:05Z", "updated_at": "2021-09-16T21:35:37Z", "user": "miladm" }, { "repo": "pytorch/pytorch", "number": 64939, "title": "BC CI error message should link to some information about how to squash the warning", "body": "## \ud83d\ude80 Feature\r\nThe BC CI error message says:\r\n```\r\nThe PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. \r\n```\r\n\r\nI know the change is wanted, but I don't remember how to actually \"add the change so that the BC mechanism stops complaining\". This information can easily be found by searching the codebase or looking for similar PRs, but it would be nice if we just linked directly to a note or a wiki page or something so that we don't have to go searching around every time.\r\n\r\n## Motivation\r\n\r\nSave devs (old and new) some time when reading the message!\r\n\r\n## Pitch\r\n\r\nThe error message should be improved with a \"see this link for more details\"\r\n\r\n## Alternatives\r\n\r\nNot sure\r\n\n\ncc @ezyang @seemethere @malfet @lg20987 @pytorch/pytorch-dev-infra", "url": "https://github.com/pytorch/pytorch/issues/64939", "state": "open", "labels": [ "module: ci", "triaged", "better-engineering" ], "created_at": "2021-09-13T17:33:53Z", "updated_at": "2021-09-13T17:38:17Z", "user": "zou3519" }, { "repo": "pytorch/pytorch", "number": 64904, "title": "How to use python to implement _VF.lstm", "body": "## How to use python to implement _VF.lstm\r\n\r\nHello! When I want to modify the calculation formula of the LSTM\uff0cI found the calculation process in nn.LSTM is realized by _VF.lstm. 
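For reference, the per-timestep computation I want to modify looks roughly like this when written out in plain Python/PyTorch (a sketch of a single LSTM cell step; the gate packing follows nn.LSTM's documented (input, forget, cell, output) order):

```python
import torch

def lstm_cell_step(x_t, h_prev, c_prev, w_ih, w_hh, b_ih, b_hh):
    # gates packed as (i, f, g, o), matching nn.LSTM's weight_ih/weight_hh layout
    gates = x_t @ w_ih.t() + b_ih + h_prev @ w_hh.t() + b_hh
    i, f, g, o = gates.chunk(4, dim=1)
    c_t = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
    h_t = torch.sigmoid(o) * torch.tanh(c_t)
    return h_t, c_t
```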
I found that \"_VF.lstm\" is written in C++, and I can't find RNN.cpp on my computer.\r\n\r\nSo I want to implement _VF.lstm in Python. Can you help me?\r\nLooking forward to your reply! Thanks a lot!\r\n", "url": "https://github.com/pytorch/pytorch/issues/64904", "state": "closed", "labels": [], "created_at": "2021-09-13T06:41:39Z", "updated_at": "2021-09-14T02:37:26Z", "user": "TimothyLiuu" }, { "repo": "pytorch/pytorch", "number": 64793, "title": "How to get \"finfo\" in C++ libtorch like that in pytorch", "body": "I am using C++ libtorch, but I don't know how to do the following in C++ like I do in pytorch:\r\n```python \r\n min_real = torch.finfo(self.logits.dtype).min\r\n# or\r\n min_real = torch.finfo(self.logits.dtype).tiny\r\n\r\n```\n\ncc @yf225 @glaringlee", "url": "https://github.com/pytorch/pytorch/issues/64793", "state": "closed", "labels": [ "module: cpp", "triaged" ], "created_at": "2021-09-10T01:37:53Z", "updated_at": "2021-09-13T01:13:27Z", "user": "dbsxdbsx" }, { "repo": "pytorch/TensorRT", "number": 620, "title": "\u2753 [Question] Is it possible to install TRTorch with CUDA 11.1 support on aarch64?", "body": "# \u2753 Question\r\nIs there a particular reason why there is no pre-built wheel file for the combination of CUDA 11.1 + aarch64?\r\n\r\n# What you have already tried\r\nI have tried to install wheel files for CUDA 10.2 aarch64 but it obviously didn't work because it tried to find the CUDA 10.2 libraries.\r\n\r\n# Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n- PyTorch Version (e.g., 1.0): 1.9.0\r\n- CPU Architecture: 8.6\r\n- OS (e.g., Linux): Linux\r\n- How you installed PyTorch (conda, pip, libtorch, source): pip\r\n- Build command you used (if compiling from source): pip install\r\n- Are you using local sources or building from archives: building from archives\r\n- Python version: 3.6\r\n- CUDA version: 11.1\r\n- GPU models and configuration: Nvidia RTX 6000\r\n- Any other relevant information: N/A\r\n\r\nMy question would be, is there a particular reason why there is no pre-built wheel file for the combination of CUDA 11.1 + aarch64?\r\n\r\nThank you!", "url": "https://github.com/pytorch/TensorRT/issues/620", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-09-09T00:45:37Z", "updated_at": "2021-12-20T00:01:58Z", "user": "lppllppl920" }, { "repo": "pytorch/text", "number": 1386, "title": "how to make clear what the torchtext._torchtext module does, when I import something from the module", "body": "## \ud83d\udcda Documentation\r\n\r\n**Description**\r\nYesterday, I learned Vectors and Vocab from torchtext 0.5. But today, I updated to torchtext 0.10 via pip install --upgrade, and then I found that Vocab has changed.\r\nNow, when you use the method 'torchtext.vocab.build_vocab_from_iterator' to create an instance of Vocab, it calls 'torchtext._torchtext.Vocab'.\r\nThen, I wanted to understand clearly what the torchtext._torchtext module does, so I found the file named '_torchtext.pyd'. But it is unreadable by humans. Why does torchtext ship it as a '.pyd' file? 
How to really know about it but not just invoke it?\r\nThanks.", "url": "https://github.com/pytorch/text/issues/1386", "state": "open", "labels": [], "created_at": "2021-09-05T07:41:13Z", "updated_at": "2021-09-13T20:57:01Z", "user": "wn1652400018" }, { "repo": "pytorch/xla", "number": 3114, "title": "How to aggregate the results running on multiple tpu cores", "body": "## \u2753 Questions and Help\r\nHi, How can we aggregate the results or say combine all the predictions and use it further.\r\n\r\nI understand this could be a issue addressed earlier, if yes please share some links related to this\r\n```\r\ndef _run():\r\n <model loading, training arguments and etc >\r\n # using hugging face Trainer fn\r\n trainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=data_train,\r\n tokenizer=tokenizer,\r\n eval_dataset=data_val,\r\n compute_metrics=compute_metrics,\r\n\r\n )\r\n trainer.train()\r\n results = trainer.evaluate()\r\n return results\r\n\r\ndef _mp_fn(rank, flags):\r\n results = _run()\r\n\r\nFLAGS={}\r\nxmp.spawn(_mp_fn, args=(FLAGS,), nprocs=8, start_method='fork')\r\n```", "url": "https://github.com/pytorch/xla/issues/3114", "state": "closed", "labels": [], "created_at": "2021-09-03T05:13:35Z", "updated_at": "2021-09-04T08:21:31Z", "user": "pradeepkr12" }, { "repo": "pytorch/serve", "number": 1227, "title": "How to torch-model-archiver directory with its content?", "body": "I'm trying to generate .mar file which contain some extra files including a directory. I'm not sure how can I add that. Here is what I'm trying to archive:\r\n\r\n\r\n```bash\r\nmy_model/\r\n\u251c\u2500\u2500 [4.0K] 1_Pooling\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 [ 190] config.json\r\n\u251c\u2500\u2500 [ 696] config.json\r\n\u251c\u2500\u2500 [ 122] config_sentence_transformers.json\r\n\u251c\u2500\u2500 [ 168] handler.py\r\n\u251c\u2500\u2500 [ 276] model_setup_config.json\r\n\u251c\u2500\u2500 [ 229] modules.json\r\n\u251c\u2500\u2500 [ 87M] pytorch_model.bin\r\n\u251c\u2500\u2500 [ 53] sentence_bert_config.json\r\n\u251c\u2500\u2500 [ 112] special_tokens_map.json\r\n\u251c\u2500\u2500 [455K] tokenizer.json\r\n\u251c\u2500\u2500 [ 591] tokenizer_config.json\r\n\u2514\u2500\u2500 [226K] vocab.txt\r\n```\r\n\r\nHere is my `torch-model-archiver` command which is replacing `my_model/config.json` with `my_model/1_Pooling/config.json` in model.mar file.\r\n\r\n```bash\r\n$ torch-model-archiver \\\r\n--model-name my_model \\\r\n--version 1.0 \\\r\n--serialized-file ../pytorch_model.bin \\\r\n--handler ../handler.py \\\r\n--extra-files \"../config.json,../config_sentence_transformers.json,../modules.json,../sentence_bert_config.json,../special_tokens_map.json,../tokenizer.json,../tokenizer_config.json,../vocab.txt,../1_Pooling/config.json,../model_setup_config.json\"\r\n```\r\n\r\nHow can I keep the `1_Pooling/` directory as it is in .mar file with all its content? 
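One workaround I am considering (just a sketch, and I am not sure it is the intended way): ship `1_Pooling` as a zip file via `--extra-files` and restore the directory in a custom handler's `initialize()`:

```python
# custom handler sketch; assumes a hypothetical 1_Pooling.zip was added to --extra-files
import os
import zipfile

from ts.torch_handler.base_handler import BaseHandler

class SentenceHandler(BaseHandler):
    def initialize(self, context):
        model_dir = context.system_properties.get("model_dir")
        pooling_zip = os.path.join(model_dir, "1_Pooling.zip")
        if os.path.exists(pooling_zip):
            with zipfile.ZipFile(pooling_zip) as zf:
                zf.extractall(model_dir)  # recreates 1_Pooling/config.json next to the model
        super().initialize(context)
```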
", "url": "https://github.com/pytorch/serve/issues/1227", "state": "closed", "labels": [ "triaged_wait" ], "created_at": "2021-09-01T19:44:08Z", "updated_at": "2021-09-01T21:06:36Z", "user": "spate141" }, { "repo": "pytorch/pytorch", "number": 64334, "title": "How to add nan value judgment for variable t0_1 in fused_clamp kernel generated by torch/csrc/jit/tensorexpr/cuda_codegen.cpp.", "body": "For this python program:\r\n\r\n```import torch\r\n\r\ntorch._C._jit_set_profiling_executor(True)\r\ntorch._C._jit_set_profiling_mode(True)\r\ntorch._C._jit_override_can_fuse_on_cpu(True)\r\ntorch._C._jit_override_can_fuse_on_gpu(True)\r\ntorch._C._debug_set_fusion_group_inlining(False)\r\ntorch._C._jit_set_texpr_fuser_enabled(True)\r\n\r\ndef func2(a, b):\r\n return torch.clamp(a + b, min=0, max=2)\r\n\r\ndevice = 'cuda'\r\na = torch.randn(4, 4, dtype=torch.float, device=device, requires_grad=True)\r\nnan = torch.tensor(float('nan'), dtype=torch.float, device=device)\r\n\r\nscripted_fn = torch.jit.script(func2)\r\nscript_outputs = scripted_fn(a,nan)\r\nopt_script_outputs = scripted_fn(a,nan)\r\n\r\nprint(script_outputs.detach().cpu())\r\nprint(opt_script_outputs.detach().cpu())\r\n```\r\n\r\nthis program will call 2 cuda kernel:\r\nclamp_kernel_cuda is show as follow,\r\n\r\n```\r\nvoid clamp_kernel_cuda(TensorIterator& iter, Scalar min_value, Scalar max_value) {\r\n AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, iter.dtype(), \"clamp_cuda\", [&]() {\r\n auto lower = min_value.to<scalar_t>();\r\n auto upper = max_value.to<scalar_t>();\r\n gpu_kernel(iter, [=]GPU_LAMBDA(scalar_t v) -> scalar_t {\r\n // Propagate nan, which doesn't propagate automatically for ROCm\r\n if (_isnan(v)) {\r\n return v;\r\n } else {\r\n return ::min(::max(v, lower), upper);\r\n }\r\n });\r\n });\r\n}\r\n```\r\nand jit codegen kernel is show as follow,\r\n\r\n```\r\nextern \"C\" __global__\r\nvoid fused_clamp(float* t0, float* aten_clamp) \r\n{\r\n if (512 * blockIdx.x + threadIdx.x<16 ? 1 : 0) {\r\n float t0_1 = t0[512 * blockIdx.x + threadIdx.x];\r\n aten_clamp[512 * blockIdx.x + threadIdx.x] = (t0_1<0.f ? 0.f : t0_1)>2.f ? 2.f : (t0_1<0.f ? 0.f : t0_1);\r\n }\r\n }\r\n}\r\n```\r\nMy question is why not to add nan value judgment for variable t0_1, expect generated kernel is show as follows,\r\n\r\n```\r\nextern \"C\" __global__\r\nvoid fused_clamp(float* t0, float* aten_clamp) \r\n{\r\n if (512 * blockIdx.x + threadIdx.x<16 ? 1 : 0) {\r\n float t0_1 = t0[512 * blockIdx.x + threadIdx.x];\r\n aten_clamp[512 * blockIdx.x + threadIdx.x] = isnan(t0_1) ? t0_1 : ((t0_1<0.f ? 0.f : t0_1)>2.f ? 2.f : (t0_1<0.f ? 0.f : t0_1));\r\n }\r\n }\r\n}\r\n```\r\nFor AMD device(ROCM), if not nan value judgment, fused_clamp kernel will return 0 without nan. 
If we want to generate the fused_clamp kernel as above, how should the code in torch/csrc/jit/tensorexpr/kernel.cpp:979 be modified?\r\n", "url": "https://github.com/pytorch/pytorch/issues/64334", "state": "open", "labels": [ "oncall: jit" ], "created_at": "2021-09-01T02:35:39Z", "updated_at": "2021-09-01T02:53:24Z", "user": "HangJie720" }, { "repo": "pytorch/functorch", "number": 106, "title": "how to install torch>=1.10.0.dev", "body": "When I run this command\r\npip install --user \"git+https://github.com/facebookresearch/functorch.git\"\r\n\r\nI get:\r\nERROR: Could not find a version that satisfies the requirement torch>=1.10.0.dev", "url": "https://github.com/pytorch/functorch/issues/106", "state": "open", "labels": [], "created_at": "2021-08-31T09:28:39Z", "updated_at": "2021-11-05T15:45:05Z", "user": "agdkyang" }, { "repo": "pytorch/pytorch", "number": 64247, "title": "How to optimize jit-script model performance (backend device is gpu)", "body": "I have lots of TorchScript models and I want to optimize their performance; the backend is GPU and the project is written with the C++ API (torch::jit::load). \r\nAre there any ways to do this optimization?\r\nRecently, I found that PyTorch supports CUDA graphs now; maybe this could be a way to improve performance.\r\nBut there are few documents about how to use this feature in C++. Can you give me some examples of how to use CUDA graphs in the C++ API with a script model? \r\nThanks very much! :)\r\n", "url": "https://github.com/pytorch/pytorch/issues/64247", "state": "open", "labels": [ "oncall: jit" ], "created_at": "2021-08-31T05:31:44Z", "updated_at": "2021-08-31T05:31:46Z", "user": "fwz-fpga" }, { "repo": "pytorch/pytorch", "number": 64206, "title": "Document how to generate Pybind bindings for C++ Autograd", "body": "## \ud83d\ude80 Feature\r\nhttps://pytorch.org/tutorials/advanced/cpp_autograd.html provides a good example of how to define your own function with a forward and backward pass, along with registering it with the autograd system. However, it lacks any information on how to actually link this module for use in Python / PyTorch / Pybind. There is one example that shows it in a hacky way (https://pytorch.org/tutorials/advanced/cpp_extension.html), but I want something I can include directly in Pybind.\r\n\r\nE.g. something like this, which doesn't work:\r\n```\r\n py::class_<LinearFunction, std::shared_ptr<LinearFunction>>(\r\n m, \"LinearFunction\")\r\n .def(py::init<>())\r\n .def(\"forward\", &LinearFunction::forward)\r\n .def(\"backward\", &LinearFunction::backward);\r\n```\r\n\n\ncc @yf225 @glaringlee", "url": "https://github.com/pytorch/pytorch/issues/64206", "state": "open", "labels": [ "module: cpp", "triaged" ], "created_at": "2021-08-30T18:12:12Z", "updated_at": "2021-08-31T14:20:23Z", "user": "yaadhavraajagility" }, { "repo": "pytorch/tutorials", "number": 1662, "title": "seq2seq with character encoding", "body": "Hi, I am hoping to build a seq2seq model with attention using character-level encoding. The idea is to build a model which can predict a correct name by handling all sorts of spelling mistakes (QWERTY keyboard errors, double typing, omitted letters, etc.). My test data will be a few examples of mistyped words mapping to the correct word. I am hoping I can mostly follow https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html and just change the input and output tensors to contain character tensors, similar to the tutorial which predicts a country by looking at a last name. 
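Concretely, the character-level encoding I have in mind is something like this (a rough sketch; the alphabet and names are made up):

```python
import torch

alphabet = "abcdefghijklmnopqrstuvwxyz'- "
char2idx = {c: i + 1 for i, c in enumerate(alphabet)}  # 0 is reserved for padding

def name_to_tensor(name):
    # sequence of character indices, shape (len(name),), fed to the encoder embedding
    return torch.tensor([char2idx.get(c, 0) for c in name.lower()], dtype=torch.long)

print(name_to_tensor("jhon"))  # misspelled input
print(name_to_tensor("john"))  # target spelling
```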
(I think that country-classification tutorial is the first one in the NLP series.) Does this approach make sense? Is there anything else I need to consider, or a better model to consider?\n\ncc @pytorch/team-text-core @Nayef211", "url": "https://github.com/pytorch/tutorials/issues/1662", "state": "closed", "labels": [ "question", "Text", "module: torchtext" ], "created_at": "2021-08-30T04:10:01Z", "updated_at": "2023-03-06T23:54:02Z", "user": "manish-shukla01" }, { "repo": "pytorch/vision", "number": 4332, "title": "Customize the number of input_channels in MobileNetv3_Large", "body": "I would like to know how to customize the MobileNetV3_Large torchvision model to accept single-channel inputs with number of classes = 2.\r\n\r\nAs mentioned in some of the PyTorch discussion forums, I have tried\r\n`model_ft.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)`\r\nwhich works for ResNet models but not MobileNetV3.\r\n\r\nAlso tried,\r\n`model.input_channels=1`\r\n\r\nI keep getting the error \r\n`Given groups=1, weight of size [16, 3, 3, 3], expected input[12, 1, 512, 512] to have 3 channels, but got 1 channels instead`\r\n\r\nPlease provide suggestions for customizing the MobileNet model to accept single-channel input, as is possible in the case of ResNet. Repeating my input tensor to have 3 channels is not a solution I would be interested in. ", "url": "https://github.com/pytorch/vision/issues/4332", "state": "closed", "labels": [ "question" ], "created_at": "2021-08-29T10:48:29Z", "updated_at": "2021-08-31T09:41:23Z", "user": "ananda1996ai" }, { "repo": "pytorch/pytorch", "number": 64094, "title": "Document how to disable python tests on CI through issues", "body": "## \ud83d\udcda Documentation\r\n\r\nWe should document the use of issues to disable tests in a public wiki.\r\n\n\ncc @ezyang @seemethere @malfet @walterddr @lg20987 @pytorch/pytorch-dev-infra", "url": "https://github.com/pytorch/pytorch/issues/64094", "state": "closed", "labels": [ "module: ci", "triaged", "better-engineering", "actionable" ], "created_at": "2021-08-27T14:36:39Z", "updated_at": "2021-10-11T21:54:25Z", "user": "janeyx99" }, { "repo": "pytorch/hub", "number": 222, "title": "torch.hub shouldn't assume model dependencies have __spec__ defined", "body": "**Problem**\r\nI'm using torch.hub to load a model that has the `transformers` library as a dependency; however, the last few versions of `transformers` haven't had `__spec__` defined. Currently, this gives an error with torch.hub when trying to load the model and checking that the dependencies exist with `importlib.util.find_spec(name)` inside `_check_module_exists()` ([source code](https://github.com/pytorch/pytorch/blob/b0396e39f41da9f61c61ed8758b5e9505a370ebc/torch/hub.py#L198)).\r\n\r\n**Solution**\r\nDon't check for `__spec__` when checking that a module exists.", "url": "https://github.com/pytorch/hub/issues/222", "state": "closed", "labels": [ "question" ], "created_at": "2021-08-27T13:59:40Z", "updated_at": "2021-08-27T18:03:24Z", "user": "laurahanu" }, { "repo": "pytorch/tutorials", "number": 1660, "title": "Visualizing the results from trained model", "body": "I wanted to know how to test any images on the pre-trained model from this tutorial : https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#putting-everything-together \r\n\r\n1) Given that I may just have a single image, how do I feed it to the model?\r\n2) How exactly did you arrive at these results? 
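For 1), what I have pieced together so far is roughly the following (a sketch; it assumes the `model` and `device` objects from the tutorial, and the image path is a placeholder):

```python
import torch
from PIL import Image
from torchvision.transforms import functional as F

img = F.to_tensor(Image.open("my_test_image.jpg").convert("RGB"))

model.eval()
with torch.no_grad():
    prediction = model([img.to(device)])[0]

print(prediction["boxes"], prediction["labels"], prediction["scores"])
```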
(image shown below)\r\n![image](https://user-images.githubusercontent.com/65582456/131035518-a9666522-c96b-4785-a291-cab0b2335810.png)", "url": "https://github.com/pytorch/tutorials/issues/1660", "state": "closed", "labels": [ "question", "torchvision" ], "created_at": "2021-08-26T21:04:24Z", "updated_at": "2023-02-23T22:48:10Z", "user": "jspsiy" }, { "repo": "pytorch/serve", "number": 1217, "title": "How to cache inferences with torchserve", "body": "Reference architecture showcasing how to cache inferences from torchserve\r\n\r\nSo potentially the `inference` handler would reach from some cloud cache or KV store\r\n\r\nThe benefit of this is it'd dramatically reduce latency for common queries\r\n\r\nProbably a good level 3-4 bootcamp task for a specific kind of KV store like Redis or specific cloud cache in AWS.", "url": "https://github.com/pytorch/serve/issues/1217", "state": "closed", "labels": [ "good first issue" ], "created_at": "2021-08-26T18:00:51Z", "updated_at": "2021-10-07T04:36:17Z", "user": "msaroufim" }, { "repo": "pytorch/vision", "number": 4312, "title": "Hidden torch.flatten block in ResNet module", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nThis bug arise when last 2 layers (avg pool and fc) are changed to nn.Identity\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\nimport torch\r\nimport torch.nn as nn\r\nimport torchvision\r\n\r\nresnet = torchvision.models.resnet18()\r\nresnet.avgpool = nn.Identity()\r\nresnet.fc = nn.Identity()\r\n\r\nimg = torch.randn([1,3,256,256])\r\n\r\nresnet(img).shape\r\n\r\n**Result:** torch.Size([1, 32768])\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n**Expected result:** torch.Size([1, 8, 8, 512])\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\nhttps://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py\r\nline 243 to delete", "url": "https://github.com/pytorch/vision/issues/4312", "state": "closed", "labels": [ "question" ], "created_at": "2021-08-25T13:09:31Z", "updated_at": "2021-08-25T13:18:34Z", "user": "1paragraph" }, { "repo": "pytorch/TensorRT", "number": 597, "title": "\u2753 [Question] request a converter: aten::lstm", "body": "ERROR: [TRTorch] - Requested converter for aten::lstm, but no such converter was found\r\nThanks", "url": "https://github.com/pytorch/TensorRT/issues/597", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-08-23T12:38:05Z", "updated_at": "2021-12-02T00:01:46Z", "user": "gaosanyuan" }, { "repo": "pytorch/TensorRT", "number": 596, "title": "\u2753 [Question] module 'trtorch' has no attribute 'Input'", "body": "Why the installed trtorch has no attribute 'Input'? Thanks\r\ntrtorch version: 0.3.0", "url": "https://github.com/pytorch/TensorRT/issues/596", "state": "closed", "labels": [ "question" ], "created_at": "2021-08-23T12:17:20Z", "updated_at": "2021-08-23T16:32:26Z", "user": "gaosanyuan" }, { "repo": "pytorch/TensorRT", "number": 586, "title": "\u2753 [Question] Not faster vs torch::jit ", "body": "## \u2753 Question\r\nI run my model used TrTorch and torch::jit both on fp16 with C++ API, but Trtorch is not faster than JIT.\r\nWhat can I do to get the reason? \r\n\r\nSome information may be helpful. \r\n1. 
I used two plugins to just call the libtorch function (inverse and grid_smapler).\r\n2. I fix some bugs by change the pytorch model code discript at #585 #584 \r\n3. I compile trtorch model and run it all with C++ API.\r\n4. the compile code is :\r\n```\r\nstd::cout << \"Load ts detector model ...\" << std::endl;\r\n torch::jit::Module ts_detector_model;\r\n try{\r\n // Deserialize the ScriptModule from a file using torch::jit::load().\r\n ts_detector_model = torch::jit::load(TS_DETECTOR_PATH);\r\n }\r\n catch (const c10::Error& e){\r\n std::cerr << \"Error loading the model \\n\";\r\n return -1;\r\n }\r\n\r\n // convert trt detector\r\n std::cout << \"Convert trt detector model ...\" << std::endl;\r\n ts_detector_model.to(at::kCUDA);\r\n ts_detector_model.eval();\r\n\r\n std::vector<trtorch::CompileSpec::Input> inputs_d = {\r\n trtorch::CompileSpec::Input(std::vector<int64_t>({1, 3, 256, 256}), trtorch::CompileSpec::DataType::kHalf)};\r\n auto info_d = trtorch::CompileSpec(inputs_d);\r\n info_d.enabled_precisions.insert(trtorch::CompileSpec::DataType::kHalf);\r\n\r\n auto trt_detector_model = trtorch::CompileGraph(ts_detector_model, info_d);\r\n \r\n // generator complied like above\r\n ......\r\n```\r\n\r\n\r\nthe runtim code :\r\n```\r\ntorch::jit::Module trt_detector_model;\r\n try{\r\n // Deserialize the ScriptModule from a file using torch::jit::load().\r\n trt_detector_model = torch::jit::load(TRT_DETECTOR_PATH);\r\n }\r\n catch (const c10::Error& e){\r\n std::cerr << \"error loading the model \\n\";\r\n return -1;\r\n }\r\n trt_detector_model.to(at::kCUDA);\r\n trt_detector_model.eval();\r\n\r\n torch::jit::Module trt_generator_model;\r\n try{\r\n // Deserialize the ScriptModule from a file using torch::jit::load().\r\n trt_generator_model = torch::jit::load(TRT_GENERATOR_PATH);\r\n }\r\n catch (const c10::Error& e){\r\n std::cerr << \"error loading the model \\n\";\r\n return -1;\r\n }\r\n trt_generator_model.to(at::kCUDA);\r\n trt_generator_model.eval();\r\n\r\n std::cout << \"Run trt model ... \" << std::endl;\r\n auto in0 = torch::ones({1, 3, 256, 256}, {torch::kCUDA}).to(torch::kFloat16);\r\n std::cout << \"Run detector ... \" << std::endl;\r\n auto out0_ = trt_detector_model.forward({in0});\r\n auto out0 = out0_.toTuple()->elements()[1].toTensor();\r\n std::cout << \"====\\tdetector out mean and std\\t====\" << std::endl;\r\n std::cout << at::mean(out0) << \"\\n\" << at::std(out0) << std::endl;\r\n\r\n auto in1 = torch::ones({1, 3, 256, 256}, {torch::kCUDA}).to(torch::kFloat16);\r\n auto in2 = torch::ones({1, 10, 2}, {torch::kCUDA}).to(torch::kFloat16);\r\n auto in3 = torch::ones({1, 10, 2}, {torch::kCUDA}).to(torch::kFloat16);\r\n auto in4 = torch::ones({1, 10, 2, 2}, {torch::kCUDA}).to(torch::kFloat16);\r\n std::cout << \"Run generator ... 
\" << std::endl;\r\n auto out1 = trt_generator_model.forward({in1, in2, out0.to(torch::kFloat16), in3, in4}).toTensor();\r\n std::cout << \"====\\tgenerator out mean and std\\t====\" << std::endl;\r\n```\r\n\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n> I build the Trtorch recently(21.8.19) use the master branch.\r\n\r\n - TRTorch Version (8.0.1.6): \r\n - PyTorch Version (libtorch 1.9.0):\r\n - OS (Ubuntu):\r\n - How you installed PyTorch (`libtorch):\r\n - Build command you used (bazel build //:libtrtorch --compilation_mode opt):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.8\r\n - CUDA version: 11.1\r\n - GPU models and configuration: GTX3070\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/586", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-08-19T13:06:37Z", "updated_at": "2022-03-10T00:02:16Z", "user": "JuncFang-git" }, { "repo": "pytorch/vision", "number": 4292, "title": "about train fcn questions.", "body": "thanks for your great work!\r\n\r\nI have read https://github.com/pytorch/vision/blob/master/references/segmentation/train.py script. And there are some questions.\r\n\r\n1. why use aux_classifier for fcn, are there any references ?\r\n2. why is the learning rate of aux_classifier ten times that of base lr?https://github.com/pytorch/vision/blob/master/references/segmentation/train.py#L131\r\n3. If I want to fine-train fcn, how to set the appropriate learning rate? I use default learning rate 0.01 to train PASCAL VOC2012 with res50_rcn, the result is bad(first epoch, mean IOU=10.1). If not train model, directly evaluate, mean IoU=69. \r\n\n\ncc @datumbox", "url": "https://github.com/pytorch/vision/issues/4292", "state": "closed", "labels": [ "question", "topic: object detection" ], "created_at": "2021-08-19T08:51:34Z", "updated_at": "2021-08-19T11:50:05Z", "user": "WZMIAOMIAO" }, { "repo": "pytorch/xla", "number": 3090, "title": "How to concatenate all the predicted labels in XLA?", "body": "## \u2753 Questions and Help\r\nHi does anyone knows how to get all predicted labels from all 8 cores of XLA and concatenate them together?\r\n\r\nSay I have a model:\r\n`outputs = model(ids, mask, token_type_ids)`\r\n`_, pred_label = torch.max(outputs.data, dim = 1)`\r\n\r\nIf I do\r\n`all_predictions_np = pred_label.cpu().detach().numpy().tolist()`\r\n\r\napparently, this only sends the result to CPU from TPU core:0. How can I get all 8 cores and concatenate them together in the same list? I am not sure if `xm.all_gather() ` is used in this case?", "url": "https://github.com/pytorch/xla/issues/3090", "state": "closed", "labels": [], "created_at": "2021-08-19T01:13:39Z", "updated_at": "2021-08-19T20:51:58Z", "user": "gabrielwong1991" }, { "repo": "pytorch/serve", "number": 1203, "title": "How to add a custom Handler?", "body": "## \ud83d\udcda Documentation\r\n\r\nHow to add custom handlers python files friendly and automatic?\r\n\r\nBy now, I understand that is necessary to modify `pytorch/serve` source code. 
is that correct?\r\n\r\nIn my case, I need a custom handler with\r\ninput: numpy array or json or list of numbers\r\noutput: numpy array or json or list of numbers\r\n\r\nI just want to do the inference in `pytorch/serve` because I have complex preprocess and postprocess in other microservice", "url": "https://github.com/pytorch/serve/issues/1203", "state": "closed", "labels": [], "created_at": "2021-08-18T01:43:40Z", "updated_at": "2021-08-18T20:01:54Z", "user": "pablodz" }, { "repo": "pytorch/pytorch", "number": 63395, "title": "How to efficiently (without looping) get data from tensor predicted by a torchscript in C++?", "body": "I am calling a torchscript (neural network serialized from Python) from a C++ program:\r\n\r\n```\r\n // define inputs\r\n int batch = 3; // batch size\r\n int n_inp = 2; // number of inputs\r\n double I[batch][n_inp] = {{1.0, 1.0}, {2.0, 3.0}, {4.0, 5.0}}; // some random input\r\n std::cout << \"inputs\" \"\\n\"; // print inputs\r\n for (int i = 0; i < batch; ++i)\r\n { \r\n std::cout << \"\\n\";\r\n for (int j = 0; j < n_inp; ++j)\r\n {\r\n std::cout << I[i][j] << \"\\n\";\r\n }\r\n }\r\n\r\n // prepare inputs for feeding to neural network\r\n std::vector<torch::jit::IValue> inputs;\r\n inputs.push_back(torch::from_blob(I, {batch, n_inp}, at::kDouble));\r\n\r\n // deserialize and load scriptmodule\r\n torch::jit::script::Module module;\r\n module = torch::jit::load(\"Net-0.pt\");\r\n\r\n // do forward pass\r\n auto outputs = module.forward(inputs).toTensor();\r\n```\r\n\r\nUsually, to get data from the outputs, the following (element-wise) operation is performed:\r\n\r\n```\r\n // get data from outputs\r\n std::cout << \"outputs\" << \"\\n\";\r\n int n_out = 1;\r\n double outputs_data[batch][n_out];\r\n for (int i = 0; i < batch; i++) \r\n {\r\n for (int j = 0; j < n_out; j++)\r\n {\r\n outputs_data[i][j] = outputs[i][j].item<double>();\r\n std::cout << outputs_data[i][j] << \"\\n\";\r\n }\r\n }\r\n```\r\n\r\nHowever, such looping using .item is highly inefficient (in the actual code I will have millions of points predicted at each time step). I want to get data from outputs directly (without looping over elements). I tried:\r\n\r\n ```\r\nint n_out = 1;\r\n double outputs_data[batch][n_out];\r\n outputs_data = outputs.data_ptr<double>();\r\n```\r\nHowever, it is giving the error:\r\n\r\n```\r\nerror: incompatible types in assignment of \u2018double*\u2019 to \u2018double [batch][n_out]\u2019\r\n outputs_data = outputs.data_ptr<double>();\r\n ^\r\n```\r\nNote, that type of outputs_data is fixed to double and cannot be changed.\n\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/63395", "state": "open", "labels": [ "oncall: jit" ], "created_at": "2021-08-17T12:42:58Z", "updated_at": "2021-11-28T03:54:22Z", "user": "aiskhak" }, { "repo": "pytorch/hub", "number": 218, "title": "DeeplabV3-Resnet101. Where is the mIOU calculation and postprocessing code?", "body": "The following link mentions mIOU = 67.4\r\nhttps://pytorch.org/hub/pytorch_vision_deeplabv3_resnet101/\r\n\r\nIs there any codebase where we can refer the evaluation and postprocessing code?", "url": "https://github.com/pytorch/hub/issues/218", "state": "closed", "labels": [], "created_at": "2021-08-16T11:32:49Z", "updated_at": "2021-10-18T11:42:40Z", "user": "ashg1910" }, { "repo": "pytorch/TensorRT", "number": 580, "title": "\u2753 [Question] How to convert nvinfer1::ITensor into at::tensor?", "body": "## \u2753 Question\r\nHi, \r\nHow to convert nvinfer1::ITensor into at::tensor? 
Like #146 \r\n\r\n## What you have already tried\r\n\r\nI want to do some operations use libtorch on the nvinfer1::ITensor. So, can I convert nvinfer1::ITensor into at::tensor? Or I must write a custom converter with the libtorch function?\r\n@xsacha @aaronp24 @itsliupeng @lukeyeager @elezar \r\n", "url": "https://github.com/pytorch/TensorRT/issues/580", "state": "closed", "labels": [ "question" ], "created_at": "2021-08-16T09:54:00Z", "updated_at": "2021-08-18T03:02:03Z", "user": "JuncFang-git" }, { "repo": "pytorch/pytorch", "number": 63304, "title": "How to build a release version libtorch1.8.1 on windows", "body": "\r\n## Issue description\r\nI am doing some work with libtorch on Windows 10 recently. I want to build the libtorch library since I have addedd some new features. The build work was done successfully on develop environment. However, when I copy the exe(including dependent DLLs) to another PC(running env, mentioned below), it could not run normally. I have also built the corresponding python package and it could run normally on the develop environment. Also, it shows that the official release version of 1.8.1 has more libs than mine. It seems that the official version has more torch_cuda*.dll than mine and their size is relatively larger than mine.\r\nI just want to know how to configure the env and can build nearly the same as the official one.\r\n\r\nSelf-build release version\r\n2021/08/12 11:22 242,176 asmjit.dll\r\n2021/08/10 21:06 92,664 asmjit.lib\r\n2021/08/12 11:22 417,792 c10.dll\r\n2021/08/10 21:09 303,982 c10.lib\r\n2021/08/12 14:02 10,382,510 c10d.lib\r\n2021/08/12 11:25 246,784 c10_cuda.dll\r\n2021/08/10 21:09 27,942 c10_cuda.lib\r\n2021/08/12 14:03 20,131,328 caffe2_detectron_ops_gpu.dll\r\n2021/08/10 22:22 34,660 caffe2_detectron_ops_gpu.lib\r\n2021/08/12 14:02 70,144 caffe2_module_test_dynamic.dll\r\n2021/08/10 22:21 24,130 caffe2_module_test_dynamic.lib\r\n2021/08/12 11:25 15,872 caffe2_nvrtc.dll\r\n2021/08/10 21:09 1,850 caffe2_nvrtc.lib\r\n2021/08/12 11:21 18,316 clog.lib\r\n2021/08/12 11:21 118,076 cpuinfo.lib\r\n2021/08/12 11:25 323,147,004 dnnl.lib\r\n2021/08/12 11:22 3,282,432 fbgemm.dll\r\n2021/08/10 21:06 1,156,374 fbgemm.lib\r\n2021/08/12 11:22 15,456,424 gloo.lib\r\n2021/08/12 11:22 36,464,038 gloo_cuda.lib\r\n2021/08/12 14:00 135,168 jitbackend_test.dll\r\n2021/08/10 22:19 21,206 jitbackend_test.lib\r\n2015/01/22 09:23 1,146,272 libiomp5md.dll\r\n2021/08/12 11:20 5,220,954 libprotobuf-lite.lib\r\n2021/08/12 11:21 36,811,506 libprotobuf.lib\r\n2021/08/12 11:21 38,708,546 libprotoc.lib\r\n2021/08/12 11:25 323,147,004 mkldnn.lib\r\n2021/08/12 11:21 142,874 pthreadpool.lib\r\n2021/08/12 13:59 9,728 torch.dll\r\n2021/08/10 22:18 1,832 torch.lib\r\n2021/08/12 14:00 339,456 torchbind_test.dll\r\n2021/08/10 22:19 21,154 torchbind_test.lib\r\n2021/08/12 12:07 201,914,368 torch_cpu.dll\r\n2021/08/10 21:41 17,095,310 torch_cpu.lib\r\n2021/08/12 13:59 149,539,840 torch_cuda.dll\r\n2021/08/12 13:58 3,024,940 torch_cuda.lib\r\n2021/08/12 11:22 9,728 torch_global_deps.dll\r\n2020/09/18 19:02 195,072 uv.dll\r\n2021/08/12 11:21 5,984,034 XNNPACK.lib\r\n\r\nOfficial release version\r\n2021/03/24 11:07 241,664 asmjit.dll\r\n2021/03/24 11:07 92,664 asmjit.lib\r\n2021/03/24 11:08 418,304 c10.dll\r\n2021/03/24 11:08 303,982 c10.lib\r\n2021/03/24 12:44 10,209,446 c10d.lib\r\n2021/03/24 11:08 249,344 c10_cuda.dll\r\n2021/03/24 11:08 27,942 c10_cuda.lib\r\n2021/03/24 12:47 20,971,008 caffe2_detectron_ops_gpu.dll\r\n2021/03/24 12:47 34,660 
caffe2_detectron_ops_gpu.lib\r\n2021/03/24 12:45 69,632 caffe2_module_test_dynamic.dll\r\n2021/03/24 12:45 24,130 caffe2_module_test_dynamic.lib\r\n2021/03/24 11:08 15,872 caffe2_nvrtc.dll\r\n2021/03/24 11:08 1,850 caffe2_nvrtc.lib\r\n2021/03/24 11:07 18,300 clog.lib\r\n2021/03/24 11:07 117,714 cpuinfo.lib\r\n2020/09/16 11:17 113,329,664 cublas64_11.dll\r\n2020/09/16 11:17 214,235,648 cublasLt64_11.dll\r\n2020/09/16 13:05 431,616 cudart64_110.dll\r\n2020/11/01 04:08 222,720 cudnn64_8.dll\r\n2020/11/01 04:52 146,511,360 cudnn_adv_infer64_8.dll\r\n2020/11/01 05:06 95,296,512 cudnn_adv_train64_8.dll\r\n2020/11/01 05:06 705,361,408 cudnn_cnn_infer64_8.dll\r\n2020/11/01 05:16 81,943,552 cudnn_cnn_train64_8.dll\r\n2020/11/01 04:15 323,019,776 cudnn_ops_infer64_8.dll\r\n2020/11/01 04:28 37,118,464 cudnn_ops_train64_8.dll\r\n2020/09/16 13:05 234,427,904 cufft64_10.dll\r\n2020/09/16 13:05 258,560 cufftw64_10.dll\r\n2020/09/16 13:05 55,511,040 curand64_10.dll\r\n2020/09/16 13:05 681,608,704 cusolver64_11.dll\r\n2020/09/16 13:05 388,617,216 cusolverMg64_11.dll\r\n2020/09/16 13:05 233,562,624 cusparse64_11.dll\r\n2021/03/24 11:08 319,083,802 dnnl.lib\r\n2021/03/24 11:07 3,280,384 fbgemm.dll\r\n2021/03/24 11:07 1,156,374 fbgemm.lib\r\n2021/03/24 11:08 342,016 fbjni.dll\r\n2021/03/24 11:08 1,191,002 fbjni.lib\r\n2021/03/24 11:07 15,140,756 gloo.lib\r\n2021/03/24 11:07 31,536,174 gloo_cuda.lib\r\n2020/06/23 14:03 1,961,328 libiomp5md.dll\r\n2020/06/23 14:03 110,448 libiompstubs5md.dll\r\n2021/03", "url": "https://github.com/pytorch/pytorch/issues/63304", "state": "closed", "labels": [ "module: build", "module: windows", "module: docs", "module: cpp", "triaged" ], "created_at": "2021-08-16T07:30:26Z", "updated_at": "2023-12-19T06:56:17Z", "user": "RocskyLu" }, { "repo": "pytorch/TensorRT", "number": 579, "title": "\u2753 memcpy d2d occupies a lot time of inference (resnet50 model after trtorch)", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\nI use trtorch to optimize resnet50 model on the IMAGENET as follows\r\n<img width=\"998\" alt=\"\u622a\u5c4f2021-08-16 10 34 32\" src=\"https://user-images.githubusercontent.com/46394627/129503893-4d252f02-07d4-448a-ac9c-7f64f15aa30a.png\">\r\n Unfortunately, i found that the memcpy d2d occupies a lot time of inference when i'm testing the performance of optimized model\r\n<img width=\"1420\" alt=\"\u622a\u5c4f2021-08-16 10 23 43\" src=\"https://user-images.githubusercontent.com/46394627/129504130-66f60eae-d9eb-4384-bcbd-53c6e83f4d0e.png\">\r\nIn the above figure, we can observe that the memcpy d2d occurs at the beginning of the inference and result in about 10% time consumption. I haven't found the reason yet. I did not have this phenomenon when I used tensorrt engine.\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.6.0):\r\n - OS (e.g., Linux):linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):pip\r\n - Python version:3.6.0\r\n - CUDA version:10.0\r\n - GPU models and configuration:T4*1 16G\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/579", "state": "closed", "labels": [ "question" ], "created_at": "2021-08-16T02:51:22Z", "updated_at": "2021-08-19T12:21:32Z", "user": "zhang-xh95" }, { "repo": "pytorch/vision", "number": 4276, "title": "I want to convert model resnet to onnx, how to do?", "body": "## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->\r\n", "url": "https://github.com/pytorch/vision/issues/4276", "state": "closed", "labels": [], "created_at": "2021-08-14T04:17:42Z", "updated_at": "2021-08-16T08:41:46Z", "user": "xinsuinizhuan" }, { "repo": "pytorch/TensorRT", "number": 576, "title": "\u2753 [Question] How can i write a converter just use a Libtorch function? ", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\nHi,\r\nI am trying to write some converter like \"torch.inverse\", \"F.grid_sample\", but it's really difficult for me. So, I want to skip that using just some libtorch function.\r\nFor example, I want to build a converter just use torch::inverse.\r\n![image](https://user-images.githubusercontent.com/76929740/129295419-43256861-c4a9-4f5f-a0fc-345cf01df7e0.png)\r\nBut I got some errors like this :\r\n![image](https://user-images.githubusercontent.com/76929740/129295506-b48255a1-4ce4-4511-8786-642f8a8072e3.png)\r\n\r\n\r\nSo, how can i write a converter just use a Libtorch function? \r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (1.8.1):\r\n - OS (ubuntu18.04):\r\n - How you installed PyTorch ( `pip`, `libtorch`):\r\n - Build command you used (trtorchv0.3.0 form release):\r\n - Python version: 3.8\r\n - CUDA version: 11.1\r\n - GPU models and configuration:GTX3070\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/576", "state": "closed", "labels": [ "question" ], "created_at": "2021-08-13T02:18:42Z", "updated_at": "2021-08-19T11:49:02Z", "user": "JuncFang-git" }, { "repo": "pytorch/pytorch", "number": 63182, "title": "Improve doc about docker images and how to run them locally", "body": "Updated the wiki page https://github.com/pytorch/pytorch/wiki/Docker-image-build-on-CircleCI\r\n\r\n- [x] Document the new ecr_gc job\r\n- [x] How to get images from AWS ECR\r\n- [x] Document how to use the docker image and run `build` and `test` locally", "url": "https://github.com/pytorch/pytorch/issues/63182", "state": "closed", "labels": [ "module: docs", "triaged", "hackathon" ], "created_at": "2021-08-12T20:58:43Z", "updated_at": "2021-08-13T00:53:01Z", "user": "zhouzhuojie" }, { "repo": "pytorch/torchx", "number": 132, "title": "Add Torchx Validate command", "body": "## Description\r\nTorchx allows users to develop their own components. Torchx component is defined as a python function with several restrictions as described in https://pytorch.org/torchx/latest/quickstart.html#defining-your-own-component\r\n\r\nThe `torchx validate` cmd will help users to develop the components.\r\n\r\n\r\n`torchx validate ~/my_component.py:func` whether the component is a valid component or not. 
If component is not valid, the command will print the detailed message explaining what is wrong with the function. \r\n", "url": "https://github.com/meta-pytorch/torchx/issues/132", "state": "closed", "labels": [ "enhancement", "cli" ], "created_at": "2021-08-12T18:38:52Z", "updated_at": "2022-01-22T00:32:22Z", "comments": 2, "user": "aivanou" }, { "repo": "pytorch/TensorRT", "number": 575, "title": "\u2753 [Question] How to build latest TRTorch with Pytorch 1.9.0", "body": "## \u2753 Question\r\n\r\nI am trying to build latest TRTorch with Torch 1.9.0 but I am getting some issue. I follow the instruction from [here](https://github.com/NVIDIA/TRTorch/blob/master/README.md)\r\n\r\nAlso followed https://nvidia.github.io/TRTorch/tutorials/installation.html but not able to build. Please help!\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version: `1.9.0`\r\n - CPU Architecture: `x86_64`\r\n - OS: `Linux` (Ubuntu 18.04)\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): `pip` from [here](https://pytorch.org/get-started/locally/)\r\n - Build command you used (if compiling from source): `bazel build //:libtrtorch --compilation_mode opt` and `cd py && sudo python3.7 setup.py install`\r\n - Are you using local sources or building from archives: `local`\r\n - Python version: `3.7`\r\n - CUDA version: `11.1`\r\n - CUDNN version: `8.1`\r\n - TensorRT: `7.2.3.4`\r\n - Bazel: `4.0.0`\r\n - GPU models and configuration: `RTX 2070 super 8GB`\r\n - Any other relevant information:\r\n\r\n## Additional context\r\nAttached the error I got while I run `bazel build //:libtrtorch --compilation_mode opt` or `cd py && sudo python3.7 setup.py install`\r\n```\r\nuser@test ~/Documents/TRTorch/py\r\n \u2514\u2500 (master) $ sudo python3.7 setup.py install\r\nrunning install\r\nbuilding libtrtorch\r\nINFO: Analyzed target //cpp/api/lib:libtrtorch.so (18 packages loaded, 2249 targets configured).\r\nINFO: Found 1 target...\r\nERROR: /home/user/Documents/TRTorch/core/lowering/passes/BUILD:10:11: Compiling core/lowering/passes/op_aliasing.cpp failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 62 argument(s) skipped)\r\n\r\nUse --sandbox_debug to see verbose messages from the sandbox gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... 
(remaining 62 argument(s) skipped)\r\n\r\nUse --sandbox_debug to see verbose messages from the sandbox\r\nIn file included from ./core/util/prelude.h:10:0,\r\n from core/lowering/passes/op_aliasing.cpp:3:\r\n./core/util/trt_util.h: In function 'std::ostream& nvinfer1::operator<<(std::ostream&, const nvinfer1::EngineCapability&)':\r\n./core/util/trt_util.h:90:38: error: 'kSTANDARD' is not a member of 'nvinfer1::EngineCapability'\r\n case nvinfer1::EngineCapability::kSTANDARD:\r\n ^~~~~~~~~\r\n./core/util/trt_util.h:92:38: error: 'kSAFETY' is not a member of 'nvinfer1::EngineCapability'\r\n case nvinfer1::EngineCapability::kSAFETY:\r\n ^~~~~~~\r\n./core/util/trt_util.h:94:38: error: 'kDLA_STANDALONE' is not a member of 'nvinfer1::EngineCapability'\r\n case nvinfer1::EngineCapability::kDLA_STANDALONE:\r\n ^~~~~~~~~~~~~~~\r\nTarget //cpp/api/lib:libtrtorch.so failed to build\r\nUse --verbose_failures to see the command lines of failed build steps.\r\nINFO: Elapsed time: 8.640s, Critical Path: 6.17s\r\nINFO: 8 processes: 8 internal.\r\nFAILED: Build did NOT complete successfully\r\n```\r\n\r\nWorkspace modified file: \r\n```\r\nworkspace(name = \"TRTorch\")\r\n\r\nload(\"@bazel_tools//tools/build_defs/repo:http.bzl\", \"http_archive\")\r\nload(\"@bazel_tools//tools/build_defs/repo:git.bzl\", \"git_repository\")\r\n\r\nhttp_archive(\r\n name = \"rules_python\",\r\n sha256 = \"778197e26c5fbeb07ac2a2c5ae405b30f6cb7ad1f5510ea6fdac03bded96cc6f\",\r\n url = \"https://github.com/bazelbuild/rules_python/releases/download/0.2.0/rules_python-0.2.0.tar.gz\",\r\n)\r\n\r\nload(\"@rules_python//python:pip.bzl\", \"pip_install\")\r\n\r\nhttp_archive(\r\n name = \"rules_pkg\",\r\n sha256 = \"038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d\",\r\n urls = [\r\n \"https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz\",\r\n \"https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz\",\r\n ],\r\n)\r\n\r\nload(\"@rules_pkg//:deps.bzl\", \"rules_pkg_dependencies\")\r\n\r\nrules_pkg_dependencies()\r\n\r\ngit_repository(\r\n name = \"googletest\",\r\n commit = \"703bd9caab50b139428cea1aaff9974ebee5742e\",\r\n remote = \"https://github.com/google/googletest\",\r\n shallow_since = \"1570114335 -0400\",\r\n)\r\n\r\n# CUDA should be installed on the system locally\r\nnew_local_repository(\r\n name = \"cuda\",\r\n build_file = \"@//third_party/cuda:BUILD\",\r\n path = \"/usr/local/cuda-11.1/\",\r\n)\r\n\r\nnew_local_repository(\r\n name = \"cublas\",\r\n build_file = \"@//third_party/cublas:BUILD\",\r\n path = \"/usr\",\r\n)\r\n#####################################################################", "url": "https://github.com/pytorch/TensorRT/issues/575", "state": "closed", "labels": [ "question" ], "created_at": "2021-08-12T09:29:10Z", "updated_at": "2021-08-16T02:33:12Z", "user": "rajusm" }, { "repo": "pytorch/pytorch", "number": 63140, "title": "[documentation] torch.distributed.elastic: illustrate how to write load_checkpoint and save_checkpoint in Train Script ", "body": "https://pytorch.org/docs/master/elastic/train_script.html\r\n\r\nIf users want to run elastic jobs, he/she needs to write some logic to load and save checkpoints. 
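Roughly this kind of pattern, sketched below with made-up names and a hypothetical shared checkpoint path (my own sketch, not taken from any existing doc):\r\n\r\n```python\r\nimport os\r\nimport torch\r\n\r\nCHECKPOINT_PATH = '/mnt/shared/checkpoint.pt'  # hypothetical path visible to all workers\r\n\r\ndef save_checkpoint(model, optimizer, epoch):\r\n    # called periodically so that a restarted worker can resume later\r\n    torch.save({'model': model.state_dict(), 'optimizer': optimizer.state_dict(), 'epoch': epoch}, CHECKPOINT_PATH)\r\n\r\ndef load_checkpoint(model, optimizer):\r\n    # called at startup; after an elastic restart this restores the last saved state\r\n    if not os.path.exists(CHECKPOINT_PATH):\r\n        return 0  # nothing saved yet, start from epoch 0\r\n    ckpt = torch.load(CHECKPOINT_PATH, map_location='cpu')\r\n    model.load_state_dict(ckpt['model'])\r\n    optimizer.load_state_dict(ckpt['optimizer'])\r\n    return ckpt['epoch'] + 1\r\n```\r\nSomething along these lines, plus when and by whom each function is called, is what the page could spell out.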
And maybe `State` like this https://github.com/pytorch/elastic/blob/master/examples/imagenet/main.py#L196 should be defined.\r\n\r\nIt is not clear in the documentation, it will be better to document it.\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @agolynski @SciPioneer @H-Huang @mrzzd @cbalioglu @gcramer23 @brianjo @mruberry", "url": "https://github.com/pytorch/pytorch/issues/63140", "state": "open", "labels": [ "module: docs", "triaged", "module: elastic", "oncall: r2p" ], "created_at": "2021-08-12T08:54:25Z", "updated_at": "2022-06-03T20:47:29Z", "user": "gaocegege" }, { "repo": "pytorch/vision", "number": 4270, "title": "annotation_path parameter in torchvision.datasets.UCF101 is not clear.", "body": "## \ud83d\udcda Documentation\r\n\r\nPlease describe what kind of files should be in annotation_path, and what the files should contain. It is not obvious.\n\ncc @pmeier", "url": "https://github.com/pytorch/vision/issues/4270", "state": "open", "labels": [ "question", "module: datasets", "module: documentation" ], "created_at": "2021-08-12T03:40:59Z", "updated_at": "2021-08-13T16:50:25Z", "user": "damtharvey" }, { "repo": "pytorch/torchx", "number": 130, "title": "components: copy component", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nAdds a basic copy io component that uses fsspec to allow ingressing data or copying from one location to another.\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\n\r\nWe previously had a simple copy component using the old Python style component classes. We deleted that since it didn't use the new style component definitions. \r\n\r\nHaving a copy component is generally useful and we should use it for data ingress in the KFP advanced example.\r\n\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\nio.py\r\n```py\r\ndef copy(from: string, to: string, image: string = \"\") -> specs.AppDef: ...\r\n```\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/130", "state": "closed", "labels": [ "enhancement", "module: components" ], "created_at": "2021-08-11T20:15:42Z", "updated_at": "2021-09-13T18:09:16Z", "comments": 1, "user": "d4l3k" }, { "repo": "pytorch/torchx", "number": 128, "title": "components: tensorboard component", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nIt would be nice to have a tensorboard component that could be used as either a mixin as a new role or standalone. This would make it easy to launch a job and monitor it while it's running.\r\n\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\nAdd a new built in component `tensorboard` to `components/metrics.py`. This would provide a component with the interface:\r\n\r\n```\r\ndef tensorboard(logdir: string, duration: float, image: string = \"<image>\"): \r\n \"\"\"\r\n Args:\r\n duration: number of hours to run the container for\r\n \"\"\"\r\n```\r\n\r\n### Lifetime\r\n\r\nThere's a bit of consideration here on how to manage the lifetime of the tensorboard role. Ideally it would be tied to the other containers but practically we can't support that on most schedulers. Launching it as a standalone component with a fixed duration i.e. 
8 hours is likely going to be the best supported and should be good enough. Tensorboard is quite lightweight so having it run longer than necessary shouldn't be a big deal. \r\n\r\nThere may be better ways of handling this though. Volcano allows for flexible policies and we could allow for containers that get killed on first sucessful (0 exit code) replica.\r\n\r\nIt also could be good to watch a specific file. tensorboard uses a remote path so we could add in a `watch_file` arg with a specific path that the manager can long poll on to detect shutdown. The app would have to know to write out a `foo://bar/done` or `foo://bar/model.pt` that the component can poll on for termination purposes.\r\n\r\n### fsspec\r\n\r\nOne other painpoint is that tensorboard uses it's own filesystem interface that has relatively view implementations. It is extensible but other components use fsspec which could cause confusion for users. \r\n\r\nThere is an issue about this on tensorboard but it's quite new https://github.com/tensorflow/tensorboard/issues/5165 \r\n\r\nWe could write our own fsspec tensorboard adapter if necessary and provide it as part of a custom docker image.\r\n\r\n### Docker images\r\n\r\nThere's not a specific docker image we can use to provide tensorboard right now. It's possible to use `tensorflow/tensorflow` but that doesn't contain boto3 so no s3 support or other file systems. We may want to provide our own cutdown tensorboard container that can be used with the component.\r\n\r\n### Role\r\n\r\nWe also want to provide tensorboard as a role so you can have it run as a companion to the main training job. You can then easily include the tensorboard role as an extra role in your AppDef and use it as is.\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\nCurrently you can launch tensorboard via KFP UI or via the command line. This requires an extra step and in the case of KFP you can only do that after the job has run.\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/128", "state": "closed", "labels": [ "enhancement", "module: components" ], "created_at": "2021-08-11T19:39:28Z", "updated_at": "2021-11-02T17:49:39Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/android-demo-app", "number": 177, "title": "How to compress the model size when use the API: module._save_for_lite_interpreter", "body": "I want to deploy the model on IOS.\r\nWhen I deploy the model to android, the following code can work.\r\n mobile = torch.jit.trace(model, input_tensor)\r\n mobile.save(path)\r\nI get a model which size is 23.4MB\r\n\r\nWhen i deploy the model to IOS, i must use the following API:\r\n from torch.utils.mobile_optimizer import optimize_for_mobile\r\n optimized_scripted_module = optimize_for_mobile(mobile)\r\n optimized_scripted_module._save_for_lite_interpreter(optimized_path)\r\nI get the model which size is 45.7MB\r\n\r\nThe model size of the second method is almost twice as big as the previous one, i know the second approach are doing some optimization on the model, but how can I use the second method to get a model which size is as similar as the first one? 
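For reference, the two export paths I am comparing look roughly like this (a sketch; `model`, `input_tensor` and the file names are placeholders from my own script):\r\n\r\n```python\r\nimport torch\r\nfrom torch.utils.mobile_optimizer import optimize_for_mobile\r\n\r\n# model / input_tensor: the same eval-mode model and example input as above\r\ntraced = torch.jit.trace(model, input_tensor)\r\n\r\n# path 1 (what I use for Android): plain TorchScript save, ~23.4MB for my model\r\ntraced.save('model_android.pt')\r\n\r\n# path 2 (what the iOS deployment requires): optimize + lite interpreter, ~45.7MB for the same model\r\noptimized = optimize_for_mobile(traced)\r\noptimized._save_for_lite_interpreter('model_ios.ptl')\r\n```\r\n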
", "url": "https://github.com/pytorch/android-demo-app/issues/177", "state": "open", "labels": [], "created_at": "2021-08-11T10:17:03Z", "updated_at": "2022-02-11T14:21:17Z", "user": "kunlongsolid" }, { "repo": "pytorch/vision", "number": 4264, "title": "ImportError: cannot import name '_NewEmptyTensorOp' from 'torchvision.ops.misc'", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## ImportError\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Git clone the repository of [SOLQ](https://github.com/megvii-research/SOLQ)\r\n2. Update the dataset you want to use.\r\n3. Update the data paths in the file SOLQ/datasets/coco.py\r\n4. RUn the bash file **configs/r50_solq_train.sh**\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\nIt should now show the error and move further to run the SOL-Q model.\r\n\r\n## Environment\r\n\r\nPyTorch version: 1.9.0+cu102\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.5 LTS (x86_64)\r\nGCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0\r\nClang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)\r\nCMake version: version 3.12.0\r\nLibc version: glibc-2.26\r\n\r\nPython version: 3.7.11 (default, Jul 3 2021, 18:01:19) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic\r\nIs CUDA available: True\r\nCUDA runtime version: 11.0.221\r\nGPU models and configuration: GPU 0: Tesla T4\r\nNvidia driver version: 460.32.03\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.4\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.4\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.19.5\r\n[pip3] torch==1.9.0+cu102\r\n[pip3] torchsummary==1.5.1\r\n[pip3] torchtext==0.10.0\r\n[pip3] torchvision==0.10.0+cu102\r\n[conda] Could not collect\r\n\r\n\r\n## Additional context\r\n\r\nWhen running the script ```!bash configs/r50_solq_train.sh``` it, shows ImportError like shown below :\r\n``` \r\nTraceback (most recent call last):\r\n File \"main.py\", line 22, in <module>\r\n import datasets\r\n File \"/content/SOLQ/datasets/__init__.py\", line 13, in <module>\r\n from .coco import build as build_coco\r\n File \"/content/SOLQ/datasets/coco.py\", line 23, in <module>\r\n from util.misc import get_local_rank, get_local_size\r\n File \"/content/SOLQ/util/misc.py\", line 36, in <module>\r\n from torchvision.ops.misc import _NewEmptyTensorOp\r\n``` ", "url": "https://github.com/pytorch/vision/issues/4264", "state": "closed", "labels": [ "question" ], "created_at": "2021-08-10T05:04:23Z", "updated_at": "2021-08-12T09:19:48Z", "user": "sagnik1511" }, { "repo": "pytorch/torchx", "number": 121, "title": "cli: support fetching logs from all roles", "body": "## Description\r\n<!-- concise description of the feature/enhancement -->\r\n\r\nCurrently you have to specify which role you want to fetch logs when using `torchx log`. 
Ideally you could just specify the job name to fetch all of them.\r\n\r\n```\r\ntorchx log kubernetes://torchx_tristanr/default:sh-hxkkr/sh\r\n```\r\n\r\n## Motivation/Background\r\n<!-- why is this feature/enhancement important? provide background context -->\r\n\r\nThis reduces friction for users with single role jobs when trying to fetch logs. It's very common I forget to add the role and then have to run the command again with the role. There's no technical limitation here and it removes friction for the user.\r\n\r\n\r\n## Detailed Proposal\r\n<!-- provide a detailed proposal -->\r\n\r\nThis would require updating the log CLI to support iterating over all roles and fetching logs from all the replicas. https://github.com/pytorch/torchx/blob/master/torchx/cli/cmd_log.py#L81\r\n\r\nThis doesn't require any changes to the scheduler implementations and is purely a CLI improvement.\r\n\r\n\r\n## Alternatives\r\n<!-- discuss the alternatives considered and their pros/cons -->\r\n\r\nWe could instead change the CLI to automatically select the role when there's only one role in a job. That would improve the UX a fair bit while also preventing tons of log spam for complex jobs.\r\n\r\n## Additional context/links\r\n<!-- link to code, documentation, etc. -->\r\n", "url": "https://github.com/meta-pytorch/torchx/issues/121", "state": "closed", "labels": [ "enhancement" ], "created_at": "2021-08-09T20:22:07Z", "updated_at": "2021-09-23T18:09:56Z", "comments": 0, "user": "d4l3k" }, { "repo": "pytorch/xla", "number": 3076, "title": "What is xm.RateTracker? Why there is no document for this class?", "body": "`xm.RateTracker()` is used in the example script. But I can' find any document for this class(even the doc string does not exist).\r\nWhat is this class?\r\n## \u2753 Questions and Help\r\nhttps://github.com/pytorch/xla/blob/81eecf457af5db09a3131a00864daf1ca5b8ed20/test/test_train_mp_mnist.py#L123", "url": "https://github.com/pytorch/xla/issues/3076", "state": "closed", "labels": [], "created_at": "2021-08-09T15:07:03Z", "updated_at": "2021-08-11T01:54:53Z", "user": "DayuanJiang" }, { "repo": "pytorch/examples", "number": 925, "title": "How many data does fast neural style need ?", "body": "Hi, I am recently implementing fast neural style with your example but I don't have much disk space for coco dataset instead I used my own dataset which contains 1200 images and the result is not good at all (a totally distorted picture, the style is 'starry night').\r\n\r\n![image](https://user-images.githubusercontent.com/47134502/128693050-5d60f199-d309-42f7-8e5f-f238ddf6ab2b.png)\r\n\r\nHere is my setting,\r\n```\r\nimage_size = 224\r\ncontent_weight = 1e5\r\nstyle_weight = 1e10\r\nlr = 1e-3\r\nepoches = 2\r\nbatch_size = 2 (4 will OOM)\r\nstyle_layer = ['1_2','2_2','3_3','4_3']\r\ncontent_layer = ['2_2']\r\n```\r\n\r\nOther questions like \r\n1. why do we need centercrop in transformation, it crops the whole resized picture?\r\n2. why do we mul 255 then div 255 to batch?\r\n\r\nThanks in advance!", "url": "https://github.com/pytorch/examples/issues/925", "state": "closed", "labels": [], "created_at": "2021-08-09T10:27:12Z", "updated_at": "2022-03-09T21:16:55Z", "comments": 1, "user": "gitE0Z9" }, { "repo": "pytorch/TensorRT", "number": 566, "title": "\u2753 [Question] How can i build a Makefile for this example? ", "body": "## \u2753 Question\r\nHi, \r\nI want to run the official [example ](https://github.com/NVIDIA/TRTorch/blob/master/examples/sample_rt_app/main.cpp) with a Makefile. 
But there is always something wrong. So, could you give me the Makefile that successfully links to the .so file?\r\n\r\n## Environment\r\n\r\n - PyTorch Version (1.8):\r\n - OS (Ubuntu18.04):\r\n - How you installed PyTorch (`pip`, `libtorch`)\r\n - Python version: 3.8\r\n - CUDA version: 11.3\r\n - GPU models and configuration: GTX3070\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/566", "state": "closed", "labels": [ "question" ], "created_at": "2021-08-09T09:51:43Z", "updated_at": "2021-08-12T01:15:41Z", "user": "JuncFang-git" }, { "repo": "pytorch/pytorch", "number": 62951, "title": "when call `torch.onnx.export()`, the graph is pruned by default ? how to cancel pruning", "body": "## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\n\r\nFor example:\r\n```python\r\nimport torch\r\n\r\nhidden_dim1 = 10\r\nhidden_dim2 = 5\r\ntagset_size = 2\r\n\r\nclass MyModel(torch.nn.Module):\r\n def __init__(self):\r\n super(MyModel, self).__init__()\r\n self.line1 = torch.nn.Linear(hidden_dim1, hidden_dim2)\r\n self.line2 = torch.nn.Linear(hidden_dim2, tagset_size)\r\n\r\n def forward(self, x, y):\r\n out1 = self.line1(x)\r\n out2 = self.line2(y)\r\n return out1\r\n\r\nX = torch.randn(20, hidden_dim1)\r\nY = torch.randn(hidden_dim1, hidden_dim2)\r\ninputs = (X, Y)\r\n\r\nmodel = MyModel()\r\nf = './model.onnx'\r\ntorch.onnx.export(model, inputs, f,\r\n opset_version=9,\r\n example_outputs=None,\r\n input_names=[\"X\"], output_names=[\"Y\"],verbose=True)\r\n```\r\n\r\n\r\n```bash\r\ngraph(%X : Float(20, 10, strides=[10, 1], requires_grad=0, device=cpu),\r\n %line1.weight : Float(5, 10, strides=[10, 1], requires_grad=1, device=cpu),\r\n %line1.bias : Float(5, strides=[1], requires_grad=1, device=cpu)):\r\n %Y : Float(20, 5, strides=[5, 1], requires_grad=1, device=cpu) = onnx::Gemm[alpha=1., beta=1., transB=1](%X, %line1.weight, %line1.bias) # /root/.conda/envs/torch1.9/lib/python3.6/site-packages/torch/nn/functional.py:1847:0\r\n return (%Y)\r\n```\r\n#### How every, the exported graph doesn't contain `line2` , maybe because the output of MyModel is not depend on `out2 = self.line2(y)` ? I guess the graph is pruned by default.\r\n\r\n**What should I do if I want to not do pruning?**\r\n\r\n## Motivation\r\n\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\n\r\nI want to do something for `self.named_parameters()` in `model.forward()`, eg.\r\n\r\n```python\r\ndef check_parameters():\r\n # do something for parameters by calling \r\n # some ops including OP1, OP2 and so on\r\n return\r\n\r\nclass MyModel(torch.nn.Module):\r\n def __init__(self):\r\n super(MyModel, self).__init__()\r\n self.line = torch.nn.Linear(hidden_dim1, hidden_dim2)\r\n\r\n def forward(self, x, y):\r\n out = self.line1(x)\r\n check_parameters()\r\n return out\r\n```\r\n\r\nHow every, the exported graph doesn't contain `OP1, OP2` , maybe because the output of MyModel is not depend on `check_parameters()` ? I guess the graph is pruned by default.\r\n\r\n## Pitch\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\n## Alternatives\r\n\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context or screenshots about the feature request here. 
-->\r\n\n\ncc @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/62951", "state": "closed", "labels": [ "module: onnx", "triaged", "onnx-needs-info" ], "created_at": "2021-08-08T14:31:20Z", "updated_at": "2021-09-10T08:09:02Z", "user": "liym27" }, { "repo": "pytorch/serve", "number": 1186, "title": "[Question] GPU memory", "body": "Hi! Say I have about 10 models and a single GPU is it possible to load a model object for a specific task at the request time and then completely free up the memory for a different model object? For instance, completely deleting it and then reinitialize it when needed. I know this will increase the response time but the crucial part is the amount of VRAM left for the inference.", "url": "https://github.com/pytorch/serve/issues/1186", "state": "closed", "labels": [ "question", "triaged_wait" ], "created_at": "2021-08-06T17:35:15Z", "updated_at": "2021-08-16T20:56:08Z", "user": "p1x31" }, { "repo": "pytorch/java-demo", "number": 26, "title": "How to compile from command line (using javac instead of gradle)?", "body": "Hi, could you maybe help with the following?\r\n\r\nI want to show a very simple example of running a jitted model, and using gradle seems like quite some overhead ... Is there a way to just use `javac` with a classpath (or some other setup)?\r\n\r\nI've been trying \r\n\r\n```\r\njavac -cp ~/libtorch/lib src/main/java/demo/App.java\r\n```\r\n\r\nbut that does not work: \r\n\r\n```\r\nsrc/main/java/demo/App.java:3: error: cannot find symbol\r\nimport org.pytorch.IValue;\r\n```\r\n\r\n\r\nIn addition, having stumbled over https://www.graphics-muse.org/wp/?p=136, I've tried the hack of putting App.java in a package`org.pytorch`, but this does not work either.\r\n\r\nMany thanks!", "url": "https://github.com/pytorch/java-demo/issues/26", "state": "closed", "labels": [], "created_at": "2021-08-06T12:44:05Z", "updated_at": "2021-11-04T13:15:33Z", "user": "skeydan" }, { "repo": "pytorch/TensorRT", "number": 562, "title": "\u2753 [Question] How can i get libtrtorchrt.so? ", "body": "## \u2753 Question\r\n\r\nThanks for your contribution. \r\nI can't get the \"libtrtorchrt.so\" described in the following document after completing the trtorch. So, how can I get it?\r\n![image](https://user-images.githubusercontent.com/76929740/128478080-d7ba65c1-413e-4072-88bf-972daf826fe8.png)\r\n\r\n\r\n## What you have already tried\r\n\r\n complete the trtorch as the github guide\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version ( 1.8.1):\r\n - OS ( Linux):\r\n - How you installed PyTorch (`pip`, `libtorch`):\r\n - Build command you used (bazel build //:libtrtorch -c opt):\r\n - Python version: 3.8\r\n - CUDA version: 11.3\r\n - GPU models and configuration: GTX3070\r\n", "url": "https://github.com/pytorch/TensorRT/issues/562", "state": "closed", "labels": [ "question" ], "created_at": "2021-08-06T08:14:01Z", "updated_at": "2021-08-06T10:04:25Z", "user": "JuncFang-git" }, { "repo": "pytorch/vision", "number": 4257, "title": "R-CNN predictions change with different batch sizes", "body": "## \ud83d\udc1b Bug\r\n\r\nEven when using `model.eval()` I get different predictions when changing the batch size. 
I've found this issue when working on a project with Faster R-CNN and my own data, but I can replicate it in the tutorial \"TorchVision Object Detection Finetuning Tutorial\" (https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html), which uses Mask R-CNN. \r\n\r\n## To Reproduce\r\n\r\nSteps to replicate the issue:\r\n1. Open collab version: https://colab.research.google.com/github/pytorch/vision/blob/temp-tutorial/tutorials/torchvision_finetuning_instance_segmentation.ipynb\r\n2. Run all cells\r\n3. Insert a new cell at the bottom with the code below and run it:\r\n```\r\ndef get_device():\r\n if torch.cuda.is_available():\r\n return torch.device('cuda') \r\n else:\r\n return torch.device('cpu')\r\n\r\ndef predict(model, image_tensors):\r\n \"\"\"\r\n Generate model's prediction (bounding boxes, scores and labels) for a batch \r\n of image tensors\r\n \"\"\"\r\n model.eval()\r\n with torch.no_grad():\r\n predictions = model([x.to(get_device()) for x in image_tensors])\r\n return predictions\r\n\r\ndef generate_preds(model, batch_size):\r\n \"\"\"\r\n Create dataloader for test dataset with configurable batch size.\r\n Generate predictions. Return a list of predictions per sample.\r\n \"\"\"\r\n dataloader = torch.utils.data.DataLoader(\r\n dataset_test, batch_size=batch_size, shuffle=False, num_workers=2,\r\n collate_fn=utils.collate_fn)\r\n all_pred = []\r\n for batch in dataloader: \r\n image_tensors, targets = batch\r\n predictions = predict(model, image_tensors)\r\n all_pred += predictions\r\n return all_pred\r\n\r\n# Generate two sets of predictions, only change is batch size\r\npreds1 = generate_preds(model, 1)\r\npreds8 = generate_preds(model, 8)\r\nassert len(preds1) == len(preds8)\r\n\r\n# Investigate first five samples:\r\nfor x in range(5):\r\n print(f\"\\nSample {x}:\")\r\n print(\"-Boxes\")\r\n print(preds1[x][\"boxes\"])\r\n print(preds8[x][\"boxes\"])\r\n print(\"-Scores\")\r\n print(preds1[x][\"scores\"])\r\n print(preds8[x][\"scores\"])\r\n print(\"-Labels\")\r\n print(preds1[x][\"labels\"])\r\n print(preds8[x][\"labels\"])\r\n```\r\nThe code above generates two sets of predictions for the test set. The first one is generated with a batch size 1 and the second with a batch size 8. 
The output that I get when I run that cell:\r\n```\r\nSample 0:\r\n-Boxes\r\ntensor([[ 61.2343, 37.6461, 197.8525, 325.6508],\r\n [276.4769, 23.9664, 290.8987, 73.1913]], device='cuda:0')\r\ntensor([[ 59.1616, 36.3829, 201.7858, 331.4406],\r\n [276.4261, 23.7988, 290.8489, 72.8123],\r\n [ 81.2091, 37.6342, 192.8113, 217.8009]], device='cuda:0')\r\n-Scores\r\ntensor([0.9989, 0.5048], device='cuda:0')\r\ntensor([0.9988, 0.6410, 0.1294], device='cuda:0')\r\n-Labels\r\ntensor([1, 1], device='cuda:0')\r\ntensor([1, 1, 1], device='cuda:0')\r\n\r\nSample 1:\r\n-Boxes\r\ntensor([[ 90.7305, 60.1291, 232.4859, 341.7854],\r\n [245.7694, 56.3715, 305.2585, 349.5301],\r\n [243.0723, 16.5198, 360.2888, 351.5983]], device='cuda:0')\r\ntensor([[ 91.1201, 59.8146, 233.0968, 342.2685],\r\n [245.7369, 56.6024, 305.2173, 349.3939],\r\n [241.1119, 32.6983, 362.4162, 346.0358]], device='cuda:0')\r\n-Scores\r\ntensor([0.9976, 0.9119, 0.1945], device='cuda:0')\r\ntensor([0.9975, 0.9128, 0.1207], device='cuda:0')\r\n-Labels\r\ntensor([1, 1, 1], device='cuda:0')\r\ntensor([1, 1, 1], device='cuda:0')\r\n\r\nSample 2:\r\n-Boxes\r\ntensor([[281.1774, 53.5141, 428.7436, 330.3915],\r\n [139.6456, 23.7953, 264.7703, 330.2114]], device='cuda:0')\r\ntensor([[281.7463, 53.2942, 429.3290, 327.9640],\r\n [138.7147, 23.8612, 264.6823, 332.3202]], device='cuda:0')\r\n-Scores\r\ntensor([0.9969, 0.9947], device='cuda:0')\r\ntensor([0.9968, 0.9945], device='cuda:0')\r\n-Labels\r\ntensor([1, 1], device='cuda:0')\r\ntensor([1, 1], device='cuda:0')\r\n\r\nSample 3:\r\n-Boxes\r\ntensor([[175.3683, 34.3320, 289.3029, 306.8307],\r\n [ 76.7871, 15.4444, 187.0855, 299.1662],\r\n [ 0.0000, 45.9045, 51.3796, 222.0583],\r\n [319.1224, 53.0593, 377.1693, 232.7251],\r\n [260.2587, 55.8976, 309.0191, 229.4261],\r\n [ 70.2029, 27.2173, 126.4584, 234.3767],\r\n [ 38.0638, 55.5370, 65.4132, 164.1965],\r\n [ 98.7189, 91.5356, 172.5915, 295.5404],\r\n [ 70.1933, 56.1804, 103.6161, 218.4743]], device='cuda:0')\r\ntensor([[175.1848, 36.0377, 288.8358, 305.3505],\r\n [ 76.8171, 15.7485, 187.4645, 299.5779],\r\n [ 0.0000, 45.9045, 51.3796, 222.0582],\r\n [319.1060, 53.0140, 377.3391, 232.7926],\r\n [260.2587, 55.8976, 309.0191, 229.4261],\r\n [ 70.2030, 27.2173, 126.4584, 234.3767],\r\n [ 38.0638, 55.5370, 65.4132, 164.1965],\r\n [ 70.1933, 56.1804, 103.6161, 218.4743]], device='cuda:0')\r\n-Scores\r\ntensor([0.9968, 0.9959, 0.9942, 0.9937, 0.9271, 0.8133, 0.4273, 0.1163, 0.0884],\r\n device='cuda:0')\r\ntensor([0.9974, 0.9965, 0.9942, 0.9937, 0.9271, 0.8133, 0.4273, 0.0884],\r\n device='cuda:0')\r\n-Labels\r\ntensor([1, 1, 1, 1, 1, 1, 1, 1, 1], devic", "url": "https://github.com/pytorch/vision/issues/4257", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2021-08-06T07:22:41Z", "updated_at": "2021-08-16T12:25:30Z", "user": "alfonsomhc" }, { "repo": "pytorch/TensorRT", "number": 561, "title": "Are FastRCNN models from TorchVision supported in TRTorch?", "body": "## \u2753 Question\r\n\r\nTried FastRCNN and MaskRCNN models from TorchVision. 
The model fails to compile with error \"RuntimeError: tuple appears in op that does not forward tuples, unsupported kind: aten::append\"\r\n\r\n## What you have already tried\r\n\r\ncode to reproduce: \r\nimport torch\r\nprint(torch.__version__)\r\nimport trtorch\r\nprint(trtorch.__version__)\r\nimport torchvision\r\n\r\nfastrcnn = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)\r\nmodel = fastrcnn.eval().to(\"cuda\")\r\nscripted_model = torch.jit.script(model)\r\ncompile_settings = {\r\n \"input_shapes\": [ [3, 300, 400],[3, 300, 400] ],\r\n \"op_precision\": torch.float \r\n}\r\ntrt_model = trtorch.compile(scripted_model, compile_settings)\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.8.1\r\n - CPU Architecture: \r\n - OS (e.g., Linux): Ubuntu\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): N/A\r\n - Are you using local sources or building from archives: N/A\r\n - Python version: python3.7\r\n - CUDA version: 11.1\r\n - GPU models and configuration:\r\n - Any other relevant information: TRTorch - 0.3.0\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/561", "state": "closed", "labels": [ "feature request", "question", "component: lowering", "No Activity", "component: partitioning" ], "created_at": "2021-08-06T01:36:18Z", "updated_at": "2023-07-29T00:02:10Z", "user": "saipj" }, { "repo": "pytorch/tutorials", "number": 1637, "title": "Distributed Data Parallel Tutorial UX improvement suggestion", "body": "referring to the tutorial: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html \r\n\r\nThough the tutorial is broken down into sections, it doesn't show how to actually run the code from each section until the very end of the tutorial.\r\n\r\nThe code as presented in each section only gives function definitions despite the tutorial text carrying on as though the user is supposed to be able to see something from running the code snippet that has been provided. This combination of info presented in a way that seems self contained, with code that seems self contained but isn't was rather confusing. \r\n\r\nto remedy this issue I think it'd make things easier if either \r\nA) the code to run each section is included in that code snippet\r\nB) a notebook is included in the tutorial so users can see how the code is actually run, without having to guess/find that this code is only listed at the bottom of the tutorial\r\nC) mention that a reasonable default set to run the code snippets can be found at the bottom of the tutorial.\r\n\r\nadditional data point: Before I noticed the code at the bottom of the tutorial to invoke the functions, I looked for other resources and came across an external tutorial: https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html which specifically references the pytorch DDP tutorial and raises similar criticism. 
Though some of the flaws pointed out have been fixed, this piece remains\n\ncc @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/1637", "state": "open", "labels": [ "content", "medium", "docathon-h2-2023" ], "created_at": "2021-08-05T01:28:20Z", "updated_at": "2023-11-17T15:30:16Z", "comments": 10, "user": "HDCharles" }, { "repo": "pytorch/pytorch", "number": 62565, "title": "support comparisons between types `c10::optional<T>` and `U` where `T` is comparable to `U`", "body": "## \ud83d\ude80 Feature\r\nSupport comparisons between `c10::optional<T>` and `U` if `T` is comparable to `U`.\r\n\r\n## Motivation\r\n\r\nA very common use-case for this is:\r\n```\r\nc10::optional<std::string> opt = ...;\r\nif (opt == \"blah\") ...\r\n```\r\n\r\nNote that this is supported by `std::optional`. See https://en.cppreference.com/w/cpp/utility/optional/operator_cmp\r\nunder\r\n\r\n> Compare an optional object with a value\r\n\r\n## Pitch\r\n\r\nAdd support and testing for these additional overloads. These are expected to just replace the operators that look like:\r\n```\r\ntemplate <typename T>\r\nbool operator==(std::optional<T> const& opt, T const& val);\r\n```\r\nwith\r\n```\r\ntemplate <typename T, typename U>\r\nbool operator==(std::optional<T> const& opt, U const& val);\r\n```\r\n\r\n## Alternatives\r\n\r\nThis makes the library more expressive. The alternative with existing functionality is `if (opt && *opt == \"blah\") ...`.\r\n\r\n## Additional context\r\n\r\nThis is not expected to be a significant amount of work (< one day).\n\ncc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh", "url": "https://github.com/pytorch/pytorch/issues/62565", "state": "closed", "labels": [ "module: internals", "module: bootcamp", "triaged" ], "created_at": "2021-08-02T14:03:38Z", "updated_at": "2021-08-19T04:41:51Z", "user": "dagitses" }, { "repo": "pytorch/text", "number": 1369, "title": "How to use TorchText with Java", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\nI have a SentencePiece model which I serialized using `sentencepiece_processor`. My end goal is to use this torchscript serialized tokenizer in Java along with DJL Pytorch dependency. I am looking for guidance on how can I import torchtext dependency in Java environment.\r\n\r\nSteps:\r\n**1. Serializing SPM Tokenizer using Torchtext**\r\nTorchscript Serialized file is saved as 'spm-jit.pt'\r\n```\r\nimport torch\r\nfrom torchtext.experimental.transforms import sentencepiece_processor\r\nspm_processor = sentencepiece_processor('foo.model')\r\njit_spm_processor = torch.jit.script(spm_processor)\r\ntorch.jit.save(jit_spm_processor, 'spm-jit.pt')\r\n```\r\n\r\n**2. 
Deserializing SPM Tokenizer in Python**\r\nLoading`spm-jit.pt` without importing torchtext fails with the following error.\r\n\r\n```\r\nimport torch\r\nspm_tokenizer = torch.jit.load('spm-jit.pt') # Fails when torchtext is not imported\r\n```\r\nError\r\n```\r\n/usr/local/lib/python3.6/dist-packages/torch/jit/_serialization.py in load(f, map_location, _extra_files)\r\n 159 cu = torch._C.CompilationUnit()\r\n 160 if isinstance(f, str) or isinstance(f, pathlib.Path):\r\n--> 161 cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)\r\n 162 else:\r\n 163 cpp_module = torch._C.import_ir_module_from_buffer(\r\n\r\nRuntimeError: \r\nUnknown type name '__torch__.torch.classes.torchtext.SentencePiece':\r\nSerialized File \"code/__torch__/torchtext/experimental/transforms.py\", line 6\r\n training : bool\r\n _is_full_backward_hook : Optional[bool]\r\n sp_model : __torch__.torch.classes.torchtext.SentencePiece\r\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n def forward(self: __torch__.torchtext.experimental.transforms.SentencePieceProcessor,\r\n line: str) -> List[int]:\r\n```\r\n\r\nAfter importing torchtext, I am able to load the tokenizer from torchscript file.\r\n```\r\nimport torch\r\nimport torchtext\r\nspm_tokenizer = torch.jit.load('spm-jit.pt') # Succeeds\r\n```\r\n\r\nThis led me to the conclusion that serialized file has dependency on torchtext for it to load successfully in Java/Python/C++ environment.\r\n\r\nAny guidance on how can I use torchtext in Java and/or C++\r\n\r\nThanks!\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/text/issues/1369", "state": "open", "labels": [], "created_at": "2021-07-29T20:41:12Z", "updated_at": "2021-08-05T23:30:05Z", "user": "anjali-chadha" }, { "repo": "pytorch/pytorch", "number": 62332, "title": "How to Fix \u201cAssertionError: CUDA unavailable, invalid device 0 requested\u201d", "body": "## \ud83d\udc1b Bug\r\n\r\nI'm trying to use my GPU to run the YOLOR model, and I keep getting the error that CUDA is unavailable, not sure how to fix.\r\n\r\nI keep getting the error:\r\n```\r\nTraceback (most recent call last):\r\n File \"D:\\yolor\\detect.py\", line 198, in <module>\r\n detect()\r\n File \"D:\\yolor\\detect.py\", line 41, in detect\r\n device = select_device(opt.device)\r\n File \"D:\\yolor\\utils\\torch_utils.py\", line 47, in select_device\r\n assert torch.cuda.is_available(), 'CUDA unavailable, invalid device %s requested' % device # check availablity\r\nAssertionError: CUDA unavailable, invalid device 0 requested\r\n```\r\n\r\nWhen I check CUDA availability with:\r\n```\r\npy\r\n>>import torch\r\n>>print(torch.cuda.is_available())\r\n```\r\n\r\nI get `False`, which explains the problem. 
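As a quick sanity check (not part of the original report), the installed wheel can be inspected to confirm whether it is a CPU-only build:

```python
import torch

print(torch.__version__)          # a CUDA 11.1 wheel reports something like "1.9.0+cu111"
print(torch.version.cuda)         # None means a CPU-only wheel is installed
print(torch.cuda.is_available())  # stays False on a CPU-only wheel even with a working GPU
```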
I tried running the command:\r\n\r\n`py -m pip install torch1.9.0+cu111 torchvision0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html`\r\n\r\nI get the error:` ERROR: Invalid requirement: 'torch1.9.0+cu111'`\r\n\r\nRunning `nvcc --version`, I get:\r\n```\r\nnvcc: NVIDIA (R) Cuda compiler driver\r\nCopyright (c) 2005-2021 NVIDIA Corporation\r\nBuilt on Mon_May__3_19:41:42_Pacific_Daylight_Time_2021\r\nCuda compilation tools, release 11.3, V11.3.109\r\nBuild cuda_11.3.r11.3/compiler.29920130_0\r\n```\r\n\r\nThus, I'm not really sure what the issue is, or how to fix it.\r\n\r\n## Expected behavior\r\n\r\nI expect the program to be able to run, and CUDA to be available.\r\n\r\n## Environment\r\n\r\n - **PyTorch Version (e.g., 1.0):** 1.9.0\r\n - **OS (e.g., Linux):** Windows\r\n - **How you installed PyTorch (`conda`, `pip`, source):** pip\r\n - **Python version:** Python 3.9.4\r\n - **CUDA/cuDNN version:** Cuda compilation tools, release 11.3, V11.3.109\r\n - **GPU models and configuration:** 2070 Super\r\n\r\n\r\nEDIT:\r\nI noticed that I forgot the `==` sign. I ran `py -m pip install --user torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio===0.9.0 -f https://download.pytorch.org/whl/torch_stable.html`, and even after doing so, \r\n```\r\npy\r\n>>import torch\r\n>>print(torch.cuda.is_available())\r\n```\r\nstill gets `False`.\r\n\r\nAdditionally, `torch.version.cuda` gives `None`. Please help!\r\n\r\ncc @ngimel @ezyang @seemethere @malfet @walterddr ", "url": "https://github.com/pytorch/pytorch/issues/62332", "state": "open", "labels": [ "module: binaries", "triaged" ], "created_at": "2021-07-28T15:05:52Z", "updated_at": "2021-07-29T14:36:15Z", "user": "Hana-Ali" }, { "repo": "pytorch/pytorch", "number": 62282, "title": "What is slow-path and fast-path?", "body": "## \u2753 Questions and Help\r\n\r\nI am reading pytorch code base and issue to get a better understanding of the design choices. I keep seeing fast-pathed or fast-passed function. I was wondering what these are? For instance, the issue here (https://github.com/pytorch/pytorch/pull/46469 2).\r\n\r\nThank you in advance!\r\n", "url": "https://github.com/pytorch/pytorch/issues/62282", "state": "closed", "labels": [], "created_at": "2021-07-27T18:50:43Z", "updated_at": "2021-07-27T19:11:20Z", "user": "tarekmak" }, { "repo": "pytorch/pytorch", "number": 62162, "title": "Clarify sparse COO tensor coalesce behavior wrt overflow + how to binarize a sparse tensor", "body": "https://pytorch.org/docs/stable/sparse.html#sparse-uncoalesced-coo-docs does not explain what would be the behavior if passed sparse tensor dtype does not fit the accumulated values (e.g. it is torch.bool). Will it do the saturation properly? Or will it overflow during coalescing?\r\n\r\nBasically, I would like to binarize a sparse tensor, e.g. to clamp all nonzero values by 1. How can I do that? 
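For illustration only (a sketch with made-up data, not from the original report), one way to binarize a COO tensor without relying on comparison or clamp support is to coalesce it and rebuild it with unit values:

```python
import torch

indices = torch.tensor([[0, 1, 1], [2, 0, 2]])
values = torch.tensor([3.0, 4.0, 5.0])
S = torch.sparse_coo_tensor(indices, values, (2, 3)).coalesce()

# keep the sparsity pattern, replace every stored value with 1
S_bin = torch.sparse_coo_tensor(S.indices(), torch.ones_like(S.values()), S.shape)
print(S_bin.to_dense())
```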
I've tried `S > 0`, `S.clamp(max = 1)`, `S.to(torch.bool)`.\r\n\r\nOnly the latter seems to work.\n\ncc @brianjo @mruberry @nikitaved @pearu @cpuhrsch @IvanYashchuk", "url": "https://github.com/pytorch/pytorch/issues/62162", "state": "open", "labels": [ "module: sparse", "module: docs", "triaged" ], "created_at": "2021-07-25T12:37:49Z", "updated_at": "2021-08-23T14:53:45Z", "user": "vadimkantorov" }, { "repo": "pytorch/pytorch", "number": 61836, "title": "How to support multi-arch in built Docker", "body": "Hello?\r\nI'm using pip3 to install and use PyTorch by writing my own Dockerfile.\r\nArch error when trying to use an image built on V100 on RTX3090.\r\nI want to build an image that supports multiple Arches, such as V100, A100, RTX3090, and use it.\r\nAny good way?\n\ncc @malfet @seemethere @walterddr", "url": "https://github.com/pytorch/pytorch/issues/61836", "state": "closed", "labels": [ "module: build", "triaged", "module: docker" ], "created_at": "2021-07-19T11:15:03Z", "updated_at": "2021-07-19T21:39:37Z", "user": "DonggeunYu" }, { "repo": "pytorch/pytorch", "number": 61831, "title": "How data transfer from disk to GPU?", "body": "## \u2753 Questions and Help\r\n\r\nI have learned that we can use.to(device) to.CUDA () to transfer data to the GPU. I want to know how this process is implemented in the bottom layer.\r\n\r\nThanks, hundan.\r\n", "url": "https://github.com/pytorch/pytorch/issues/61831", "state": "closed", "labels": [], "created_at": "2021-07-19T08:32:32Z", "updated_at": "2021-07-19T21:21:53Z", "user": "pyhundan" }, { "repo": "pytorch/pytorch", "number": 61765, "title": "How to save tensors on mobile (lite interpreter)?", "body": "## Issue description\r\n\r\nBased on the discussion in https://github.com/pytorch/pytorch/pull/30108 it's clear that `pickle_save` is not supported on mobile, because `/csrc/jit/serialization/export.cpp` is not included when building for lite interpreter; producing the following runtime error:\r\n\r\n```c++\r\n#else\r\n AT_ERROR(\r\n \"pickle_save not supported on mobile \"\r\n \"(see https://github.com/pytorch/pytorch/pull/30108)\");\r\n#endif\r\n```\r\n \r\nFor loading there's an option of using `torch::jit::_load_for_mobile`. \r\n\r\n**However, are there any methods, or alternative approaches, of serialising and saving `c10::IValue` objects on the mobile device?**\r\n\r\n---\r\n\r\nMobile code compiled with libraries at `master: 5c1505076bfa764088e2ccef19d7f18336084530`", "url": "https://github.com/pytorch/pytorch/issues/61765", "state": "open", "labels": [ "oncall: mobile" ], "created_at": "2021-07-16T09:43:03Z", "updated_at": "2021-07-21T10:06:17Z", "user": "lytcherino" }, { "repo": "pytorch/TensorRT", "number": 539, "title": "\u2753 [Question] Unknown output type. Only a single tensor or a TensorList type is supported", "body": "## \u2753 Question\r\n\r\nTRTorch Throw \"Unknown output type. 
Only a single tensor or a TensorList type is supported\"\r\n\r\n## What you have already tried\r\n\r\nI define a model \r\n```python\r\nimport os\r\nimport time\r\nimport torch\r\n\r\nimport torchvision\r\n\r\n\r\nclass Sparse(torch.nn.Module):\r\n\r\n def __init__(self, embedding_size):\r\n super().__init__()\r\n self._embedding_size = embedding_size\r\n self._output = torch.zeros((4, self._embedding_size))\r\n\r\n def forward(self, x):\r\n return self._output\r\n\r\n\r\nclass Model(torch.nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.sparse = Sparse(100)\r\n self.linear = torch.nn.Linear(100, 200)\r\n\r\n def forward(self, x):\r\n y = self.sparse(x)\r\n return self.linear(y)\r\n\r\n\r\nif __name__ == '__main__':\r\n model = torch.jit.script(Model())\r\n model.eval()\r\n model.save(\"data/ali.pt\")\r\n\r\n```\r\nsave it to \"data/ali.pt\"\uff0cand convert it with trtorch\uff08input_shapes is need but not used by model)\r\n\r\n\r\n```python\r\nimport torch\r\nimport trtorch\r\nimport trtorch.logging\r\nimport sys\r\n\r\n\r\ndef main(model_path):\r\n trtorch.logging.set_reportable_log_level(trtorch.logging.Level.Debug)\r\n scripted_model = torch.jit.load(model_path).eval().cuda()\r\n compile_settings = {\r\n \"input_shapes\": [\r\n {\r\n \"min\": [1, 3, 224, 224, 1024],\r\n \"opt\": [1, 3, 512, 512, 2048],\r\n \"max\": [1, 3, 1024, 1024, 4096]\r\n }],\r\n \"op_precision\": torch.float32\r\n }\r\n #print(\"check_method_op_support {}\".format(trtorch.check_method_op_support(scripted_model, \"torch.gelu\")))\r\n print(\"Model {} With Graph {}\".format(model_path, scripted_model.graph))\r\n trt_ts_module = trtorch.compile(scripted_model, compile_settings)\r\n torch.jit.save(trt_ts_module, '{}.jit'.format(model_path))\r\n print(\"Generated Torchscript-TRT models.\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n if len(sys.argv) == 0:\r\n main(\"data/with_dense.pt\")\r\n else:\r\n main(sys.argv[1])\r\n\r\n```\r\n\r\nI also try with c++ api to convert, but got same error\r\n\r\n```cpp\r\nint main(int argc, const char *argv[]) {\r\n if (argc < 2) {\r\n std::cerr << \"usage: samplertapp <path-to-pre-built-trt-ts module>\\n\";\r\n return -1;\r\n }\r\n\r\n std::string trt_ts_module_path = argv[1];\r\n std::string mode = argv[2];\r\n //trtorch::logging::set_reportable_log_level(trtorch::logging::Level::kINFO);\r\n trtorch::logging::set_reportable_log_level(trtorch::logging::Level::kDEBUG);\r\n torch::jit::Module trt_ts_mod;\r\n try {\r\n // Deserialize the ScriptModule from a file using torch::jit::load().\r\n trt_ts_mod = torch::jit::load(trt_ts_module_path);\r\n } catch (const c10::Error &e) {\r\n std::cerr << \"error loading the model from : \" << trt_ts_module_path\r\n << std::endl;\r\n return -1;\r\n }\r\n if (1) {\r\n trt_ts_mod.to(at::kCUDA);\r\n trt_ts_mod.eval();\r\n\r\n auto in = torch::randn({1, 1, 32, 32}, {at::kCUDA}).to(torch::kHalf);\r\n auto input_sizes =\r\n std::vector<trtorch::CompileSpec::InputRange>({in.sizes()});\r\n trtorch::CompileSpec cspec(input_sizes);\r\n cspec.op_precision = torch::kHalf;\r\n auto trt_mod = trtorch::CompileGraph(trt_ts_mod, cspec);\r\n auto out = trt_mod.forward({in});\r\n std::cout << \"==================TRT outputs================\" << std::endl;\r\n std::cout << out << std::endl;\r\n std::cout << \"=============================================\" << std::endl;\r\n }\r\n return 0;\r\n}\r\n```\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version 
1.8.1\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (pip)\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.6.13\r\n - CUDA version: 11.1\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n## Additional context\r\n\r\nRunning TRT engine\r\nDEBUG: [TRTorch] - TRTorch Version: 0.3.0\r\nUsing TensorRT Version: 7.2.3.4\r\nPyTorch built with:\r\n - GCC 5.4\r\n - C++ Version: 201402\r\n - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications\r\n - Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)\r\n - OpenMP 201307 (a.k.a. OpenMP 4.0)\r\n - NNPACK is enabled\r\n - CPU capability usage: AVX2\r\n - CUDA Runtime 11.1\r\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\r\n - CuDNN 8.0.5\r\n - Magma 2.5.2\r\n", "url": "https://github.com/pytorch/TensorRT/issues/539", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-07-16T06:35:29Z", "updated_at": "2021-10-29T00:01:39Z", "user": "westfly" }, { "repo": "pytorch/vision", "number": 4180, "title": "[Detectron2] RuntimeError: No such operator torchvision::nms and RecursionError: maximum recursion depth exceeded", "body": "## \ud83d\udc1b Bug\r\n\r\nRunning Detectron2 demo.py creates `RuntimeError: No such operator torchvision::nms` error. \r\nSo far it's the same as #1405 but it gets worse. Creates a Max Recursion Depth error.\r\n\r\nThe primary issue is resolve with a simple naming change (below, thanks to @feiyuhuahuo). However, this creates the `RecursionError: maximum recursion depth exceeded in comparison` issue referenced by @vasyllyashkevych. \r\n\r\nThis fix to `torchvision::nms` creates `RecursionError`\r\n```\r\n# edit file: `local/lib/python3.6/dist-packages/torchvision-0.7.0a0+78ed10c-py3.6-linux-aarch64.egg/torchvision/ops/boxes.py`\r\n\r\n# OLD (bad): \r\ntorch.ops.torchvision.nms(boxes, scores, iou_thres)\r\n\r\n# NEW (better):\r\nimport torchvision # top of file\r\ntorchvision.ops.nms(boxes, scores, iou_thres)\r\n```\r\n## This fix creates the RecursionError: maximum recursion depth exceeded\r\n```\r\n File \"/usr/local/lib/python3.6/dist-packages/torchvision-0.7.0a0+78ed10c-py3.6-linux-aarch64.egg/torchvision/ops/boxes.py\", line 43, in nms\r\n return torchvision.ops.nms(boxes, scores, iou_threshold)\r\n [Previous line repeated 970 more times]\r\nRecursionError: maximum recursion depth exceeded\r\n```\r\nFull stack trace below \ud83d\udc47\ud83d\udc47!\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Build Detectron2 from source\r\n```\r\nsudo python3 -m pip install 'git+https://github.com/facebookresearch/detectron2.git'\r\n```\r\n2. Clone Detectron2 repo \r\n3. 
Run Demo (from the docs https://detectron2.readthedocs.io/en/latest/tutorials/getting_started.html)\r\n```\r\n$ sudo python3 demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \\\r\n --input kasDemo.png \\\r\n --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl\r\n```\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Environment\r\n\r\n\u2757Note Pytorch was installed via [PyTorch for Jetson](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-9-0-now-available/72048)\r\n\r\nSimple system info:\r\n```\r\nHost machine: Nvidia Jetson Xaiver (arm architecture, not x64)\r\nPython: python3.6\r\nDetectron2: installed from source on Github (July 14, 2021)\r\ntorch version: 1.8.0\r\ntorchvision version: 0.7.0 (a0)\r\nCuda version: 10.2\r\n```\r\n\r\nEnv collection script:\r\n```\r\nPyTorch version: 1.8.0\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.5 LTS (aarch64)\r\nGCC version: (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0\r\nClang version: Could not collect\r\nCMake version: version 3.10.2\r\nLibc version: glibc-2.25\r\n\r\nPython version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0] (64-bit runtime)\r\nPython platform: Linux-4.9.201-tegra-aarch64-with-Ubuntu-18.04-bionic\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/aarch64-linux-gnu/libcudnn.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.0.0\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.19.5\r\n[pip3] torch==1.8.0\r\n[pip3] torchvision==0.7.0a0+78ed10c\r\n[conda] Could not collect\r\n```\r\n\r\nFull stack trace:\r\n```\r\n$ sudo python3 demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \\\r\n --video-input IMG_3578.MOV \\\r\n --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl\r\n \r\n[07/14 17:23:49 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml', input=None, opts=['MODEL.WEIGHTS', 'detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl'], output=None, video_input='IMG_3578.MOV', webcam=False)\r\n[07/14 17:24:00 fvcore.common.checkpoint]: [Checkpointer] Loading from detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl ...\r\n[07/14 17:24:01 fvcore.common.checkpoint]: Reading a file from 'Detectron2 Model Zoo'\r\nWARNING [07/14 17:24:01 fvcore.common.checkpoint]: The checkpoint state_dict contains keys that are not used by the model:\r\n proposal_generator.anchor_generator.cell_anchors.{0, 1, 2, 3, 4}\r\n 0%| | 0/221 [00:04<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"demo.py\", line 176, in <module>\r\n for vis_frame in tqdm.tqdm(demo.run_on_video(video), 
total=num_frames):\r\n File \"/usr/local/lib/python3.6/dist-packages/tqdm/std.py\", line 1185, in __iter__\r\n for ", "url": "https://github.com/pytorch/vision/issues/4180", "state": "closed", "labels": [ "question", "module: ops" ], "created_at": "2021-07-14T23:51:28Z", "updated_at": "2021-08-12T11:16:20Z", "user": "KastanDay" }, { "repo": "pytorch/pytorch", "number": 61526, "title": "how to put ```trainer.fit()``` in for loop?", "body": "I am trying to create multiple model using loop as below.\r\n\r\n```\r\nfor client in clients:\r\n t.manual_seed(10)\r\n client['model'] = LinearNN(learning_rate = args.lr, i_s = args.input_size, h1_s = args.hidden1, h2_s = args.hidden2, n_c = args.output, client=client)\r\n client['optim'] = optim.Adam(client['model'].parameters(), lr= args.lr)\r\n```\r\n\r\nHowever, ```trainer.fit()``` is an async method. To train multiple models, I need to put ```trainer.fit()``` in a loop as follows \r\n\r\n```\r\nfor client in clients:\r\n trainer = pl.Trainer(\r\n max_epochs=args.epochs+1,\r\n progress_bar_refresh_rate=20,\r\n )\r\n trainer.fit(client['model'])\r\n```\r\n\r\nAs this is an async method, it gives an error \r\n\r\n> AttributeError: can't set attribute\r\n\r\nas it doesn't wait for finishing ```trainer.fit()```.\r\n\r\nIs there any way to do that? \r\n\r\nThanks in advance.", "url": "https://github.com/pytorch/pytorch/issues/61526", "state": "closed", "labels": [], "created_at": "2021-07-12T11:29:16Z", "updated_at": "2021-07-12T12:32:18Z", "user": "anik123" }, { "repo": "pytorch/TensorRT", "number": 527, "title": "\u2753 [Question] TRTorch and Pytorch Serve ", "body": "## \u2753 Question\r\n\r\nIs it possbile to use TRTorch with TorchServe?\r\nIf not what is the best way to deploy TRTorch programs?\r\n\r\n## What you have already tried\r\n\r\nIn the documentation, it is said I can continue to use programs via PyTorch API\r\nI have converted all my models to TRTorch.\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/527", "state": "closed", "labels": [ "question" ], "created_at": "2021-07-11T21:12:02Z", "updated_at": "2021-07-14T18:32:38Z", "user": "p1x31" }, { "repo": "pytorch/pytorch", "number": 61510, "title": "What is find_package(Torch REQUIRED) doing that a manual include/glob doesnt?", "body": "In my project, if i do \r\n```\r\nset(CMAKE_PREFIX_PATH \"...lib/python3.6/site-packages/torch/share/cmake/Torch\")\r\nfind_package(Torch REQUIRED)\r\nadd_library(lib SHARED \"lib.hpp\" \"lib.cpp\")\r\ntarget_link_libraries( lib ${TORCH_LIBRARIES})\r\n```\r\nIt all links and works great!\r\n\r\nBut, if I do the following manually, \r\n\r\n```\r\nfile(GLOB TORCH_LIBRARIES \".../lib/python3.6/site-packages/torch/lib/*.so\")\r\ninclude_directories(\".../python3.6/site-packages/torch/include/torch/csrc/api/include/\"\r\n\"...lib/python3.6/site-packages/torch/include\")\r\nadd_library(lib SHARED \"lib.hpp\" \"lib.cpp\")\r\ntarget_link_libraries( lib ${TORCH_LIBRARIES})\r\n```\r\n\r\nIt fails when i try to load the library with:\r\n```\r\nliblib.so: undefined reference to `c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)\r\n```\r\n\r\nWhat is find_package(Torch) adding that i am missing?\r\n\r\nEDIT:\r\n\r\nSo it appears that despite include torchlib.so on my own (which that glob command does), the linker doesnt include `libtorch.so`\n\ncc @malfet @seemethere @walterddr", "url": "https://github.com/pytorch/pytorch/issues/61510", "state": "open", "labels": [ "module: build", "triaged" ], "created_at": "2021-07-10T18:35:09Z", "updated_at": "2021-07-12T15:39:08Z", "user": "yaadhavraajagility" }, { "repo": "pytorch/tutorials", "number": 1605, "title": "Need to update the tutorial description in index.rst", "body": "In [index.rst](https://github.com/pytorch/tutorials/blame/master/index.rst#L165), the description of `Text Classification with Torchtext` is duplicated with [the previous tutorial](https://github.com/pytorch/tutorials/blame/master/index.rst#L158) but not correctly explain the tutorial.\r\n\r\nSo we need to update this description as follows:\r\n\r\n* As-is: This is the third and final tutorial on doing \u201cNLP From Scratch\u201d, where we write our own classes and functions to preprocess the data to do our NLP modeling tasks.\r\n* To-be: Learn how to build the dataset and classify text using torchtext library.\r\n\r\n", "url": "https://github.com/pytorch/tutorials/issues/1605", "state": "closed", "labels": [], "created_at": "2021-07-10T13:00:03Z", "updated_at": "2021-07-27T21:33:05Z", "comments": 0, "user": "9bow" }, { "repo": "pytorch/serve", "number": 1153, "title": "What is the post route link to upload the kitten.jpg? 
instead of the \"-T\"", "body": "## \ud83d\udcda Documentation\r\n\r\n\r\nFor example, if I use something like postman, how should I upload the image?\r\nThank you.\r\n\r\n![image](https://user-images.githubusercontent.com/21982975/124931000-a7f7aa00-dfb6-11eb-8e7e-22501874981f.png)\r\nAll these ways are failed.", "url": "https://github.com/pytorch/serve/issues/1153", "state": "closed", "labels": [], "created_at": "2021-07-08T13:00:48Z", "updated_at": "2021-07-08T13:56:39Z", "user": "AliceSum" }, { "repo": "pytorch/TensorRT", "number": 526, "title": "\u2753 [Question] Lowering pass for PyTorch Linear", "body": "## \u2753 Question\r\n\r\nHi, I saw that one of the lowering pass TRTorch has is lowering linear to mm + add. I'm wondering what the reason behind this is. Does TensorRT provide better performance with matmul layer + elementwise sum layer than fully connected layer? Or breaking it down help the fusion process in TensorRT?\r\n", "url": "https://github.com/pytorch/TensorRT/issues/526", "state": "closed", "labels": [ "question" ], "created_at": "2021-07-08T05:45:28Z", "updated_at": "2021-07-12T16:14:09Z", "user": "842974287" }, { "repo": "pytorch/serve", "number": 1152, "title": "How to access API outside of localhost?", "body": "Hi! \r\n\r\nI have Wireguard in my machine and have a few other devices connected with it.\r\nLet's say my Wireguard IP is 10.0.0.1, then in the `config.properties` file, I change `inference_address=http://10.0.0.1:8080`. \r\nI'm able to use the API locally but unable to do so outside of the device (I keep getting a timeout error). \r\n\r\n**What I've tried so far:** \r\nI have also tried changing `inference_address` to `0.0.0.0:8080`, but that didn't help either. \r\nEven running it on a different port like `0.0.0.0:5000` didn't help.\r\nIf I use a tunnel (like ngrok) and expose port 8080, it works perfectly. \r\n\r\nIf I have another application running on a separate port, that is accessible by my other device. \r\n\r\nCan someone help out? \r\n\r\nThanks!\r\n", "url": "https://github.com/pytorch/serve/issues/1152", "state": "open", "labels": [ "help wanted", "support" ], "created_at": "2021-07-07T07:20:50Z", "updated_at": "2023-03-17T09:44:42Z", "user": "kkmehta03" }, { "repo": "pytorch/TensorRT", "number": 523, "title": "\u2753 [Question] My aten::chunk op converter does not work correctly", "body": "## \u2753 Question\r\n\r\nI want to use trtorch to compile shufflenet. However, aten::chunk op is not currently supported. I wrote a converter implementation referring to `converters/impl/select.cpp`. Unfortunately, it does not work.\r\n\r\nIs there anything wrong? 
\r\n\r\nThe python code\r\n```python \r\nmodel = torchvision.models.shufflenet_v2_x1_0(pretrained=True).cuda().eval()\r\ninput_data = torch.randn(1, 3, 224, 224).cuda()\r\nscripted_model = torch.jit.script(model)\r\nout = scripted_model(input_data)\r\n\r\ncompile_settings = {\r\n \"input_shapes\": [(1, 3, 224, 224)],\r\n}\r\n\r\ntrt_ts_module = trtorch.compile(scripted_model, compile_settings)\r\n```\r\n\r\nThe converter I wrote (compared with the converter of `aten::select`, I only changed `numOuputs` and `sizes`)\r\n```cpp\r\nauto chunk_registrations TRTORCH_UNUSED =\r\n RegisterNodeConversionPatterns()\r\n .pattern({\"aten::chunk(Tensor(a) self, int chunks, int dim=0) -> (Tensor[])\",\r\n [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {\r\n auto in = args[0].ITensor();\r\n auto numOutputs = args[1].unwrapToInt();\r\n auto axis = args[2].unwrapToInt();\r\n auto inDimSize = in->getDimensions().d[axis];\r\n LOG_DEBUG(\"Number of chunk outputs: \" << numOutputs);\r\n std::vector<int64_t> sizes;\r\n\r\n if (inDimSize % numOutputs == 0) {\r\n for (int64_t i = 0; i < numOutputs; i++) {\r\n sizes.push_back(inDimSize / numOutputs);\r\n }\r\n }\r\n else {\r\n for (int64_t i = 0; i < numOutputs - 1; i++) {\r\n sizes.push_back(inDimSize / numOutputs + 1);\r\n }\r\n sizes.push_back(inDimSize - (inDimSize / numOutputs + 1) * (numOutputs - 1));\r\n }\r\n\r\n c10::ListTypePtr lt = n->output()->type()->expect<c10::ListType>();\r\n c10::TypePtr elementType = lt->getElementType();\r\n auto list = c10::impl::GenericList(elementType);\r\n list.reserve(numOutputs);\r\n\r\n int start_idx = 0;\r\n for (int64_t i = 0; i < numOutputs; i++) {\r\n at::Tensor indices = torch::arange(start_idx, start_idx + sizes[i], 1).to(torch::kI32);\r\n auto indicesTensor = tensor_to_const(ctx, indices);\r\n\r\n auto gather_layer = ctx->net->addGather(*in, *indicesTensor, axis);\r\n auto gather_out = gather_layer->getOutput(0);\r\n\r\n auto tensor_holder = TensorContainer();\r\n tensor_holder.hold_tensor(gather_out);\r\n auto ival = c10::IValue(std::move(c10::make_intrusive<TensorContainer>(tensor_holder)));\r\n list.emplace_back(ival);\r\n\r\n start_idx = start_idx + sizes[i];\r\n }\r\n\r\n auto chunk_output_ivalue = std::move(torch::jit::IValue(list));\r\n ctx->AssociateValueAndIValue(n->outputs()[0], chunk_output_ivalue);\r\n\r\n LOG_DEBUG(\"Converted chunk op into a list of IValues\");\r\n return true;\r\n }});\r\n```\r\nDEBUG log\r\n```\r\nINFO: [TRTorch Conversion Context] - Adding Layer %78 : Tensor[] = aten::chunk(%input.152, %self.stage2.0.stride, %self.stage2.1.stride) # /opt/conda/lib/python3.8/site-packages/torchvision/models/shufflenetv2.py:89:21 (ctx.AddLayer)\r\nDEBUG: [TRTorch Conversion Context] - Node input is an already converted tensor\r\nDEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value\r\nDEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value\r\nDEBUG: [TRTorch] - Number of chunk outputs: 2\r\nDEBUG: [TRTorch] - Weights: [58]\r\n Number of input maps: 58\r\n Number of output maps: 58\r\n Element shape: [1]\r\nDEBUG: [TRTorch Conversion Context] - Freezing tensor 0x7fbb1c37c4e0 as an IConstantLayer\r\nDEBUG: [TRTorch] - Weights: [58]\r\n Number of input maps: 58\r\n Number of output maps: 58\r\n Element shape: [1]\r\nDEBUG: [TRTorch Conversion Context] - Freezing tensor 0x7fbb1be22d30 as an IConstantLayer\r\nDEBUG: [TRTorch] - Converted chunk op into a list of IValues\r\nDEBUG: [TRTorch Conversion Context] 
- Evaluating %x1.9 : Tensor, %x2.9 : Tensor = prim::ListUnpack(%78)\r\nDEBUG: [TRTorch Conversion Context] - Found the evaluated value(s) to be True for node: %x1.9 : Tensor, %x2.9 : Tensor = prim::ListUnpack(%78)\r\nDEBUG: [TRTorch Conversion Context] - Found the evaluated value(s) to be True for node: %x1.9 : Tensor, %x2.9 : Tensor = prim::ListUnpack(%78)\r\nDEBUG: [TRTorch Conversion Context] - Evaluating %81 : Float(58, 58, 1, 1, strides=[58, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=<Tensor>]()\r\nDEBUG: [TRTo", "url": "https://github.com/pytorch/TensorRT/issues/523", "state": "closed", "labels": [ "question" ], "created_at": "2021-07-05T08:50:04Z", "updated_at": "2021-07-07T09:30:47Z", "user": "letian-jiang" }, { "repo": "pytorch/pytorch", "number": 61220, "title": "How to submit a DDP job on the PBS/SLURM using multiple nodes", "body": "Hi everyone, I am trying to train using DistributedDataParallel. Thanks to the great work of the team at PyTorch, a very high efficiency has been achieved. Everything is fine when a model is trained on a single node. However, when I try to use multiple nodes in one job script, all the processes will be on the host node and the slave node will not have any processes running on it. Here is my script for the PBS workload manager:\r\n```\r\n#!/bin/sh\r\n#PBS -V\r\n#PBS -q gpu\r\n#PBS -N test_1e4_T=1\r\n#PBS -l nodes=2:ppn=2\r\nsource /share/home/bjiangch/group-zyl/.bash_profile\r\nconda activate Pytorch-181\r\ncd $PBS_O_WORKDIR\r\n\r\npath=\"/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/\"\r\n\r\n#Number of processes per node to launch\r\nNPROC_PER_NODE=2\r\n\r\n#Number of process in all modes\r\nWORLD_SIZE=`expr $PBS_NUM_NODES \\* $NPROC_PER_NODE`\r\n\r\nMASTER=`/bin/hostname -s`\r\ncat $PBS_NODEFILE>nodelist\r\n#Make sure this node (MASTER) comes first\r\nSLAVES=`cat nodelist | grep -v $MASTER | uniq`\r\n\r\n#We want names of master and slave nodes\r\nHOSTLIST=\"$MASTER $SLAVES\"\r\n\r\n\r\n#The path you place your code\r\n#This command to run your pytorch script\r\n#You will want to replace this\r\nCOMMAND=\"$path --world_size=$WORLD_SIZE\"\r\n\r\n\r\n#Get a random unused port on this host(MASTER)\r\n#First line gets list of unused ports\r\n#3rd line gets single random port from the list\r\nMPORT=`ss -tan | awk '{print $5}' | cut -d':' -f2 | \\\r\n grep \"[2-9][0-9]\\{3,3\\}\" | sort | uniq | shuf -n 1`\r\n\r\n\r\n#Launch the pytorch processes, first on master (first in $HOSTLIST) then on the slaves\r\nRANK=0\r\nfor node in $HOSTLIST; do\r\n ssh -q $node\r\n python3 -m torch.distributed.launch \\\r\n --nproc_per_node=$NPROC_PER_NODE \\\r\n --nnodes=$PBS_NUM_NODES \\\r\n --node_rank=$RANK \\\r\n --master_addr=\"$MASTER\" --master_port=\"$MPORT\" \\\r\n $COMMAND &\r\n RANK=$((RANK+1))\r\ndone\r\nwait\r\n```\r\nIt is modified according to the [here](https://www.glue.umd.edu/hpcc/help/software/pytorch.html). \r\nI want to submit a 4 process work ( 2 nodes and 2 process each node). \r\nFor validation, I manually ssh to each node from the login node and execute the \r\nssh gpu1\r\npython3 -m torch.distributed.launch --nnodes=2 --node_rank=0\r\nssh gpu2\r\npython3 -m torch.distributed.launch --nnodes=2 --node_rank=1\r\n\r\nIt will work and has a pretty good parallel efficiency. The same problem will occur on another cluster with a slurm workload manager. 
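For comparison, a rough SLURM equivalent of the launch loop might look like the sketch below; node counts, the port, and `$path` are placeholders mirroring the PBS script above, and this is not a verified job script:

```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:2

# first hostname in the allocation acts as the rendezvous master
MASTER=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
MPORT=29500  # placeholder port

# one launcher per node; each launcher starts 2 local processes
srun bash -c "python3 -m torch.distributed.launch \
    --nproc_per_node=2 --nnodes=2 --node_rank=\$SLURM_NODEID \
    --master_addr=$MASTER --master_port=$MPORT \
    $path --world_size=4"
```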
I don't see any difference between the two and lead to the totally different results.\r\nAnd the final error \r\n```\r\nTraceback (most recent call last):\r\n File \"/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\nTraceback (most recent call last):\r\n File \"/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\nTraceback (most recent call last):\r\n File \"/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\nTraceback (most recent call last):\r\n File \"/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,return _run_code(code, main_globals, None,\r\n\r\n File \"/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py\", line 87, in _run_code\r\n File \"/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py\", line 87, in _run_code\r\n return _run_code(code, main_globals, None,\r\n File \"/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py\", line 87, in _run_code\r\n return _run_code(code, main_globals, None,\r\n File \"/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/__main__.py\", line 1, in <module>\r\n exec(code, run_globals)\r\n File \"/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/__main__.py\", line 1, in <module>\r\n exec(code, run_globals)\r\n File \"/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/__main__.py\", line 1, in <module>\r\n exec(code, run_globals)\r\n File \"/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/__main__.py\", line 1, in <module>\r\n import run.train\r\n File \"/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/run/train.py\", line 70, in <module>\r\n import run.train\r\n File \"/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/run/train.py\", line 70, in <module>\r\n import run.train\r\n File \"/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/run/train.py\", line 70, in <module>\r\n import run.train\r\n File \"/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/run/train.py\", line 70, in <module>\r\n Prop_class = DDP(Prop_class, device_ids=[local_rank], o", "url": "https://github.com/pytorch/pytorch/issues/61220", "state": "closed", "labels": [], "created_at": "2021-07-04T05:52:54Z", "updated_at": "2021-07-12T20:11:16Z", "user": "zhangylch" }, { "repo": "pytorch/functorch", "number": 67, "title": "How to perform jvps and not vjps?", "body": "Hi! Thanks for the working prototype, it would be a great addition to pytorch!\r\n\r\nI'm currently using pytorch for research purposes, and I would like to implicitly compute jacobian-vector products (i.e. where the given vectors should multiply the \"inputs\" and not the \"outputs\" of the transformation).\r\n\r\nIs there a `jvp` function? 
Is there a workaround?", "url": "https://github.com/pytorch/functorch/issues/67", "state": "closed", "labels": [], "created_at": "2021-07-03T07:34:59Z", "updated_at": "2022-12-08T20:04:56Z", "user": "trenta3" }, { "repo": "pytorch/pytorch", "number": 61128, "title": "How to get in touch about a security issue?", "body": "Hey there,\r\n\r\nAs there isn't a `SECURITY.md` with an email on your repository, I am unsure how to contact you regarding a potential security issue.\r\n\r\nWould you kindly add a `SECURITY.md` file with an e-mail to your repository? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this as the best way to ensure security issues are responsibly disclosed, and it would massively help security researchers get in touch next time.\r\n\r\nThank you so much and I look forward to hearing from you!", "url": "https://github.com/pytorch/pytorch/issues/61128", "state": "closed", "labels": [], "created_at": "2021-07-01T16:07:47Z", "updated_at": "2021-07-12T17:13:59Z", "user": "zidingz" }, { "repo": "pytorch/vision", "number": 4147, "title": "Nan Loss while using resnet_fpn(not pretrained) backbone with FasterRCNN", "body": "## \ud83d\udc1b Bug\r\nI am using this model\r\n```\r\nfrom torchvision.models.detection import FasterRCNN\r\nfrom torchvision.models.detection.backbone_utils import resnet_fpn_backbone\r\nbackbone = resnet_fpn_backbone(backbone_name='resnet152', pretrained=False)\r\nmodel = FasterRCNN(backbone,\r\n num_classes=2)\r\n```\r\nWhen I set pretrained=True in the backbone, it works absolutely fine but when I set pretrained=False. It starts giving such output\r\n\r\n```\r\nEpoch: [0] [ 0/457] eta: 0:27:14 lr: 0.000003 loss: 141610976.0000 (141610976.0000) loss_classifier: 50311224.0000 (50311224.0000) loss_box_reg: 62420652.0000 (62420652.0000) loss_objectness: 7461720.0000 (7461720.0000) loss_rpn_box_reg: 21417388.0000 (21417388.0000) time: 3.5773 data: 0.8030 max mem: 10427\r\nLoss is nan, stopping training\r\n{'loss_classifier': tensor(nan, device='cuda:1', grad_fn=<NllLossBackward>), 'loss_box_reg': tensor(nan, device='cuda:1', grad_fn=<DivBackward0>), 'loss_objectness': tensor(nan, device='cuda:1', grad_fn=<BinaryCrossEntropyWithLogitsBackward>), 'loss_rpn_box_reg': tensor(nan, device='cuda:1', grad_fn=<DivBackward0>)}\r\n```\r\n\r\n## Environment\r\n\r\n```\r\ntorch-version = 1.9.0a0+c3d40fd\r\ntorchvision-version = 0.10.0a0\r\n```\r\n\r\nUsing this dockerfile\r\n```\r\nFROM nvcr.io/nvidia/pytorch:21.06-py3\r\n\r\nRUN pip install pytorch-lightning\r\nRUN pip install -U git+https://github.com/albu/albumentations --no-cache-dir\r\nRUN pip install --upgrade albumentations \r\nRUN pip install timm\r\nRUN pip install odach\r\nRUN pip install ensemble_boxes\r\nRUN pip install opencv-python-headless\r\nRUN pip install --no-cache-dir --upgrade pip\r\n\r\nRUN apt update && apt install -y libsm6 libxext6\r\nRUN apt-get install -y libxrender-dev\r\n\r\nRUN apt install -y p7zip-full p7zip-rar\r\n```\r\nHelp!\n\ncc @datumbox", "url": "https://github.com/pytorch/vision/issues/4147", "state": "closed", "labels": [ "question", "topic: object detection" ], "created_at": "2021-07-01T13:57:44Z", "updated_at": "2021-08-17T18:37:51Z", "user": "sahilg06" }, { "repo": "pytorch/TensorRT", "number": 519, "title": "\u2753 [Question] Are there plans to support TensorRT 8 and Ubuntu 20.04? 
", "body": "", "url": "https://github.com/pytorch/TensorRT/issues/519", "state": "closed", "labels": [ "question", "No Activity", "Story: TensorRT 8" ], "created_at": "2021-06-30T23:01:38Z", "updated_at": "2022-02-13T00:01:42Z", "user": "danielgordon10" }, { "repo": "pytorch/text", "number": 1350, "title": "How to build vocab from Glove embedding?", "body": "## \u2753 How to build vocab from Glove embedding?\r\n\r\n**Description**\r\n<!-- Please send questions or ask for help here. -->\r\nHow to build vocab from Glove embedding?\r\n\r\nI have gone through the documentation and the release update, I got to know that the Vectors object is not an attribute of the new Vocab object anymore.\r\n\r\nBut I would still want to build my vocab using Glove embedding or perhaps using Glove embedding in my model, anyway for the new API? ", "url": "https://github.com/pytorch/text/issues/1350", "state": "open", "labels": [], "created_at": "2021-06-30T16:11:53Z", "updated_at": "2022-02-27T11:29:03Z", "user": "OsbertTay" }, { "repo": "pytorch/vision", "number": 4134, "title": "Hey, after changing the segmention.py script, has anyone tested the backbone using Mobilenetv3_large as deeplabv3? When I start the train.py script, I throw the error shown in the screenshot:", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n[\r\n![image](https://user-images.githubusercontent.com/49866330/123753767-ca216600-d8ec-11eb-88a5-81a4374ca9f2.png)\r\n](url)", "url": "https://github.com/pytorch/vision/issues/4134", "state": "closed", "labels": [ "question" ], "created_at": "2021-06-29T07:15:14Z", "updated_at": "2021-06-29T10:32:17Z", "user": "GalSang17" }, { "repo": "pytorch/pytorch", "number": 60847, "title": "How to release CPU memory cache in Libtorch JIT ?", "body": "## \u2753 Questions and Help\r\nHi every one, I would like to know to release CPU memory cache in Libtorch JIT? If there is no such way, can I set percentile of maximum memory used as cache ? And I want to know if each torchscript::jit::module has its own memory cache , or all modules share one global memory cache ? Thanks.\r\n\r\n\r\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/60847", "state": "open", "labels": [ "oncall: jit" ], "created_at": "2021-06-28T03:40:21Z", "updated_at": "2021-07-07T03:01:06Z", "user": "w1d2s" }, { "repo": "pytorch/pytorch", "number": 60825, "title": "Where OpInfo doesn't handle cases where one of the inputs is a scalar", "body": "https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/common_methods_invocations.py#L4436\r\n\r\nIt'd be nice to cover the other cases as well.\n\ncc @mruberry @VitalyFedyunin @walterddr @heitorschueroff", "url": "https://github.com/pytorch/pytorch/issues/60825", "state": "open", "labels": [ "module: tests", "triaged", "module: sorting and selection" ], "created_at": "2021-06-26T20:37:52Z", "updated_at": "2021-08-30T20:34:56Z", "user": "Chillee" }, { "repo": "pytorch/TensorRT", "number": 511, "title": "\u2753 [Question] How can I trace the code that causing Unsupported operators", "body": "## \u2753 Question\r\n\r\nHi. I'm trying to enable TensorRT for a Torch Script model and getting a bunch of Unsupported operators. 
I'm willing to change the implementation to avoid those unsupported operators or even trying to add support for it. But I struggling to find which line of code in my model are causing it.\r\n\r\nI'm doing something like this:\r\n```python\r\nmodel = torch.jit.script(model)\r\nmodel = torch._C._jit_to_backend(\"tensorrt\", model, spec)\r\n```\r\n\r\nAnd getting something like this:\r\n```console\r\nERROR: [TRTorch] - Method requested cannot be compiled by TRTorch.\r\nUnsupported operators listed below:\r\n - aten::__contains__.str_list(str[] l, str item) -> (bool)\r\n - aten::_set_item.str(Dict(str, t)(a!) l, str(b -> *) idx, t(c -> *) v) -> ()\r\n - aten::dict() -> (Dict(str, Tensor))\r\n - aten::format(str self, ...) -> (str)\r\n - aten::list.t(t[] l) -> (t[])\r\n - aten::values.str(Dict(str, t) self) -> (t[](*))\r\nYou can either implement converters for these ops in your application or request implementation\r\nhttps://www.github.com/nvidia/TRTorch/issues\r\n\r\nTraceback (most recent call last):\r\n File \"convert.py\", line 6, in <module>\r\n model = SomeModel('weights/ghostnet0.5.pth')\r\n File \"/home/linus/model/model.py\", line 88, in __init__\r\n self.model = torch._C._jit_to_backend(\"tensorrt\", self.model, spec)\r\nRuntimeError: The following operation failed in the TorchScript interpreter.\r\nTraceback of TorchScript (most recent call last):\r\n File \"<string>\", line 4, in __preprocess\r\n def __preprocess(self, mod: Any, method_compile_spec: Dict[str, Any]):\r\n self.__create_backend()\r\n self.__processed_module = self.__backend.preprocess(mod, method_compile_spec)\r\n ~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n \r\nRuntimeError: [enforce fail at /workspace/TRTorch/py/trtorch/csrc/tensorrt_backend.cpp:19] Expected core::CheckMethodOperatorSupport(mod.toModule(), it->key()) to be true but got false\r\nMethod forwardcannot be compiled by TRTorch\r\n```\r\n\r\nMy question is:\r\n- Does TRTorch have some traceback (like TorchScript) to tell the users which line of code caused the problem ?\r\n- How can I find which line of code are using the unsupported operations?\r\n\r\n## What you have already tried\r\nI have tried printing out the graph for my scripted model with: `print(model.graph)` but yet to find those listed operators above.\r\n```\r\ngraph(%self : __torch__.retinaface.RetinaFace,\r\n %inputs.1 : Tensor):\r\n %124 : Function = prim::Constant[name=\"softmax\"]()\r\n %123 : None = prim::Constant()\r\n %122 : int = prim::Constant[value=3]()\r\n %121 : int = prim::Constant[value=-1]() # /home/linus/model/model.py:121:67\r\n %index.1 : int = prim::Constant[value=0]() # /home/linus/model/model.py:98:33\r\n %index.3 : int = prim::Constant[value=1]() # /home/linus/model/model.py:99:33\r\n....\r\n```\r\n\r\nI thought that by finding the ops, I can use that comment on the right to find which part of my code are using an unsupported ops. 
But so far, none have been found ;(\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.8.1\r\n - CPU Architecture: x86\r\n - OS (e.g., Linux): Ubuntu 20.04 LTS\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda\r\n - Build command you used (if compiling from source): None\r\n - Are you using local sources or building from archives: local sources\r\n - Python version: 3.8.8\r\n - CUDA version: 11.1.74\r\n - GPU models and configuration: GTX 1080Ti\r\n - Any other relevant information: \r\n\r\n## Additional context\r\n\r\nNone for now. Thanks for checking by ;)", "url": "https://github.com/pytorch/TensorRT/issues/511", "state": "closed", "labels": [ "question" ], "created_at": "2021-06-26T13:48:04Z", "updated_at": "2021-07-22T17:06:15Z", "user": "lamhoangtung" }, { "repo": "pytorch/pytorch", "number": 60819, "title": "I want to use the bach size of image to foward with libtorch, how should i do?", "body": "single image forward:\r\n\r\nstd::vector<std::vector<Detection>>\r\nmodel4_dec::Run(const cv::Mat& img, float conf_threshold, float iou_threshold) {\r\n\ttorch::NoGradGuard no_grad;\r\n\tstd::cout << \"----------New Frame----------\" << std::endl;\r\n\r\n\t// TODO: check_img_size()\r\n\r\n\t/*** Pre-process ***/\r\n\r\n\tauto start = std::chrono::high_resolution_clock::now();\r\n\r\n\t// keep the original image for visualization purpose\r\n\tcv::Mat img_input = img.clone();\r\n\r\n\tstd::vector<float> pad_info;\r\n\tpad_info = LetterboxImage(img_input, img_input, cv::Size(INPUT_W, INPUT_H));\r\n\tconst float pad_w = pad_info[0];\r\n\tconst float pad_h = pad_info[1];\r\n\tconst float scale = pad_info[2];\r\n\r\n\tcv::cvtColor(img_input, img_input, cv::COLOR_BGR2RGB); // BGR -> RGB\r\n\timg_input.convertTo(img_input, CV_32FC3, 1.0f / 255.0f); // normalization 1/255\r\n\tauto tensor_img = torch::from_blob(img_input.data, { 1, img_input.rows, img_input.cols, img_input.channels() }).to(device_);\r\n\r\n\ttensor_img = tensor_img.permute({ 0, 3, 1, 2 }).contiguous(); // BHWC -> BCHW (Batch, Channel, Height, Width)\r\n\r\n\tif (half_) {\r\n\t\ttensor_img = tensor_img.to(torch::kHalf);\r\n\t}\r\n\r\n\tstd::vector<torch::jit::IValue> inputs;\r\n\tinputs.emplace_back(tensor_img);\r\n\r\n\tauto end = std::chrono::high_resolution_clock::now();\r\n\tauto duration = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);\r\n\t// It should be known that it takes longer time at first time\r\n\tstd::cout << \"pre-process takes : \" << duration.count() << \" ms\" << std::endl;\r\n\r\n\t/*** Inference ***/\r\n\t// TODO: add synchronize point\r\n\tstart = std::chrono::high_resolution_clock::now();\r\n\r\n\t// inference\r\n\ttorch::jit::IValue output = module_.forward(inputs);\r\n\r\n\tend = std::chrono::high_resolution_clock::now();\r\n\tduration = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);\r\n\t// It should be known that it takes longer time at first time\r\n\tstd::cout << \"inference takes : \" << duration.count() << \" ms\" << std::endl;\r\n\r\n\t/*** Post-process ***/\r\n\r\n\tstart = std::chrono::high_resolution_clock::now();\r\n\tauto detections = output.toTuple()->elements()[0].toTensor();\r\n\r\n\t// result: n * 7\r\n\t// batch index(0), top-left x/y (1,2), bottom-right x/y (3,4), score(5), class id(6)\r\n\tauto result = PostProcessing(detections, pad_w, pad_h, scale, img.size(), conf_threshold, iou_threshold);\r\n\r\n\tend = 
std::chrono::high_resolution_clock::now();\r\n\tduration = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);\r\n\t// It should be known that it takes longer time at first time\r\n\tstd::cout << \"post-process takes : \" << duration.count() << \" ms\" << std::endl;\r\n\r\n\treturn result;\r\n}\r\n\r\nbut i want to use bach of image to forward:\r\nstd::vector<std::vector<Detection>> model4_dec::tensor_run(std::vector<cv::Mat>& _vecimg, float conf_threshold, float iou_threshold)\r\n{\r\n\ttorch::NoGradGuard no_grad;\r\n\r\n\tfloat scale = 1;// std::min(out_w / in_w, out_h / in_h);\r\n\tint imgwidth = INPUT_W;\r\n\tint imgheight = INPUT_H;\r\n\tstatic float data[BATCH_SIZE * 3 * INPUT_H * INPUT_W];\r\n\tfor (int b = 0; b < _vecimg.size(); b++)\r\n\t{\r\n\t\t// keep the original image for visualization purpose\r\n\t\tcv::Mat img_input = _vecimg[b].clone();\r\n\t\timgwidth = img_input.cols;\r\n\t\timgheight = img_input.rows;\r\n\t\tscale = std::min(INPUT_W / img_input.cols, INPUT_H / img_input.rows);\r\n\t\tif (img_input.empty()) continue;\r\n\t\tcv::Mat pr_img = preprocess_img(img_input, INPUT_W, INPUT_H); // letterbox BGR to RGB\r\n\t\tpr_img.convertTo(pr_img, CV_32FC3, 1.0f / 255.0f); // normalization 1/255\r\n\t\tint i = 0;\r\n\t\tfor (int row = 0; row < INPUT_H; ++row) {\r\n\t\t\tuchar* uc_pixel = pr_img.data + row * pr_img.step;\r\n\t\t\tfor (int col = 0; col < INPUT_W; ++col) {\r\n\t\t\t\tdata[b * 3 * INPUT_H * INPUT_W + i] = (float)uc_pixel[1] / 255.0;\r\n\t\t\t\tdata[b * 3 * INPUT_H * INPUT_W + i + INPUT_H * INPUT_W] = (float)uc_pixel[1] / 255.0;\r\n\t\t\t\tdata[b * 3 * INPUT_H * INPUT_W + i + 2 * INPUT_H * INPUT_W] = (float)uc_pixel[0] / 255.0;\r\n\t\t\t\tuc_pixel += 3;\r\n\t\t\t\t++i;\r\n\t\t\t}\r\n\t\t}\r\n\t}\r\n\r\n\tauto tensor_img = torch::from_blob(data, { BATCH_SIZE, INPUT_H, INPUT_W, 3 }).to(device_);\r\n\ttensor_img = tensor_img.permute({ 0, 3, 1, 2 }).contiguous(); // BHWC -> BCHW (Batch, Channel, Height, Width)\r\n\r\n\tif (half_) {\r\n\t\ttensor_img = tensor_img.to(torch::kHalf);\r\n\t}\r\n\r\n\tstd::vector<torch::jit::IValue> inputs;\r\n\tinputs.emplace_back(tensor_img);\r\n\r\n\t// inference\r\n\ttorch::jit::IValue output = module_.forward(inputs);\r\n\r\n\r\n\t/*** Post-process ***/\r\n\tauto detections = output.toTuple()->elements()[0].toTensor();\r\n\r\n\t// result: n * 7\r\n\t// batch index(0), top-left x/y (1,2), bottom-right x/y (3,4), score(5), class id(6)\r\n\tauto result = PostProcessing(detections, 0, 0, scale, cv::Size(imgwidth, imgheight), conf_threshold, iou_threshold);\r\n\r\n\treturn result;\r\n}\r\n", "url": "https://github.com/pytorch/pytorch/issues/60819", "state": "closed", "labels": [], "created_at": "2021-06-26T08:28:09Z", "updated_at": "2021-06-28T13:27:36Z", "user": "xinsuinizhuan" }, { "repo": "pytorch/vision", "number": 4117, "title": "publish nightly v0.11 to Conda channel `pytorch-nightly`", "body": "## \ud83d\ude80 Feature\r\n\r\nI would kindly request if you can update the latest nightly to Conda\r\n\r\n## Motivation\r\n\r\nWe are going to test some against future Pytorch v1.10 but we also need to have a TV for these test and as TV is fixed to a particular PT version with the latest TV revert PT to v1.9\r\n\r\n## Pitch\r\n\r\nsimple testing against future versions\r\n\r\n## Alternatives\r\n\r\nmixing conda and pypi sources\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context or screenshots about the feature request here. 
-->\r\n", "url": "https://github.com/pytorch/vision/issues/4117", "state": "closed", "labels": [ "question", "topic: binaries" ], "created_at": "2021-06-25T07:42:01Z", "updated_at": "2021-06-29T10:41:04Z", "user": "Borda" }, { "repo": "pytorch/vision", "number": 4115, "title": "Again details about how pretrained models are trained?", "body": "1. I use [v0.5.0/references](https://github.com/pytorch/vision/blob/build/v0.5.0/references/classification/train.py) to train a resnet50 wirth defalut config. But, I got Best_val Top1=75.806%, which has a gap of 0.3% about the pretrained model. How can I to repreduce your accuracy ?\r\n2. I notice you said you use you recompute the batch norm statistics after training, can you show more details?", "url": "https://github.com/pytorch/vision/issues/4115", "state": "open", "labels": [ "question" ], "created_at": "2021-06-25T05:51:59Z", "updated_at": "2021-06-28T14:59:26Z", "user": "YYangZiXin" }, { "repo": "pytorch/vision", "number": 4105, "title": "Hi, why rate is two times with paper", "body": "https://github.com/pytorch/vision/blob/d1ab583d0d2df73208e2fc9c4d3a84e969c69b70/torchvision/models/segmentation/deeplabv3.py#L32\r\n\r\n```\r\nn. In the\r\nend, our improved ASPP consists of (a) one 1\u00d71 convolution\r\nand three 3 \u00d7 3 convolutions with rates = (6, 12, 18) when\r\noutput stride = 16 (all with 256 filters and batch normalization), and (b) the image-level features, \r\n```\r\n", "url": "https://github.com/pytorch/vision/issues/4105", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2021-06-24T07:04:07Z", "updated_at": "2021-07-02T03:52:59Z", "user": "flystarhe" }, { "repo": "pytorch/vision", "number": 4104, "title": "whats diff in this code ```try....except```", "body": "https://github.com/pytorch/vision/blob/d1ab583d0d2df73208e2fc9c4d3a84e969c69b70/torchvision/_internally_replaced_utils.py#L13\r\n\r\n![image](https://user-images.githubusercontent.com/49515380/123214308-a24f8e00-d4f9-11eb-90fc-6deddf8e3926.png)\r\n\r\n\r\nexcept code also use torch.hub,its same as try code!!!\r\n\r\nwhy do like this?", "url": "https://github.com/pytorch/vision/issues/4104", "state": "closed", "labels": [ "question" ], "created_at": "2021-06-24T06:37:49Z", "updated_at": "2021-06-24T11:46:28Z", "user": "jaffe-fly" }, { "repo": "pytorch/pytorch", "number": 60625, "title": "How to checkout 1.8.1 release", "body": "I am trying to compile pytorch 1.8.1 release from source but not sure which branch to checkout, as there is no 1.8.1 and the 1.8.0 branches seem to be rc1 or rc2. \r\n\r\nso for example\r\n\r\n```\r\ngit checkout -b remotes/origin/lts/release/1.8\r\n\r\ngit describe --tags\r\n```\r\n\r\nreturns\r\n\r\nv1.8.0-rc1-4570-g80f40b172f\r\n\r\n\r\nSo how to get 1.8.1? 
I know the release tarballs don't work\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/60625", "state": "closed", "labels": [], "created_at": "2021-06-24T04:17:58Z", "updated_at": "2021-06-24T19:06:58Z", "user": "beew" }, { "repo": "pytorch/serve", "number": 1135, "title": "how to serve model converted by hummingbird from sklearn?", "body": "if i have a sklearn model, and then use hummingbird (https://github.com/microsoft/hummingbird) to transfer as pytorch tensor model \r\n\r\nso the model structure is from hummingbird, but not by my own such as :\r\n\r\nhummingbird.ml.containers.sklearn.pytorch_containers.PyTorchSklearnContainerClassification\r\nso i dont have the model.py\r\n\r\nhow to use torch serve to serve this model?\r\ni tried below:\r\n\r\ntorch-model-archiver --model-name aa --version 1.0 --handler text_classifier --serialized-file torch_hm_model_cuda.pth\r\ntorchserve --start --ncs --model-store model_store --models tt=aa.mar\r\n\r\nit juse show :\r\n\r\nRemoving orphan pid file.\r\njava.lang.NoSuchMethodError: java.nio.file.Files.readString(Ljava/nio/file/Path;)Ljava/lang/String;\r\n\r\nno other messages. thanks in advance.\r\n\r\n\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/serve/issues/1135", "state": "closed", "labels": [], "created_at": "2021-06-23T07:25:56Z", "updated_at": "2021-06-24T10:03:57Z", "user": "aohan237" }, { "repo": "pytorch/pytorch", "number": 60433, "title": "Libtorch JIT : Does enabling profiling mode increase CPU memory usage ? How to disable profiling mode properly ?", "body": "Hi, I am trying to deploying an Attention-based Encoder Decoder (AED) model with libtorch C++ frontend, when model's decoder loops at output sequence ( the decoder jit module 's forward method is repeatedly called at each label time step ), the CPU memory usage is very high (~ 20 GB), and I think it's far too high compared to it should be ( at each decoder step, the internal state tensors should occupy about < 400 MB in total, and state tensors at previous steps is released correctly with management of smart pointers).\r\n\r\nI call torch::jit::getProfilingMode() at begining of inference, and it's true; I try to set it false, but the memory usage is still high.\r\n\r\nI would like to know :\r\n1) whether the high CPU memory usage is related to torch JIT 's profiling mode ?\r\n2) is there any other way to profile CPU memory usage ?\r\n\r\nThe libtorch version used is 1.9.0\r\n\r\nThanks a lot.\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\r\n\r\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/60433", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2021-06-22T04:13:05Z", "updated_at": "2021-06-28T03:36:08Z", "user": "w1d2s" }, { "repo": "pytorch/vision", "number": 4091, "title": "Unnecessary call .clone() in box_convert function", "body": "https://github.com/pytorch/vision/blob/d391a0e992a35d7fb01e11110e2ccf8e445ad8a0/torchvision/ops/boxes.py#L183-L184\r\n\r\nWe can just return boxes without .clone().\r\n\r\nWhat's the purpose?", "url": "https://github.com/pytorch/vision/issues/4091", "state": "closed", "labels": [ "question" ], "created_at": "2021-06-22T02:03:37Z", "updated_at": "2021-06-22T14:14:56Z", "user": "developer0hye" }, { "repo": "pytorch/android-demo-app", "number": 156, "title": "How to add Model Inference Time to yolov5 demo when using live function? 
Like the iOS demo?", "body": "Dear developer, I watched this repository (for Android) yolov5 application test video and I compared another repository (for iOS) yolov5 application test video.\r\nI found that the Android application is missing the provision of \" Model Inference Time\" for real time detection, could you please add it? If not, could you please tell me how to add it? Thank you.\r\n![yolov5_Android](https://user-images.githubusercontent.com/61718945/122684823-64711200-d23a-11eb-9911-ef87a0682102.png)\r\n![yolov5_ios](https://user-images.githubusercontent.com/61718945/122684824-663ad580-d23a-11eb-9be2-5be9c595c89a.png)\r\n", "url": "https://github.com/pytorch/android-demo-app/issues/156", "state": "closed", "labels": [], "created_at": "2021-06-20T18:43:18Z", "updated_at": "2022-05-08T15:41:52Z", "user": "zxsitu" }, { "repo": "pytorch/android-demo-app", "number": 154, "title": "where is yolov5 model", "body": "Does anyone know how to download yolov5s.torchscript.ptl, I don't have this file", "url": "https://github.com/pytorch/android-demo-app/issues/154", "state": "open", "labels": [], "created_at": "2021-06-19T14:38:56Z", "updated_at": "2021-06-19T15:18:46Z", "user": "GuoQuanhao" }, { "repo": "pytorch/pytorch", "number": 60266, "title": "UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler.", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1.\r\n1.\r\n1.\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)\r\n(or fill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, source):\r\n - Build command you used (if compiling from source):\r\n - Python version:\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/pytorch/issues/60266", "state": "closed", "labels": [], "created_at": "2021-06-18T13:09:33Z", "updated_at": "2021-06-18T15:53:56Z", "user": "wanyne-yyds" }, { "repo": "pytorch/pytorch", "number": 60253, "title": "How to export SPP-NET to onnx ?", "body": "Here's my code\uff1a\r\n----------------------------------------------------------start----------------------------------------------------------------\r\nimport torch\r\nimport torch.nn as nn\r\nimport math\r\nimport torch.nn.functional as F\r\n\r\n\r\nclass Spp(nn.Module):\r\n\r\n def __init__(self, level, pooling_type=\"max_pool\"):\r\n super().__init__()\r\n self.num = level\r\n self.pooling_type = pooling_type\r\n\r\n def forward(self, x):\r\n h, w = x.shape[2:]\r\n kernel_size = (math.ceil(h / self.num), math.ceil(w / self.num))\r\n stride = kernel_size\r\n pooling = (math.ceil((kernel_size[0] * self.num - h) / 2), math.ceil((kernel_size[1] * self.num - w) / 2))\r\n if self.pooling_type == 'max_pool' or self.pooling_type == \"max\":\r\n tensor = F.max_pool2d(x, kernel_size=kernel_size, stride=stride, padding=pooling)\r\n else:\r\n tensor = F.avg_pool2d(x, kernel_size=kernel_size, stride=stride, padding=pooling)\r\n return tensor\r\n\r\n\r\nclass SppNet(nn.Module):\r\n \r\n def __init__(self, pooling_type=\"max_pool\", level=1):\r\n super(SppNet, self).__init__()\r\n self.spps = []\r\n for i in range(level):\r\n self.spps.append(Spp(pooling_type=pooling_type, level=i+1))\r\n pass\r\n\r\n def forward(self, x):\r\n n, c = input.shape[0:2]\r\n out = []\r\n for spp in self.spps:\r\n y = spp(x).reshape(n, c, -1)\r\n out.append(y)\r\n out = torch.cat(out, dim=2)\r\n return out\r\n\r\n\r\n\r\nif __name__ == '__main__':\r\n input = torch.randn(3, 45, 100, 120)\r\n sppNet = SppNet(level=7)\r\n y0 = sppNet(input)\r\n print(y0.shape)\r\n\r\n sppNet.eval()\r\n\r\n torch.onnx.export(sppNet, # model being run\r\n input, # model input (or a tuple for multiple inputs)\r\n 'spp-net.onnx',\r\n # where to save the model (can be a file or file-like object)\r\n export_params=True, # store the trained parameter weights inside the model file\r\n opset_version=11, # the ONNX version to export the model to\r\n do_constant_folding=True, # whether to execute constant folding for optimization\r\n input_names=[\"input\"], # the model's input names\r\n output_names=[\"output\"], # the model's output names\r\n dynamic_axes={\r\n \"input\": {0: \"batch_size\", 1: \"channel\", 2:\"height\", 3:\"width\"},\r\n \"output\": {0: \"batch_size\", 1: \"channel\", 2:\"length\"}\r\n },\r\n enable_onnx_checker=True)\r\n\r\n------------------------------------------------------end----------------------------------------------------------------\r\n\r\nExported model \u201ckernel_ size\u3001pooling \u201d parameter is fixed. \r\nBut I need to set it to a fixed parameter. 
\r\nThat is to say, the kernel is calculated automatically according to the input size and other parameters, so how to do?\r\nAsk for advice\uff0cThank you!\r\n\r\n\r\n\n\ncc @garymm @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/60253", "state": "closed", "labels": [ "module: onnx", "triaged", "onnx-triaged" ], "created_at": "2021-06-18T06:56:59Z", "updated_at": "2022-11-01T22:16:36Z", "user": "yongxin3344520" }, { "repo": "pytorch/TensorRT", "number": 502, "title": "\u2753 [Question] failed to build docker image", "body": "## \u2753 Question\r\n\r\nfailed to build docker image\r\n\r\n## What you have already tried\r\n\r\n`docker build -t trtorch -f notebooks/Dockerfile.notebook .`\r\n\r\n\r\n## Additional context\r\n\r\n```\r\nStep 13/21 : WORKDIR /workspace/TRTorch\r\n ---> Running in 6043f6a80286\r\nRemoving intermediate container 6043f6a80286\r\n ---> 18eaa4134512\r\nStep 14/21 : RUN bazel build //:libtrtorch --compilation_mode opt\r\n ---> Running in e5ae54ec3c1e\r\nExtracting Bazel installation...\r\nStarting local Bazel server and connecting to it...\r\nLoading:\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nLoading: 0 packages loaded\r\nAnalyzing: target //:libtrtorch (1 packages loaded, 0 targets configured)\r\nAnalyzing: target //:libtrtorch (39 packages loaded, 155 targets configured)\r\nINFO: Analyzed target //:libtrtorch (42 packages loaded, 2697 targets configured).\r\nINFO: Found 1 target...\r\n[0 / 112] [Prepa] BazelWorkspaceStatusAction stable-status.txt ... (6 actions, 0 running)\r\nERROR: /workspace/TRTorch/cpp/api/BUILD:3:11: C++ compilation of rule '//cpp/api:trtorch' failed (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 61 argument(s) skipped)\r\n\r\nUse --sandbox_debug to see verbose messages from the sandbox gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... 
(remaining 61 argument(s) skipped)\r\n\r\nUse --sandbox_debug to see verbose messages from the sandbox\r\nIn file included from cpp/api/src/compile_spec.cpp:6:0:\r\nbazel-out/k8-opt/bin/cpp/api/_virtual_includes/trtorch/trtorch/trtorch.h:26:25: error: different underlying type in enum 'enum class c10::DeviceType'\r\n enum class DeviceType : int8_t;\r\n ^~~~~~\r\nIn file included from bazel-out/k8-opt/bin/external/libtorch/_virtual_includes/c10_cuda/c10/core/Device.h:3:0,\r\n from bazel-out/k8-opt/bin/external/libtorch/_virtual_includes/ATen/ATen/core/TensorBody.h:3,\r\n from bazel-out/k8-opt/bin/external/libtorch/_virtual_includes/ATen/ATen/Tensor.h:3,\r\n from external/libtorch/include/torch/csrc/autograd/function_hook.h:5,\r\n from external/libtorch/include/torch/csrc/autograd/variable.h:7,\r\n from external/libtorch/include/torch/csrc/jit/api/module.h:3,\r\n from cpp/api/src/compile_spec.cpp:1:\r\nbazel-out/k8-opt/bin/external/libtorch/_virtual_includes/c10_cuda/c10/core/DeviceType.h:15:12: note: previous definition here\r\n enum class DeviceType : int16_t {\r\n ^~~~~~~~~~\r\nTarget //:libtrtorch failed to build\r\nUse --verbose_failures to see the command lines of failed build steps.\r\nINFO: Elapsed time: 490.394s, Critical Path: 28.98s\r\nINFO: 1030 processes: 1024 internal, 6 processwrapper-sandbox.\r\nFAILED: Build did NOT complete successfully\r\nFAILED: Build did NOT complete successfully\r\n```\r\n", "url": "https://github.com/pytorch/TensorRT/issues/502", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-06-17T10:22:25Z", "updated_at": "2021-09-27T00:01:13Z", "user": "chrjxj" }, { "repo": "pytorch/pytorch", "number": 60122, "title": "what version of python is suggested with pytorch 1.9", "body": "I know pytorch support a variety of python version, but I wonder what version is suggested? python 3.6.6? 3.7.7? etc? \r\nThanks", "url": "https://github.com/pytorch/pytorch/issues/60122", "state": "closed", "labels": [], "created_at": "2021-06-16T19:05:52Z", "updated_at": "2021-06-16T20:54:15Z", "user": "seyeeet" }, { "repo": "pytorch/pytorch", "number": 60115, "title": "How to install torchaudio on Mac M1 ARM?", "body": "`torchaudio` doesn't seem to be available for Mac M1. \r\n\r\nIf I run `conda install pytorch torchvision torchaudio -c pytorch` (as described on pytorch's main page) I get this error message:\r\n\r\n```\r\nPackagesNotFoundError: The following packages are not available from current channels:\r\n - torchaudio \r\n```\r\nIf I run the command without `torchaudio` everything installs fine.\r\n\r\nHow can I fix this and install torchaudio too? \r\n\r\nIf it isn't available (yet) \u2013 do you have any plans to release it too?\r\n\r\nThanks in advance for your help! And I apologize in advance if I don't see the forest for the trees and overlooked sth. 
obvious.\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/60115", "state": "closed", "labels": [], "created_at": "2021-06-16T18:24:04Z", "updated_at": "2021-06-16T20:42:03Z", "user": "suissemaxx" }, { "repo": "pytorch/pytorch", "number": 59933, "title": "If I only have the model of PyTorch and don't know the dimension of the input, how to convert it to onnx?", "body": "## \u2753 If I only have the model of PyTorch and don't know the dimension of the input, how to convert it to onnx?\r\n\r\n### Question\r\n\r\nI have a series of PyTorch trained models, such as \"model.pth\", but I don't know the input dimensions of the model.\r\nFor instance, in the following function: torch.onnx.export(model, args, f, export_params=True, verbose=False, training=False, input_names=None, output_names=None).\r\nI don't know the \"args\" of the function. How do I define it by just having the model file such as \"model.pth\"?\r\n", "url": "https://github.com/pytorch/pytorch/issues/59933", "state": "closed", "labels": [], "created_at": "2021-06-14T08:38:29Z", "updated_at": "2021-06-14T15:31:26Z", "user": "Wendy-liu17" }, { "repo": "pytorch/pytorch", "number": 59870, "title": "How to export a model with nn.Module in for loop to onnx?", "body": "Bellow is a demo code:\r\n```\r\nclass Demo(nn.Module):\r\n def __init__(self, hidden_size, max_span_len):\r\n super().__init__()\r\n self.max_span_len = max_span_len\r\n self.fc = nn.Linear(hidden_size * 2, hidden_size)\r\n\r\n def forward(self, seq_hiddens):\r\n '''\r\n seq_hiddens: (batch_size, seq_len, hidden_size)\r\n '''\r\n seq_len = seq_hiddens.size()[1]\r\n\r\n hiddens_list = []\r\n for ind in range(seq_len):\r\n hidden_each_step = seq_hiddens[:, ind, :]\r\n a = seq_hiddens[:, ind:ind + self.max_span_len, :]\r\n b = hidden_each_step[:, None, :].repeat(1, a.shape[1], 1) \r\n \r\n tmp = torch.cat([a, b], dim=-1)\r\n tmp = torch.tanh(self.fc(tmp))\r\n hiddens_list.append(tmp)\r\n\r\n output = torch.cat(hiddens_list, dim = 1)\r\n return output\r\n\r\n```\r\nHow to expot it to onnx? I need the fc Layer in for loop. Script function seems not work.\r\nThanks!!!\r\n\n\ncc @garymm @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/59870", "state": "closed", "labels": [ "module: onnx" ], "created_at": "2021-06-11T12:25:15Z", "updated_at": "2021-06-15T18:34:50Z", "user": "JaheimLee" }, { "repo": "pytorch/vision", "number": 4001, "title": "Unable to build torchvision on Windows (installed torch from source and it is running)", "body": "## \u2753 Questions and Help\r\n\r\nI have installed torch successfully in my PC via source, but I am facing this issue while installing the torchvison. 
I don't think I can install torchvision via pip as it is re-downloading the torch.\r\n\r\nPlease help me to install it\r\n\r\nTIA\r\ni used `python setup.py install`\r\n```\r\nBuilding wheel torchvision-0.9.0a0+01dfa8e\r\nPNG found: True\r\nRunning build on conda-build: False\r\nRunning build on conda: True\r\nJPEG found: True\r\nBuilding torchvision with JPEG image support\r\nFFmpeg found: True\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\dhawals\\repos\\build_binaries\\vision\\setup.py\", line 472, in <module>\r\n ext_modules=get_extensions(),\r\n File \"C:\\Users\\dhawals\\repos\\build_binaries\\vision\\setup.py\", line 352, in get_extensions\r\n platform_tag = subprocess.run(\r\n File \"C:\\Users\\dhawals\\miniconda3\\lib\\subprocess.py\", line 501, in run\r\n with Popen(*popenargs, **kwargs) as process:\r\n File \"C:\\Users\\dhawals\\miniconda3\\lib\\subprocess.py\", line 947, in __init__\r\n self._execute_child(args, executable, preexec_fn, close_fds,\r\n File \"C:\\Users\\dhawals\\miniconda3\\lib\\subprocess.py\", line 1356, in _execute_child\r\n args = list2cmdline(args)\r\n File \"C:\\Users\\dhawals\\miniconda3\\lib\\subprocess.py\", line 561, in list2cmdline\r\n for arg in map(os.fsdecode, seq):\r\n File \"C:\\Users\\dhawals\\miniconda3\\lib\\os.py\", line 822, in fsdecode\r\n filename = fspath(filename) # Does type-checking of `filename`.\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n```", "url": "https://github.com/pytorch/vision/issues/4001", "state": "closed", "labels": [ "question" ], "created_at": "2021-06-08T09:48:25Z", "updated_at": "2021-06-14T11:01:21Z", "user": "dhawals1939" }, { "repo": "pytorch/pytorch", "number": 59607, "title": "Where is libtorch archive???", "body": "Where is libtorch archive???\r\n\r\nI can't find libtorch 1.6.0..", "url": "https://github.com/pytorch/pytorch/issues/59607", "state": "closed", "labels": [], "created_at": "2021-06-08T01:03:23Z", "updated_at": "2023-04-07T13:29:34Z", "user": "hi-one-gg" }, { "repo": "pytorch/xla", "number": 2981, "title": "Where is torch_xla/csrc/XLANativeFunctions.h?", "body": "## \ud83d\udc1b Bug\r\n\r\nTrying to compile master found that there is no https://github.com/pytorch/xla/blob/master/torch_xla/csrc/XLANativeFunctions.h after updating to latest master.\r\n\r\nHow this file is generated? (aka which step Im missing?)\r\n\r\n```\r\n$ time pip install -e . 
--verbose\r\n...............\r\n [23/101] clang++-8 -MMD -MF /home/tyoc213/Documents/github/pytorch/xla/build/temp.linux-x86_64-3.8/torch_xla/csrc/init_python_bindings.o.d -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/tyoc213/Documents/github/pytorch/xla -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-bin -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/home/tyoc213/Documents/github/pytorch -I/home/tyoc213/Documents/github/pytorch/torch/csrc -I/home/tyoc213/Documents/github/pytorch/torch/lib/tmp_install/include -I/home/tyoc213/Documents/github/pytorch/torch/include -I/home/tyoc213/Documents/github/pytorch/torch/include/torch/csrc/api/include -I/home/tyoc213/Documents/github/pytorch/torch/include/TH -I/home/tyoc213/Documents/github/pytorch/torch/include/THC -I/home/tyoc213/miniconda3/envs/xla/include/python3.8 -c -c /home/tyoc213/Documents/github/pytorch/xla/torch_xla/csrc/init_python_bindings.cpp -o /home/tyoc213/Documents/github/pytorch/xla/build/temp.linux-x86_64-3.8/torch_xla/csrc/init_python_bindings.o -std=c++14 -Wno-sign-compare -Wno-deprecated-declarations -Wno-return-type -Wno-macro-redefined -Wno-return-std-move -DNDEBUG -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE=\"_clang\"' '-DPYBIND11_STDLIB=\"_libstdcpp\"' '-DPYBIND11_BUILD_ABI=\"_cxxabi1002\"' -DTORCH_EXTENSION_NAME=_XLAC -D_GLIBCXX_USE_CXX11_ABI=1\r\n FAILED: /home/tyoc213/Documents/github/pytorch/xla/build/temp.linux-x86_64-3.8/torch_xla/csrc/init_python_bindings.o\r\n clang++-8 -MMD -MF /home/tyoc213/Documents/github/pytorch/xla/build/temp.linux-x86_64-3.8/torch_xla/csrc/init_python_bindings.o.d -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/tyoc213/Documents/github/pytorch/xla -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-bin -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/home/tyoc213/Documents/github/pytorch -I/home/tyoc213/Documents/github/pytorch/torch/csrc -I/home/tyoc213/Documents/github/pytorch/torch/lib/tmp_install/include -I/home/tyoc213/Documents/github/pytorch/torch/include -I/home/tyoc213/Documents/github/pytorch/torch/include/torch/csrc/api/include -I/home/tyoc213/Documents/github/pytorch/torch/include/TH -I/home/tyoc213/Documents/github/pytorch/torch/include/THC -I/home/tyoc213/miniconda3/envs/xla/include/python3.8 -c -c /home/tyoc213/Documents/github/pytorch/xla/torch_xla/csrc/init_python_bindings.cpp -o /home/tyoc213/Documents/github/pytorch/xla/build/temp.linux-x86_64-3.8/torch_xla/csrc/init_python_bindings.o 
-std=c++14 -Wno-sign-compare -Wno-deprecated-declarations -Wno-return-type -Wno-macro-redefined -Wno-return-std-move -DNDEBUG -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE=\"_clang\"' '-DPYBIND11_STDLIB=\"_libstdcpp\"' '-DPYBIND11_BUILD_ABI=\"_cxxabi1002\"' -DTORCH_EXTENSION_NAME=_XLAC -D_GLIBCXX_USE_CXX11_ABI=1\r\n /home/tyoc213/Documents/github/pytorch/xla/torch_xla/csrc/init_python_bindings.cpp:36:10: fatal error: 'torch_xla/csrc/XLANativeFunctions.h' file not found\r\n #include \"torch_xla/csrc/XLANativeFunctions.h\"\r\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n 1 error generated.\r\n\r\n```\r\n\r\n## Environment\r\n\r\n - Installing from source on Linux/CUDA:\r\n - torch_xla version: master\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/2981", "state": "closed", "labels": [ "stale" ], "created_at": "2021-06-08T00:42:14Z", "updated_at": "2021-07-21T13:22:46Z", "user": "tyoc213" }, { "repo": "pytorch/TensorRT", "number": 495, "title": "\u2753 [Question] How does the compiler uses the optimal input shape ? ", "body": "When compiling the model we have to specify an optimal input shape as well as a minimal and maximal one. \r\n\r\nI tested various optimal sizes to evaluate the impact of this parameter but found little to no difference for the inference time. \r\n\r\nHow is this parameter used by the compiler ?\r\n\r\n\r\nThank you for your time and consideration, \r\n", "url": "https://github.com/pytorch/TensorRT/issues/495", "state": "closed", "labels": [ "question" ], "created_at": "2021-06-07T13:21:20Z", "updated_at": "2021-06-09T09:52:53Z", "user": "MatthieuToulemont" }, { "repo": "pytorch/text", "number": 1323, "title": "How to use pretrained embeddings (`Vectors`) in the new API?", "body": "From what is see in the `experimental` module is that we pass a vocab object, which transforms the token into an unique integer.\r\n\r\nhttps://github.com/pytorch/text/blob/e189c260e959ab966b1eaa986177549a6445858c/torchtext/experimental/datasets/text_classification.py#L50-L55\r\n\r\nThus something like `['hello', 'word']` might turn into `[42, 43]`, this can then be fed into an `nn.Embedding` layer to get the corresponding embedding vector and so on.\r\n\r\nWhat i dont't understand is how do i use\r\n\r\nhttps://github.com/pytorch/text/blob/e189c260e959ab966b1eaa986177549a6445858c/torchtext/vocab.py#L475-L487\r\n\r\n`GloVe` is a `Vectors` but it transforms `['hello', 'world']` into its corresponding `Embedding` tensor representation, this doesn't allow me to pad the sentences beforehand.\r\n\r\nAlso its weird that now i don't need a `Vocab` object, but in most of the modules i see that `Vocab` is built if its set to `None`.\r\n\r\nhttps://github.com/pytorch/text/blob/e189c260e959ab966b1eaa986177549a6445858c/torchtext/experimental/datasets/text_classification.py#L85-L89\r\n\r\nI don't really understand how am i supposed to interpret `Vocab` and `Vectors` and where should i use them? In `nn.Module` i.e. my model, or in `data.Dataset`, i.e. my dataset ? 
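One pattern that seems to work with both the legacy and the experimental APIs (a sketch under assumptions, not the library's prescribed way): keep the `Vocab` in the dataset/collate path so sentences become padded index sequences, and use the `Vectors` object only once, to build the weight matrix of an `nn.Embedding` aligned with that vocab; `freeze=False` keeps the pretrained vectors trainable. The `itos` list below is a placeholder for whatever your built vocabulary exposes (`vocab.itos` in the legacy API, `vocab.get_itos()` in newer releases).

```python
import torch
import torch.nn as nn
from torchtext.vocab import GloVe

# Placeholder vocabulary: index -> token (taken from your own Vocab object).
itos = ["<pad>", "<unk>", "hello", "world"]

glove = GloVe(name="6B", dim=100)  # downloads the vectors on first use

# One pretrained vector per vocabulary index; out-of-vocabulary tokens get zeros.
weights = torch.stack([glove[token] for token in itos])

# freeze=False -> the embeddings stay trainable and can be fine-tuned.
embedding = nn.Embedding.from_pretrained(weights, freeze=False, padding_idx=0)

batch = torch.tensor([[2, 3, 0, 0]])   # "hello world <pad> <pad>" as padded indices
print(embedding(batch).shape)          # torch.Size([1, 4, 100])
```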
What if i want to fine tune the pretrained embeddings as well ?\r\n\r\nShould both of them be used, or just either one ?\r\n\r\nI couldn't even find good examples in https://github.com/pytorch/text/tree/master/examples/text_classification\r\n\r\nI'm coming from the traditional torch vision library guy, so kudos to dumping the old legacy style torchtext, i really hated it, the new api's seem promising, but just a little confusing as of now.", "url": "https://github.com/pytorch/text/issues/1323", "state": "open", "labels": [], "created_at": "2021-06-05T10:58:38Z", "updated_at": "2021-07-01T03:26:20Z", "user": "satyajitghana" }, { "repo": "pytorch/pytorch", "number": 59368, "title": "How to remap RNNs hidden tensor to other device in torch.jit.load?", "body": "Model: CRNN (used in OCR)\r\n\r\n1. When I trace model in cpu device, and use torch.jit.load(f, map_location=\"cuda:0\"), I got an error as below\r\nInput and hidden tensor are not at same device, found input tensor at cuda:0 and hidden tensor at cpu.\r\n\r\n2. When I trace model in cuda:0 device, and use torch.jit.load(f, map_location=\"cuda:1\"), I got an error as below\r\nInput and hidden tensor are not at same device, found input tensor at cuda:1 and hidden tensor at cuda:0.\r\n\r\nIs there a way to remap RNNs hidden tensor to other device in loaded module by jit?\r\n\r\nPyTorch Version: 1.8.1\r\n\r\n\r\n\r\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/59368", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2021-06-03T09:24:23Z", "updated_at": "2021-10-21T06:19:02Z", "user": "shihaoyin" }, { "repo": "pytorch/vision", "number": 3949, "title": "Meaning of Assertion of infer_scale function in torchvision/ops/poolers.py", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\nI want to know the meaning of the assert at infer_scale function in torchvision/ops/poolers.py.\r\n![image](https://user-images.githubusercontent.com/23451721/120587728-dee71700-c470-11eb-8057-0d26a9afdfd1.png)\r\n\r\nIt makes assertion error \r\n\r\n File \"/home/ubuntu/.jupyter/engine.py\", line 199, in evaluate_one_image\r\n output = model(loader)\r\n File \"/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/models/detection/generalized_rcnn.py\", line 98, in forward\r\n detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)\r\n File \"/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/models/detection/roi_heads.py\", line 752, in forward\r\n box_features = self.box_roi_pool(features, proposals, image_shapes)\r\n File \"/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/ops/poolers.py\", line 221, in forward\r\n self.setup_scales(x_filtered, image_shapes)\r\n File \"/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/ops/poolers.py\", line 182, in setup_scales\r\n scales = [self.infer_scale(feat, original_input_shape) for feat in features]\r\n File \"/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/ops/poolers.py\", line 182, in <listcomp>\r\n scales = [self.infer_scale(feat, original_input_shape) for feat in features]\r\n File \"/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/ops/poolers.py\", line 166, in infer_scale\r\n assert possible_scales[0] == possible_scales[1]\r\nAssertionError\r\n\r\nlike this, and without assertion it makes correct results.\r\nwhat's the meaning of that assertion?\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/3949", "state": "closed", "labels": [ "question", "module: ops" ], "created_at": "2021-06-03T04:38:03Z", "updated_at": "2021-06-09T11:59:52Z", "user": "teang1995" }, { "repo": "pytorch/pytorch", "number": 59231, "title": "How to solve the AssertionError: Torch not compiled with CUDA enabled", "body": "For the usage of the repo based on PyTorch(Person_reID_baseline_pytorch), I followed the guidance on its readme.md. 
However, I've got an error on the training step below: (I used --gpu_ids -1 as I use CPU only option in my MacOS)\r\n\r\n`python train.py --gpu_ids -1 --name ft_ResNet50 --train_all --batchsize 32 --data_dir /Users/455832/Person_reID_baseline_pytorch/Market-1501-v15.09.15/pytorch`\r\n\r\nThe error I got is below:\r\n\r\n```\r\nDownloading: \"https://download.pytorch.org/models/resnet50-19c8e357.pth\" to /Users/455832/.cache/torch/checkpoints/resnet50-19c8e357.pth\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 102502400/102502400 [00:14<00:00, 7210518.23it/s]\r\nft_net(\r\n (model): ResNet(\r\n (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\r\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (relu): ReLU(inplace)\r\n (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)\r\n (layer1): Sequential(\r\n (0): Bottleneck(\r\n (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)\r\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\r\n (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\r\n (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (relu): ReLU(inplace)\r\n (downsample): Sequential(\r\n (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\r\n (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n )\r\n )\r\n (1): Bottleneck(\r\n (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)\r\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\r\n (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\r\n (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (relu): ReLU(inplace)\r\n )\r\n (2): Bottleneck\r\n.......\r\n.......\r\n)\r\nTraceback (most recent call last):\r\n File \"train.py\", line 386, in <module>\r\n model = model.cuda()\r\n File \"/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 265, in cuda\r\n return self._apply(lambda t: t.cuda(device))\r\n File \"/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 193, 
in _apply\r\n module._apply(fn)\r\n File \"/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 193, in _apply\r\n module._apply(fn)\r\n File \"/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 199, in _apply\r\n param.data = fn(param.data)\r\n File \"/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 265, in <lambda>\r\n return self._apply(lambda t: t.cuda(device))\r\n File \"/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/cuda/__init__.py\", line 162, in _lazy_init\r\n _check_driver()\r\n File \"/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/cuda/__init__.py\", line 75, in _check_driver\r\n raise AssertionError(\"Torch not compiled with CUDA enabled\")\r\nAssertionError: Torch not compiled with CUDA enabled\r\n```\r\n\r\n\r\nAs suggested in its readme.md, I installed pytorch=1.1.0 and torchvision=0.3.0 and numpy=1.13.1, which are requirements, into my virtual environment using 3.6.12 python requirement over the instructions in PyTorch official website (https://pytorch.org/get-started/previous-versions/#wheel-10)\r\n\r\n`conda install pytorch==1.1.0 torchvision==0.3.0 -c pytorch`\r\n\r\nCan you please guide me to solve this issue?\r\n", "url": "https://github.com/pytorch/pytorch/issues/59231", "state": "closed", "labels": [], "created_at": "2021-05-31T20:45:33Z", "updated_at": "2023-06-04T06:22:56Z", "user": "aktaseren" }, { "repo": "pytorch/TensorRT", "number": 493, "title": "\u2753 [Question] How to set three input tensor shape in input_shape? ", "body": "## \u2753 Question\r\n\r\n<!-- How to set three input tensor shape in input_shape\uff1f-->\r\nI have three input tensor:src_tokens, dummy_embeded_x, dummy_encoder_embedding\r\nIn this case, I don't konw how to set input_shape in compile_settings's \"input_shape\"\r\nWho can help me? Thank you!\r\n\r\n`encoder_out = model.forward_encoder([src_tokens, dummy_embeded_x, dummy_encoder_embedding])`\r\n`...`\r\n`script_encoder = torch.jit.script(encoder)...`\r\n`compile_settings = {\r\n \"input_shapes\": [[2, 16]],\r\n \"op_precision\": torch.float32\r\n }`", "url": "https://github.com/pytorch/TensorRT/issues/493", "state": "closed", "labels": [ "question" ], "created_at": "2021-05-31T02:48:53Z", "updated_at": "2021-06-23T19:56:33Z", "user": "wxyhv" }, { "repo": "pytorch/vision", "number": 3938, "title": "Batch size of the training recipes on multiple GPUs", "body": "## \u2753 Questions and Help\r\n\r\nIn the README file that describes the recipes of training the classification models, under the references directory, it is stated that the models are trained with batch-size=32 on 8 GPUs. \r\n\r\nDoes it mean that:\r\n- the whole batch-size is 32 and each GPU gets only 4 images to process at a time?\r\n- OR each GPU gets 32 images to process at a time, meaning that the global batch-size is actually 256?\r\n\r\nThanks.\r\n", "url": "https://github.com/pytorch/vision/issues/3938", "state": "closed", "labels": [ "question" ], "created_at": "2021-05-30T13:16:07Z", "updated_at": "2021-05-30T14:52:38Z", "user": "talcs" }, { "repo": "pytorch/pytorch", "number": 59186, "title": "Document on how to use ATEN_CPU_CAPABILITY", "body": "## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\nIt would be great if ATEN_CPU_CAPABILITY would be documented with an example on how to use it. 
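For what it's worth, `ATEN_CPU_CAPABILITY` appears to be a plain environment variable read by ATen's runtime kernel dispatch, so no rebuild is needed; it does not strip the vectorized code from the wheel, it only caps which pre-built kernel variant gets picked. A small sketch under that assumption (set it before the first operator runs, easiest before importing torch):

```python
import os

# Assumption: the variable is consulted lazily at the first kernel dispatch,
# so setting it before importing torch is the safe ordering.
os.environ["ATEN_CPU_CAPABILITY"] = "default"   # e.g. "default" or "avx2"

import torch

x = torch.randn(256, 256)
y = (x @ x).relu()                # dispatched through the capped capability level
print(torch.__config__.show())    # build-time flags, for comparison with the runtime cap
```

In practice you would export the variable in the shell or the Dockerfile instead of using `os.environ`, so the same value applies to every process.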
\r\n\r\n## Motivation\r\n\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\nI am currently trying to build PyTorch without AVX instructions, because I am deploying my docker image to a lot of different systems. While trying to understand how to remove AVX instructions I found ATEN_CPU_CAPABILITY. It is not clear on how to use it. \r\n\r\nMy unanswered questions are: Does it work on runtime? Do I have build PyTorch myself and set ATEN_CPU_CAPABILITY before building? Can I pass ATEN_CPU_CAPABILITY to setup.py? How do I know if I set it the right way? Are there any wheels without AVX instructions available?\r\n", "url": "https://github.com/pytorch/pytorch/issues/59186", "state": "closed", "labels": [], "created_at": "2021-05-30T11:40:07Z", "updated_at": "2021-05-30T21:31:30Z", "user": "derneuere" }, { "repo": "pytorch/serve", "number": 1103, "title": "Can two workflows share the same model with each other?", "body": "Continuing my previous post: [How i do models chain processing and batch processing for analyzing text data?](https://github.com/pytorch/serve/issues/1055)\r\n\r\nCan I create two workflows using the same RoBERTa base model to perform two different tasks, let's say the classifier_model and summarizer_model? I would like to be able to share the base model with two workflows.\r\n\r\nI am trying to register two workflows: wf_classifier.war and wf_summarizer.war. The first one is registered and the second one is not.\r\n\r\n[log.log](https://github.com/pytorch/serve/files/6562664/log.log)\r\n\r\n\r\n**wf_classifier.war**\r\n```\r\nmodels:\r\n min-workers: 1\r\n max-workers: 1\r\n batch-size: 1\r\n max-batch-delay: 1000\r\n retry-attempts: 5\r\n timeout-ms: 300000\r\n\r\n roberta:\r\n url: roberta_base.mar\r\n\r\n classifier:\r\n url: classifier.mar\r\n\r\ndag:\r\n roberta: [classifier]\r\n```\r\n\r\n**wf_summarizer.war**\r\n```\r\nmodels:\r\n min-workers: 1\r\n max-workers: 1\r\n batch-size: 1\r\n max-batch-delay: 1000\r\n retry-attempts: 5\r\n timeout-ms: 300000\r\n\r\n roberta:\r\n url: roberta_base.mar\r\n\r\n summarizer:\r\n url: summarizer.mar\r\n\r\ndag:\r\n roberta_base: [summarizer]\r\n```\r\n\r\n\r\n", "url": "https://github.com/pytorch/serve/issues/1103", "state": "open", "labels": [ "question", "triaged_wait", "workflowx" ], "created_at": "2021-05-28T18:04:57Z", "updated_at": "2022-09-08T12:27:30Z", "user": "yurkoff-mv" }, { "repo": "pytorch/TensorRT", "number": 490, "title": "\u2753 [Question] How could I integrate TensorRT's Group Normalization plugin into a TRTorch model ? ", "body": "## \u2753 Question\r\n\r\nWhat would be the steps to be able to use TensorRT's Group Normalization plugin into a TRTorch model ? 
\r\n\r\nThe plugin is defined [here](https://github.com/NVIDIA/TensorRT/tree/master/plugin/groupNormalizationPlugin)\r\n\r\n## Context\r\n\r\nBeing new to this, the Readme from core/conversion/converters didn't really clarify the steps I should follow to make the converter for a TensorRt plugin\r\n\r\n## Environment\r\n\r\nAs an environment I use the `docker/Dockerfile.20.10 -t trtorch:pytorch1.7-cuda11.1-trt7.2.1` from the commit 6bb9fbf561c9cc3f0f1c4c7dde3d61c88e687efc\r\n\r\nThank you for your time and consideration", "url": "https://github.com/pytorch/TensorRT/issues/490", "state": "closed", "labels": [ "question" ], "created_at": "2021-05-27T12:53:09Z", "updated_at": "2021-06-09T09:53:11Z", "user": "MatthieuToulemont" }, { "repo": "pytorch/cpuinfo", "number": 55, "title": "Compilation for freeRTOS", "body": "Hi all,\r\n\r\nWe are staring to look into using cpuinfo in a freeRTOS / ZedBoard setup.\r\nDo you know if any attempts to port this code to freeRTOS before?\r\nIf not, do you have any tips / advise on how to start this porting?\r\n\r\nThanks,\r\n\r\nPablo.", "url": "https://github.com/pytorch/cpuinfo/issues/55", "state": "open", "labels": [ "question" ], "created_at": "2021-05-26T09:57:18Z", "updated_at": "2024-01-11T00:56:44Z", "user": "pablogh-2000" }, { "repo": "pytorch/pytorch", "number": 58894, "title": "ease use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose. warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/58894", "state": "closed", "labels": [], "created_at": "2021-05-25T02:41:52Z", "updated_at": "2021-05-25T21:30:41Z", "user": "umie0128" }, { "repo": "pytorch/tutorials", "number": 1539, "title": "(Libtorch)How to use packed_accessor64 to access tensor elements in CUDA?", "body": "The [tutorial ](https://pytorch.org/cppdocs/notes/tensor_basics.html#cuda-accessors) gives an example about using _packed_accessor64_ to access tensor elements efficiently as follows. However, I still do not know how to use _packed_accessor64_. Can anyone give me a more specific example? 
Thanks.\r\n```\r\n__global__ void packed_accessor_kernel(\r\n PackedTensorAccessor64<float, 2> foo,\r\n float* trace) {\r\n int i=threadIdx.x\r\n gpuAtomicAdd(trace, foo[i][i])\r\n}\r\n \r\ntorch::Tensor foo = torch::rand({12, 12});\r\n \r\n// assert foo is 2-dimensional and holds floats.\r\nauto foo_a = foo.packed_accessor64<float,2>();\r\nfloat trace = 0;\r\n \r\npacked_accessor_kernel<<<1, 12>>>(foo_a, &trace);\r\n```\n\ncc @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/1539", "state": "open", "labels": [ "CUDA", "medium", "docathon-h2-2023" ], "created_at": "2021-05-24T15:56:26Z", "updated_at": "2023-11-14T06:41:03Z", "user": "tangyipeng100" }, { "repo": "pytorch/text", "number": 1316, "title": "How to load AG_NEWS data from local files", "body": "## How to load AG_NEWS data from local files\r\n\r\nI can't get ag news data with `train_iter, test_iter = AG_NEWS(split=('train', 'test'))` online because of my bad connection. So I download the the `train.csv` and `test.csv` manually to my local folder `AG_NEWS` from url `'train': \"https://raw.githubusercontent.com/mhjabreel/CharCnn_Keras/master/data/ag_news_csv/train.csv\",\r\n 'test': \"https://raw.githubusercontent.com/mhjabreel/CharCnn_Keras/master/data/ag_news_csv/test.csv\"`\r\n\r\nAfter that I tried to load ag news data with `train_iter, test_iter = AG_NEWS(root = './AG_NEWS', split=('train', 'test'))`, throw a exception `RuntimeError: The hash of /myfolder/AG_NEWS/train.csv does not match. Delete the file manually and retry.`\r\n\r\nMy file content is \r\n```\r\nmyfolder\r\n\u2502 \r\n\u2514\u2500\u2500\u2500AG_NEWS\r\n\u2502 \u2514\u2500\u2500\u2500 train.csv\r\n\u2502 \u2514\u2500\u2500\u2500 test.csv\r\n```\r\n", "url": "https://github.com/pytorch/text/issues/1316", "state": "open", "labels": [], "created_at": "2021-05-24T06:23:55Z", "updated_at": "2021-05-24T14:54:04Z", "user": "robbenplus" }, { "repo": "pytorch/tutorials", "number": 1534, "title": "Why libtorch tensor value assignment takes so much time?", "body": "I just assign 10000 values to a tensor:\r\n```\r\nclock_t start = clock();\r\ntorch::Tensor transform_tensor = torch::zeros({ 10000 });\r\nfor (size_t m = 0; m < 10000 m++)\r\n\ttransform_tensor[m] = int(m);\r\nclock_t finish = clock();\r\n```\r\nAnd it takes 0.317s. If I assign 10,000 to an array or a vector, the time cost will be less.\r\nWhy tensor takes so much time? Can the time cost be decreased?\r\n", "url": "https://github.com/pytorch/tutorials/issues/1534", "state": "open", "labels": [ "question", "Tensors" ], "created_at": "2021-05-24T01:59:42Z", "updated_at": "2023-03-08T16:31:16Z", "user": "tangyipeng100" }, { "repo": "pytorch/pytorch", "number": 58554, "title": "How to install pytorch1.8.1 with cuda 11.3?", "body": "How to install pytorch1.8.1 with cuda 11.3?", "url": "https://github.com/pytorch/pytorch/issues/58554", "state": "closed", "labels": [], "created_at": "2021-05-19T13:24:42Z", "updated_at": "2021-05-20T03:42:17Z", "user": "Bonsen" }, { "repo": "pytorch/xla", "number": 2957, "title": "How to compile xla_ltc_plugin", "body": "I was following https://github.com/pytorch/xla/tree/asuhan/xla_ltc_plugin to build ltc-based torch/xla. I compiled ltc successfully but encountered errors when compiling xla. I guess I must have missed something here. 
Help is greatly appreciated :) cc @asuhan \r\n\r\n<details>\r\n\r\n <summary>Error log</summary>\r\n\r\n```\r\n[1/14] clang++-8 -MMD -MF /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/version.o.d -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/ubuntu/pytorch/xla -I/home/ubuntu/pytorch/xla/../lazy_tensor_core -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-bin -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/home/ubuntu/pytorch -I/home/ubuntu/pytorch/torch/csrc -I/home/ubuntu/pytorch/torch/lib/tmp_install/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/TH -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/THC -I/home/ubuntu/anaconda3/envs/torch-dev/include/python3.7m -c -c /home/ubuntu/pytorch/xla/lazy_xla/csrc/version.cpp -o /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/version.o -std=c++14 -Wno-sign-compare -Wno-unknown-pragmas -Wno-deprecated-declarations -Wno-return-type -Wno-macro-redefined -Wno-return-std-move -DNDEBUG -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE=\"_gcc\"' '-DPYBIND11_STDLIB=\"_libstdcpp\"' '-DPYBIND11_BUILD_ABI=\"_cxxabi1011\"' -DTORCH_EXTENSION_NAME=_LAZYXLAC -D_GLIBCXX_USE_CXX11_ABI=1\r\n[2/14] clang++-8 -MMD -MF /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/compiler/data_ops.o.d -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/ubuntu/pytorch/xla -I/home/ubuntu/pytorch/xla/../lazy_tensor_core -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-bin -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/home/ubuntu/pytorch -I/home/ubuntu/pytorch/torch/csrc -I/home/ubuntu/pytorch/torch/lib/tmp_install/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/TH -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/THC -I/home/ubuntu/anaconda3/envs/torch-dev/include/python3.7m -c -c /home/ubuntu/pytorch/xla/lazy_xla/csrc/compiler/data_ops.cpp -o /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/compiler/data_ops.o -std=c++14 -Wno-sign-compare -Wno-unknown-pragmas -Wno-deprecated-declarations -Wno-return-type -Wno-macro-redefined -Wno-return-std-move -DNDEBUG -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE=\"_gcc\"' 
'-DPYBIND11_STDLIB=\"_libstdcpp\"' '-DPYBIND11_BUILD_ABI=\"_cxxabi1011\"' -DTORCH_EXTENSION_NAME=_LAZYXLAC -D_GLIBCXX_USE_CXX11_ABI=1\r\nFAILED: /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/compiler/data_ops.o \r\nclang++-8 -MMD -MF /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/compiler/data_ops.o.d -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/ubuntu/pytorch/xla -I/home/ubuntu/pytorch/xla/../lazy_tensor_core -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-bin -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/home/ubuntu/pytorch -I/home/ubuntu/pytorch/torch/csrc -I/home/ubuntu/pytorch/torch/lib/tmp_install/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/incl", "url": "https://github.com/pytorch/xla/issues/2957", "state": "closed", "labels": [ "stale" ], "created_at": "2021-05-19T09:31:12Z", "updated_at": "2021-07-08T09:11:02Z", "user": "hzfan" }, { "repo": "pytorch/pytorch", "number": 58530, "title": "How to remove layer use parent name", "body": "Hi, I am a new user of pytorch. I try to load trained model and want to remove the last layer named 'fc'\r\n\r\n```\r\nmodel = models.alexnet()\r\nmodel.fc = nn.Linear(4096, 4)\r\n\r\nckpt = torch.load('net_epoch_24.pth')\r\nmodel.load_state_dict(ckpt)\r\n \r\nmodel.classifier = nn.Sequential(nn.Linear(9216, 1024),\r\n nn.ReLU(),\r\n nn.Dropout(0.5),\r\n nn.Linear(1024, 8),\r\n nn.LogSoftmax(dim=1))\r\n\r\nprint(model)\r\n```\r\nprint out :\r\n```\r\nAlexNet(\r\n (features): Sequential(\r\n (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))\r\n (1): ReLU(inplace=True)\r\n (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\r\n (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\r\n (4): ReLU(inplace=True)\r\n (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\r\n (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\r\n (7): ReLU(inplace=True)\r\n (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\r\n (9): ReLU(inplace=True)\r\n (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\r\n (11): ReLU(inplace=True)\r\n (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\r\n )\r\n (avgpool): AdaptiveAvgPool2d(output_size=(6, 6))\r\n (classifier): Sequential(\r\n (0): Linear(in_features=9216, out_features=1024, bias=True)\r\n (1): ReLU()\r\n (2): Dropout(p=0.5, inplace=False)\r\n (3): Linear(in_features=1024, out_features=8, bias=True)\r\n (4): LogSoftmax(dim=1)\r\n )\r\n (fc): Linear(in_features=4096, out_features=4, bias=True)\r\n)\r\n```\r\n\r\nis there any simple way to remove the last layer ('fc') ?\r\n\r\nthanks", "url": "https://github.com/pytorch/pytorch/issues/58530", "state": "closed", "labels": [], "created_at": "2021-05-19T03:54:29Z", "updated_at": "2021-05-20T05:29:28Z", "user": "ramdhan1989" }, { "repo": "pytorch/pytorch", "number": 58460, "title": "how to convert 
scriptmodel to onnx?", "body": "how to convert scriptmodel to onnx?\r\nD:\\Python\\Python37\\lib\\site-packages\\torch\\onnx\\utils.py:348: UserWarning: Model has no forward function\r\n warnings.warn(\"Model has no forward function\")\r\nException occurred when processing textline: 1\n\ncc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof @SplitInfinity", "url": "https://github.com/pytorch/pytorch/issues/58460", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2021-05-18T03:15:29Z", "updated_at": "2022-02-24T08:22:22Z", "user": "williamlzw" }, { "repo": "pytorch/TensorRT", "number": 473, "title": "\u2753 Is it possible to use TRTorch with batchedNMSPlugin for TensorRT?", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\nHi, I am trying to convert detectron2 traced keypoint-rcnn model that contains ops from torchvision like torchvision::nms. I get the following error:\r\n\r\n> \r\n> terminate called after throwing an instance of 'torch::jit::ErrorReport'\r\n> what(): \r\n> Unknown builtin op: torchvision::nms.\r\n> Could not find any similar ops to torchvision::nms. This op may not exist or may not be currently supported in TorchScript.\r\n> :\r\n> File \"/usr/local/lib/python3.7/dist-packages/torchvision/ops/boxes.py\", line 36\r\n> \"\"\"\r\n> _assert_has_ops()\r\n> return torch.ops.torchvision.nms(boxes, scores, iou_threshold)\r\n> ~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n> Serialized File \"code/__torch__/torchvision/ops/boxes.py\", line 26\r\n> _8 = __torch__.torchvision.extension._assert_has_ops\r\n> _9 = _8()\r\n> _10 = ops.torchvision.nms(boxes, scores, iou_threshold)\r\n> ~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n> return _10\r\n> 'nms' is being compiled since it was called from 'batched_nms'\r\n> File \"/usr/local/lib/python3.7/dist-packages/torchvision/ops/boxes.py\", line 75\r\n> offsets = idxs.to(boxes) * (max_coordinate + torch.tensor(1).to(boxes))\r\n> boxes_for_nms = boxes + offsets[:, None]\r\n> keep = nms(boxes_for_nms, scores, iou_threshold)\r\n> ~~~ <--- HERE\r\n> return keep\r\n> Serialized File \"code/__torch__/torchvision/ops/boxes.py\", line 18\r\n> _7 = torch.slice(offsets, 0, 0, 9223372036854775807, 1)\r\n> boxes_for_nms = torch.add(boxes, torch.unsqueeze(_7, 1), alpha=1)\r\n> keep = __torch__.torchvision.ops.boxes.nms(boxes_for_nms, scores, iou_threshold, )\r\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n> _0 = keep\r\n> return _0\r\n> 'batched_nms' is being compiled since it was called from 'RPN.forward'\r\n> Serialized File \"code/__torch__/detectron2/modeling/proposal_generator/rpn.py\", line 19\r\n> argument_9: Tensor,\r\n> image_size: Tensor) -> Tensor:\r\n> _0 = __torch__.torchvision.ops.boxes.batched_nms\r\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\r\n> _1 = self.rpn_head\r\n> _2 = (self.anchor_generator).forward(argument_1, argument_2, argument_3, argument_4, argument_5, argument_6, argument_7, argument_8, )\r\n> \r\n\r\n<!-- A clear and concise description of what you have already done. 
-->\r\n\r\n## Environment\r\n\r\n - PyTorch Version: 1.8.0\r\n - CPU Architecture: arm64\r\n - OS (e.g., Linux): Ubuntu 18.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): .whl file for jetson\r\n - Build command you used (if compiling from source): `bazel build //:libtrtorch`\r\n - Are you using local sources or building from archives: Local\r\n - Python version: 3.7\r\n - CUDA version: 10.2\r\n - GPU models and configuration: Nvidia Jetson Xavier nx\r\n - Any other relevant information: torchvision C++ API compiled locally\r\n\r\n## Additional context\r\n\r\nI know that there is [batchedNMSPlugin](https://www.ccoderun.ca/programming/doxygen/tensorrt/md_TensorRT_plugin_batchedNMSPlugin_README.html) for TensorRT, but I have no idea how to include it for conversion. I'd appreciate any advice.\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/473", "state": "closed", "labels": [ "question" ], "created_at": "2021-05-16T11:28:14Z", "updated_at": "2022-08-20T07:31:37Z", "user": "VRSEN" }, { "repo": "pytorch/extension-cpp", "number": 72, "title": "How does the layer of C++ extensions translate to TorchScript or onnx?", "body": "\r\n", "url": "https://github.com/pytorch/extension-cpp/issues/72", "state": "open", "labels": [], "created_at": "2021-05-14T09:50:12Z", "updated_at": "2025-08-26T03:36:50Z", "user": "yanglinxiabuaaa" }, { "repo": "pytorch/vision", "number": 3832, "title": "Error converting to onnx: forward function contains for loop", "body": "Hello, there is a for loop in my forward function. When I turned to onnx, the following error occurred:\r\n\r\n`[ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Split node. Name:'Split_ 1277' Status Message: Cannot split using values in 'split' attribute. Axis=0 Input shape={59} NumOutputs=17 Num entries in 'split' (must equal number of outputs) was 17 Sum of sizes in 'split' (must equal size of selected axis) was 17`\r\n\r\nPart of my forward code\uff1a\r\n```\r\n y, ey, x, ex = pad(boxes, w, h)\r\n if len(boxes) > 0:\r\n im_data = []\r\n indx_y = torch.where(ey > y-1)[0]\r\n for ind in indx_y:\r\n img_k = imgs[image_inds[ind],:, (y[ind] - 1).type(torch.int64):ey[ind].type(torch.int64), (x[ind]-1).type(torch.int64):ex[ind].type(torch.int64)].unsqueeze(0)\r\n im_data.append(imresample(img_k, (24, 24)))\r\n im_data = torch.cat(im_data, dim=0)\r\n return im_data\r\n```\r\nI found that during the first onnx conversion, the for loop was executed 17 times, but when I tested it, the for loop required 59 times, so there was an error. In the forward function, indx_y is dynamic, so the number of for loops is also dynamic. Is there any way to solve this problem\uff1f\r\n\n\ncc @neginraoof", "url": "https://github.com/pytorch/vision/issues/3832", "state": "open", "labels": [ "question", "awaiting response", "module: onnx" ], "created_at": "2021-05-14T03:52:58Z", "updated_at": "2021-05-18T09:42:32Z", "user": "wytcsuch" }, { "repo": "pytorch/vision", "number": 3825, "title": "Why does RandomErasing transform aspect ratio use log scale", "body": "See from https://github.com/pytorch/vision/commit/06a5858b3b73d62351456886f0a9f725fddbb3fe the aspect ratio is chosen randomly from a log scale\r\n\r\nI didn't see this in the original paper? And in the reference implementation. 
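One likely reason for the log-scale draw (my reading, not an official statement): sampling the aspect ratio uniformly on a log scale makes a ratio `r` and its reciprocal `1/r` equally likely, so tall and wide erasing boxes are balanced, whereas a plain uniform draw over `(0.3, 1/0.3)` is heavily biased toward ratios above 1. A quick check:

```python
import math
import torch

r1, r2 = 0.3, 1 / 0.3
n = 100_000

uniform = torch.empty(n).uniform_(r1, r2)
log_uniform = torch.exp(torch.empty(n).uniform_(math.log(r1), math.log(r2)))

# Fraction of sampled boxes that are wider than tall:
print((uniform > 1).float().mean())      # ~0.77 -> wide boxes dominate
print((log_uniform > 1).float().mean())  # ~0.50 -> symmetric around a ratio of 1
```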
\r\n\r\nhttps://github.com/zhunzhong07/Random-Erasing/blob/c699ae481219334755de93e9c870151f256013e4/transforms.py#L38 \n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/3825", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2021-05-13T11:43:04Z", "updated_at": "2021-05-13T12:05:11Z", "user": "jxu" }, { "repo": "pytorch/vision", "number": 3822, "title": "torchvision C++ compiling ", "body": "1. quesion:\r\n\r\nWhen I trying to compile torchvision from source in c++ language, the terminal thow erros:\r\nIn file included from /home/pc/anaconda3/include/python3.8/pytime.h:6:0,\r\n from /home/pc/anaconda3/include/python3.8/Python.h:85,\r\n from /media/pc/data/software/vision-0.9.0/torchvision/csrc/vision.cpp:4:\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray \u2018\\343\u2019 in program\r\n const std::vector<IValue>& slots() const {\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray \u2018\\200\u2019 in program\r\n const std::vector<IValue>& slots() const {\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray \u2018\\200\u2019 in program\r\n const std::vector<IValue>& slots() const {\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray \u2018\\343\u2019 in program\r\n const std::vector<IValue>& slots() const {\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray \u2018\\200\u2019 in program\r\n const std::vector<IValue>& slots() const {\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray \u2018\\200\u2019 in program\r\n const std::vector<IValue>& slots() const {\r\n ^\r\nIn file included from /media/pc/data/software/libtorch/include/c10/core/DispatchKey.h:6:0,\r\n from /media/pc/data/software/libtorch/include/torch/library.h:61,\r\n from /media/pc/data/software/vision-0.9.0/torchvision/csrc/vision.cpp:6:\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray \u2018\\343\u2019 in program\r\n obj->slots().size() == 1,\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray \u2018\\200\u2019 in program\r\n obj->slots().size() == 1,\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray \u2018\\200\u2019 in program\r\n obj->slots().size() == 1,\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray \u2018\\343\u2019 in program\r\n obj->slots().size() == 1,\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray \u2018\\200\u2019 in program\r\n obj->slots().size() == 1,\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray \u2018\\200\u2019 in program\r\n obj->slots().size() == 1,\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray \u2018\\343\u2019 in program\r\n obj->slots().size() == 1,\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray \u2018\\200\u2019 in program\r\n obj->slots().size() == 1,\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray \u2018\\200\u2019 in program\r\n obj->slots().size() == 1,\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray \u2018\\343\u2019 in program\r\n obj->slots().size() == 1,\r\n 
^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray \u2018\\200\u2019 in program\r\n obj->slots().size() == 1,\r\n ^\r\n/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray \u2018\\200\u2019 in program\r\n obj->slots().size() == 1,\r\n ^\r\n\r\n2. enviroment:\r\n libtorch: 1.8.1\r\n vision: 0.9.1\r\n cmake: 3.19.6\r\n gcc: 7.5.0\r\n python: 3.8.5\r\n system: Ubuntu 18.04\r\n\r\n3. compile code\r\n cmake -DCMAKE_PREFIX_PATH=/media/pc/data/software/libtorch -DCMAKE_INSTALL_PREFIX=/media/pc/data/software/torchvision/install -DCMAKE_BUILD_TYPE=Release -DWITH_CUDA=ON ..\r\n\r\nThanks~\uff01", "url": "https://github.com/pytorch/vision/issues/3822", "state": "closed", "labels": [ "question" ], "created_at": "2021-05-13T10:29:13Z", "updated_at": "2021-05-13T12:02:06Z", "user": "swordnosword" }, { "repo": "pytorch/text", "number": 1305, "title": "On Vocab Factory functions behavior", "body": "Related discussion #1016 \r\nRelated PRs #1304, #1302 \r\n\r\n---------\r\n\r\ntorchtext provides several factory functions to construct [Vocab class](https://github.com/pytorch/text/blob/f7a6fbd3a910c4066b9a748545df388ae5933a6a/torchtext/vocab.py#L19) object. The primary ways to construct vocabulary are:\r\n\r\n1. Reading raw text from file followed by tokenization to get token entries.\r\n2. Reading token entries directly from file\r\n3. Through iterators that yields iterator or list of tokens\r\n3. Through user supplied ordered dictionary that maps tokens to their corresponding occurrence frequencies\r\n\r\nTypically a vocabulary not only serve the purpose of numericalizing supplied tokens, but they also provide index for special occasions for example when the queried token is out of vocabulary (OOV) or when we need indices for special places like padding, masking, sentence beginning and end etc. \r\n\r\nAs the NLP is fast evolving, research and applied community alike will find novel and creative ways to push the frontiers of the field. Hence as a platform provider for NLP research and application, it is best not to make assumptions on special symbols including unknown token. We shall provide the aforementioned factory functions with minimal API requirements. We would expect the user to set the special symbols and fallback index through low level APIs of Vocab class. \r\n\r\nBelow are the examples of few scenarios and use cases:\r\n\r\nNote that querying OOV token through Vocab object without setting default index would raise RuntimeError. Hence it is necessary to explicitly set this through API unless user wants to explicitly handle the runtime error as and when it happens. 
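As a quick illustration of that lookup behaviour, a rough sketch is given here; the method names follow the factory-function examples further below and may differ from the final API.

```python
from collections import OrderedDict
from torchtext.vocab import vocab as vocab_factory

v = vocab_factory(OrderedDict([("hello", 4), ("world", 3)]), min_freq=1)

v["hello"]          # returns the index assigned to "hello"
# v["oov-token"]    # RuntimeError: no default index has been set

v.insert_token("<unk>", 0)
v.set_default_index(v["<unk>"])
v["oov-token"]      # now falls back to the index of "<unk>"
```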
In below examples we set the default index to be same as index of `<unk>` token.\r\n\r\nExample 1: Creating Vocab through text file and explicitly handling special symbols and fallback scenario\r\n```\r\nfrom torchtext.vocab import build_vocab_from_text_file\r\nvocab = build_vocab_from_text_file(\"path/to/raw_text.txt\", min_freq = 1)\r\nspecial_symbols = {'<unk>':0,'<pad>':1,'<s>':2,'</s>':3} \r\ndefault_index = special_symbols['<unk>']\r\nfor token, index in special_symbols.items():\r\n if token in vocab:\r\n vocab.reassign_token(token, index)\r\n else:\r\n vocab.insert_token(token, index)\r\nvocab.set_default_index(default_index)\r\n```\r\n\r\nExample 2: Reading vocab directly from file with all the special symbols and setting fallback index to unknown token\r\n```\r\nfrom torchtext.vocab import build_vocab_from_file\r\nunk_token = '<unk>'\r\nvocab = build_vocab_from_text_file(\"path/to/tokens.txt\", min_freq = 1)\r\nassert unk_token in vocab\r\nvocab.set_default_index(vocab[unk_token])\r\n```\r\n\r\nExample 3: Building Vocab using Iterators and explicitly adding special symbols and fallback index\r\n```\r\nfrom torchtext.vocab import build_vocab_from_iterator\r\nspecial_symbols = {'<unk>':0,'<pad>':1,'<s>':2,'</s>':3} \r\nvocab = build_vocab_from_iterator(iter_obj, min_freq = 1)\r\nfor token, index in special_symbols.items():\r\n if token in vocab:\r\n vocab.reassign_token(token, index)\r\n else:\r\n vocab.insert_token(token, index)\r\nvocab.set_default_index(vocab[unk_token])\r\n```\r\n\r\nExample 4: Creating vocab through user supplied ordered dictionary that also contains all the special symbols\r\n```\r\nfrom torchtext.vocab import vocab as vocab_factory\r\nunk_token = '<unk>'\r\nvocab = vocab_factory(ordered_dict, min_freq = 1)\r\nassert unk_token in vocab\r\nvocab.set_default_index(vocab[unk_token])\r\n```\r\n\r\nFurthermore, legacy [Vocab class constructor](https://github.com/pytorch/text/blob/f7a6fbd3a910c4066b9a748545df388ae5933a6a/torchtext/legacy/vocab.py#L28) provide additional arguments to build Vocab using [Counters](https://docs.python.org/3/library/collections.html#collections.Counter). Here it provide support to add special symbols directly through input arguments rather than calling any low-level API. \r\n\r\n\r\nWe would love to hear from our users and community if the factory functions above is a good trade-off between flexibility and abstraction or if users would like to handle special symbols and default index through API arguments instead of explicitly calling the low level APIs of Vocab class.\r\n\r\nwith @cpuhrsch \r\n\r\ncc: @hudeven, @snisarg, @dongreenberg \r\n\r\n\r\n", "url": "https://github.com/pytorch/text/issues/1305", "state": "open", "labels": [ "enhancement", "question", "need discussions" ], "created_at": "2021-05-13T02:52:19Z", "updated_at": "2021-05-13T04:07:13Z", "user": "parmeet" }, { "repo": "pytorch/functorch", "number": 23, "title": "Figure out how to transform over optimizers", "body": "One way to transform over training loops (e.g. to do model ensembling or the inner step of a MAML) is to use a function that represents the optimizer step instead of an actual PyTorch optimizer. Right now I think we have the following requirements\r\n- There should be a function version of each optimizer (e.g. `F.sgd`)\r\n- The function should have an option to not mutate (e.g. 
`F.sgd(..., inplace=False)`)\r\n- The function should be differentiable\r\n\r\nPyTorch already has some here (in Prototype stage): https://github.com/pytorch/pytorch/blob/master/torch/optim/_functional.py, so we should check if these fit the requirements, and, if not, decide if we should influence the design", "url": "https://github.com/pytorch/functorch/issues/23", "state": "open", "labels": [], "created_at": "2021-05-11T13:13:39Z", "updated_at": "2021-05-11T13:13:39Z", "user": "zou3519" }, { "repo": "pytorch/vision", "number": 3811, "title": "Mask-rcnn training - all AP and Recall scores in \u201cIoU Metric: segm\u201d remain 0", "body": "With torchvision\u2019s pre-trained mask-rcnn model, trying to train on a custom dataset prepared in COCO format.\r\n\r\nUsing torch/vision/detection/engine\u2019s `train_one_epoch` and `evaluate` methods for training and evaluation, respectively.\r\n\r\nThe loss_mask metric is reducing as can be seen here:\r\n```\r\nEpoch: [5] [ 0/20] eta: 0:00:54 lr: 0.005000 loss: 0.5001 (0.5001) loss_classifier: 0.2200 (0.2200) loss_box_reg: 0.2616 (0.2616) loss_mask: 0.0014 (0.0014) loss_objectness: 0.0051 (0.0051) loss_rpn_box_reg: 0.0120 (0.0120) time: 2.7308 data: 1.2866 max mem: 9887\r\nEpoch: [5] [10/20] eta: 0:00:26 lr: 0.005000 loss: 0.4734 (0.4982) loss_classifier: 0.2055 (0.2208) loss_box_reg: 0.2515 (0.2595) loss_mask: 0.0012 (0.0013) loss_objectness: 0.0038 (0.0054) loss_rpn_box_reg: 0.0094 (0.0113) time: 2.6218 data: 1.1780 max mem: 9887\r\nEpoch: [5] [19/20] eta: 0:00:02 lr: 0.005000 loss: 0.5162 (0.5406) loss_classifier: 0.2200 (0.2384) loss_box_reg: 0.2616 (0.2820) loss_mask: 0.0014 (0.0013) loss_objectness: 0.0051 (0.0062) loss_rpn_box_reg: 0.0120 (0.0127) time: 2.6099 data: 1.1755 max mem: 9887\r\n```\r\nBut the `evaluate` output shows absolutely no improvement from zero for IoU segm metric:\r\n\r\nIoU metric: bbox\r\n```\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.653\r\n Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.843\r\n Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.723\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.788\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.325\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.701\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.738\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.739\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.832\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.456\r\nIoU metric: segm\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | 
maxDets=100 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000\r\n````\r\nThe segm metrics don\u2019t improve even after training 500 epochs.\r\n\r\nAnd, the masks that I get as output after training for 100 or 500 epochs, if I visualize, they are showing a couple of dots here and there.\r\n\r\nWith the same dataset and annotations json, I was able to train instance seg model on detectron2. the the segmentation IoU metrics have clearly improved by each epoch.\r\n\r\nPlease suggest, what needs to be done. Posting here as there was no response on discuss.pytorch forum for 5 days\n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/3811", "state": "open", "labels": [ "question", "topic: semantic segmentation" ], "created_at": "2021-05-11T12:09:41Z", "updated_at": "2023-03-02T19:34:03Z", "user": "hemasunder" }, { "repo": "pytorch/TensorRT", "number": 449, "title": "error: \u2018tryTypeMetaToScalarType\u2019 is not a member of \u2018c10\u2019", "body": "## \u2753 CMake building error using this [repo](https://github.com/JosephChenHub/TRTorch)\r\n\r\n<!-- Your question -->\r\nHow to build the TRTorch or use the release packages of TRTorch in Ubuntu 18.04?\r\n## What you have already tried\r\nTried build TRTorch through CMakeLists.txt provided by [this](https://github.com/NVIDIA/TRTorch/issues/263).\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - OS (e.g., Linux):Ubuntu18.04\r\n - Build command you used (if compiling from source):cmake.. 
and make\r\n - Are you using local sources or building from archives:Yes\r\n - CUDA version:11.1\r\n - TensorRT version:7.2.3.4\r\n - Make error:\r\nA bunch of warnings and then the error:\r\n```\r\n/home/SENSETIME/dongchunyu/dongchunyu/depends/TensorRT-7.2.3.4/include/NvInfer.h:3250:22: note: declared here\r\n class TRT_DEPRECATED IRNNv2Layer : public ILayer\r\n ^~~~~~~~~~~\r\n/home/SENSETIME/dongchunyu/dongchunyu/depends/TensorRT-7.2.3.4/include/NvInfer.h:5662:85: warning: \u2018IPluginLayer\u2019 is deprecated [-Wdeprecated-declarations]\r\n ITensor* const* inputs, int32_t nbInputs, IPluginExt& plugin) TRTNOEXCEPT = 0;\r\n ^\r\n/home/SENSETIME/dongchunyu/dongchunyu/depends/TensorRT-7.2.3.4/include/NvInfer.h:3454:22: note: declared here\r\n class TRT_DEPRECATED IPluginLayer : public ILayer\r\n ^~~~~~~~~~~~\r\n/home/SENSETIME/dongchunyu/dongchunyu/codes/c++/tmp/TRTorch/core/util/trt_util.cpp: In function \u2018c10::optional<nvinfer1::DataType> trtorch::core::util::toTRTDataType(caffe2::TypeMeta)\u2019:\r\n/home/SENSETIME/dongchunyu/dongchunyu/codes/c++/tmp/TRTorch/core/util/trt_util.cpp:270:21: error: \u2018tryTypeMetaToScalarType\u2019 is not a member of \u2018c10\u2019\r\n if (auto t = c10::tryTypeMetaToScalarType(dtype)) {\r\n ^~~~~~~~~~~~~~~~~~~~~~~\r\n/home/SENSETIME/dongchunyu/dongchunyu/codes/c++/tmp/TRTorch/core/util/trt_util.cpp:270:21: note: suggested alternative: \u2018optTypeMetaToScalarType\u2019\r\n if (auto t = c10::tryTypeMetaToScalarType(dtype)) {\r\n ^~~~~~~~~~~~~~~~~~~~~~~\r\n optTypeMetaToScalarType\r\nCMakeFiles/util.dir/build.make:110: recipe for target 'CMakeFiles/util.dir/core/util/trt_util.cpp.o' failed\r\nmake[2]: *** [CMakeFiles/util.dir/core/util/trt_util.cpp.o] Error 1\r\nCMakeFiles/Makefile2:219: recipe for target 'CMakeFiles/util.dir/all' failed\r\nmake[1]: *** [CMakeFiles/util.dir/all] Error 2\r\nMakefile:83: recipe for target 'all' failed\r\nmake: *** [all] Error 2\r\n```\r\n\r\n## Additional context\r\nWishing for official cmake tool!!!\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/449", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-05-10T06:45:33Z", "updated_at": "2021-11-01T00:01:56Z", "user": "AllentDan" }, { "repo": "pytorch/vision", "number": 3801, "title": "Unable to train the keypointrcnn_resnet50_fpn model", "body": "## \u2753 The Predictions after training the model are empty.\r\n\r\n### I'm trying to train the model for keypoints detection, & bounding boxes, using the script in the references section, while training the loss_keypoint is always 0.0000. 
Using those weights for prediction is giving no predictions at all.\r\n\r\nI'm running this on a Windows 2016 server (EC2 instance on AWS), with a single GPU (Instance Type: p2.xlarge)\r\n\r\nMy Dataset is in COCO format, but I'm using only 14 keypoints per person, so I had defined the model in the train.py file as below:\r\n```\r\nmodel = torchvision.models.detection.keypointrcnn_resnet50_fpn(\r\n pretrained=False, progress=True, num_classes=1, num_keypoints=14, \r\n pretrained_backbone=True, trainable_backbone_layers=None)\r\n```\r\n\r\n& I've made appropriate changes in coco_utils.py for Keypoint flip.\r\n\r\n**Training**\r\nCommand:\r\n```\r\npython train.py --dataset coco_kp2 --model keypointrcnn_resnet50_fpn --epochs 1 --lr-steps 36 43 \r\n--aspect-ratio-group-factor 3\r\n```\r\nOutput:\r\n```\r\nNot using distributed mode\r\nNamespace(aspect_ratio_group_factor=3, batch_size=2, data_augmentation='hflip', data_path='/datasets01/COCO/022719/', dataset='coco_kp2', device='cuda', dist_url='env://', distributed=False, epochs=1, lr=0.02, lr_gamma=0.1, lr_step_size=8, lr_steps=[36, 43], model='keypointrcnn_resnet50_fpn', momentum=0.9, output_dir='.', pretrained=False, print_freq=20, resume='', rpn_score_thresh=None, start_epoch=0, test_only=False, trainable_backbone_layers=None, weight_decay=0.0001, workers=4, world_size=1)\r\nLoading data\r\nloading annotations into memory...\r\nDone (t=0.02s)\r\ncreating index...\r\nindex created!\r\nloading annotations into memory...\r\nDone (t=0.02s)\r\ncreating index...\r\nindex created!\r\nCreating data loaders\r\nUsing [0, 0.5, 0.6299605249474366, 0.7937005259840997, 1.0, 1.2599210498948732, 1.5874010519681994, 2.0, inf] as bins for aspect ratio quantization\r\nCount of instances per bin: [180]\r\nCreating model\r\nStart training\r\nEpoch: [0] [ 0/90] eta: 0:11:55 lr: 0.000244 loss: 0.7178 (0.7178) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.6962 (0.6962) loss_rpn_box_reg: 0.0216 (0.0216) time: 7.9505 data: 5.2040 max mem: 2618\r\nEpoch: [0] [20/90] eta: 0:02:04 lr: 0.004734 loss: 0.6764 (0.6253) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.6526 (0.6053) loss_rpn_box_reg: 0.0186 (0.0200) time: 1.4630 data: 0.0062 max mem: 2951\r\nEpoch: [0] [40/90] eta: 0:01:22 lr: 0.009224 loss: 0.0664 (0.3587) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.0488 (0.3400) loss_rpn_box_reg: 0.0147 (0.0186) time: 1.5132 data: 0.0061 max mem: 2951\r\nEpoch: [0] [60/90] eta: 0:00:47 lr: 0.013714 loss: 0.0196 (0.2480) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.0072 (0.2316) loss_rpn_box_reg: 0.0118 (0.0164) time: 1.4801 data: 0.0065 max mem: 2951\r\nEpoch: [0] [80/90] eta: 0:00:15 lr: 0.018204 loss: 0.0192 (0.1919) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.0067 (0.1761) loss_rpn_box_reg: 0.0121 (0.0158) time: 1.4868 data: 0.0062 max mem: 2951\r\nEpoch: [0] [89/90] eta: 0:00:01 lr: 0.020000 loss: 0.0182 (0.1745) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.0067 (0.1591) loss_rpn_box_reg: 0.0107 (0.0153) time: 1.4933 data: 0.0053 max mem: 2951\r\nEpoch: [0] Total time: 0:02:20 (1.5584 s / it)\r\nTest: [ 0/90] eta: 0:08:32 model_time: 0.4447 (0.4447) evaluator_time: 
0.0010 (0.0010) time: 5.6930 data: 5.2317 max mem: 2951\r\nTest: [89/90] eta: 0:00:00 model_time: 0.3594 (0.3613) evaluator_time: 0.0010 (0.0011) time: 0.3689 data: 0.0033 max mem: 2951\r\nTest: Total time: 0:00:38 (0.4315 s / it)\r\nAveraged stats: model_time: 0.3594 (0.3613) evaluator_time: 0.0010 (0.0011)\r\nAccumulating evaluation results...\r\nDONE (t=0.02s).\r\nAccumulating evaluation results...\r\nDONE (t=0.00s).\r\nIoU metric: bbox\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | ar", "url": "https://github.com/pytorch/vision/issues/3801", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: object detection" ], "created_at": "2021-05-09T16:28:57Z", "updated_at": "2021-05-10T15:54:04Z", "user": "d1nz-g33k" }, { "repo": "pytorch/serve", "number": 1055, "title": "How i do models chain processing and batch processing for analyzing text data?", "body": "Hello, I wanted to thank you for creating such a convenient and easily deployable model service.\r\nI have several questions / suggestions (maybe they have already been implemented).\r\n\r\nThe first thing I would like to know / get is the launch of a chain of models... Example: I have a basic model, let's say BERT, I would like to use it to get embeddings for further solving other tasks, such as classification, text summarization, Question/Answering etc. Those I would like to transfer data from the base model (BERT) to other models for solving particular problems (QA_model, classifier_model, summarizer_model). I would like to be able to dynamically change the output.\r\n```\r\n[\r\n {\r\n \"modelName\": \"BERT\",\r\n \"modelVersion\": \"1.0\",\r\n }\r\n \r\n {\r\n \"modelName\": \"QA_model\",\r\n \"modelVersion\": \"1.0\",\r\n }\r\n {\r\n \"modelName\": \"classifier_model\",\r\n \"modelVersion\": \"1.0\",\r\n }\r\n {\r\n \"modelName\": \"summarizer_model\",\r\n \"modelVersion\": \"1.0\",\r\n }\r\n]\r\n```\r\n\r\nThe second question is how to perform batch processing of text data in order to get execution for several sentences at once? 
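One way server-side batching is commonly configured is sketched below (the model archive name and the numbers are illustrative only): register the model with a `batch_size` and a `max_batch_delay`, after which requests arriving within that delay window are grouped and passed to the handler as a list.

```python
import requests

# Hypothetical registration call against the management API: requests that
# arrive within max_batch_delay milliseconds are grouped into batches of up
# to batch_size items before reaching the handler.
resp = requests.post(
    "http://localhost:8081/models",
    params={"url": "bert_base.mar", "batch_size": 8, "max_batch_delay": 50},
)
print(resp.status_code, resp.text)
```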
And what are the restrictions on the batch size?", "url": "https://github.com/pytorch/serve/issues/1055", "state": "closed", "labels": [ "question" ], "created_at": "2021-05-08T10:42:46Z", "updated_at": "2021-05-17T09:19:23Z", "user": "yurkoff-mv" }, { "repo": "pytorch/vision", "number": 3784, "title": "Could T.Lambda be nn.Module?", "body": "It would allow it to be placed in nn.ModuleList for passing to RandomApply (for scriptability)\r\n\r\nhttps://pytorch.org/vision/stable/transforms.html#torchvision.transforms.RandomApply\n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/3784", "state": "open", "labels": [ "question", "module: transforms" ], "created_at": "2021-05-06T12:22:19Z", "updated_at": "2021-05-07T14:13:30Z", "user": "vadimkantorov" }, { "repo": "pytorch/vision", "number": 3783, "title": "[docs] Unclear if to_pil_image / to_tensor copy or zero-copy for CPU<->CPU", "body": "It currently uses a vague language \"convert\". It's not sure if \"conversion\" incurs a copy or not\n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/3783", "state": "open", "labels": [ "question", "module: transforms" ], "created_at": "2021-05-06T11:52:59Z", "updated_at": "2021-05-12T11:53:46Z", "user": "vadimkantorov" }, { "repo": "pytorch/vision", "number": 3782, "title": "ToTensor confuse me with the way it takes input", "body": "So the `ToTensor` class of `to_tensor` function takes input in the dimension of (H, W) while PIL has it's images dimension be (W, H).\r\nWhy is this transpose ?\n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/3782", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2021-05-06T10:17:52Z", "updated_at": "2021-05-07T06:49:38Z", "user": "MohamedAliRashad" }, { "repo": "pytorch/vision", "number": 3772, "title": "Unable to get segmented mask output image", "body": ".", "url": "https://github.com/pytorch/vision/issues/3772", "state": "closed", "labels": [ "question", "awaiting response", "topic: semantic segmentation" ], "created_at": "2021-05-05T08:31:26Z", "updated_at": "2021-06-01T06:16:06Z", "user": "shubhamkotal" }, { "repo": "pytorch/tutorials", "number": 1506, "title": "Seq2seq Transformer Tutorial best model saving", "body": "In [this tutorial](https://pytorch.org/tutorials/beginner/transformer_tutorial.html#load-and-batch-data), it says \"Save the model if the validation loss is the best we\u2019ve seen so far.\", and then the following code follows (also [here](https://github.com/pytorch/tutorials/blob/master/beginner_source/transformer_tutorial.py#L324-L326)).\r\n\r\n```\r\nif val_loss < best_val_loss:\r\n best_val_loss = val_loss\r\n best_model = model\r\n```\r\nHowever, my understanding is that this kind of checkpointing won't work, as `best_model` will contain a pointer to the same set of parameters as `model` (which will be updated). 
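A minimal sketch of that aliasing (illustrative, not the tutorial's code) and of the usual workaround of snapshotting a deep copy instead:

```python
import copy
import torch

model = torch.nn.Linear(4, 2)

best_model = model                               # alias: same module object
best_state = copy.deepcopy(model.state_dict())   # snapshot: detached copy of the weights

with torch.no_grad():
    model.weight.add_(1.0)                       # simulate a later training update

print(next(best_model.parameters()) is next(model.parameters()))  # True: alias tracks the update
print(torch.equal(best_state["weight"], model.weight))            # False: snapshot does not
```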
I tried to verify this by checking that `next(model.parameters())` and `next(best_model.parameters())` are identical, and it seemed like that was the case (although admittedly I did not check that the last model was indeed not the best one).\r\n\n\ncc @pytorch/team-text-core @Nayef211", "url": "https://github.com/pytorch/tutorials/issues/1506", "state": "closed", "labels": [ "question", "Text", "module: torchtext" ], "created_at": "2021-05-05T05:04:47Z", "updated_at": "2023-03-08T20:55:00Z", "user": "micahcarroll" }, { "repo": "pytorch/xla", "number": 2927, "title": "How to install torch_xla with python version 3.9.2", "body": "## \u2753 Questions and Help\r\n\r\nI have to use 3.9.2 for other dependency. Given that my python version must be 3.9.2, how do I install torch_xla ? \r\n\r\nI tried these 2 method shown in tutorial \r\n\r\n1)\r\n\r\n`!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8.1-cp37-cp37m-linux_x86_64.whl` \r\n\r\nThis ( and c38 c39 variant of it ) does not work. \r\n\r\n<img width=\"682\" alt=\"Screen Shot 2021-05-03 at 11 18 17 PM\" src=\"https://user-images.githubusercontent.com/14815380/116957477-ec3c9600-ac65-11eb-8e2c-ba5c7050af25.png\">\r\n\r\n2)\r\n```\r\nVERSION = \"20200516\" #@param [\"1.5\" , \"20200516\", \"nightly\"]\r\n!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py\r\n!python pytorch-xla-env-setup.py --version $VERSION\r\n```\r\n\r\nRunning these gives\r\n\r\n<img width=\"706\" alt=\"Screen Shot 2021-05-03 at 11 20 02 PM\" src=\"https://user-images.githubusercontent.com/14815380/116957546-18f0ad80-ac66-11eb-97c2-4d134f50f90e.png\">\r\n\r\n\r\n## Question\r\n\r\nHow do I use xla with python 3.9.2 ?\r\n\r\nDo I must have python 3.7.x in order to use xla ???", "url": "https://github.com/pytorch/xla/issues/2927", "state": "closed", "labels": [ "stale" ], "created_at": "2021-05-04T03:21:02Z", "updated_at": "2021-06-22T17:43:47Z", "user": "sirgarfieldc" }, { "repo": "pytorch/vision", "number": 3767, "title": "Failing to load the pre-trained weights on multi-gpus.", "body": "## \ud83d\udc1b Bug\r\n\r\nDownloading the pre-trained weights for following models, Alexnet, Resnet_152, Resnet -18, SqueezeNet, VGG11 and trying to load them on any gpu other than cuda:0, it throw error.\r\n\r\n\r\n## To Reproduce\r\n\r\nwget https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth\r\n\r\n```\r\nimport torch\r\nfrom torchvision.models.alexnet import AlexNet\r\nclass ImageClassifier(AlexNet):\r\n def __init__(self):\r\n super(ImageClassifier, self).__init__() \r\ndevice1='cuda:0'\r\ndevice2='cuda:2'\r\nmodel = ImageClassifier()\r\nstate_dict = torch.load(\"alexnet-owt-4df8aa71.pth\", map_location=device2)\r\nmodel.load_state_dict(state_dict)\r\nmodel = model.to(device2)\r\n```\r\n\r\n## Error \r\n_File \"test_device.py\", line 16, in\r\nstate_dict = torch.load(\"alexnet-owt-4df8aa71.pth\", map_location=device2)........_\r\n\r\n_RuntimeError: Attempted to set the storage of a tensor on device \"cuda:0\" to a storage on different device \"cuda:2\". 
This is no longer allowed; the devices must match_\r\n\r\n## Expected behavior\r\n\r\nBe able to load the state_dict on any cuda device using map_location.\r\n\r\n## Enviroment\r\n - PyTorch / torchvision Version (e.g., 1.0 / 0.4.0):1.7.1, 1.8.0,1.8.1\r\n - OS (e.g., Linux): ubuntu 18.04\r\n - How you installed PyTorch / torchvision (`conda`, `pip`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Python version: 3.7\r\n - CUDA/cuDNN version:10.2\r\n - GPU models and configuration: Nvidia Tesla k80\r\n - Any other relevant information:\r\n\r\n## Additional context\r\nThese models are being used in Torchserve examples are failing in multi-gpu setting to be loaded on different cuda device. As a work around in Torchserve stated dicts are loaded first on cuda:0 then move the model to another device/ cuda+ids which creates this [issue](https://github.com/pytorch/serve/issues/1037) where it results in duplicated processes on two gpus and adding to the memory footprint.\r\n", "url": "https://github.com/pytorch/vision/issues/3767", "state": "closed", "labels": [ "question" ], "created_at": "2021-05-03T21:33:58Z", "updated_at": "2023-08-22T16:03:22Z", "user": "HamidShojanazeri" }, { "repo": "pytorch/serve", "number": 1045, "title": "How to add a new handler guide", "body": "Goal is to support new use cases easily\r\n\r\nThe base handler is also quite general in its capabilities so want to showcase a bit more what can be done", "url": "https://github.com/pytorch/serve/issues/1045", "state": "closed", "labels": [ "documentation", "enhancement" ], "created_at": "2021-04-28T20:43:10Z", "updated_at": "2021-05-05T19:17:39Z", "user": "msaroufim" }, { "repo": "pytorch/pytorch", "number": 57118, "title": "How to view VLOG information", "body": " How to use VLOG, which is same to specify TF_CPP_MIN_VLOG_LEVEL variable in TensorFlow.\r\n", "url": "https://github.com/pytorch/pytorch/issues/57118", "state": "open", "labels": [ "module: logging", "triaged" ], "created_at": "2021-04-28T11:16:12Z", "updated_at": "2024-09-04T19:25:04Z", "user": "HangJie720" }, { "repo": "pytorch/vision", "number": 3746, "title": "Details on pre-training of torchvision models", "body": "I realize there is a closed issue on this topic here: https://github.com/pytorch/vision/issues/666\r\n\r\nThe issue has been opened in 2018. I have not found any documentation on how the models of torchvision are pre-trained, therefore I am opening another issue. Is the above answer still valid? Are the models still trained according to https://github.com/pytorch/examples/tree/master/imagenet ?\r\nSpecifically, I would like to know the details on the image size and data transformation used.\r\n\r\nThanks!", "url": "https://github.com/pytorch/vision/issues/3746", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2021-04-28T11:13:01Z", "updated_at": "2021-04-28T11:44:38Z", "user": "spurra" }, { "repo": "pytorch/text", "number": 1295, "title": "How to train data with the similar number of tokens in a batch using distributed training?", "body": "My code needs two functions:\r\n1. Bucket iterator;\r\n2. In each batch, the number of tokens are similar. (This means the batch size of each batch is not same.)\r\n\r\nI think I could fulfill the function 2 with a custom sampler which inherits torch.utils.data.Sampler, but as seen in the tutorial, Bucket iterator inherits torch.utils.data.Dataset, and for distributed training, the torch.utils.data.distributed.DistributeSampler should be used. 
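One pattern that can reconcile the two is to keep the DistributedSampler for per-rank sharding and wrap it inside a custom *batch* sampler that cuts variable-size batches by token budget; a rough sketch with a hypothetical helper class (not from the tutorial) is below.

```python
import torch
from torch.utils.data import DataLoader, Sampler
from torch.utils.data.distributed import DistributedSampler

class TokenBudgetBatchSampler(Sampler):
    """Hypothetical helper: groups indices from an inner per-rank sampler into
    variable-size batches whose summed token count stays under max_tokens."""
    def __init__(self, inner_sampler, lengths, max_tokens):
        self.inner_sampler = inner_sampler   # e.g. a DistributedSampler
        self.lengths = lengths               # number of tokens per example
        self.max_tokens = max_tokens

    def __iter__(self):
        batch, budget = [], 0
        for idx in self.inner_sampler:
            if batch and budget + self.lengths[idx] > self.max_tokens:
                yield batch
                batch, budget = [], 0
            batch.append(idx)
            budget += self.lengths[idx]
        if batch:
            yield batch

# usage sketch (dataset, lengths and collate_fn are assumed to exist):
# sampler = DistributedSampler(dataset)                          # shards per rank
# batch_sampler = TokenBudgetBatchSampler(sampler, lengths, max_tokens=4096)
# loader = DataLoader(dataset, batch_sampler=batch_sampler, collate_fn=collate_fn)
```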
The custom sampler and the DistributedSampler can\u2019t both be used in torch.utils.data.DataLoader (dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False).\r\n\r\nSo, how to sample data (sentences) in a batch with the similar number of tokens for distributed training?\r\n\r\nThanks a lot.\r\n", "url": "https://github.com/pytorch/text/issues/1295", "state": "open", "labels": [], "created_at": "2021-04-27T09:36:11Z", "updated_at": "2021-07-06T16:22:55Z", "user": "sandthou" }, { "repo": "pytorch/vision", "number": 3729, "title": "Evaluation Method does not work for detection", "body": "## \ud83d\udc1b Bug\r\n\r\nAfter the training process, when running the evaluation function available here (https://github.com/pytorch/vision/blob/dc42f933f3343c76727dbfba6e4242f1bcb8e1a0/references/detection/engine.py), the process gets stuck without any error. I left the evaluation method running for two days but there are no error or any suggestion of what this problem could be when interrupting the process. \r\n\r\n## Expected behaviour\r\n\r\nThe result of this function is supposed to be the IoU metrics (the image is taken from the tutorial available on PyTorch)\r\n![Schermata 2021-04-26 alle 10 49 10](https://user-images.githubusercontent.com/30385910/116055497-0f22e500-a67d-11eb-9e2d-fde967773920.png)\r\n\r\n## Environment\r\n\r\n - PyTorch version: 1.7.0a0+7036e91\r\n - OS: Ubuntu 18.04\r\n - How you installed PyTorch / torchvision: pip\r\n - Build command you used (if compiling from source): //\r\n - Python version: 3.6\r\n - CUDA/cuDNN version: //\r\n - GPU models and configuration: Tesla T4\r\n", "url": "https://github.com/pytorch/vision/issues/3729", "state": "closed", "labels": [ "question", "awaiting response", "module: reference scripts" ], "created_at": "2021-04-26T08:54:08Z", "updated_at": "2025-01-02T07:26:34Z", "user": "aliceinland" }, { "repo": "pytorch/pytorch", "number": 56898, "title": "How do I convert the quantified model to onnx or ncnn\uff1f", "body": "## \u2753 How do I convert the quantified model to onnx or ncnn\uff1f\r\n\r\n### how to convert int8 model in pytorch to onnx. \r\nI train a model with quantization aware train in pytorch, however I need use quantizated model to onnx, I have tried, but normal code is not work. Any bady can help me, thanks a lot.\r\n@eklitzke @dreiss @huitseeker @jfsantos bug for guidance", "url": "https://github.com/pytorch/pytorch/issues/56898", "state": "closed", "labels": [], "created_at": "2021-04-26T03:07:06Z", "updated_at": "2021-04-27T22:51:13Z", "user": "fucker007" }, { "repo": "pytorch/serve", "number": 1041, "title": "How to debug slow serve models", "body": "## \ud83d\udcda Documentation\r\n\r\nMany issues are essentially people confused about the overhead that torch serve introduces so a good solution would be to have the below in a guide before opening a perf issue.\r\n1. Point to existing benchmarks so people can get a baseline estimate\r\n2. Running model without serve\r\n3. Commands to get serve overhead\r\n4. 
Expectations around how serve will scale horizontally and vertically\r\n\r\n", "url": "https://github.com/pytorch/serve/issues/1041", "state": "closed", "labels": [ "documentation", "enhancement", "help wanted" ], "created_at": "2021-04-22T15:09:57Z", "updated_at": "2021-05-13T16:21:35Z", "user": "msaroufim" }, { "repo": "pytorch/pytorch", "number": 56634, "title": "[package] Module name reported in error message does not always match what is needed to extern/mock it", "body": "## \ud83d\udc1b Bug\r\nThe module name as printed in the packaging error messages is not always the name with which it can be successfully externed or mocked.\r\n\r\n## To Reproduce\r\n```\r\nimport torch\r\nimport io\r\n\r\nmodel = torch.hub.load('nicolalandro/ntsnet-cub200', 'ntsnet', pretrained=True, **{'topN': 6, 'device':'cpu', 'num_classes': 200})\r\nmodel.eval()\r\n\r\nwith torch.package.PackageExporter(io.BytesIO()) as exp:\r\n exp.extern([\r\n \"sys\",\r\n \"io\",\r\n \"PIL.**\",\r\n \"_queue\",\r\n \"urllib3.**\",\r\n ])\r\n exp.save_pickle(\"ntsnet\", \"model.pkl\", model)\r\n```\r\nThis code produces the following error:\r\n```\r\nValueError: cannot save source for module \"mklinit\" because its source file \"/home/meghanl/local/miniconda3/envs/tutorial/lib/python3.8/site-packages/mkl/_mklinit.cpython-38-x86_64-linux-gnu.so\" could not be found. See the dependency graph for more info:\r\n```\r\n\r\n## Expected Outcome\r\n`exp.extern(\"mklinit\")` externs this module.\r\n\r\n## Actual Outcome\r\n`exp.extern(\"mklinit\")` does not extern this module; the same error is produced. `exp.extern(\"mkl.**\")` does extern this module.\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/56634", "state": "open", "labels": [ "triaged" ], "created_at": "2021-04-21T21:41:20Z", "updated_at": "2021-04-21T21:43:22Z", "user": "SplitInfinity" }, { "repo": "pytorch/pytorch", "number": 56473, "title": "how to restore a model's weight from jit.traced model file?", "body": "Hi guys,\r\n i have a traced pt model file, now i need to use it restore a net instance like below\r\n```py\r\ntraced_model = torch.jit.load('traced.pt')\r\nstate_dict = extract_state_dict(traced_model) #need to implement\r\nmodel = construct_model(args)\r\nmodel.load_state_dict(state_dict)\r\n```\r\nextract_state_dict is the function i want to know to implement,thanks\r\n\n\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/56473", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2021-04-20T12:54:26Z", "updated_at": "2021-04-21T03:51:31Z", "user": "fortuneko" }, { "repo": "pytorch/examples", "number": 901, "title": "Pytorch C++ Frontend: generating networks at runtime?", "body": "closing and moving to pytorch repo", "url": "https://github.com/pytorch/examples/issues/901", "state": "closed", "labels": [], "created_at": "2021-04-20T04:25:48Z", "updated_at": "2021-04-20T04:30:13Z", "comments": 0, "user": "r2dliu" }, { "repo": "pytorch/vision", "number": 3678, "title": "Deformable convolution best practice? ", "body": "## \u2753 Questions and Help\r\nWould appreciate it if anyone has some insight on how to use deformable convolution correctly. \r\n\r\nDeformable convolution is tricky as even the official implementation is different from what's described in the paper. The paper claims to use 2N offset size instead of 2 x ks x ks. \r\n\r\nAnyway, we're using the 2 x ks x ks offset here, but I always got poor performance. Accuracy drops in CIFAR10 and YOLACT. Anything wrong with my usage? 
\r\n```\r\nfrom torchvision.ops import DeformConv2d\r\n\r\nclass DConv(nn.Module):\r\n def __init__(self, inplanes, planes, kernel_size=3, stride=1, padding=1, bias=False):\r\n super(DConv, self).__init__()\r\n self.conv1 = nn.Conv2d(inplanes, 2 * kernel_size * kernel_size, kernel_size=kernel_size,\r\n stride=stride, padding=padding, bias=bias)\r\n self.conv2 = DeformConv2d(inplanes, planes, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias)\r\n\r\n def forward(self, x):\r\n out = self.conv1(x)\r\n out = self.conv2(x, out)\r\n return out\r\n```\r\n", "url": "https://github.com/pytorch/vision/issues/3678", "state": "open", "labels": [ "question", "module: ops" ], "created_at": "2021-04-16T07:20:24Z", "updated_at": "2021-04-21T12:56:47Z", "user": "liyy201912" }, { "repo": "pytorch/pytorch", "number": 56149, "title": "how to RegisterPass for torch.jit.trace() ", "body": "## \u2753 Questions and Help\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\r\n\r\nis there are new way to create a custom transformation pass in torchscript with torch1.8.0 ?\r\n\r\njust like here: pytorch_compiler_tutorial/register.cpp at master \u00b7 bwasti/pytorch_compiler_tutorial \u00b7 GitHub 2\r\n\r\ntorch.jit.trace() doesnt call: RegisterPass pass anymore\r\n\n\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/56149", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2021-04-15T15:25:42Z", "updated_at": "2021-06-04T16:57:02Z", "user": "Andrechang" }, { "repo": "pytorch/vision", "number": 3673, "title": "The document of torchvision.ops.deform_conv2d is not clear", "body": "## \ud83d\udcda Documentation\r\nFrom the documentation, I cannot get the exact meaning of 18(ie, 2*3*3) channels of the offset in a deformable convolution? \r\n\r\nI want to visualize the offset of the deformable convolution with kernel size 3*3.\r\nSo It\u2019s essential for me to know what\u2019s the exact meaning of these channels.\r\n\r\nI write down something possible here:\r\n```python\r\nupper-left: ul\r\nupper-right: ur\r\nbottom-left: bl\r\nbottom-right: br\r\nup: u\r\nbottom: b\r\nright: r\r\nleft: l\r\ncenter: c\r\n\r\npossible offset layout (maybe not correct):\r\ndelta_ul_x, delta_ul_y, delta_u_x, delta_u_y, delta_ur_x, delta_ur_y;\r\ndelta_l_x, delta_l_y, delta_c_x, delta_c_y, delta_r_x, delta_r_y;\r\ndelta_bl_x, delta_bl_y, delta_b_x, delta_b_y, delta_br_x, delta_br_y;\r\n```\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/3673", "state": "open", "labels": [ "question" ], "created_at": "2021-04-15T06:43:49Z", "updated_at": "2022-05-18T04:57:34Z", "user": "Zhaoyi-Yan" }, { "repo": "pytorch/xla", "number": 2883, "title": "How to dump HLO IR", "body": "## \u2753 Questions and Help\r\n\r\nHi, \r\nI want to extract the HLO PROTO/TEXT of a function/module that I wrote in PyTorch.\r\nSomething similar to what jax is doing [here](https://jax.readthedocs.io/en/latest/jax.html#jax.xla_computation):\r\n\r\n```\r\ndef f(x): \r\n return jax.numpy.sin(jax.numpy.cos(x))\r\nc = jax.xla_computation(f)(3.)\r\nhlo_proto = c. as_serialized_hlo_module_proto() \r\nhlo_txt = c. 
as_hlo_text()\r\n```\r\n\r\nIs there something similar that I can do it torch_xla?\r\n", "url": "https://github.com/pytorch/xla/issues/2883", "state": "closed", "labels": [ "stale" ], "created_at": "2021-04-15T02:42:09Z", "updated_at": "2021-06-22T17:43:37Z", "user": "KatiaSN602" }, { "repo": "pytorch/pytorch", "number": 55914, "title": "how to convert libtorch trained model to torch script model", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/55914", "state": "closed", "labels": [], "created_at": "2021-04-13T15:57:00Z", "updated_at": "2021-04-13T16:28:26Z", "user": "WuLoing" }, { "repo": "pytorch/vision", "number": 3658, "title": "Failed to compile torchvision for ROCm as documented in pytorch.org", "body": "## \ud83d\udc1b Bug\r\n\r\nFailed to compile torchvision for ROCm as documented in pytorch.org/get-started\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\nas in: https://pytorch.org/get-started/locally/\r\n1. python -m venv ptamd; source ptamd/bin/activate\r\n1. pip install torch -f https://download.pytorch.org/whl/rocm4.0.1/torch_stable.html\r\n1. pip install ninja && pip install 'git+https://github.com/pytorch/vision.git@v0.9.1'\r\n\r\nsame with v0.9.0\r\n\r\n## Error\r\n\r\nptamd/lib/python3.8/site-packages/torch/include/c10/util/complex.h:9:10: fatal error: 'thrust/complex.h' file not found\r\n#include <thrust/complex.h>\r\n ^~~~~~~~~~~~~~~~~~\r\n1 error generated when compiling for gfx803.\r\n\r\n## Environment\r\n\r\n```\r\nPyTorch version: 1.8.1+rocm4.0.1\r\nIs debug build: False\r\nROCM used to build PyTorch: 4.0.20496-4f163c68\r\n\r\nOS: CentOS Linux 8 (x86_64)\r\nGCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5) # same on GCC 10\r\n\r\nPython version: 3.8 (64-bit runtime)\r\nIs CUDA available: True\r\nGPU models and configuration: Vega 20\r\nHIP runtime version: 3.21.2\r\nMIOpen runtime version: 2.9.0\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.20.2\r\n[pip3] torch==1.8.1+rocm4.0.1\r\n```\r\n\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/3658", "state": "open", "labels": [ "question", "topic: build", "topic: binaries" ], "created_at": "2021-04-11T12:32:10Z", "updated_at": "2021-07-03T13:15:58Z", "user": "henrique" }, { "repo": "pytorch/TensorRT", "number": 429, "title": "\u2753 [Question] Does TRTorch support autograd in inference?", "body": "## \u2753 Question\r\n\r\nSome models can contain autograd as part of their inference pass; a simple example, which does compile to TorchScript, would be:\r\n```python\r\nimport torch\r\nclass M(torch.nn.Module):\r\n def forward(self, x):\r\n x.requires_grad_(True)\r\n y = x**2\r\n return torch.autograd.grad([y.sum()], [x])[0]\r\n\r\nm = M()\r\nm(3*torch.ones(3)) # => tensor([6., 6., 6.])\r\nms = torch.jit.script(m)\r\nms(3*torch.ones(3)) # => tensor([6., 6., 6.])\r\n```\r\n\r\nI know `autograd.grad` isn't in the list of supported operations, but I'm curious whether something like this would be possible in TRTorch, or if it is fundamentally incompatible with the TensorRT design.\r\n\r\nThanks!\r\n", "url": "https://github.com/pytorch/TensorRT/issues/429", "state": "closed", "labels": [ "question" ], "created_at": "2021-04-08T05:12:50Z", 
"updated_at": "2021-04-12T14:03:51Z", "user": "Linux-cpp-lisp" }, { "repo": "pytorch/pytorch", "number": 55452, "title": "How to access model embedded functions?", "body": "## \u2753 Questions and Help\r\n\r\nI am working on C# .NET with Visual Studio 2019, over Windows Server 2019 Standard. \r\nI aim to export a Python model to run inference on C# with onnxruntime.\r\nI am using [Resemble-ai voice encoder](https://github.com/resemble-ai/Resemblyzer/blob/master/resemblyzer/voice_encoder.py) as ONNX, using:\r\n\r\nimport torch\r\nimport torch.onnx\r\nx = torch.randn(1,3,40,requires_grad=True)\r\ntorch_out = encoder(x)\r\n\r\ntorch.onnx.export(encoder,\r\n x,\r\n \"resemblyzer.onnx\",\r\n opset_version=13,\r\n input_names=['input'],\r\n output_names=['output'])\r\n\r\nThe export takes place without warnings or errors. The graph and input/outputs of the onnx model seem all right.\r\n\r\nBut I can't figure out how to use the model's embedded functions \"embed_utterance\" and \"embed_speaker\". Is that even possible? I mean, do the ONNX model include those functions or just the parameters of the trained model?\r\nIf the functions are inside de ONNX model, a snippet on how to access them would be great.\r\n\r\n\n\ncc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof @SplitInfinity", "url": "https://github.com/pytorch/pytorch/issues/55452", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2021-04-07T10:22:44Z", "updated_at": "2021-04-21T08:56:02Z", "user": "ADD-eNavarro" }, { "repo": "pytorch/pytorch", "number": 55223, "title": "How to use PyTorch with ROCm (radeon gpu)? How to transfer data to gpu? ", "body": "Hey,\r\nSo far I didnt see any documentation or similar, which gives a hint how to use PyTorch with other GPUs than NVIDIA (when the new ROCm package is installed). How can I choose my radeon GPU as device and so use it for training? Very glad for any advices.\r\n\r\nBest\n\ncc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport", "url": "https://github.com/pytorch/pytorch/issues/55223", "state": "closed", "labels": [ "module: rocm", "triaged" ], "created_at": "2021-04-02T08:07:42Z", "updated_at": "2023-08-22T22:02:51Z", "user": "oconnor127" }, { "repo": "pytorch/TensorRT", "number": 420, "title": "\u2753 [Question] How can I pull TRTorch docker image?", "body": "## \u2753 Question\r\nI use the command to pull TRTorch docker image\r\n```\r\nsudo docker pull docker.pkg.github.com/nvidia/trtorch/docgen:0.3.0\r\n```\r\nGet respose unauthorized: Your request could not be authenticated by the GitHub Packages service. 
Please ensure your access token is valid and has the appropriate scopes configured.\r\n\r\nI can't find anyway to accsee the token.", "url": "https://github.com/pytorch/TensorRT/issues/420", "state": "closed", "labels": [ "question" ], "created_at": "2021-04-01T12:18:50Z", "updated_at": "2021-04-02T15:57:07Z", "user": "Tshzzz" }, { "repo": "pytorch/text", "number": 1265, "title": "How to split `_RawTextIterableDataset`", "body": "## \u2753 Questions and Help\r\nI am trying to move from using `legacy` and use new provided features, i was doing this:\r\n```\r\nfrom torchtext import legacy\r\nTEXT = legacy.data.Field(lower=True, batch_first=True)\r\nLABEL = legacy.data.LabelField(dtype=torch.float)\r\ntrain_data, test_data = legacy.datasets.IMDB.splits(TEXT, LABEL, root='/tmp/imdb/')\r\ntrain_data, valid_data = train_data.split(split_ratio=0.8, random_state=random.seed(SEED))\r\n```\r\nBut now i want to split train_data, how can i do that?\r\n```\r\nfrom torchtext.datasets import IMDB\r\ntrain_iter, test_iter = IMDB(split=('train', 'test'))\r\n# I need to split train_iter into train_iter and valid_iter\r\n```\r\nAnd i think providing more features more than just this [one](https://github.com/pytorch/text/blob/master/examples/legacy_tutorial/migration_tutorial.ipynb) would help more, Thanks!\r\n<!-- Please send questions or ask for help here. -->\r\n", "url": "https://github.com/pytorch/text/issues/1265", "state": "open", "labels": [ "feature request" ], "created_at": "2021-03-30T15:34:40Z", "updated_at": "2023-07-30T03:13:25Z", "user": "KickItLikeShika" }, { "repo": "pytorch/text", "number": 1264, "title": "How to use fasttext emebddings in the torchtext Nightly Vocab", "body": "I have a custom trained facebook fasttext embedding which i want to use in my RNN. \r\n\r\ni use the nightly version of torchtext so the Vocab is kinda new. \r\nHow do i use fastext embedding there. a simple clear example would be great. \r\n", "url": "https://github.com/pytorch/text/issues/1264", "state": "open", "labels": [], "created_at": "2021-03-27T12:48:11Z", "updated_at": "2021-03-29T01:44:16Z", "user": "StephennFernandes" }, { "repo": "pytorch/pytorch", "number": 54790, "title": "tools/git-clang-format: The downloaded binary is not what was expected!", "body": "`tools/git-clang-format` seems to do a test on hash of the clang-format binary, but if it mismatches it just says \"The downloaded binary is not what was expected!\" and no instructions how to remediate. I rm -rf'ed .clang-format-bin that might help", "url": "https://github.com/pytorch/pytorch/issues/54790", "state": "closed", "labels": [ "module: lint", "triaged" ], "created_at": "2021-03-26T18:58:06Z", "updated_at": "2021-04-07T00:19:01Z", "user": "ezyang" }, { "repo": "pytorch/pytorch", "number": 54758, "title": "How to release unnecessary tensor which occupys memory when executing inference at test phrase?", "body": "## \u2753 Questions and Help\r\nI have a memory-cost operation, I put this operation into a function like this:\r\n```\r\nclass xxx(nn.Module):\r\n def forward(xxx):\r\n xxx = self.cost_memory_function(xxx)\r\n ... # OOM error occurs here rather than at the above function.\r\n return xxx\r\n def cost_memory_function(xxx):\r\n ...\r\n```\r\nBut If the tensors generated from the function \"cost_memory_function\" release, the next part should successfully run. So I guess the tensors at function \"cost_memory_function\" still occupy memory even though the function has exited. 
\r\nSo I want to know how to release some tensors which is unnecessary. I have set \"torch.set_grad_enable\" as False.\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/54758", "state": "closed", "labels": [], "created_at": "2021-03-26T06:14:24Z", "updated_at": "2021-03-26T16:08:40Z", "user": "shoutOutYangJie" }, { "repo": "pytorch/TensorRT", "number": 411, "title": "how to compile on windows\uff1f", "body": "", "url": "https://github.com/pytorch/TensorRT/issues/411", "state": "closed", "labels": [ "help wanted", "No Activity" ], "created_at": "2021-03-25T22:51:05Z", "updated_at": "2021-07-28T00:01:06Z", "user": "statham123" }, { "repo": "pytorch/vision", "number": 3602, "title": "Imagenet dataloader error: RuntimeError: The archive ILSVRC2012_devkit_t12.tar.gz is not present in the root directory or is corrupted.", "body": "## \ud83d\udc1b Bug\r\n\r\nI am using pytorch 1.8.0 and torchvision 0.9.\r\nI am trying to use the pretrained models from pytorch and evaluate them on imagenet val data. That should be fairly straightforward, but I am getting stuck on the dataloader.\r\nI downloaded the imagenet and the folder structure that I have is like this:\r\n```\r\n/media/SSD2/ILSVRC/\r\n |----Annotation\r\n |----ImageSets\r\n |----Data\r\n |----CLS-LOC\r\n |----test\r\n |----train\r\n |----val\r\n |----ILSVRC2012_val_00000009.JPEG\r\n |----ILSVRC2012_val_00000010.JPEG\r\n |----...\r\n```\r\nI tried `datasets.ImageNet`, based on [pytorch](https://pytorch.org/vision/stable/datasets.html#imagenet) where it says to use the following\r\n\r\n```\r\nimagenet_data = torchvision.datasets.ImageNet('path/to/imagenet_root/')\r\ndata_loader = torch.utils.data.DataLoader(imagenet_data,\r\n batch_size=4,\r\n shuffle=True,\r\n num_workers=args.nThreads)\r\n```\r\nI changed the path_to_imagenet_to `/media/SSD2/ILSVRC/` like this\r\n\r\n `torchvision.datasets.ImageNet('/media/SSD2/ILSVRC/',split='val',download=False)` \r\nbut I get this error:\r\n```\r\nRuntimeError: The archive ILSVRC2012_devkit_t12.tar.gz is not present in the root directory or is corrupted. You need to download it externally and place it in /media/SSD2/ILSVRC/.\r\n```\r\nIs it a bug or I am doing something wrong?\r\n\n\ncc @pmeier", "url": "https://github.com/pytorch/vision/issues/3602", "state": "closed", "labels": [ "question", "module: datasets" ], "created_at": "2021-03-24T19:15:28Z", "updated_at": "2021-03-25T17:01:31Z", "user": "seyeeet" }, { "repo": "pytorch/pytorch", "number": 54583, "title": "How to specific a op qconfig in \"prepare_jit\" qconfig_dict", "body": "## \u2753 Questions and Help\r\npytorch1.7/torchvision0.8.0\r\n\r\nI want to use \"prepare_jit\" and \"convert_jit\" to quantize Resnet18. 
But I can't specific 'layer1.0.conv1' to different qconfig.\r\nmy code:\r\nmodel = models.__dict__['resnet18] (pretrained=True)\r\nmodel = torch.jit.script(model.eval())\r\nqconfig1 = torch.quantization.QConfig(\r\n activation=torch.quantization.HistogramObserver.with_args(\r\n reduce_range=False),\r\n weight=torch.quantization.default_per_channel_weight_observer)\r\ntorch.quantization.prepare_jit(model, {'layer1.0.conv1':qconfig1}, True)\r\nmodel(torch.randn(1, 3, 224, 224))\r\ntorch.quantization.convert_jit(model, True, False)\r\n\r\nBut it will fail as below message:\r\nFile \"/home/xxx/python3.7/site-packages/torch/quantization/quantize_jit.py\", line 58, in _prepare_jit\r\n quant_type)\r\nRuntimeError: __torch__.torch.nn.modules.conv.___torch_mangle_67.Conv2d (of Python compilation unit at: 0x56088f811c00) is not compatible with the type __torch__.torch.nn.modules.conv.___torch_mangle_66.Conv2d (of Python compilation unit at: 0x56088f811c00) for the field 'conv1'\r\n\r\nIt seems the key 'layer1.0.conv1' is not correct.\r\nHow can I do?\r\n\r\n\r\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/54583", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2021-03-24T09:57:19Z", "updated_at": "2021-03-25T19:22:22Z", "user": "PenghuiCheng" }, { "repo": "pytorch/tutorials", "number": 1439, "title": "Question about pytorch mobile", "body": "Hello, I'm using Pytorch Mobile to deploy a model to phone via Android Studio.\r\n\r\nI follow the official direction turn the model in to '.pt' , and load it in android studio, but it seems that it doesn't give the right prediction after turn it into '.pt', it always predict to the same label no matter any label of image I feed in.\r\n\r\nThe second question is that ,how can I avoid normalization in function TensorImageUtils.bitmapToFloat32Tensor , just turn it in to Tensor.", "url": "https://github.com/pytorch/tutorials/issues/1439", "state": "closed", "labels": [ "question", "Mobile" ], "created_at": "2021-03-24T07:14:40Z", "updated_at": "2023-03-10T17:22:49Z", "user": "stillbetter" }, { "repo": "pytorch/tutorials", "number": 1432, "title": "Reinforcement Tutorial (DQN)", "body": "Hey, \r\nI try to reproduce [PyTorch Reinforcement Tutorial (DGN)](https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html#training) in \r\n\r\nIn each time step, the state of the environment need to be evaluated in the function ```def get_screen()```\r\n\r\nThe line \r\n```\r\nscreen = env.render(mode='rgb_array').transpose((2, 0, 1))\r\n```\r\nthrows an error both in Google Colabs and on my local machine. The error is related to the gym environment \r\n```\r\nenv = gym.make('CartPole-v0').unwrapped\r\n```\r\nIs there any idea, how to solve this problem and make this tutorial reproducible again?\r\n", "url": "https://github.com/pytorch/tutorials/issues/1432", "state": "closed", "labels": [ "Reinforcement Learning" ], "created_at": "2021-03-21T23:36:44Z", "updated_at": "2022-09-06T17:44:22Z", "comments": 2, "user": "sambaPython24" }, { "repo": "pytorch/pytorch", "number": 54390, "title": " UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose. 
warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/54390", "state": "closed", "labels": [], "created_at": "2021-03-21T14:17:10Z", "updated_at": "2021-03-22T15:26:58Z", "user": "ZengcanXUE" }, { "repo": "pytorch/TensorRT", "number": 408, "title": "\ud83d\udc1b [Bug] Tests are not being linked properly, fail with 'symbol lookup error'", "body": "## Bug Description\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. bazel test //tests --compilation_mode=dbg --test_output=errors --jobs=4 --runs_per_test=5\r\n\r\nYou will see all the tests fail. I am using stock 1.7.1 PyTorch.\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\nboris@snikolaev-DGXStation:~/git/TRTorch$ /home/boris/.cache/bazel/_bazel_boris/c6ee020343103959b26b654eb14e89ac/execroot/TRTorch/bazel-out/k8-dbg/bin/tests/core/conversion/converters/test_linear.runfiles/TRTorch/tests/core/conversion/converters/test_linear\r\n/home/boris/.cache/bazel/_bazel_boris/c6ee020343103959b26b654eb14e89ac/execroot/TRTorch/bazel-out/k8-dbg/bin/tests/core/conversion/converters/test_linear.runfiles/TRTorch/tests/core/conversion/converters/test_linear: symbol lookup error: /home/boris/.cache/bazel/_bazel_boris/c6ee020343103959b26b654eb14e89ac/execroot/TRTorch/bazel-out/k8-dbg/bin/tests/core/conversion/converters/../../../../_solib_k8/libcore_Sutil_Slibtrt_Uutil.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE\r\nboris@snikolaev-DGXStation:~/git/TRTorch$ nm /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so | grep _ZN3c105ErrorC1ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE\r\nboris@snikolaev-DGXStation:~/git/TRTorch$ nm /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so | grep SourceLocation\r\n000000000004f130 T _ZN3c1014WarningHandler7processERKNS_14SourceLocationERKSsb\r\n0000000000051870 T _ZN3c105ErrorC1ENS_14SourceLocationESs\r\n0000000000051870 T _ZN3c105ErrorC2ENS_14SourceLocationESs\r\n000000000004f210 T _ZN3c107Warning4warnENS_14SourceLocationERKSsb\r\n00000000000527c0 t _ZN3c10lsERSoRKNS_14SourceLocationE\r\n\r\n## Expected behavior\r\nTests run (or at least start up) successfully.\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.7.1\r\n - CPU Architecture: \r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): bazel test //tests --compilation_mode=dbg --test_output=errors --jobs=4 --runs_per_test=5\r\n - Are you using local sources or building from archives: local\r\n - Python version: 3.6\r\n - CUDA version: 11\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/408", "state": "closed", "labels": [ "question" ], "created_at": "2021-03-20T04:06:57Z", "updated_at": "2021-04-07T01:42:26Z", "user": "borisfom" }, { "repo": "pytorch/examples", "number": 895, "title": "Video classification example", "body": "Hi,\r\n\r\nAs we all know that video representation learning is a hot topic in computer vision community (thanks to recent advances in self-supervised learning), is it time to add a toy example for video classification? This code would be as simple as image classification examples. For example, we can add an example of video classification using I3D on UCF-101/HMDB-51? \r\n\r\n", "url": "https://github.com/pytorch/examples/issues/895", "state": "open", "labels": [ "good first issue" ], "created_at": "2021-03-18T19:08:06Z", "updated_at": "2022-03-09T20:44:51Z", "comments": 1, "user": "avijit9" }, { "repo": "pytorch/xla", "number": 2831, "title": "RuntimeError: Cannot access data pointer of Tensor that doesn't have storage, how to resolve it?", "body": "## Issue description\r\nCurrently I am trying to solve an object detection problem using FastRCNN model with the help of Pytorch XLA module \r\nBut while training I am getting a **RuntimeError: Cannot access data pointer of Tensor that doesn't have storage**\r\nIt was working fine when I trained the model in GPU kernel, but started giving error when I switched to TPU\r\n\r\n## Code example\r\nHere's the link to my notebook --> [Object Detection Kernel](https://www.kaggle.com/mesparky/vunbigdata-chest-xray-object-detection?scriptVersionId=57113528)\r\n![image](https://user-images.githubusercontent.com/42636586/111666653-eca9da80-8839-11eb-9d86-65d47bce4309.png)\r\n\r\n## System Info\r\nI am using Kaggle TPU kernel for training my model.\r\n\r\n**PLEASE HELP ME RESOLVING THIS ISSUE**", "url": "https://github.com/pytorch/xla/issues/2831", "state": "closed", "labels": [ "stale" ], "created_at": "2021-03-18T17:03:32Z", "updated_at": "2021-06-26T02:22:49Z", "user": "IamSparky" }, { "repo": "pytorch/tutorials", "number": 1421, "title": "Chatbot tutorial - RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor", "body": "https://github.com/pytorch/tutorials/blob/master/beginner_source/chatbot_tutorial.py\r\nTried running this chatbot tutorial. 
Training goes well but when actually using the model by uncommenting the final line of code (as specified in the comments) it returns the following error:\r\n\r\n`Iteration: 4000; Percent complete: 100.0%; Average loss: 2.4559\r\n> hello?\r\nTraceback (most recent call last):\r\n File \"C:/Users/user/PycharmProjects/pytorch-tests/main.py\", line 1377, in <module>\r\n evaluateInput(encoder, decoder, searcher, voc)\r\n File \"C:/Users/user/PycharmProjects/pytorch-tests/main.py\", line 1242, in evaluateInput\r\n output_words = evaluate(encoder, decoder, searcher, voc, input_sentence)\r\n File \"C:/Users/user/PycharmProjects/pytorch-tests/main.py\", line 1225, in evaluate\r\n tokens, scores = searcher(input_batch, lengths, max_length)\r\n File \"C:\\Users\\user\\.conda\\envs\\pytorch-tests\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"C:/Users/user/PycharmProjects/pytorch-tests/main.py\", line 1160, in forward\r\n encoder_outputs, encoder_hidden = self.encoder(input_seq, input_length)\r\n File \"C:\\Users\\user\\.conda\\envs\\pytorch-tests\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"C:/Users/user/PycharmProjects/pytorch-tests/main.py\", line 693, in forward\r\n packed = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)\r\n File \"C:\\Users\\user\\.conda\\envs\\pytorch-tests\\lib\\site-packages\\torch\\nn\\utils\\rnn.py\", line 245, in pack_padded_sequence\r\n _VF._pack_padded_sequence(input, lengths, batch_first)\r\nRuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor\r\n\r\nProcess finished with exit code 1`\r\n\r\nUnfamiliar with pytorch so no idea what the cause is or how to solve it, but it looks like something to do with tensor types.\r\nPackages in environment:\r\n![image](https://user-images.githubusercontent.com/33965786/111622596-af1d6100-87e9-11eb-9f51-783b80383287.png)\r\n", "url": "https://github.com/pytorch/tutorials/issues/1421", "state": "closed", "labels": [ "Text" ], "created_at": "2021-03-18T11:59:07Z", "updated_at": "2023-03-09T19:06:58Z", "comments": 1, "user": "0xVavaldi" }, { "repo": "pytorch/serve", "number": 1013, "title": "How to debug handlers?", "body": "Since the handler's logic is copied inside every `.mar` file, there is no sense of breakpoints in the original handler `.py` file. Can you please suggest how can we debug our handler modules?", "url": "https://github.com/pytorch/serve/issues/1013", "state": "closed", "labels": [ "triaged_wait" ], "created_at": "2021-03-17T21:52:46Z", "updated_at": "2021-04-09T18:47:43Z", "user": "duklin" }, { "repo": "pytorch/pytorch", "number": 54212, "title": "How to update a Wiki page?", "body": "## \u2753 Questions and Help\r\n\r\n\r\nThe `-k` option for filtering tests with a string can no longer be used with `python`, and should be used with `pytest` now.\r\n\r\nPull requests can't be submitted for the Wiki, so I couldn't suggest an update to https://github.com/pytorch/pytorch/wiki/Writing-tests-in-PyTorch-1.8. \r\n\r\nPlease update the Wiki page with this detail. 
Thank you!\n\ncc @brianjo @mruberry @VitalyFedyunin @walterddr", "url": "https://github.com/pytorch/pytorch/issues/54212", "state": "closed", "labels": [ "module: docs", "module: tests", "triaged" ], "created_at": "2021-03-17T21:31:48Z", "updated_at": "2021-03-18T15:10:54Z", "user": "imaginary-person" }, { "repo": "pytorch/FBGEMM", "number": 553, "title": "Is it possible to speed up matrix multiplication by adjusting the values of the Packing parameters under the same hardware environment?", "body": "Hi! I am reading the source code of FBGEMM and interested in the CPU optimization part. I found that FBGEMM sets Packing parameters for each ISA separately. I am curious whether the values of these parameters are determined empirically or by a certain algorithm? Is it possible to speed up matrix multiplication by adjusting the values of the Packing parameters under the same hardware environment? Is it possible to run FBGEMM on more ISA by appropriately setting the values of the Packing parameters? I will be very grateful for your help.", "url": "https://github.com/pytorch/FBGEMM/issues/553", "state": "closed", "labels": [ "question" ], "created_at": "2021-03-17T05:03:16Z", "updated_at": "2021-03-25T07:39:09Z", "user": "umiswing" }, { "repo": "pytorch/pytorch", "number": 53993, "title": "How to set the amp to all fp16 training?", "body": "Hello, I would like to ask how to set up all amp training for fp16? Similar to apex's O1 O2 O3 mode? thank you very much!\n\ncc @mcarilli @ptrblck", "url": "https://github.com/pytorch/pytorch/issues/53993", "state": "closed", "labels": [ "triaged", "module: amp (automated mixed precision)" ], "created_at": "2021-03-15T08:02:23Z", "updated_at": "2021-03-16T03:04:16Z", "user": "sky-fly97" }, { "repo": "pytorch/pytorch", "number": 53957, "title": "Is pytorch 1.8.0 incompatible with cuda 11.2 or what is the reason for this error?", "body": "I have spent all day trying to upgrade cuda to 11.2 and get it working with pytorch. 
At the moment I believe I should have a fully working version of Cuda 11.2, yet I still get the following error when I try to run my pytorch code, which normally works without issues.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/snap/pycharm-community/226/plugins/python-ce/helpers/pydev/pydevd.py\", line 1477, in _exec\r\n pydev_imports.execfile(file, globals, locals) # execute the script\r\n File \"/snap/pycharm-community/226/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n File \"/home/tue/PycharmProjects/Pfold/run_1d_supervised.py\", line 112, in <module>\r\n losses = main()\r\n File \"/home/tue/PycharmProjects/Pfold/supervised/main.py\", line 73, in main\r\n net = train(net, optimizer, dl_train, loss_fnc, dl_test=dl_test, scheduler=lr_scheduler,ite=ite_start, loss_reg_fnc=loss_reg_fnc, loss_reg_min_sep_fnc=loss_reg_min_sep_fnc)\r\n File \"/home/tue/PycharmProjects/Pfold/supervised/optimization.py\", line 75, in train\r\n dists_pred, coords_pred = net(features,mask)\r\n File \"/home/tue/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/tue/PycharmProjects/Pfold/supervised/network_vnet.py\", line 508, in forward\r\n dists += (tr2DistSmall(x[:,i*3:(i+1)*3,:]),)\r\n File \"/home/tue/PycharmProjects/Pfold/supervised/network_transformer.py\", line 155, in tr2DistSmall\r\n D = torch.sum(Z**2, dim=1).unsqueeze(1) + torch.sum(Z**2, dim=1).unsqueeze(2) - 2*Z.transpose(1,2) @ Z\r\nRuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)`\r\npython-BaseException\r\nBackend TkAgg is interactive backend. Turning interactive mode on.\r\n\r\nProcess finished with exit code 130 (interrupted by signal 2: SIGINT)\r\n\r\n```\r\n\r\nI have checked that cuda/cudnn seems to work, at least I was able to compile and run a hello_world script with nvcc. Additional information:\r\n\r\n```\r\ntue@tue-laptop:~$ nvcc -V\r\nnvcc: NVIDIA (R) Cuda compiler driver\r\nCopyright (c) 2005-2021 NVIDIA Corporation\r\nBuilt on Sun_Feb_14_21:12:58_PST_2021\r\nCuda compilation tools, release 11.2, V11.2.152\r\nBuild cuda_11.2.r11.2/compiler.29618528_0\r\n```\r\n\r\n```\r\nPython 3.8.5 (default, Jan 27 2021, 15:41:15) \r\n[GCC 9.3.0] on linux\r\nimport torch\r\ntorch.version.cuda\r\n'11.1'\r\ntorch.version\r\n<module 'torch.version' from '/home/tue/.local/lib/python3.8/site-packages/torch/version.py'>\r\ntorch.version.__version__\r\n'1.8.0+cu111'\r\n```\r\n\r\nSearching on the error pytorch is giving, hasn't really lead me to any understanding of what the problem could be, so I'm hoping for some insight here and perhaps a solution?\n\ncc @ngimel", "url": "https://github.com/pytorch/pytorch/issues/53957", "state": "open", "labels": [ "module: cuda", "triaged" ], "created_at": "2021-03-13T06:02:04Z", "updated_at": "2021-03-24T14:13:31Z", "user": "tueboesen" }, { "repo": "pytorch/pytorch", "number": 53888, "title": "How to shift columns (or rows) in a tensor with different offsets?", "body": "`torch.roll` function is only able to shift columns (or rows) with same offsets. But I want to shift columns with different offsets. 
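One loop-free route I have been experimenting with is `torch.gather` with a precomputed index, roughly like the sketch below (it uses the same example that is written out further down; the offset-equals-column-index rule is just that example, and I am not sure whether this is the intended idiom):\r\n```\r\nimport torch\r\n\r\nx = torch.tensor([[1, 2, 3],\r\n                  [4, 5, 6],\r\n                  [7, 8, 9]])\r\nn_rows, n_cols = x.shape\r\nrows = torch.arange(n_rows).unsqueeze(1)      # (n_rows, 1)\r\noffsets = torch.arange(n_cols).unsqueeze(0)   # (1, n_cols): shift column i by i\r\nindex = (rows - offsets) % n_rows             # source row for every output position\r\nout = x.gather(0, index)                      # out[r, c] = x[(r - c) % n_rows, c]\r\nprint(out)                                    # [[1, 8, 6], [4, 2, 9], [7, 5, 3]]\r\n```\r\n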
Suppose the input tensor is\r\n```\r\n[[1,2,3],\r\n [4,5,6],\r\n [7,8,9]]\r\n```\r\nSay, to shift with offset `i` for the i-th column, the expected output is\r\n```\r\n[[1,8,6],\r\n [4,2,9],\r\n [7,5,3]]\r\n```\r\nAn option to do so is to separately shift every column using `torch.roll` and stack them. But for the sake of efficiency and code compactness, I don't want to introduce a loop structure. Is there a better way?", "url": "https://github.com/pytorch/pytorch/issues/53888", "state": "closed", "labels": [ "triaged", "module: advanced indexing" ], "created_at": "2021-03-12T10:11:21Z", "updated_at": "2021-03-13T05:05:51Z", "user": "changmenseng" }, { "repo": "pytorch/FBGEMM", "number": 540, "title": "Is it possible to generate SPMDM kernels with asmjit?", "body": "Hi all, \r\nThanks for sharing such a high-performance GEMM library.\r\n\r\nAfter reading through the source code, I found that only U8S8S32AC* kernels are generated from asmjit. \r\nIs it possible to port the SpMDM code to asmjit? I'm trying to optimize SpMDM by myself.\r\n\r\nThanks!\r\nYang", "url": "https://github.com/pytorch/FBGEMM/issues/540", "state": "closed", "labels": [ "question" ], "created_at": "2021-03-12T03:08:40Z", "updated_at": "2021-03-17T16:38:56Z", "user": "YangWang92" }, { "repo": "pytorch/vision", "number": 3547, "title": "How to train a classifier with a custom number of classes while also wanting pretrained=True?", "body": "It gives an error:\r\n\r\n```\r\n\tsize mismatch for fc.weight: copying a param with shape torch.Size([1000, 1024]) from checkpoint, the shape in current model is torch.Size([42, 1024]).\r\n\r\n```", "url": "https://github.com/pytorch/vision/issues/3547", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2021-03-11T09:35:42Z", "updated_at": "2021-03-19T18:06:32Z", "user": "lucasjinreal" }, { "repo": "pytorch/pytorch", "number": 53693, "title": "How to use torch.distributions.Normal/log_prob in libtorch?", "body": "I don't find a class like torch.distributions in libtorch, so is there any way to get the log_prob of a tensor?\n\ncc @yf225 @glaringlee @fritzo @neerajprad @alicanb @vishwakftw @nikitaved", "url": "https://github.com/pytorch/pytorch/issues/53693", "state": "closed", "labels": [ "module: distributions", "module: cpp", "triaged" ], "created_at": "2021-03-10T07:23:51Z", "updated_at": "2021-03-10T15:33:47Z", "user": "scirocc" }, { "repo": "pytorch/pytorch", "number": 53678, "title": "[FX] Regression from 1.8: FX can no longer trace functions where the first element of an int list is a Proxy", "body": "```\r\nimport torch\r\nimport torch.fx as fx\r\n\r\ndef f(x):\r\n    return torch.reshape(x, (x.shape[0], -1))\r\n\r\nmod = fx.symbolic_trace(f)\r\nprint(mod.code)\r\n```\r\n\r\nIn 1.8 this worked, but it was broken by this PR, which fails since it verifies that the first element of the list is an integer (while it's actually a Proxy): https://github.com/pytorch/pytorch/pull/51350\n\ncc @ezyang", "url": "https://github.com/pytorch/pytorch/issues/53678", "state": "open", "labels": [ "triaged", "module: fx" ], "created_at": "2021-03-10T02:13:32Z", "updated_at": "2022-07-20T21:23:30Z", "user": "Chillee" }, { "repo": "pytorch/pytorch", "number": 53676, "title": "How to concatenate a variable number of tensors", "body": "## \u2753 Questions and Help\r\n\r\nHow to concatenate a variable number of tensors using `torch.cat()`. 
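Is the right pattern to collect the per-layer outputs in a Python list and then pass the whole list to `torch.cat`? A rough, self-contained sketch of what I mean (plain `nn.Linear` layers stand in for my real layers, and the sizes are just placeholders):\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\n\r\nlayers = nn.ModuleList([nn.Linear(8, 8) for _ in range(3)])\r\nsrc = torch.randn(4, 8)\r\n\r\nouts = []\r\nfor layer in layers:\r\n    src = layer(src)\r\n    outs.append(src)           # keep every layer's output\r\nsrc = torch.cat(outs, dim=1)   # concatenate all of them along dim 1\r\nprint(src.shape)               # torch.Size([4, 24])\r\n```\r\n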
For example, I have three layers and I need to concatenate the output of these layers as below:\r\n\r\n```\r\n for layer in self.layers:\r\n     src = layer(src, src_mask) \r\n     # I have three layers and I expect 3 vectors\r\n src = torch.cat([src],1) \r\n```\r\nKind regards,\r\nAiman Solyman", "url": "https://github.com/pytorch/pytorch/issues/53676", "state": "closed", "labels": [], "created_at": "2021-03-10T01:36:15Z", "updated_at": "2021-03-10T08:34:26Z", "user": "aimanmutasem" }, { "repo": "pytorch/TensorRT", "number": 391, "title": "\u2753 [Question] PyTorch 1.8 Support", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\n## What you have already tried\r\nPyTorch 1.8 (stable) was released recently.\r\n\r\nWhen will TRTorch be compatible with PyTorch 1.8? ", "url": "https://github.com/pytorch/TensorRT/issues/391", "state": "closed", "labels": [ "question" ], "created_at": "2021-03-09T05:57:53Z", "updated_at": "2021-03-22T21:50:54Z", "user": "developer0hye" }, { "repo": "pytorch/pytorch", "number": 53584, "title": "How to delete Module from GPU? (libtorch C++)", "body": "All the demos only show how to load model files. But how to unload the model file from the GPU and free up the GPU memory space?\r\nI tried this, but it doesn't work.\r\n\r\n```cpp\r\nmodel.~Module(); \r\nc10::cuda::CUDACachingAllocator::emptyCache();\r\n```\n\ncc @yf225 @glaringlee", "url": "https://github.com/pytorch/pytorch/issues/53584", "state": "open", "labels": [ "module: cpp-extensions", "module: cpp", "triaged" ], "created_at": "2021-03-09T02:55:03Z", "updated_at": "2021-03-11T03:11:09Z", "user": "ZhiZe-ZG" }, { "repo": "pytorch/pytorch", "number": 53580, "title": "How to use logging in libtorch C++? Any example? Many thanks", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\n\ncc @yf225 @glaringlee", "url": "https://github.com/pytorch/pytorch/issues/53580", "state": "closed", "labels": [ "module: cpp", "triaged" ], "created_at": "2021-03-09T02:34:56Z", "updated_at": "2021-03-10T02:58:13Z", "user": "yulinhuyang" }, { "repo": "pytorch/serve", "number": 1001, "title": "How to deploy on the cloud sentence transformer from the UKPLab from ", "body": "Hi community,\r\n\r\nHow could I practically deploy on the cloud pre-trained sentence transformer from the UKPLab ?\r\n\r\nI saw the issue #681 and customisation proposed but didn't know whether it was intended for cloud.\r\n\r\nSecondly, once deployed on cloud how to configure at scale?\r\n\r\nThanks !", "url": "https://github.com/pytorch/serve/issues/1001", "state": "closed", "labels": [ "triaged_wait" ], "created_at": "2021-03-08T20:24:40Z", "updated_at": "2021-05-13T16:51:01Z", "user": "mattvan83" }, { "repo": "pytorch/tutorials", "number": 1401, "title": "Dynamic Quantization for GPT2 model from huggingface.", "body": "Hi,\r\n\r\nReproducibility required: PyTorch version 1.4.0\r\n\r\nI am trying to use the ```torch.quantization.quantize_dynamic``` function to quantize the ```pre_trained``` DistilGPT2 model from Hugging-face.\r\n\r\nAs most transformer blocks in this model are made up of the ```nn.Conv1d``` modules, there occurs a problem while performing the quantization.\r\n\r\nI understand, because the function ```torch.quantization.quantize_dynamic``` does not define a way for quantizing the ```nn.Conv1d``` layer (see the snippet below), they all just go **UN-Quantized** \r\n```\r\n if qconfig_spec is None:\r\n if dtype == torch.qint8:\r\n qconfig_spec = {\r\n nn.Linear : default_dynamic_qconfig,\r\n nn.LSTM : default_dynamic_qconfig\r\n }\r\n```\r\n\r\nPlease suggest a solution.\n\ncc @jerryzh168 @jianyuh", "url": "https://github.com/pytorch/tutorials/issues/1401", "state": "open", "labels": [ "question", "module: quantization" ], "created_at": "2021-03-08T15:06:23Z", "updated_at": "2023-03-09T19:37:48Z", "user": "mriganktiwari" }, { "repo": "pytorch/pytorch", "number": 53395, "title": "How to solve dist.init_process_group from hanging (or deadlocks) with DGX A100?", "body": "## \ud83d\udc1b Bug\r\n\r\nDDP deadlocks on a new dgx A100 machine with 8 gpus\r\n\r\n## To Reproduce\r\n\r\nRun this self contained code:\r\n```\r\n\"\"\"\r\nFor code used in distributed training.\r\n\"\"\"\r\nfrom typing import Tuple\r\n\r\nimport torch\r\nimport torch.distributed as dist\r\n\r\nimport os\r\n\r\nfrom torch import Tensor\r\n\r\nimport torch.multiprocessing as mp\r\n\r\ndef set_sharing_strategy(new_strategy=None):\r\n \"\"\"\r\n https://pytorch.org/docs/stable/multiprocessing.html\r\n https://discuss.pytorch.org/t/how-does-one-setp-up-the-set-sharing-strategy-strategy-for-multiprocessing/113302\r\n https://stackoverflow.com/questions/66426199/how-does-one-setup-the-set-sharing-strategy-strategy-for-multiprocessing-in-pyto\r\n \"\"\"\r\n from sys import platform\r\n\r\n if new_strategy is not None:\r\n mp.set_sharing_strategy(new_strategy=new_strategy)\r\n else:\r\n if platform == 'darwin': # OS X\r\n # only sharing strategy available at OS X\r\n mp.set_sharing_strategy('file_system')\r\n else:\r\n # ulimit -n 32767 or ulimit -n unlimited (perhaps later do try catch to execute this increase fd limit)\r\n mp.set_sharing_strategy('file_descriptor')\r\n\r\ndef use_file_system_sharing_strategy():\r\n \"\"\"\r\n 
when to many file descriptor error happens\r\n\r\n https://discuss.pytorch.org/t/how-does-one-setp-up-the-set-sharing-strategy-strategy-for-multiprocessing/113302\r\n \"\"\"\r\n import torch.multiprocessing\r\n torch.multiprocessing.set_sharing_strategy('file_system')\r\n\r\ndef find_free_port():\r\n \"\"\" https://stackoverflow.com/questions/1365265/on-localhost-how-do-i-pick-a-free-port-number \"\"\"\r\n import socket\r\n from contextlib import closing\r\n\r\n with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:\r\n s.bind(('', 0))\r\n s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\r\n return str(s.getsockname()[1])\r\n\r\ndef setup_process(rank, world_size, backend='gloo'):\r\n \"\"\"\r\n Initialize the distributed environment (for each process).\r\n\r\n gloo: is a collective communications library (https://github.com/facebookincubator/gloo). My understanding is that\r\n it's a library/API for process to communicate/coordinate with each other/master. It's a backend library.\r\n\r\n export NCCL_SOCKET_IFNAME=eth0\r\n export NCCL_IB_DISABLE=1\r\n\r\n https://stackoverflow.com/questions/61075390/about-pytorch-nccl-error-unhandled-system-error-nccl-version-2-4-8\r\n\r\n https://pytorch.org/docs/stable/distributed.html#common-environment-variables\r\n \"\"\"\r\n import torch.distributed as dist\r\n import os\r\n import torch\r\n\r\n if rank != -1: # -1 rank indicates serial code\r\n print(f'setting up rank={rank} (with world_size={world_size})')\r\n # MASTER_ADDR = 'localhost'\r\n MASTER_ADDR = '127.0.0.1'\r\n MASTER_PORT = find_free_port()\r\n # set up the master's ip address so this child process can coordinate\r\n os.environ['MASTER_ADDR'] = MASTER_ADDR\r\n print(f\"{MASTER_ADDR=}\")\r\n os.environ['MASTER_PORT'] = MASTER_PORT\r\n print(f\"{MASTER_PORT}\")\r\n\r\n # - use NCCL if you are using gpus: https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends\r\n if torch.cuda.is_available():\r\n # unsure if this is really needed\r\n # os.environ['NCCL_SOCKET_IFNAME'] = 'eth0'\r\n # os.environ['NCCL_IB_DISABLE'] = '1'\r\n backend = 'nccl'\r\n print(f'{backend=}')\r\n # Initializes the default distributed process group, and this will also initialize the distributed package.\r\n dist.init_process_group(backend, rank=rank, world_size=world_size)\r\n # dist.init_process_group(backend, rank=rank, world_size=world_size)\r\n # dist.init_process_group(backend='nccl', init_method='env://', world_size=world_size, rank=rank)\r\n print(f'--> done setting up rank={rank}')\r\n\r\ndef cleanup(rank):\r\n \"\"\" Destroy a given process group, and deinitialize the distributed package \"\"\"\r\n # only destroy the process distributed group if the code is not running serially\r\n if rank != -1: # -1 rank indicates serial code\r\n dist.destroy_process_group()\r\n\r\ndef get_batch(batch: Tuple[Tensor, Tensor], rank) -> Tuple[Tensor, Tensor]:\r\n x, y = batch\r\n if torch.cuda.is_available():\r\n x, y = x.to(rank), y.to(rank)\r\n else:\r\n # I don't think this is needed...\r\n # x, y = x.share_memory_(), y.share_memory_()\r\n pass\r\n return x, y\r\n\r\ndef test_setup():\r\n print('test_setup')\r\n world_size = 4\r\n mp.spawn(setup_process, args=(world_size,), nprocs=4)\r\n dist.destroy_process_group()\r\n print('successful test_setup!')\r\n\r\n\r\nif __name__ == '__main__':\r\n test_setup()\r\n```\r\n\r\nerror msg\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/miranda9/miniconda3/envs/metalearning/lib/python3.8/multiprocessing/process.py\", line 315, 
in _bootstrap\r\n self.run()\r\n File \"/home/miranda9/miniconda3/envs/metalearning/lib/python3.8/mu", "url": "https://github.com/pytorch/pytorch/issues/53395", "state": "closed", "labels": [ "oncall: distributed" ], "created_at": "2021-03-05T19:14:08Z", "updated_at": "2023-06-08T10:36:24Z", "user": "brando90" }, { "repo": "pytorch/pytorch", "number": 53348, "title": "How to obtain the gradient of a tensor when in-place operation included?", "body": "## \u2753 How to obtain the gradient of a tensor when in-place operation included?\r\nFor simplicity, here is the code to describe the question: when using `res = ma @ mb` in pytorch, we can easily obtain the gradient of ma by calling some backward function, e.g. `(res**2).sum().backward(); print(ma.grad)`. But when this multiplication is implemented in a for loop manner, how can we can the gradient of tensor ma or mb?\r\n```python\r\nimport torch\r\nma = torch.randn(2,3,3,4).requires_grad_(True)\r\nmb = torch.randn(2,3,4,5).requires_grad_(True)\r\nB,C,H,W = ma.shape\r\nB,C,W,K = mb.shape\r\nres_torch = torch.zeros((B,C,H,K), requires_grad=True)\r\nfor b in range(B):\r\n for c in range(C):\r\n for h in range(H):\r\n for k in range(K):\r\n for r in range(W):\r\n res_torch[b][c][h][k] = res_torch[b][c][h][k] + ma[b][c][h][r] * mb[b][c][r][k]\r\nres_torch.sum().backward()\r\nprint(ma.grad)\r\n```\r\nA runtime error raised for the above code: `RuntimeError: leaf variable has been moved into the graph interio`.\r\n\r\nHowever for this one, it can not yield the expected result:\r\n```python\r\nma = torch.randn(2,3,3,4).requires_grad_(True)\r\nmb = torch.randn(2,3,4,5).requires_grad_(True)\r\nB,C,H,W = ma.shape\r\nB,C,W,K = mb.shape\r\nres_torch = torch.zeros((B,C,H,K), requires_grad=True)\r\nfor b in range(B):\r\n for c in range(C):\r\n for h in range(H):\r\n for k in range(K):\r\n res = 0\r\n for r in range(W):\r\n res = res + ma[b][c][h][r] * mb[b][c][r][k]\r\n res_torch[b][c][h][k].data.fill_(res)\r\nres_torch.sum().backward()\r\nprint(ma.grad)\r\n```\r\nthe output was `None`.\r\nAny hints for solving this problem?\n\ncc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer", "url": "https://github.com/pytorch/pytorch/issues/53348", "state": "closed", "labels": [ "module: autograd", "triaged" ], "created_at": "2021-03-05T09:33:53Z", "updated_at": "2021-03-06T02:53:54Z", "user": "Leiwx52" }, { "repo": "pytorch/vision", "number": 3509, "title": "simple API discussion about the AutoAugment", "body": "## \u2753 Questions and Help\r\n\r\nquestion about the user interface API \r\n[transforms/autoaugment.py](https://github.com/pytorch/vision/blob/7b9d30eb7c4d92490d9ac038a140398e0a690db6/torchvision/transforms/autoaugment.py) \r\nThe current usage would be `AutoAugment(AutoAugmentPolicy('cifar10'))`, but since the policy is just an `Enum`, I doubt whether it'll be more convenient to be `AutoAugment('cifar10')`. Is there any future advantage to use the policy?\n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/3509", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2021-03-05T06:19:31Z", "updated_at": "2021-03-07T02:58:29Z", "user": "ain-soph" }, { "repo": "pytorch/examples", "number": 889, "title": "Low training accuracy using pre-trained model", "body": "Hello,\r\nI am trying to evaluate a pre-trained mobilenetv2 model from torchvision on the ImageNet training dataset using this script. 
\r\nTo do so, I modify lines 235-237 to perform validation on the train loader instead of the val loader:\r\n```\r\n    if args.evaluate:\r\n        validate(train_loader, model, criterion, args)\r\n        return\r\n```\r\nEverything else is left untouched. The command I use to run is:\r\n`python imagenet_train_example.py -a mobilenet_v2 -j 16 -b 1024 -e --pretrained /data/ImageNet`\r\nHowever, the results are lower than expected:\r\n`Acc@1 2.926 Acc@5 15.079 Loss 11.795791`", "url": "https://github.com/pytorch/examples/issues/889", "state": "open", "labels": [ "help wanted", "vision" ], "created_at": "2021-03-04T15:15:11Z", "updated_at": "2022-03-09T21:10:33Z", "comments": 2, "user": "AndreiXYZ" }, { "repo": "pytorch/pytorch", "number": 53264, "title": "How to convert a trained .torch model to .mlmodel", "body": "Hi, I need help in converting the .torch to .mlmodel; while doing it I faced an error. After researching I found no solution for this, so I am posting for help.\r\nThe error:\r\n<img width=\"1009\" alt=\"Screenshot 2021-03-01 at 10 24 01 PM\" src=\"https://user-images.githubusercontent.com/35099512/109978249-b2154d80-7d23-11eb-8e2f-39497d77051d.png\">\r\nThe code used:\r\n<img width=\"843\" alt=\"Screenshot 2021-03-02 at 10 46 58 PM\" src=\"https://user-images.githubusercontent.com/35099512/109978278-b93c5b80-7d23-11eb-8286-43ac2cab72e3.png\">\r\n \n\ncc @mruberry", "url": "https://github.com/pytorch/pytorch/issues/53264", "state": "open", "labels": [ "oncall: mobile" ], "created_at": "2021-03-04T14:27:11Z", "updated_at": "2021-03-12T05:28:21Z", "user": "NaveenTg" }, { "repo": "pytorch/serve", "number": 989, "title": "How to get the URL parameters within the custom inference handler?", "body": "Hi guys, recently I'm writing a custom service handler for yolov5. However, I have no idea about how to get the URL parameters in my inference handler. \r\n\r\nFor example:\r\n```\r\ncurl -XPOST http://localhost:8080/predictions/yolo?my_parameter=123 -T@sample.jpg\r\n```\r\nHow can I get the value of ``my_parameter`` in my custom service handler? \r\n\r\nI know that I could pass the parameters within the multipart/form-data or json body to my service handler. But I can't, because the API signature is fixed by design. 
Passing the parameter with URL is the only choice of mine.\r\n\r\nAny suggestions would be appreciated!", "url": "https://github.com/pytorch/serve/issues/989", "state": "open", "labels": [ "triaged_wait" ], "created_at": "2021-03-03T09:48:50Z", "updated_at": "2023-11-07T12:42:08Z", "user": "neoragex2002" }, { "repo": "pytorch/pytorch", "number": 53101, "title": "How to compile torch/lib/c10d/ProcessGroupNCCL.cpp", "body": "I want to modify `ProcessGroupNCCL.cpp` to add some print statements, but I don't know how to recompile this file.\r\n\r\nIt is located at [https://github.com/pytorch/pytorch/tree/v1.7.1/torch/lib/c10d](https://github.com/pytorch/pytorch/tree/v1.7.1/torch/lib/c10d).\r\n\r\nI'm using pytorch 1.7.1 installed by anaconda.\r\n\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @agolynski @SciPioneer @H-Huang @mrzzd @cbalioglu", "url": "https://github.com/pytorch/pytorch/issues/53101", "state": "closed", "labels": [ "oncall: distributed" ], "created_at": "2021-03-02T09:06:49Z", "updated_at": "2021-03-04T03:14:39Z", "user": "1013801464" }, { "repo": "pytorch/text", "number": 1218, "title": "how to load data using TabularDataset and the new nightly torchtext experimental dataloader", "body": "the `torchtext.data.TabularDataset` returns an iterable of objects that cannot further be split into batches, or (x,y) sets of values. making it impossible to use the new `torchtext.vocab.Vocab` to build vocab using `Counter` \r\n\r\n**my use-case code:**\r\ntokenize = lambda x:x.split(\" \")\r\n\r\nkonkani = Field(sequential=True, tokenize=tokenize, init_token='<sos>', eos_token='<eos>')\r\n\r\nhindi = Field(sequential=True, tokenize=tokenize, init_token='<sos>', eos_token='<eos>')\r\n\r\nfields = [(\"word_token_konkani\", konkani), ('word_token_hindi', hindi)]\r\n\r\ntrain_data, test_data = TabularDataset.splits(path=\"translation/\", train=\"train.csv\",\r\n test=\"test.csv\", format=\"csv\", fields=fields)\r\n\r\n\r\ni was trying to refer the migration tutorials here : [link](https://github.com/pytorch/text/blob/master/examples/legacy_tutorial/migration_tutorial.ipynb) \r\n\r\n\r\n", "url": "https://github.com/pytorch/text/issues/1218", "state": "closed", "labels": [], "created_at": "2021-02-26T08:08:49Z", "updated_at": "2021-02-26T16:47:50Z", "user": "StephennFernandes" }, { "repo": "pytorch/pytorch", "number": 52850, "title": "How to skip the images in a custom dataset and deal with None values?", "body": "I have an object detection dataset with RGB images and annotations in Json. I use a custom DataLoader class to read the images and the labels. One issue that I\u2019m facing is that I would like to skip images when training my model if/when labels don\u2019t contain certain objects.\r\n\r\nFor example, If one image doesn\u2019t contain any target labels belonging to the class \u2018Cars\u2019, I would like to skip them. When parsing my Json annotation, I tried checking for labels that don\u2019t contain the class \u2018Cars\u2019 and returned None. 
Subsequently, I used a collate function to filter the None but unfortunately, It is not working.\r\n\r\n\r\n\r\n\r\n```\r\nimport torch\r\nfrom torch.utils.data.dataset import Dataset\r\nimport json\r\nimport os\r\nfrom PIL import Image\r\nfrom torchvision import transforms\r\n#import cv2\r\nimport numpy as np\r\ngeneral_classes = {\r\n # Cars\r\n \"Toyota Corolla\" : 0,\r\n \"VW Golf\" : 0,\r\n \"VW Beetle\" : 0,\r\n\r\n # Motor-cycles\r\n \"Harley Davidson\" : 1,\r\n \"Yamaha YZF-R6\" : 1,\r\n}\r\n\r\ncar_classes={\r\n\"Toyota Corolla\" : 0,\r\n\"VW Golf\" : 0,\r\n\"VW Beetle\" : 0\r\n}\r\n\r\ndef get_transform(train):\r\n transforms = []\r\n # converts the image, a PIL image, into a PyTorch Tensor\r\n transforms.append(T.ToTensor())\r\n if train:\r\n # during training, randomly flip the training images\r\n # and ground-truth for data augmentation\r\n transforms.append(T.RandomHorizontalFlip(0.5))\r\n return T.Compose(transforms)\r\n\r\n\r\ndef my_collate(batch):\r\n batch = list(filter(lambda x: x is not None, batch))\r\n return torch.utils.data.dataloader.default_collate(batch)\r\n\r\n\r\nclass FilteredDataset(Dataset):\r\n # The dataloader will skip the image and corresponding labels based on the dictionary 'car_classes'\r\n def __init__(self, data_dir, transforms):\r\n self.data_dir = data_dir\r\n img_folder_list = os.listdir(self.data_dir)\r\n self.transforms = transforms\r\n\r\n imgs_list = []\r\n json_list = []\r\n self.filter_count=0\r\n self.filtered_label_list=[]\r\n\r\n for img_path in img_folder_list:\r\n #img_full_path = self.data_dir + img_path\r\n img_full_path=os.path.join(self.data_dir,img_path)\r\n json_file = os.path.join(img_full_path, 'annotations-of-my-images.json')\r\n img_file = os.path.join(img_full_path, 'Image-Name.png')\r\n\r\n json_list.append(json_file)\r\n imgs_list.append(img_file)\r\n self.imgs = imgs_list\r\n self.annotations = json_list\r\n total_count=0\r\n\r\n for one_annotation in self.annotations:\r\n filtered_obj_id=[]\r\n with open(one_annotation) as f:\r\n img_annotations = json.load(f)\r\n\r\n parts_list = img_annotations['regions']\r\n for part in parts_list:\r\n current_obj_id = part['tags'][0] # bbox label \r\n check_obj_id = general_classes[current_obj_id]\r\n if(check_obj_id==0):\r\n subclass_id=car_classes[current_obj_id]\r\n filtered_obj_id.append(subclass_id)\r\n total_count=total_count+1\r\n\r\n if(len(filtered_obj_id)>0):\r\n self.filter_count=self.filter_count+1\r\n self.filtered_label_list.append(one_annotation)\r\n\r\n print(\"The total number of the objects in all images: \",total_count)\r\n\r\n\r\n # get one image and the bboxes,img_id, labels of parts, etc in the image as target.\r\n def __getitem__(self, idx):\r\n\r\n img_path = self.imgs[idx]\r\n image_id = torch.tensor([idx])\r\n \r\n with open(self.annotations[idx]) as f:\r\n img_annotations = json.load(f)\r\n parts_list = img_annotations['regions']\r\n obj_ids = []\r\n boxes = []\r\n for part in parts_list:\r\n obj_id = part['tags'][0]\r\n check_obj_id = general_classes[obj_id]\r\n if(check_obj_id==0):\r\n obj_id=car_classes[obj_id]\r\n obj_ids.append(obj_id)\r\n #print(\"---------------------------------------------------\")\r\n \r\n if(len(obj_ids)>0):\r\n img = Image.open(img_path).convert(\"RGB\")\r\n labels = torch.as_tensor(obj_ids, dtype = torch.int64)\r\n target = {}\r\n target['labels'] = labels\r\n \r\n if self.transforms is not None:\r\n img, target = self.transforms(img, target)\r\n return img, target\r\n else:\r\n return None\r\n\r\n\r\n def __len__(self):\r\n 
return len(self.filtered_label_list)\r\n\r\n\r\n\r\n\r\ntrain_data_path = \"path-to-my-annotation\"\r\n# Generators\r\ntrain_dataset = FilteredDataset(train_data_path,get_transform(train=True))\r\nprint(\"Total files in the train_dataset: \",len(train_dataset))\r\n#print(\"The first instance in the train dataset : \",train_dataset[0])\r\n#training_generator = torch.utils.data.DataLoader(train_dataset)\r\ntraining_generator = torch.utils.data.DataLoader(train_dataset,collate_fn=my_collate)\r\nprint(\"\\n\\n Iterat", "url": "https://github.com/pytorch/pytorch/issues/52850", "state": "open", "labels": [ "module: dataloader", "triaged" ], "created_at": "2021-02-25T18:04:33Z", "updated_at": "2021-02-25T22:04:56Z", "user": "srinivasgln" }, { "repo": "pytorch/vision", "number": 3451, "title": "Can't compile master: requires nightly PyTorch?", "body": "I have installed torch 1.7.1 and g++ 7.5.0. Do I need nightly PyTorch version to compile nightly torchvision 0.9.0?\r\n\r\n`pip install git+https://github.com/pytorch/vision --no-dependencies`: [log.txt](https://github.com/pytorch/vision/files/6037409/log.txt)\r\n", "url": "https://github.com/pytorch/vision/issues/3451", "state": "closed", "labels": [ "question" ], "created_at": "2021-02-24T16:28:35Z", "updated_at": "2021-02-24T18:18:30Z", "user": "vadimkantorov" }, { "repo": "pytorch/vision", "number": 3436, "title": "Windows CPU build missing on PyPI?", "body": "## \ud83d\udc1b Bug\r\n\r\nIs there a reason the CPU build of `torchvision` is not pushed to PyPI anymore?\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. `pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.1`\r\n\r\nOutput:\r\n```\r\nCollecting torch==1.7.1\r\n Downloading torch-1.7.1-cp38-cp38-win_amd64.whl (184.0 MB)\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 184.0 MB 201 kB/s\r\nERROR: Could not find a version that satisfies the requirement torchvision==0.8.2\r\nERROR: No matching distribution found for torchvision==0.8.2\r\n```\r\n\r\n## Expected behavior\r\n\r\nCPU build of `torchvision` is installed.\r\n\r\n## Environment\r\n\r\n - OS: Windows\r\n - Python version: 3.8.6\r\n\r\n## Additional context\r\n\r\n`torchvision` used to be pushed to PyPI ([up until v0.5.0](https://pypi.org/project/torchvision/0.5.0/#files)) and I'm wondering why this isn't the case anymore. I'm aware the standard/recommended way of installing is through [the pytorch.org index](https://download.pytorch.org/whl/torch_stable.html). However, the main `torch` package (CPU only) is being pushed to PyPI, so I'm wondering whether it is inteded that both `torchvision` and `torchaudio` are not or if it's just a bug?\r\n\r\nI could not find any helpful recent information on this, only some discussions around PyPI binary size contraints (mainly [this](https://github.com/pytorch/vision/issues/1774) and [this](https://github.com/pytorch/pytorch/issues/24310#)). I understand this is a problem for the CUDA builds but for the CPU build I really do not see any issue (e.g. 
`torchvision` v0.5.0 is 1.2 MB).\r\n\r\nDoes anybody have some insight as to why this is happening?\r\n\n\ncc @peterjc123 @nbcsm @guyang3532 @maxluk @gunandrose4u @smartcat2010 @mszhanyi", "url": "https://github.com/pytorch/vision/issues/3436", "state": "closed", "labels": [ "question", "windows", "topic: binaries" ], "created_at": "2021-02-23T11:54:25Z", "updated_at": "2021-03-09T11:25:53Z", "user": "1enn0" }, { "repo": "pytorch/audio", "number": 1298, "title": "How to compute log filter bank energy in torchaudio compared with python_speech_features?", "body": "## \u2753 I want to reproduce, using torchaudio, the result I get when I compute log filterbank energy with the python_speech_features library.\r\n\r\nThis is my code, and I see the results are different:\r\n\r\n```\r\n# load audio data by librosa\r\npath_audio = \"audio_a.wav\"\r\ny, sr = librosa.load(path_audio, sr=16000, offset=0.5, duration=0.4)\r\n\r\n# load audio data by torch audio\r\naudio_ft, sr = torchaudio.load(path_audio)\r\naudio_ft = audio_ft.squeeze(0)\r\ny_torch = audio_ft[int(0.5*16000):int(0.9*16000)]\r\n\r\n# the result should be the same when I compute log filterbank energy\r\nft_f_bank = python_speech_features.logfbank(y, samplerate=16000, winlen=0.025, winstep=0.01, nfilt=64,nfft=512)\r\nprint(ft_f_bank.shape) # result: (39, 64)\r\nft_f_bank_by_torch = torchaudio.compliance.kaldi.fbank(y_torch, sample_frequency=16000.0, frame_length=25.0, frame_shift=10.0, use_log_fbank=True, use_energy=True, num_mel_bins=64)\r\nprint(ft_f_bank_by_torch.shape) # result: (38, 65)\r\n```\r\nHow can I make the result returned by torchaudio the same as python_speech_features? I don't have a deep understanding of speech features, so the question may sound weird, sorry. \r\nThank you\r\n\r\n", "url": "https://github.com/pytorch/audio/issues/1298", "state": "closed", "labels": [], "created_at": "2021-02-23T10:20:25Z", "updated_at": "2021-02-23T16:34:42Z", "user": "trangtv57" }, { "repo": "pytorch/vision", "number": 3429, "title": "Inconsistency between the pretrained models and labels", "body": "I notice that for the pretrained models that are provided, the labels are not consistent.\r\nFor example, vgg16 class 1 is different from Resnet50 class 1.\r\nCan you let us know where we can find the corresponding labels for each model?\r\nFor vgg I notice that the one that looks like this:\r\n```{\r\n    \"0\": [\r\n        \"n01440764\",\r\n        \"tench\"\r\n    ],\r\n    \"1\": [\r\n        \"n01443537\",\r\n        \"goldfish\"\r\n    ],\r\n    \"2\": [\r\n        \"n01484850\",\r\n        \"great_white_shark\"\r\n    ],\r\n    \"3\": [\r\n        \"n01491361\",\r\n        \"tiger_shark\"\r\n    ],\r\n    \"4\": [\r\n        \"n01494475\",\r\n        \"hammerhead\"\r\n    ],\r\n    \"5\": [\r\n        \"n01496331\",\r\n        \"electric_ray\"\r\n    ],\r\n    \"6\": [\r\n        \"n01498041\",\r\n        \"stingray\"\r\n    ],\r\n    \"7\": [\r\n        \"n01514668\",\r\n        \"cock\"\r\n    ],\r\n    \"8\": [\r\n        \"n01514859\",\r\n        \"hen\"\r\n```\r\nworks, but this one is not the one that we should use for resnets. Please let us know what we should do.\r\nThanks", "url": "https://github.com/pytorch/vision/issues/3429", "state": "closed", "labels": [ "question", "module: models", "module: reference scripts" ], "created_at": "2021-02-22T22:50:42Z", "updated_at": "2021-03-31T08:46:32Z", "user": "seyeeet" }, { "repo": "pytorch/text", "number": 1193, "title": "Looking for an example on how to use BucketIterator with a transformer model?", "body": "I would appreciate an end-to-end example. The examples that I found stop with the BucketIterator. 
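The furthest I have got is a small sketch along these lines, built on the legacy `torchtext.data` API (the toy data, the field names src/trg, and the model sizes are all just my own placeholders):\r\n```\r\nimport torch.nn as nn\r\nfrom torchtext.data import Field, Example, Dataset, BucketIterator  # legacy API (torchtext <= 0.8)\r\n\r\nSRC = Field(init_token='<sos>', eos_token='<eos>')\r\nTRG = Field(init_token='<sos>', eos_token='<eos>')\r\nfields = [('src', SRC), ('trg', TRG)]\r\npairs = [('a small test sentence', 'une petite phrase de test'),\r\n         ('another example', 'un autre exemple')]\r\ntrain_data = Dataset([Example.fromlist(p, fields) for p in pairs], fields)\r\nSRC.build_vocab(train_data)\r\nTRG.build_vocab(train_data)\r\n\r\ntrain_iter = BucketIterator(train_data, batch_size=2,\r\n                            sort_key=lambda ex: len(ex.src), sort_within_batch=True)\r\n\r\nd_model = 32\r\nsrc_emb = nn.Embedding(len(SRC.vocab), d_model)\r\ntrg_emb = nn.Embedding(len(TRG.vocab), d_model)\r\nmodel = nn.Transformer(d_model=d_model, nhead=4, num_encoder_layers=2, num_decoder_layers=2)\r\n\r\nfor batch in train_iter:\r\n    src, trg = batch.src, batch.trg           # shapes: (src_len, batch), (trg_len, batch)\r\n    out = model(src_emb(src), trg_emb(trg))   # (trg_len, batch, d_model)\r\n    print(out.shape)\r\n```\r\n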
It is unclear what to do with it.\r\n\r\n", "url": "https://github.com/pytorch/text/issues/1193", "state": "closed", "labels": [ "legacy" ], "created_at": "2021-02-20T02:41:12Z", "updated_at": "2024-07-12T11:58:25Z", "user": "sorenwacker" }, { "repo": "pytorch/vision", "number": 3421, "title": "error making: python-torchvision-cuda", "body": "can't make an app from AUR `python-torchvision-cuda` in Arch Linux\r\n\r\n\r\n```sh\r\n=========================================================================================== short test summary info ===========================================================================================\r\nFAILED test/test_functional_tensor.py::Tester::test_adjust_brightness - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_adjust_contrast - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_adjust_gamma - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_adjust_hue - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_adjust_saturation - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_affine - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_center_crop - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_crop - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_five_crop - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_gaussian_blur - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_hflip - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_hsv2rgb - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_pad - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_perspective - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_resize - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_resized_crop - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_rgb2hsv - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_rgb_to_grayscale - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_rotate - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_ten_crop - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_functional_tensor.py::Tester::test_vflip - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_image.py::ImageTester::test_decode_image - 
AssertionError: False is not true\r\nFAILED test/test_image.py::ImageTester::test_decode_jpeg - AssertionError: False is not true\r\nFAILED test/test_image.py::ImageTester::test_encode_jpeg - AssertionError: False is not true\r\nFAILED test/test_image.py::ImageTester::test_write_jpeg - AssertionError: b'\\xf[2208 chars]e6\\xa6\\x87\\xc2\\x0c\\xaa\\xcc\\xd9\\xe4\\xfd\\xe3\\x82[170942 chars]\\xd9' != b'\\xf[2208 chars]e6\\xa7\\x0f\\xf0\\x83*\\xb36y?x...\r\nFAILED test/test_models.py::ModelTester::test_fasterrcnn_resnet50_fpn_cpu - TypeError: Object of type 'NoneType' is not an instance of 'function'\r\nFAILED test/test_models.py::ModelTester::test_googlenet_eval - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_models.py::ModelTester::test_keypointrcnn_resnet50_fpn_cpu - RuntimeError: class '__torch__.torchvision.models.detection._utils.BoxCoder' already defined.\r\nFAILED test/test_models.py::ModelTester::test_maskrcnn_resnet50_fpn_cpu - RuntimeError: class '__torch__.torchvision.models.detection._utils.BoxCoder' already defined.\r\nFAILED test/test_models.py::ModelTester::test_retinanet_resnet50_fpn_cpu - RuntimeError: class '__torch__.torchvision.models.detection._utils.BoxCoder' already defined.\r\nFAILED test/test_ops.py::RoIPoolTester::test_backward_cpu_contiguous - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_ops.py::RoIPoolTester::test_backward_cpu_non_contiguous - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_ops.py::PSRoIPoolTester::test_backward_cpu_contiguous - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_ops.py::PSRoIPoolTester::test_backward_cpu_non_contiguous - TypeError: Object of type 'module' is not an instance of 'function'\r\nFAILED test/test_ops.py::RoIAlignTester::test_backwar", "url": "https://github.com/pytorch/vision/issues/3421", "state": "closed", "labels": [ "question", "topic: build" ], "created_at": "2021-02-19T19:53:46Z", "updated_at": "2021-02-21T23:01:29Z", "user": "chiboreache" }, { "repo": "pytorch/cpuinfo", "number": 53, "title": "Cpuinfo in sparc", "body": "I was able to compile pytorch on Debian 10, with Sparc processor. However, when it runs, it gives the error that it does not recognize the cpuinfo information and uses only one processor of the 32 existing ones. I would like to know if I can modify something to take at least one 16 core socket. On several occasions I was able to modify the code so that it takes the correct information. Thanks in advance.", "url": "https://github.com/pytorch/cpuinfo/issues/53", "state": "open", "labels": [ "question" ], "created_at": "2021-02-19T17:48:14Z", "updated_at": "2024-01-11T00:57:03Z", "user": "alerenato" }, { "repo": "pytorch/TensorRT", "number": 344, "title": "[Question ][Error ] at least 4 dimensions are required for input", "body": "## \u2753 Question\r\nHi I managed to compile TRTorch but it gives me very weird results when I apply it to a simple Conv2d model. 
\r\nThe model is as follows : \r\n```\r\nclass DummyModel(torch.nn.Module):\r\n def __init__(self,):\r\n super().__init__()\r\n self.conv = torch.nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3)\r\n def forward(self, x):\r\n return torch.mean(self.conv(x))\r\nmd = DummyModel().to(DEVICE)\r\ninput_ = torch.ones((1, 3, 1024, 1024)).to(DEVICE)\r\nwith torch.no_grad():\r\n traced_model = torch.jit.trace(md, input_)\r\ntorch.jit.save(traced_model, \"net.pth\")\r\n```\r\n\r\nRunning \r\n`bazel run //cpp/trtorchexec -- net.pth \"(1,3,1024,1024)\"`\r\nGives : \r\n\r\n```\r\nDEBUG: [TRTorch - Debug Build] - stride: [1, 1]\r\nDEBUG: [TRTorch - Debug Build] - padding: [0, 0]\r\nDEBUG: [TRTorch - Debug Build] - dilation: [1, 1]\r\nDEBUG: [TRTorch - Debug Build] - out_padding: [0, 0]\r\nDEBUG: [TRTorch - Debug Build] - groups: 1\r\nDEBUG: [TRTorch - Debug Build] - Weights: [10]\r\n Number of input maps: 10\r\n Number of output maps: 10\r\n Element shape: [1]\r\nERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.\r\nDEBUG: [TRTorch - Debug Build] - Output tensor shape: []\r\nINFO: [TRTorch Conversion Context] - Adding Layer %11 : Tensor = aten::mean(%10, %7) # <ipython-input-76-8dff675398f2>:6:0 (ctx.AddLayer)\r\nDEBUG: [TRTorch Conversion Context] - Node input is an already converted tensor\r\nDEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value\r\nERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.\r\nDEBUG: [TRTorch - Debug Build] - Frozen tensor shape: []\r\nERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.\r\nWARNING: [TRTorch - Debug Build] - Mean Converter disregards dtype\r\nERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.\r\nDEBUG: [TRTorch - Debug Build] - Output shape: []\r\nINFO: [TRTorch Conversion Context] - Marking Output 11 named output_0 in engine (ctx.MarkOutput)\r\nERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.\r\nERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.\r\nERROR: [TRTorch Conversion Context] - Layer %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, 
%3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0 failed validation\r\nERROR: [TRTorch Conversion Context] - Network validation failed.\r\n```\r\n\r\nIs there another to specify the input size ? \r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):1.7.1\r\n - CPU Architecture:\r\n - OS (e.g., Linux):Ubuntu 18.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip \r\n - Build command you used (if compiling from source):bazel build //:libtrtorch --compilation_mode opt\r\n - Are you using local sources or building from archives:local sources\r\n - Python version:3.7.9\r\n - CUDA version:11.0\r\n - GPU models and configuration:2080 TI\r\n - Any other relevant information:Nvidia-driver : 450.51.05\r\n\r\n\r\n", "url": "https://github.com/pytorch/TensorRT/issues/344", "state": "closed", "labels": [ "question" ], "created_at": "2021-02-17T14:59:17Z", "updated_at": "2021-02-17T17:36:29Z", "user": "MatthieuToulemont" }, { "repo": "pytorch/vision", "number": 3406, "title": "RetinaNet: TypeError: __init__() got an unexpected keyword argument 'trainable_backbone_layers'", "body": "## \ud83d\udc1b Bug\r\n\r\n`retinanet_resnet50_fpn` throws an error while passing `trainable_backbone_layers` as an argument.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```python\r\nimport torchvision\r\nmodel = torchvision.models.detection.retinanet_resnet50_fpn(trainable_backbone_layers=2)\r\n```\r\n\r\n```\r\n~/gridai/venv/lib/python3.8/site-packages/torchvision/models/detection/retinanet.py in retinanet_resnet50_fpn(pretrained, progress, num_classes, pretrained_backbone, **kwargs)\r\n 620 backbone = resnet_fpn_backbone('resnet50', pretrained_backbone,\r\n 621 returned_layers=[2, 3, 4], extra_blocks=LastLevelP6P7(256, 256))\r\n--> 622 model = RetinaNet(backbone, num_classes, **kwargs)\r\n 623 if pretrained:\r\n 624 state_dict = load_state_dict_from_url(model_urls['retinanet_resnet50_fpn_coco'],\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'trainable_backbone_layers'\r\n```\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)\r\n(or fill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n\r\n - PyTorch / torchvision Version (e.g., 1.0 / 0.4.0):\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch / torchvision (`conda`, `pip`, source):\r\n - Build command you used (if compiling from source):\r\n - Python version:\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/vision/issues/3406", "state": "closed", "labels": [ "question" ], "created_at": "2021-02-16T05:20:25Z", "updated_at": "2021-02-27T17:22:53Z", "user": "kaushikb11" }, { "repo": "pytorch/vision", "number": 3397, "title": "Bug Report: No module named 'torchvision.models.mobilenetv2'", "body": "## \u2753 Questions and Help\r\n\r\nHi there, I encounter a bug when running this following line \r\n\r\n>>> import torch\r\n>>> res = torch.hub.load('pytorch/vision', 'resnet50')\r\n\r\nthe error is:\r\n\r\n-------------------------------------begin of error info---------------------------------\r\n\r\nUsing cache found in /root/.cache/torch/hub/pytorch_vision_master\r\n---------------------------------------------------------------------------\r\nModuleNotFoundError Traceback (most recent call last)\r\n<ipython-input-21-55b890d7b167> in <module>()\r\n 1 import torch\r\n----> 2 res = torch.hub.load('pytorch/vision', 'resnet50')\r\n 3 print(res)\r\n\r\n5 frames\r\n/root/.cache/torch/hub/pytorch_vision_master/hubconf.py in <module>()\r\n 12 from torchvision.models.googlenet import googlenet\r\n 13 from torchvision.models.shufflenetv2 import shufflenet_v2_x0_5, shufflenet_v2_x1_0\r\n---> 14 from torchvision.models.mobilenetv2 import mobilenet_v2\r\n 15 from torchvision.models.mobilenetv3 import mobilenet_v3_large, mobilenet_v3_small\r\n 16 from torchvision.models.mnasnet import mnasnet0_5, mnasnet0_75, mnasnet1_0, \\\r\n\r\nModuleNotFoundError: No module named 'torchvision.models.mobilenetv2'\r\n\r\n-----------------end of error info------------------------------------------------\r\nBTW, my environment is torch-1.7.1, torchvision-0.8.2, I also try to \r\npip install torchvision.models.mobilenetv2\r\nit turns out useless.\r\n\r\nGrateful to hear any suggestions!\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/3397", "state": "closed", "labels": [ "question" ], "created_at": "2021-02-15T12:24:50Z", "updated_at": "2021-02-15T14:35:27Z", "user": "DemonsHunter" }, { "repo": "pytorch/vision", "number": 3392, "title": "How to compile arbitrary nn modules with jit pytorch? 
( RuntimeError: builtin cannot be used as a value, with a dict)", "body": "## \ud83d\udc1b Bug\r\n\r\nSimilar to https://github.com/pytorch/vision/issues/1675.\r\n\r\nSimple, I compare my value to a dict and it throws an error.\r\n\r\n```\r\n \"\"\"\r\n if type(json_data) is dict:\r\n ~~~~ <--- HERE\r\n```\r\n\r\n## To Reproduce\r\n\r\nSimple, any code that has a comparison with a dict:\r\n\r\n```\r\nclass Node(object):\r\n def __init__(self):\r\n pass\r\n\r\n @classmethod\r\n def from_json(cls, json_data):\r\n if type(json_data) is dict:\r\n node_data = next(iter(json_data))\r\n assert type(json_data[node_data]) is list\r\n node_children = [cls.from_json(child) for child in json_data[node_data]]\r\n return Node(node_data, node_children)\r\n else:\r\n return Node(json_data)\r\n\r\n```\r\n\r\n## Expected behavior\r\n\r\nJit makes my checkpoint.\r\n\r\n## Environment\r\n\r\n - PyTorch / torchvision Version (e.g., 1.0 / 0.4.0): 1.7.1\r\n - OS (e.g., Linux): mac os x\r\n - How you installed PyTorch / torchvision (`conda`, `pip`, source): conda\r\n - Build command you used (if compiling from source): conda\r\n - Python version: 3.8\r\n - CUDA/cuDNN version: CPU\r\n - GPU models and configuration: CPU\r\n - Any other relevant information: CPU\r\n\r\n## Additional context\r\n\r\nCompiling arbitrary custom nn modules to jit\r\n\r\nerror:\r\n\r\n```\r\n/Users/brando/anaconda3/envs/coq_gym/bin/python /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py --cmd-line --multiproc --qt-support=auto --client 127.0.0.1 --port 59213 --file /Users/brando/ML4Coq/playground/running_pytorch_ocaml/treenn2jit_ckpt.py\r\nConnected to pydev debugger (build 203.7148.72)\r\n1.7.1\r\nTraceback (most recent call last):\r\n File \"/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/site-packages/torch/jit/_recursive.py\", line 680, in compile_unbound_method\r\n create_methods_and_properties_from_stubs(concrete_type, (stub,), ())\r\n File \"/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/site-packages/torch/jit/_recursive.py\", line 304, in create_methods_and_properties_from_stubs\r\n concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)\r\n File \"/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/site-packages/torch/jit/annotations.py\", line 330, in try_ann_to_type\r\n torch.jit._script._recursive_compile_class(ann, loc)\r\n File \"/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/site-packages/torch/jit/_script.py\", line 1056, in _recursive_compile_class\r\n _compile_and_register_class(obj, rcb, _qual_name)\r\n File \"/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/site-packages/torch/jit/_script.py\", line 64, in _compile_and_register_class\r\n torch._C._jit_script_class_compile(qualified_name, ast, defaults, rcb)\r\nRuntimeError: \r\nbuiltin cannot be used as a value:\r\n File \"/Users/brando/ML4Coq/ml4coq-proj/embeddings_zoo/extract_tactic_from_lasse_data.py\", line 56\r\n term = string\r\n \"\"\"\r\n if type(json_data) is dict:\r\n ~~~~ <--- HERE\r\n node_data = next(iter(json_data))\r\n assert type(json_data[node_data]) is list\r\n'Node.from_json' is being compiled since it was called from '__torch__.embeddings_zoo.extract_tactic_from_lasse_data.Node'\r\n```\r\n\r\nhttps://stackoverflow.com/questions/66179121/how-to-fix-the-runtimeerror-builtin-cannot-be-used-as-a-value-with-a-dict-whe", "url": "https://github.com/pytorch/vision/issues/3392", "state": "closed", "labels": [ "invalid" ], "created_at": "2021-02-12T21:11:15Z", 
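For the `type(json_data) is dict` failure above: TorchScript is statically typed and does not accept `type(...)` builtins used as values, so the usual workaround is to move the dict-ness into the type annotation instead of checking it at runtime. A minimal sketch of that idea (it does not reproduce the recursive `Node` class from the report):

```python
import torch
from typing import Dict, List

@torch.jit.script
def children_of(json_data: Dict[str, List[str]]) -> List[str]:
    # The argument is declared as a Dict up front, so no runtime
    # `type(json_data) is dict` check is needed inside the scripted code.
    out: List[str] = []
    for key in json_data:
        for child in json_data[key]:
            out.append(child)
    return out

print(children_of({"root": ["left", "right"]}))
```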
"updated_at": "2021-02-17T16:29:07Z", "user": "brando90" }, { "repo": "pytorch/pytorch", "number": 52147, "title": "Pointer passed where number is expected for PYTORCH_CUDA_FUSER_JIT_OPT_LEVEL leading to crash", "body": "## \ud83d\udc1b Bug\r\n\r\nThe CUDA API expects a `void**` for option values for functions like `cuModuleLoadDataEx`. The documentation seems to be unclear, what that should be but according to other sources (see below) that value should be simply the value casted to a `void*`, not a pointer to that value.\r\nHence the code at https://github.com/pytorch/pytorch/blob/7763c127cd5630ba4123ad89fc5243c28e91aa4a/torch/csrc/jit/codegen/cuda/executor_utils.cpp#L320 is wrong and may lead to failed executions or wrong optimization levels.\r\n\r\nI've seen this in one of the PyTorch tests (see below) where I get:\r\n```\r\n======================================================================\r\nERROR: test_unary_ops (test_jit_cuda_fuser.TestCudaFuser)\r\n----------------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/tmp/install_pt/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py\", line 827, in wrapper\r\n method(*args, **kwargs)\r\n File \"/dev/shm/s3248973-EasyBuild/PyTorch/1.7.1/fosscuda-2020b/pytorch-1.7.1/test/test_jit_cuda_fuser.py\", line 369, in test_unary_ops\r\n self._unary_test_helper(op)\r\n File \"/dev/shm/s3248973-EasyBuild/PyTorch/1.7.1/fosscuda-2020b/pytorch-1.7.1/test/test_jit_cuda_fuser.py\", line 328, in _unary_test_helper\r\n jit_o = t_jit(x, 2.0)\r\n File \"/tmp/install_pt/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py\", line 126, in prof_func_call\r\n return prof_callable(func_call, *args, **kwargs)\r\n File \"/tmp/install_pt/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py\", line 123, in prof_callable\r\n return callable(*args, **kwargs)\r\nRuntimeError: The following operation failed in the TorchScript interpreter2.\r\nTraceback of TorchScript (most recent call last):\r\nRuntimeError: CUDA driver error: a PTX JIT compilation failed\r\n```\r\n\r\nAnd to verify I added the following code to torch/csrc/jit/codegen/cuda/executor_utils.cpp above the call to `cuModuleLoadDataEx`:\r\n```\r\n options.push_back(CU_JIT_ERROR_LOG_BUFFER);\r\n options.push_back(CU_JIT_ERROR_LOG_BUFFER_SIZE_BYTES);\r\n std::string errors(8000, '\\0');\r\n option_vals.push_back((void*) errors.data());\r\n option_vals.push_back((void*) errors.size());\r\n```\r\n\r\nWhen printing this string on failure I got: \r\n> ptxas fatal : 32-bit integer value (3849789140) out of range\r\n\r\nThis is exactly the pointer to `jit_opt_level` which confirms the above.\r\n\r\nPS: It is likely a good idea to include the JIT error buffer in PyTorch and report it on failure.\r\n\r\nReferences:\r\n- https://stackoverflow.com/a/17070844/1930508\r\n- https://github.com/HongjianLi/cuda/blob/dd52fd563558667315de3fecea3559ac6ba2a89a/vectorAdd/vectorAdd.cpp#L74\r\n- https://github.com/MentorEmbedded/nvptx-tools/blob/59e0b755e3ab085a3a348bd001bad4f010fd9c00/nvptx-run.c#L77-L88\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. 
`python test_jit_cuda_fuser_legacy.py -k test_unary_ops`\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): 1.7.1, master\r\n\n\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/52147", "state": "open", "labels": [ "oncall: jit" ], "created_at": "2021-02-11T17:04:53Z", "updated_at": "2021-02-11T17:44:14Z", "user": "Flamefire" }, { "repo": "pytorch/TensorRT", "number": 338, "title": "\u2753 [Question] What is the correct way to create a trtorch::CompileSpec for a single input? ", "body": "## \u2753 Question\r\n\r\nMy network has a single input of the following shape [1, 3, 224, 224]. I a trying to create the trtorch::CompileSpec as follows\r\n`auto compile_settings = trtorch::CompileSpec({1, 3, 224, 224});` however I am getting the following output\r\n\r\n````\r\nterminate called after throwing an instance of 'trtorch::Error'\r\n what(): [enforce fail at core/conversion/conversion.cpp:135] Expected input_tensors.size() == input_dims.size() to be true but got false\r\nExpected dimension specifications for all input tensors, but found 1 input tensors and 4 dimension specs (conversion.AddInputs)\r\n````\r\nI am wondering whether the constructor is a vector of input shapes? If so, doing \r\n\r\n````\r\nstd::vector<std::vector<int64_t>> input_dims = {{1, 3, 224, 224}};\r\nauto compile_settings = trtorch::CompileSpec(input_dims);\r\n````\r\ngives the following error\r\n\r\n````\r\nERROR: [TRTorch] - Requested converter for aten::adaptive_max_pool2d, but no such converter was found\r\nterminate called after throwing an instance of 'trtorch::Error'\r\n what(): [enforce fail at core/conversion/conversion.cpp:108] Expected converter to be true but got false\r\nUnable to convert node: %512 : Tensor, %513 : Tensor = aten::adaptive_max_pool2d(%511, %7) # /home/federico/.local/lib/python3.8/site-packages/torch/nn/functional.py:844:0 (conversion.AddLayer)\r\nSchema: aten::adaptive_max_pool2d(Tensor self, int[2] output_size) -> (Tensor, Tensor)\r\nConverter for aten::adaptive_max_pool2d requested, but no such converter was found.\r\nIf you need a converter for this operator, you can try implementing one yourself\r\nor request a converter: https://www.github.com/NVIDIA/TRTorch/issues\r\n````\r\n\r\nSo my question is, which approach is the correct one. If the second is, I can try to implement the converter myself but I want to be sure what has to be passed to create a correct `CompileSpec`.\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.7.1\r\n - CPU Architecture: amd64\r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): `LD_LIBRARY_PATH=$(pwd)/bazel-TRTorch/external/libtorch/lib/:$(pwd)/bazel-TRTorch/external/cudnn/lib64/:$(pwd)/bazel-TRTorch/external/tensorrt/lib/:/usr/local/cuda/lib64/:$LD_LIBRARY_PATH bazel run //adv_test:adv_trtorch -c opt --jobs=3 --distdir third_party/dist_dir/x86_64-linux-gnu/`\r\n - Are you using local sources or building from archives: archives\r\n - Python version: 3.8\r\n - CUDA version: 11.0\r\n - GPU models and configuration: GTX 1050\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/338", "state": "closed", "labels": [ "question" ], "created_at": "2021-02-10T14:23:19Z", "updated_at": "2021-02-11T08:04:42Z", "user": "federicohml" }, { "repo": "pytorch/tutorials", "number": 1354, "title": "Tensors tutorial broken?", "body": "It looks like a lot of content is missing from this tutorial: https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py.", "url": "https://github.com/pytorch/tutorials/issues/1354", "state": "closed", "labels": [], "created_at": "2021-02-10T09:58:54Z", "updated_at": "2021-02-12T07:20:38Z", "comments": 2, "user": "Attila94" }, { "repo": "pytorch/TensorRT", "number": 337, "title": "\u2753 [Question] Why bazel is not able to find libcudart-xxxxxxx.so.11.0? ", "body": "## \u2753 Question\r\n\r\nI cloned TRTorch repo and try to play with it with a sample code. I created a folder for this playground in the root path (next to WORKSPACE), add the corresponding `BUILD` and `cpp` files. However when executing `bazel build //adv_test:adv_torchscript --distdir third_party/dist_dir/x86_64-linux-gnu/` I get the following error `execroot/TRTorch/bazel-out/k8-fastbuild/bin/adv_test/adv_torchscript: error while loading shared libraries: libcudart-3f3c6934.so.11.0: cannot open shared object file: No such file or directory`\r\n\r\nThe BUILD file looks like\r\n\r\n```\r\ncc_binary(\r\n name = \"adv_torchscript\",\r\n srcs = [\"adv_torchscript.cc\"],\r\n deps = [\r\n \"@cuda\",\r\n \"@libtorch\",\r\n \"@libtorch//:caffe2\",\r\n ],\r\n)\r\n````\r\nThe cpp file looks like\r\n````\r\n#include <torch/script.h>\r\n// #include <trtorch/trtorch.h>\r\n\r\n// #include <chrono>\r\n#include <iostream>\r\n#include <string>\r\n\r\n// https://gist.github.com/zeryx/526dbc05479e166ca7d512a670e6b82d\r\n// https://github.com/pytorch/vision/issues/2691\r\n\r\nint main(int argc, char** argv) {\r\n const std::string model_file = \"./my_net_torch_script.pt\";\r\n const std::string img_file = \"./test_img.jpg\";\r\n const float num_iterations = 1000.F;\r\n\r\n bool use_gpu = false;\r\n if (argc == 2) {\r\n use_gpu = std::atoi(argv[1]) ? true : false;\r\n }\r\n\r\n std::cout << \"Device set to \" << ((use_gpu) ? \"GPU\" : \"CPU\") << std::endl;\r\n\r\n std::cout << \"Loading TorchScript Model\";\r\n torch::jit::script::Module ts_module;\r\n if (use_gpu) {\r\n ts_module = torch::jit::load(model_file, torch::kCUDA);\r\n } else {\r\n ts_module = torch::jit::load(model_file);\r\n }\r\n std::cout << \" ... OK\" << std::endl;\r\n return 0;\r\n}\r\n````\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture: amd64\r\n - OS (e.g., Linux): Ubuntu 20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source): bazel build //adv_test:adv_torchscript --distdir third_party/dist_dir/x86_64-linux-gnu/\r\n - Are you using local sources or building from archives: archives\r\n - Python version: 3.8\r\n - CUDA version: 11.2\r\n - GPU models and configuration: GTX 1050\r\n - Any other relevant information: I updated the relevant parts of WORKSPACE to use the latest and greatest of CUDNN and TensorRT, i.e. 
URL and sha256sum.\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/337", "state": "closed", "labels": [ "question" ], "created_at": "2021-02-09T16:33:25Z", "updated_at": "2021-02-09T21:34:10Z", "user": "federicohml" }, { "repo": "pytorch/TensorRT", "number": 335, "title": "\u2753 [Question] Typo in \"/py/README.md\"", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\nThere are typo in example in \"/py/README.md\"\r\n\r\n## Example Usage\r\n\r\n``` python\r\nimport torch\r\nimport torchvision\r\nimport trtorch\r\n\r\n# Get a model\r\nmodel = torchvision.models.alexnet(pretrained=True).eval().cuda()\r\n\r\n# Create some example data\r\ndata = torch.randn((1, 3, 224, 224)).to(\"cuda\")\r\n\r\n# Trace the module with example data\r\ntraced_model = torch.jit.trace(model, [data])\r\n\r\n# Compile module\r\ncompiled_trt_model = trtorch.compile(model, {\r\n \"input_shapes\": [data.shape],\r\n \"op_precision\": torch.half, # Run in FP16\r\n})\r\n\r\nresults = compiled_trt_model(data.half())\r\n```\r\n\r\n\r\n```\r\n# Compile module\r\ncompiled_trt_model = trtorch.compile(model, {\r\n \"input_shapes\": [data.shape],\r\n \"op_precision\": torch.half, # Run in FP16\r\n})\r\n```\r\nThe above code should be fixed like the below code.\r\n\r\n```\r\n# Compile module\r\ncompiled_trt_model = trtorch.compile(traced_model , {\r\n \"input_shapes\": [data.shape],\r\n \"op_precision\": torch.half, # Run in FP16\r\n})\r\n```\r\n\r\n## What you have already tried\r\n\r\nI fixed typo, and requested pull request.\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/335", "state": "closed", "labels": [ "question" ], "created_at": "2021-02-09T07:43:17Z", "updated_at": "2021-02-09T23:57:31Z", "user": "developer0hye" }, { "repo": "pytorch/TensorRT", "number": 334, "title": "\u2753 [Question] Typo in \"core/conversion/conversionctx/ConversionCtx.cpp \"", "body": "## \u2753 Question\r\n\r\n<!-- Your question -->\r\n\r\nThere are typo in \"core/conversion/conversionctx/ConversionCtx.cpp \"\r\n\r\nhttps://github.com/NVIDIA/TRTorch/blob/6442fce997e1506d859fab789527fe1e282f683f/core/conversion/conversionctx/ConversionCtx.cpp#L57-L62\r\n\r\nIs this typo, right?\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. 
-->\r\n\r\nI requested [Pull requests](https://github.com/NVIDIA/TRTorch/pull/333).\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - CPU Architecture:\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source):\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version:\r\n - CUDA version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/334", "state": "closed", "labels": [ "question" ], "created_at": "2021-02-09T07:36:32Z", "updated_at": "2021-02-09T23:57:40Z", "user": "developer0hye" }, { "repo": "pytorch/pytorch", "number": 51859, "title": "Need help when using torch jit with an thread pool. (how to use at::set_num_threads correctly)", "body": "Hi, I'm trying to using an thread pool with size N to manage N torch::jit::Module instances, and I want assign one thread to each individual torch::jit::Modules. I'm currently wrapping one torch::jit::Module with a wrapper class, and in the constructor I call at::set_num_threads(1) and at::set_num_interop_threads(1), but it seems not behaving as expected (there being only one working thread doing inference at any time, but not N threads). How should I call at::set_num_threads and at::set_num_interop_threads in my program ? Thanks for attention.\r\n\r\nIn short, how can I restrict one torch::jit::Module doing inference with only one working thread, while controlling the concurrency of different inferences by an existing thread pool ?\r\n\r\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/51859", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2021-02-07T13:09:51Z", "updated_at": "2021-02-12T08:23:52Z", "user": "w1d2s" }, { "repo": "pytorch/TensorRT", "number": 326, "title": "\u2753 [Question] Is there a way to do multithreaded half-precision compilation?", "body": "## \u2753 Question\r\n\r\nI want to compile a Torch script in a different thread than the main thread in a C++ program. 
However, doing so with half precision for large networks will result in a Segmentation fault.\r\n\r\nHere's a program that extracts what I want to do:\r\nhttps://github.com/SakodaShintaro/trtorch-test/blob/master/main.cpp\r\n\r\n```cpp\r\n#include <torch/script.h>\r\n#include <trtorch/trtorch.h>\r\nusing namespace std;\r\n\r\nvoid compile(bool fp16) {\r\n constexpr int64_t INPUT_CHANNEL_NUM = 256;\r\n constexpr int64_t WIDTH = 32;\r\n torch::jit::Module module = torch::jit::load(\"model.ts\");\r\n if (fp16) {\r\n module.to(torch::kCUDA, torch::kHalf);\r\n } else {\r\n module.to(torch::kCUDA);\r\n }\r\n module.eval();\r\n\r\n std::vector<int64_t> in_sizes = {1, INPUT_CHANNEL_NUM, WIDTH, WIDTH};\r\n trtorch::CompileSpec::InputRange range(in_sizes);\r\n trtorch::CompileSpec info({range});\r\n if (fp16) {\r\n info.op_precision = torch::kHalf;\r\n }\r\n module = trtorch::CompileGraph(module, info);\r\n}\r\n\r\nint main() {\r\n // fp32, this thread -> OK\r\n compile(false);\r\n cout << \"fp32, this thread -> finish\" << endl;\r\n\r\n // fp32, another thread -> OK\r\n std::thread thread0([]() { compile(false); });\r\n thread0.join();\r\n cout << \"fp32, another thread -> finish\" << endl;\r\n\r\n // fp16, this thread -> OK\r\n compile(true);\r\n cout << \"fp16, this thread -> finish\" << endl;\r\n\r\n // fp16, another thread -> NG\r\n std::thread thread1([]() { compile(true); });\r\n thread1.join();\r\n cout << \"fp16, another thread -> finish\" << endl;\r\n}\r\n```\r\n\r\n result\r\n\r\n```\r\nfp32, this thread -> finish\r\nfp32, another thread -> finish\r\nfp16, this thread -> finish\r\nSegmentation fault (core dumped)\r\n```\r\n\r\n Is there anything wrong with my code?\r\n\r\n## Environment\r\n I used a Dockerfile I made.\r\nhttps://github.com/SakodaShintaro/trtorch-test/blob/master/docker/Dockerfile\r\n\r\nIf I create a container with this image and execute `./Test`, a Segmentation fault will occur on the 4th line.\r\n\r\nIn `trtorch-test/docker`,\r\n\r\n```\r\ndocker build -t trtorch_test_image .\r\ndocker run --gpus all -it --name trtorch_test_container trtorch_test_image:latest bash\r\n./Test\r\n```\r\n\r\nI sometimes succeed in it, so try it a few times if you want to reproduce it.\r\n\r\n - PyTorch Version (e.g., 1.0): 1.7\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Ubuntu 20.04 (on Docker)\r\n - Build command you used (if compiling from source): bazel build //:libtrtorch --compilation_mode opt\r\n - CUDA version: 11.0\r\n - GPU models and configuration: RTX 2080ti\r\n - Nvidia driver version : 460", "url": "https://github.com/pytorch/TensorRT/issues/326", "state": "closed", "labels": [ "bug", "question", "bug: triaged [verified]" ], "created_at": "2021-02-05T08:50:21Z", "updated_at": "2021-02-26T02:18:13Z", "user": "SakodaShintaro" }, { "repo": "pytorch/examples", "number": 885, "title": "DDP on GPUs invalid ordinal", "body": "there is a node with 8 gpus\uff0cand I can't train my model on any 4 of the gpus, except gpu-id is 0,1,2,3.\r\nhow can I use any permutation and combination of the 8 gpus? 
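A minimal sketch of the usual fix, assuming the intent is to expose GPUs 0,1,2,4 through `CUDA_VISIBLE_DEVICES` as in the snippet below: once that variable is set, PyTorch renumbers the visible devices from 0, so each worker should call `torch.cuda.set_device` with its local rank rather than the original physical id (passing 4 is what produces the invalid ordinal):

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def main_worker(local_rank: int, world_size: int):
    dist.init_process_group(
        backend="nccl",
        init_method="tcp://localhost:23456",
        world_size=world_size,
        rank=local_rank,
    )
    # With CUDA_VISIBLE_DEVICES="0,1,2,4" the four GPUs appear as cuda:0..cuda:3,
    # so the local rank is the correct ordinal; int(gpus[local_rank]) would pass 4.
    torch.cuda.set_device(local_rank)
    print(f"rank {local_rank} -> cuda:{torch.cuda.current_device()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,4"
    world_size = 4
    mp.spawn(main_worker, nprocs=world_size, args=(world_size,))
```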
Thanks \r\n\r\n`-- Process 2 terminated with the following error:\r\nTraceback (most recent call last):\r\n File \"/home/lab-chen.qi/anaconda3/envs/torch17/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 19, in _wrap\r\n fn(i, *args)\r\n File \"/home/lab-chen.qi/sc/resweightv1/tiny_imagenet_multi.py\", line 223, in main_worker\r\n torch.cuda.set_device(gpu)\r\n File \"/home/lab-chen.qi/anaconda3/envs/torch17/lib/python3.7/site-packages/torch/cuda/__init__.py\", line 263, in set_device\r\n torch._C._cuda_setDevice(device)\r\nRuntimeError: CUDA error: invalid device ordinal`\r\n\r\n\r\nsome of my code\r\n\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.distributed as dist\r\nimport torch.utils.data.distributed\r\nimport torch.multiprocessing as mp\r\nimport argparse\r\nimport os\r\n\r\n\r\n\r\nparser = argparse.ArgumentParser(description = 'multi process')\r\n\r\nparser.add_argument('--gpu-id',type =str,default='0,1,2,4')\r\nparser.add_argument('--world-size', default=1, type=int,\r\n help='number of nodes for distributed training')\r\nparser.add_argument('--rank', default=0, type=int,\r\n help='node rank for distributed training')\r\nparser.add_argument('--dist-url', default='tcp://localhost:23456', type=str,\r\n help='url used to set up distributed training')\r\nparser.add_argument('--dist-backend', default='nccl', type=str,\r\n help='distributed backend')\r\nargs = parser.parse_args()\r\n\r\n\r\n\r\n\r\n\r\n\r\ndef main():\r\n global args\r\n\r\n\r\n os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_id\r\n # args.gpu = list(map(int,args.gpu_id.split(',')))\r\n\r\n # state = {k: v for k, v in args._get_kwargs()}\r\n\r\n # ngpus_per_node = torch.cuda.device_count() #len(args.gpu)\r\n\r\n ngpus_per_node = args.gpu_id.split(',').__len__()\r\n # print(os.environ['CUDA_VISIBLE_DEVICES'])\r\n # print('\u80fd\u770b\u5230\u7684gpu',ngpus_per_node)\r\n args.nprocs = ngpus_per_node\r\n args.world_size = ngpus_per_node * args.world_size\r\n mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))\r\n\r\n\r\n# Random seed\r\n\r\n# best_acc = 0 # best test accuracy\r\n\r\ndef main_worker(local_rank,ngpus_per_node,args):\r\n # global best_acc\r\n # start from epoch 0 or last checkpoint epoch\r\n\r\n # if not os.path.isdir(args.checkpoint):\r\n # mkdir_p(args.checkpoint)\r\n # # import pdb\r\n # pdb.set_trace()\r\n gpus = os.environ['CUDA_VISIBLE_DEVICES'].split(',')\r\n gpu = int(gpus[local_rank])\r\n\r\n args.gpu = gpu\r\n best_acc = 0\r\n # print(best_acc)\r\n args.rank = args.rank * ngpus_per_node + local_rank#args.gpu[gpu]\r\n print('rank: {} / {}'.format(args.rank, args.world_size))\r\n\r\n dist.init_process_group(backend=args.dist_backend,\r\n init_method=args.dist_url,\r\n world_size=args.world_size,\r\n rank=args.rank)\r\n\r\n\r\n\r\n torch.cuda.set_device(gpu)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()`\r\n```\r\n\r\n\r\nI try this, but it doesn't work [https://github.com/PyTorchLightning/pytorch-lightning/issues/3791](https://github.com/PyTorchLightning/pytorch-lightning/issues/3791)\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/885", "state": "open", "labels": [ "distributed" ], "created_at": "2021-02-05T02:40:06Z", "updated_at": "2023-03-31T08:30:25Z", "comments": 1, "user": "ccijunk" }, { "repo": "pytorch/serve", "number": 965, "title": "How to change loadedAtStartup to be true while registering a model?", "body": "## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content in 
https://pytorch.org/serve/ is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->\r\n\r\nWhen a model is registered loadedAtStartup is false by default. Is this option related to model pre-load? Is the model supposed to be loaded all time if this is set to be true? And how exactly do we change it while registering a model? Thank you in advance.\r\n", "url": "https://github.com/pytorch/serve/issues/965", "state": "closed", "labels": [ "triaged_wait" ], "created_at": "2021-02-05T01:58:37Z", "updated_at": "2021-05-13T17:41:39Z", "user": "wangs0007" }, { "repo": "pytorch/pytorch", "number": 51712, "title": "UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()`", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1.\r\n1.\r\n1.\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)\r\n(or fill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, source):\r\n - Build command you used (if compiling from source):\r\n - Python version:\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/pytorch/issues/51712", "state": "closed", "labels": [], "created_at": "2021-02-04T08:03:59Z", "updated_at": "2021-02-04T15:52:25Z", "user": "vkl-git" }, { "repo": "pytorch/pytorch", "number": 51431, "title": "torch.where dtype inference is not smart", "body": "## \ud83d\udc1b Bug\r\n\r\n\r\nIf we call `torch.where(mask, float_py_scalar, int_py_scalar)`, the dtype inference will error, but it should use floating type.\r\n\r\n```py\r\nIn [198]: torch.__version__\r\nOut[198]: '1.7.0'\r\n\r\nIn [199]: x = torch.randn(3)\r\n\r\nIn [200]: x\r\nOut[200]: tensor([0.1649, 2.0497, 1.2026])\r\n\r\nIn [201]: torch.where(x > 1, 1.0, 0.0)\r\nOut[201]: tensor([0., 1., 1.])\r\n\r\nIn [202]: torch.where(x > 1, 1.0, 0)\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-202-d99e0dfc5858> in <module>\r\n----> 1 torch.where(x > 1, 1.0, 0)\r\n\r\nRuntimeError: expected scalar type float but found long long\r\n\r\nIn [203]: torch.where(x > 1, 1, 0)\r\nOut[203]: tensor([0, 1, 1])\r\n\r\n```\r\n\r\nWhile one may argue for this error because `int64` and `float32` are not fully compatible, we also support \r\n1. 
`float32_tensor.add(1)` \r\n2. \r\n ```py\r\n In [211]: torch.where(x > 0, 1.0, 0.0)\r\n Out[211]: tensor([1., 1., 1.])\r\n \r\n In [212]: torch.where(x > 0, 1.0, 0.0).dtype\r\n Out[212]: torch.float32\r\n ```\r\n Note how we don't use float64 either.\r\n\r\nso I don't think it should be a problem. \r\n\r\nSimilarly, these errors are also quite annoying\r\n\r\n```py\r\nIn [204]: torch.where(x > 1, x, 0)\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-204-c1551b46bfbc> in <module>\r\n----> 1 torch.where(x > 1, x, 0)\r\n\r\nRuntimeError: expected scalar type float but found long long\r\n\r\nIn [205]: torch.where(x > 1, x, 0.)\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-205-b52b9d3df92f> in <module>\r\n----> 1 torch.where(x > 1, x, 0.)\r\n\r\nRuntimeError: expected scalar type float but found double\r\n```\n\ncc @heitorschueroff", "url": "https://github.com/pytorch/pytorch/issues/51431", "state": "closed", "labels": [ "triaged", "module: sorting and selection", "function request" ], "created_at": "2021-01-31T17:00:30Z", "updated_at": "2021-02-03T17:33:05Z", "user": "ssnl" }, { "repo": "pytorch/examples", "number": 880, "title": "How to run", "body": "", "url": "https://github.com/pytorch/examples/issues/880", "state": "closed", "labels": [], "created_at": "2021-01-31T08:26:07Z", "updated_at": "2022-03-09T19:59:23Z", "user": "1158481739" }, { "repo": "pytorch/TensorRT", "number": 305, "title": "aten::view error", "body": "## \u2753 Question\r\n\r\nDuring conversion, it seems like I found an incomplete support of the torch.view function:\r\n\r\nError as follows:\r\n`at most one dimension may be inferred`\r\n\r\nThe function it is trying to convert is this:\r\n\r\n`out.view(out.shape[0], -1, 4)`\r\n", "url": "https://github.com/pytorch/TensorRT/issues/305", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2021-01-29T21:31:49Z", "updated_at": "2021-05-11T00:06:59Z", "user": "rafale77" }, { "repo": "pytorch/pytorch", "number": 51345, "title": "how to convert torch::conv2d return value(tensor) to cv::mat", "body": "I run the following program:\r\n\r\nread a picture of 3 channel input torch::nn::conv2d(3,3,3).pad(1).stride(1),then I got the results:\r\n![results](https://user-images.githubusercontent.com/8663412/106253310-463a5380-6252-11eb-9fb9-07ac33ee8d37.png)\r\ncode:\r\n\r\n```\r\ncv::Mat img = cv::imread(\"babyx2.png\", 1);\r\ntorch::Tensor img_tensor = torch::from_blob(img.data, { img.rows, img.cols, 3 }, torch::kByte);\r\nimg_tensor = img_tensor.permute({ 2, 0, 1 }); \r\nimg_tensor = img_tensor.unsqueeze(0);\r\nimg_tensor = img_tensor.to(kFloat32);\r\ntorch::Tensor result = C1(img_tensor); //C1: torch::nn::Conv2d(torch::nn::Conv2dOptions(3, 3, 5).padding(1))\r\n.....then get the result use following method\r\nauto ToCvImage(at::Tensor tensor)\r\n{\r\n\tint width = tensor.sizes()[0];\r\n\tint height = tensor.sizes()[1];\r\n\t//auto sizes = tensor.sizes();\r\n\ttry\r\n\t{\r\n\t\tcv::Mat output_mat(cv::Size{ height, width }, CV_8UC3, tensor.data_ptr<uchar>());\r\n\r\n\t\treturn output_mat.clone();\r\n\t}\r\n\tcatch (const c10::Error& e)\r\n\t{\r\n\t\tstd::cout << \"an error has occured : \" << e.msg() << std::endl;\r\n\t}\r\n\treturn cv::Mat(height, width, CV_8UC3);\r\n}\r\n```\r\n\r\nwhat happen????? 
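The C++ snippet above reinterprets a float, NCHW convolution output directly as an 8-bit, HWC `cv::Mat`, which is why the result looks like noise. A sketch of the missing conversion steps in Python; the same clamp, cast, and permute sequence applies with the C++ API before wrapping the data pointer in a `cv::Mat`:

```python
import torch

conv = torch.nn.Conv2d(3, 3, kernel_size=5, padding=1)
img = torch.rand(1, 3, 128, 128) * 255       # stand-in for the loaded image, float32 NCHW

with torch.no_grad():
    out = conv(img)                           # still float32, NCHW

out = out.squeeze(0)                          # CHW
out = out.clamp(0, 255).to(torch.uint8)       # float -> 8-bit range
out = out.permute(1, 2, 0).contiguous()       # CHW -> HWC, contiguous for cv::Mat-style viewing
print(out.shape, out.dtype)                   # torch.Size([126, 126, 3]) torch.uint8
```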
", "url": "https://github.com/pytorch/pytorch/issues/51345", "state": "closed", "labels": [], "created_at": "2021-01-29T08:58:10Z", "updated_at": "2021-01-29T16:20:09Z", "user": "yzqxmu" }, { "repo": "pytorch/pytorch", "number": 51339, "title": "gcc 4.8.5 -std=11 how to build pytorch1.7", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\r\ni want to use gcc4.8.5 to make pytorch1.7 code \r\nWhat should i do on torch1.7.\r\n\r\non torch1.2 usr gcc4.8.5 is ok! but torch 1.7 is bad!\r\n", "url": "https://github.com/pytorch/pytorch/issues/51339", "state": "closed", "labels": [], "created_at": "2021-01-29T07:55:01Z", "updated_at": "2021-01-30T03:39:58Z", "user": "joinhe" }, { "repo": "pytorch/vision", "number": 3322, "title": "a question about segmentation model loading", "body": "## \u2753 Questions and Help\r\nWhy they are different\uff1f\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n![image](https://user-images.githubusercontent.com/32593161/106227085-8d5d2000-6223-11eb-9c66-fcb037faac92.png)\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/3322", "state": "closed", "labels": [ "question", "module: models", "topic: semantic segmentation" ], "created_at": "2021-01-29T03:19:01Z", "updated_at": "2021-01-29T13:45:39Z", "user": "njzyxiong" }, { "repo": "pytorch/pytorch", "number": 51320, "title": "Pytorch not working properly (I don't know how to summarize it, see below)", "body": "When I have a pytorch model, I sometimes would like to extract the features before the final softmax layers or such. Here, I have a model trained and loaded from a pickle:\r\n\r\n```\r\ndef build_model():\r\n model = resnet18(pretrained=True)\r\n \r\n n_features = model.fc.in_features\r\n n_hidden = 100\r\n model.fc = torch.nn.Sequential(\r\n torch.nn.Linear(n_features, n_hidden),\r\n torch.nn.ReLU(),\r\n torch.nn.Linear(n_hidden, 2)\r\n )\r\n \r\n model.to(device)\r\n return model\r\n\r\nmodel = build_model()\r\nmodel.load_state_dict(torch.load('./model.pickle'))\r\nmodel.eval()\r\n```\r\n\r\nThen, I would suppose that the model can be rebuild from it's children:\r\n\r\n```\r\nmodules = list(model.children())\r\nencoder = nn.Sequential(*modules)\r\n```\r\n\r\nHowever, given a test tensor:\r\n\r\n```\r\n>>> x_test.shape\r\ntorch.Size([100, 3, 128, 128])\r\n```\r\n\r\nmodel(x_test) produces an output normaly, but encoder(x_test) gives RuntimeError: mat1 dim 1 must match mat2 dim 0. I don't have any idea on how to investigate it further. The error messages are quite poor. The documentation is also EXTREMELY poor and doesn't specify at all the interface of the torchvision models (for example. 
the \".children\" method came from a forum, because it doesn't appear anywhere in the documentation, which is insane).\n\ncc @albanD @mruberry @jbschlosser", "url": "https://github.com/pytorch/pytorch/issues/51320", "state": "open", "labels": [ "module: nn", "triaged" ], "created_at": "2021-01-29T00:17:31Z", "updated_at": "2021-02-08T23:52:23Z", "user": "ghost" }, { "repo": "pytorch/xla", "number": 2756, "title": "How to sync XLA GPU Tensor between torch and torch_xla", "body": "I'm newly to torch_xla and trying to enable torch_xla in distributed training in PyTorch with multi-node gpu. \r\nHowever, it seems torch_xla doesn't support this scenario well\uff0cfor the following reasons:\r\n1. torch_xla only support single-node multi-processing training by [xmp.spawn](https://pytorch.org/xla/release/1.7/index.html#running-on-multiple-xla-devices-with-multiprocessing)\r\n2. torch_xla GPU aten::Tensor dosen't compatible well with cuda aten::Tensor(since they are difference device)\r\n\r\nTo workaround the issue, I had try to sync xla tensor gradients and move to cuda aten::Tensor mannually before all-reduce. And something weird found:\r\n1. Each xla tensor sync create a SyncTensorGraph, the compilation slow down very much \r\n2. Xla aten::ensor conversion to cuda aten::Tensor would actually do copy\r\n\r\n## \u2753 Questions and Help\r\n1. Is there any function or API that support zero-copy between aten::cuda::Tensor & XLA_GPU aten::tensor?\r\n2. Does each SyncTensor trigger a full-subgraph XLA Compilation?\r\n3. Any best practices or good suggestions to PyTorch multi-node distributed training?", "url": "https://github.com/pytorch/xla/issues/2756", "state": "closed", "labels": [ "stale" ], "created_at": "2021-01-27T02:21:37Z", "updated_at": "2021-06-26T02:22:41Z", "user": "tanyokwok" }, { "repo": "pytorch/TensorRT", "number": 294, "title": "Python Library error after painful compilation.", "body": "## \u2753 Question\r\n\r\nAfter very painfully building the repo from source due to a lot of strangely hardcoded paths to libraries and include which had me modify both the setup.py and the WORKSPACE, I have successfully completed the compilation using bazel. However when I try to use the python extension, I get the following error upon import of the library:\r\n\r\n```\r\n import trtorch\r\n File \"/home/user/.local/lib/python3.8/site-packages/trtorch/__init__.py\", line 11, in <module>\r\n from trtorch._compiler import *\r\n File \"/home/user/.local/lib/python3.8/site-packages/trtorch/_compiler.py\", line 5, in <module>\r\n import trtorch._C\r\nImportError: /home/anhman/.local/lib/python3.8/site-packages/trtorch/lib/libtrtorch.so: undefined symbol: _ZN2at11show_configB5cxx11Ev\r\n```\r\n\r\n## What you have already tried\r\n\r\nThe last time I have seen something similar, it was due to attempts of running a compiled binary under a different version of pytorch than the one it was compiled with. 
It's not the case here as I compiles with 1.7.1.\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.7.1-cu110\r\n - CPU Architecture: x64\r\n - OS (e.g., Linux): Ubuntu20.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): bazel build //:libtrtorch --compilation_mode opt and then python3 setup.py install.\r\n - Are you using local sources or building from archives: source\r\n - Python version: 3.8.7\r\n - CUDA version: 11.2\r\n - GPU models and configuration: RTX 3070\r\n - Any other relevant information: TensorRT 7.2.2.3 and cudnn 8.1\r\n\r\n## Additional context\r\n", "url": "https://github.com/pytorch/TensorRT/issues/294", "state": "closed", "labels": [ "question" ], "created_at": "2021-01-27T01:57:24Z", "updated_at": "2021-02-15T02:41:41Z", "user": "rafale77" }, { "repo": "pytorch/pytorch", "number": 51114, "title": "How to find the module dependency?", "body": "## \u2753 There are many operations in a Model\r\n\r\nIf we run these codes below:\r\n```\r\nimport torch\r\nimport torchvision\r\nmodel = torchvision.models.resnet18()\r\ninp = torch.zeros([64, 3, 7, 7])\r\nfor temp in model.children():\r\n print(temp)\r\n```\r\n\r\nWe can get several modules:\r\n\r\n```\r\nConv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\r\nBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\nReLU(inplace=True)\r\nMaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)\r\nSequential(\r\n (0): BasicBlock(\r\n (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\r\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (relu): ReLU(inplace=True)\r\n (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\r\n (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n )\r\n (1): BasicBlock(\r\n (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\r\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (relu): ReLU(inplace=True)\r\n (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\r\n (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n )\r\n)\r\nSequential(\r\n (0): BasicBlock(\r\n (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\r\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (relu): ReLU(inplace=True)\r\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\r\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (downsample): Sequential(\r\n (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)\r\n (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n )\r\n )\r\n (1): BasicBlock(\r\n (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\r\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (relu): ReLU(inplace=True)\r\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\r\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True)\r\n )\r\n)\r\n........\r\n```\r\n## My problems are:\r\n\r\n1. We can only see constructed modules, but cannot see their input\\output dependencies: In restnet18, the input of second Sequential module are both from the first Sequential and MaxPool2d. Is there any way we can figure out the depencies among different modules \uff08maybe in Python client\uff09?\r\n\r\n2. Moudules are related to high-level operations, can we see related operations and their dependencies in Python client (the outputs of torch.jit._get_trace_graph are too low-level)?\r\n\r\n3. How can wen find back propagation dependencies in Python client?\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/51114", "state": "closed", "labels": [], "created_at": "2021-01-26T17:28:57Z", "updated_at": "2021-01-26T21:51:08Z", "user": "Xuyuanjia2014" }, { "repo": "pytorch/vision", "number": 3294, "title": "Using torchvision roi_align in libtorch c++ jit modules", "body": "## \ud83d\udc1b Bug\r\n\r\nHi, I\u2019m trying to use libtorch 1.7.1 to load a jit model that is created with pytorch 1.5.1 and torchvision 0.6.1.\r\nThis model is using torchvision::roi_align operator.\r\nWhen running the model I get this error:\r\n\r\n**Could not find any similar ops to torchvision::roi_align. This op may not exist or may not be currently supported in TorchScript.**\r\n\r\nloading the model in pytorch is working fine.\r\nAny idea why its not loading?\r\nI need to install another package to my c++ env to be able to load this model?\r\n\r\n## Expected behavior\r\n\r\nload and forward the model successfully in libtorch\r\n\r\n## Environment\r\n\r\nlibtorch version: 1.7.1\r\n\r\nCollecting environment information...\r\nPyTorch version: 1.5.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.2 LTS (x86_64)\r\nGCC version: (Ubuntu 6.4.0-17ubuntu1) 6.4.0 20180424\r\nClang version: Could not collect\r\nCMake version: version 3.18.0\r\n\r\nPython version: 3.6 (64-bit runtime)\r\nIs CUDA available: False\r\nCUDA runtime version: 10.1.243\r\nGPU models and configuration: GPU 0: Quadro P5000\r\nNvidia driver version: 418.87.01\r\ncuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.17.2\r\n[pip3] numpy-indexed==0.3.5\r\n[pip3] numpy-quaternion==2019.10.3.10.26.21\r\n[pip3] numpydoc==0.9.1\r\n[pip3] pytorch3d==0.2.0\r\n[pip3] torch==1.5.1\r\n[pip3] torchvision==0.6.1\r\n[conda] Could not collect\r\n\r\n\r\nThanks", "url": "https://github.com/pytorch/vision/issues/3294", "state": "closed", "labels": [ "question", "module: ops", "topic: object detection", "module: c++ frontend" ], "created_at": "2021-01-26T07:23:02Z", "updated_at": "2022-11-28T05:56:59Z", "user": "natangold85" }, { "repo": "pytorch/vision", "number": 3293, "title": "Affine Transform: why is translate a list[int] when the code suggests it could be floating point?", "body": "https://github.com/pytorch/vision/blob/f16322b596c7dc9e9d67d3b40907694f29e16357/torchvision/transforms/functional.py#L956\n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/3293", "state": "open", "labels": [ "question", "module: transforms" ], "created_at": "2021-01-26T07:14:08Z", "updated_at": "2021-01-26T15:41:51Z", "user": "varung" }, { "repo": "pytorch/TensorRT", "number": 291, "title": "Questions about Value_Tensor_map and Evaluated_Value_map? 
(Not an issue, just try to understand them...)", "body": "I have just gone through TRTorch's 2020 GTC talk/slides/documentation focusing mainly on the graph conversion implementation part. There are some confusions of concepts and questions:\r\n\r\n1. What's the relationship between `torch::jit::Values` and `torch::jit::IValue`, Are they the same thing? I noticed they are used interchangeably in some situations and are referring to different classes in others.\r\n2. Why do we need to record Value -> ITensor map and Value->IValue map? What's the main use of these two maps?\r\n\r\nCould someone help me? Thanks in advance!\r\n", "url": "https://github.com/pytorch/TensorRT/issues/291", "state": "closed", "labels": [ "question" ], "created_at": "2021-01-25T12:40:35Z", "updated_at": "2021-01-25T19:19:48Z", "user": "maxyanghu" }, { "repo": "pytorch/elastic", "number": 140, "title": "Torch Elastic - How to make sure all nodes are in the same AZ?", "body": "## \u2753 Questions and Help\r\n\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nBefore submitting, please ensure you have gone through our documentation. Here\r\nare some links that may be helpful:\r\n\r\n* [What is torchelastic?](../../README.md)\r\n* [Quickstart on AWS](../../aws/README.md)\r\n* [Usage](../../USAGE.md)\r\n* [Examples](../../examples/README.md)\r\n* API documentation\r\n * [Overview](../../USAGE.md)\r\n * [Rendezvous documentation](../../torchelastic/rendezvous/README.md)\r\n * [Checkpointing documentation](../../torchelastic/checkpoint/README.md)\r\n* [Configuring](../../USAGE.md#configuring)\r\n\r\n \r\n### Question\r\n\r\nHi, when using TorchElastic + AWS EKS, how can we ensure that multi-node training jobs have all of the nodes located in the same AZ? This is critical for multi-node training jobs, in terms of speed of data transfer and data transfer costs.\r\n\r\nOne naive way would be to just specify 1 subnet when creating the EKS cluster, but is there a way we can create an EKS cluster with multiple subnets, and when TorchElastic attempts to launch multiple nodes for a training job, it will try to launch them such that all of the nodes are located within 1 subnet/AZ (where that subnet would be one of the subnets that the EKS cluster has)? And is this possible to do with spot instances?\r\n\r\nThanks!", "url": "https://github.com/pytorch/elastic/issues/140", "state": "closed", "labels": [], "created_at": "2021-01-25T00:14:10Z", "updated_at": "2021-05-17T15:47:49Z", "user": "thecooltechguy" }, { "repo": "pytorch/vision", "number": 3283, "title": "How to install torchvision to use video_reader backend?", "body": "I simply installed torchvision from conda (as advertised on pytorch.org). But `torchvision.set_video_backend('video_reader')` prints `video_reader video backend is not available. Please compile torchvision from source and try again`. 
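A quick way to check which backend an installed build actually provides before relying on it (the standard pip/conda wheels generally ship only the default `pyav` backend, so `video_reader` needs a source build with ffmpeg available):

```python
import torchvision

print(torchvision.get_video_backend())          # typically 'pyav' for the standard wheels
torchvision.set_video_backend("video_reader")   # emits the warning above if unavailable
print(torchvision.get_video_backend())          # unchanged unless built from source
```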
This should be mentioned in https://pytorch.org/docs/stable/torchvision/index.html#torchvision.set_video_backend and in torchvision README (including if the `video_reader` is temporarily not supported)\n\ncc @bjuncek", "url": "https://github.com/pytorch/vision/issues/3283", "state": "closed", "labels": [ "enhancement", "module: documentation", "module: video" ], "created_at": "2021-01-24T03:09:56Z", "updated_at": "2022-08-16T10:58:31Z", "user": "vadimkantorov" }, { "repo": "pytorch/vision", "number": 3281, "title": "Can we use DeeplabV3 in Salient Object Detection ?", "body": "Recently, I start doing more in Deep Learning in Semantic Segmentation. I can't figure DeepLabV3 is possible to apply in Salient Object Detection ?", "url": "https://github.com/pytorch/vision/issues/3281", "state": "closed", "labels": [ "question" ], "created_at": "2021-01-24T01:32:09Z", "updated_at": "2021-04-12T07:40:18Z", "user": "duynguyen51" }, { "repo": "pytorch/xla", "number": 2750, "title": "How to change torch tpu v3 baseline into torch tpu pod v2?", "body": "i was trying to run this working torch tpu v3 baseline : https://www.kaggle.com/mobassir/faster-pytorch-tpu-baseline-for-cld-cv-0-9 into torch tpu pod v2.\r\n\r\ni changed hardware accelerator from tpu v3-8 to tpu v2 pod in kaggle and changed used batch size = 1 and \r\n\r\n\r\n```\r\ndef _mp_fn(rank, flags):\r\n global acc_list\r\n torch.set_default_tensor_type('torch.FloatTensor')\r\n res = train_model()\r\n\r\nFLAGS={}\r\nxmp.spawn(_mp_fn, args=(FLAGS,), nprocs=32//8, start_method='fork')\r\n```\r\nbut i get error saying \"process 0 terminated with exit code 1\"\r\ni am not finding any resource or tutorial to convert tpu v3 notebook into tpu pod v2 in pytorch xla,so i wanted to give it a try myself,,,, @taylanbil need your help", "url": "https://github.com/pytorch/xla/issues/2750", "state": "closed", "labels": [], "created_at": "2021-01-23T07:48:39Z", "updated_at": "2021-01-25T21:29:29Z", "user": "mobassir94" }, { "repo": "pytorch/vision", "number": 3274, "title": "Different ENODATA code on macOS", "body": "## \ud83d\udc1b Bug\r\nIt seems macOS ENODATA code (96) is different than the Linux one (61). The Linux code is currently hard-coded in `Video.cpp`, which results in an (unnecessary?) error being shown when using the video decoder on macOS:\r\n\r\nhttps://github.com/pytorch/vision/blob/7d831a2f9b3ebab9eb8e5c899cf70b103ad6908a/torchvision/csrc/io/video/Video.cpp#L314-L318\n\ncc @bjuncek", "url": "https://github.com/pytorch/vision/issues/3274", "state": "closed", "labels": [ "question", "module: video" ], "created_at": "2021-01-22T12:05:45Z", "updated_at": "2021-01-22T17:29:52Z", "user": "stefanwayon" }, { "repo": "pytorch/serve", "number": 943, "title": "how to return Chinese characters with UTF-8 code", "body": "1. When I use torch sever, I return a list in the **postprocess function** of the handler. Each element of the list is a python dictionary and the dictionary value is Chinese characters. Torch sever directly returns a json with the unicode encoding like \"\\u59d3\". Can I control the return using UTF-8? \r\n2. In addition, Is there a corresponding document for \"model-server.jar \u201d ? What's the relationship with torch sever?\r\n\r\nWe look forward to your reply. 
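For reference, the `\u59d3`-style output comes from ASCII-safe JSON encoding rather than from the model itself. Python's `json` module shows the difference, and a custom handler's `postprocess` can serialize its dictionaries with `ensure_ascii=False` before returning them, assuming the handler controls that serialization step:

```python
import json

predictions = [{"name": "姓名", "score": 0.98}]

print(json.dumps(predictions))                      # [{"name": "\u59d3\u540d", ...}]
print(json.dumps(predictions, ensure_ascii=False))  # [{"name": "姓名", ...}]
```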
Thanks a lot.", "url": "https://github.com/pytorch/serve/issues/943", "state": "open", "labels": [ "triaged_wait", "language" ], "created_at": "2021-01-22T09:19:00Z", "updated_at": "2021-05-27T04:36:56Z", "user": "aixuedegege" }, { "repo": "pytorch/vision", "number": 3273, "title": "What is expected Kinetics400 dataset directory structure?", "body": "Given that the dataset does not come with official downloader scripts and that most roll their own or hack some third-party scripts, it would be much clearer if https://pytorch.org/docs/stable/torchvision/datasets.html#kinetics-400 explained what directory structure is expected by `torchvision.datasets.Kinetics400`\r\n\r\nWhat is the expected dataset size? and the video file extensions?\r\n\r\nThanks!\n\ncc @pmeier", "url": "https://github.com/pytorch/vision/issues/3273", "state": "closed", "labels": [ "enhancement", "module: datasets", "module: documentation" ], "created_at": "2021-01-22T01:02:24Z", "updated_at": "2021-03-01T10:18:21Z", "user": "vadimkantorov" }, { "repo": "pytorch/vision", "number": 3267, "title": "get v0.8.1 branch compile out torchvision==0.9.0a0+7b9d30e", "body": "I clone the v0.8.1 branch and compiled it with pytorch 1.7.0, but at last the compiled version is 0.9.0, does anything wrong?\r\n", "url": "https://github.com/pytorch/vision/issues/3267", "state": "closed", "labels": [ "question" ], "created_at": "2021-01-20T09:52:05Z", "updated_at": "2021-01-20T10:29:08Z", "user": "helloyan" }, { "repo": "pytorch/pytorch", "number": 50709, "title": "conv3d in r3d_18: How to maintain the dimension?", "body": "## How to maintain the dimension in conv3d(r3d_18)?\r\n\r\n### convolution in conv3d about padding\r\n\r\n1. the input is (1, 3, 5, 112, 112)\r\n2. the model is `models.video.r3d_18(pretrained=True, progress=False)`\r\n3. the model summary \r\n```\r\nVideoResNet(\r\n (stem): BasicStem(\r\n (0): Conv3d(3, 64, kernel_size=(3, 7, 7), stride=(1, 2, 2), padding=(1, 3, 3), bias=False)\r\n (1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (2): ReLU(inplace=True)\r\n )\r\n (layer1): Sequential(\r\n (0): BasicBlock(\r\n (conv1): Sequential(\r\n (0): Conv3DSimple(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)\r\n (1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (2): ReLU(inplace=True)\r\n )\r\n (conv2): Sequential(\r\n (0): Conv3DSimple(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)\r\n (1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n )\r\n (relu): ReLU(inplace=True)\r\n )\r\n (1): BasicBlock(\r\n (conv1): Sequential(\r\n (0): Conv3DSimple(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)\r\n (1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (2): ReLU(inplace=True)\r\n )\r\n (conv2): Sequential(\r\n (0): Conv3DSimple(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)\r\n (1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n )\r\n (relu): ReLU(inplace=True)\r\n )\r\n )\r\n.............\r\n```\r\n4. input through the first layer in model\r\n```\r\ninput = torch.zeros(1, 3, 5, 112, 112)\r\noutput = model.stem(input)\r\n>>> torch.Size([1, 64, 5, 56, 56])\r\n```\r\n5. 
my question is : why the output is 1* 64* 5* 56* 56 \r\nhow to padding in pytorch\r\nthis is my Schematic diagram\r\n![image](https://user-images.githubusercontent.com/17065425/104982826-6d20aa80-5a46-11eb-9d9d-c167bcae51fb.png)\r\n\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/50709", "state": "closed", "labels": [], "created_at": "2021-01-19T03:07:16Z", "updated_at": "2021-01-20T14:08:55Z", "user": "u0251077" }, { "repo": "pytorch/vision", "number": 3261, "title": "ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. from torchvision import _C\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n>>> from torchvision import _C Traceback (most recent call last): File \"<stdin>\", line 1, in <module> ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory.\r\n\r\n## Environment\r\n\r\npython collect_env.py\r\n```\r\nCollecting environment information...\r\nPyTorch version: 1.1.0\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.0.130\r\nROCM used to build PyTorch: N/A\r\nOS: Ubuntu 16.04.5 LTS (x86_64)\r\nGCC version: (Ubuntu 8.4.0-1ubuntu1~16.04.1) 8.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.14.4\r\nPython version: 3.6 (64-bit runtime)\r\nIs CUDA available: True\r\nCUDA runtime version: 10.0.130\r\nGPU models and configuration:\r\nGPU 0: GeForce GTX 1080 Ti\r\nGPU 1: GeForce GTX 1080 Ti\r\nGPU 2: GeForce GTX 1080 Ti\r\nGPU 3: GeForce GTX 1080 Ti\r\nGPU 4: GeForce GTX 1080 Ti\r\nGPU 5: GeForce GTX 1080 Ti\r\nGPU 6: GeForce GTX 1080 Ti\r\nGPU 7: GeForce GTX 1080 Ti\r\nNvidia driver version: 418.39\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.0\r\n/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.5.1.10\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.19.5\r\n[pip3] torch==1.1.0\r\n[pip3] torchvision==0.4.2\r\n[conda] cudatoolkit 10.0.130 hf841e97_6 conda-forge\r\n[conda] mkl 2020.2 256\r\n[conda] numpy 1.19.5 py36h2aa4a07_1 conda-forge\r\n[conda] pytorch 1.1.0 py3.6_cuda10.0.130_cudnn7.5.1_0 pytorch\r\n[conda] torchvision 0.3.0 py36_cu10.0.130_1 pytorch\r\n```\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\nI was using fasterRCNN Object detector in torchvision while doing keep = nms(boxes_for_nms, scores, iou_threshold) it is giving this error. Easy way to reproduce this error is to run \r\n\r\n> from torchvision import _C\r\n\r\nPlease help. \n\ncc @fmassa @vfdev-5", "url": "https://github.com/pytorch/vision/issues/3261", "state": "closed", "labels": [ "question", "topic: binaries" ], "created_at": "2021-01-17T17:08:27Z", "updated_at": "2021-06-16T15:08:15Z", "user": "IISCAditayTripathi" }, { "repo": "pytorch/pytorch", "number": 50657, "title": "How to maximize inference speed of models implemented with C++ API ? (not using torchscript or jit) ", "body": "I'm currently implementing some seq2seq model with LibTorch C++ API (build from torch::nn::Modules, not using jit), is there any special techniques to optimize the inference speed ? 
Thanks.\r\n\r\ncc @yf225 @glaringlee @VitalyFedyunin @ngimel @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/50657", "state": "closed", "labels": [ "module: performance", "module: cpp", "triaged" ], "created_at": "2021-01-17T02:55:51Z", "updated_at": "2024-06-27T07:58:38Z", "user": "w1d2s" }, { "repo": "pytorch/xla", "number": 2733, "title": "How to install Torch_XLA in my own laptop?", "body": "## \u2753 Questions and Help\r\nI want build a envirment about Torch_XLA on my own laptop by Annconda3. But I do not find any information about this. Is it difficult to use Annconda3 or pip install Torch_XLA?", "url": "https://github.com/pytorch/xla/issues/2733", "state": "closed", "labels": [], "created_at": "2021-01-15T02:38:39Z", "updated_at": "2021-04-09T04:54:46Z", "user": "TianshengSun" }, { "repo": "pytorch/examples", "number": 870, "title": "Permissions to contribute", "body": "Hi there, I thought I could contribute a few notebooks with really low barrier to entry for concepts like regression using tensors and for loops, small and highly documented shallow nets to illustrate concepts etc. I tried to push a notebook today to a branch I checked out for a PR but don't have permissions. How I can I request them? ", "url": "https://github.com/pytorch/examples/issues/870", "state": "closed", "labels": [], "created_at": "2021-01-13T13:26:02Z", "updated_at": "2022-03-09T20:16:51Z", "comments": 1, "user": "rbownes" }, { "repo": "pytorch/vision", "number": 3246, "title": "assert error len(grid_sizes) == len(strides) == len(cell_anchors)", "body": "It looks like a bug. When I do not set the AnchorGenerator() in FasterRCNN, the default anchor_sizes in ### **detection/faster_rcnn.py** line**182** shows that 'anchor_sizes = ((32,), (64,), (128,), (512,))' which cause len(cell_anchors) == 5. And I found that in the **detection/faster_rcnn.py** line**120** the anchor_size set '((32, 64, 128, 256, 512), )' and len(cell_anchors) == 1", "url": "https://github.com/pytorch/vision/issues/3246", "state": "closed", "labels": [ "question" ], "created_at": "2021-01-13T03:30:16Z", "updated_at": "2021-01-20T11:06:09Z", "user": "ghost" }, { "repo": "pytorch/pytorch", "number": 50426, "title": "How to do gathering on a tensor with two-dim indexing", "body": "### Question\r\nHi,\r\nWant to add symbolic func to a custom PyTorch op and export it to ONNX using existing ONNX ops. There is two-dim indexing operation. Have tried `index_select`, but not work. 
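One rewrite I am considering (shown in plain PyTorch below, not as the symbolic itself) is to express the pairwise two-dim indexing as a single 1-D gather on a flattened view, since reshape plus `index_select` both map to existing ONNX ops; whether doing the same inside `symbolic()` is the recommended approach is part of what I would like to confirm:

```python
import torch

data = torch.randn(4, 5, 3)
x_idx = torch.tensor([0, 2, 3])
y_idx = torch.tensor([1, 4, 0])

# data[x_idx, y_idx] picks element-wise pairs (x_i, y_i); the same result can
# be obtained by flattening the first two dims and gathering linear indices.
flat = data.reshape(-1, *data.shape[2:])                  # (4 * 5, 3)
out = flat.index_select(0, x_idx * data.size(1) + y_idx)

assert torch.equal(out, data[x_idx, y_idx])
```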
So could anyone take a look into this and help me with this?\r\n### Further information\r\n\r\nSample code\r\n```\r\ndef my_custom_op(data, x_indices, y_indices):\r\n ## suppose this op is written in c++\r\n return data[x_indice, y_indices]\r\n\r\nclass MyCustomOp(torch.autograd.Function):\r\n \r\n @staticmethod\r\n def forward(ctx, data, x_indices, y_indices):\r\n return my_custom_op(data, x_indices, y_indices)\r\n\r\n @staticmethod\r\n def symbolic(g, data, x_indices, y_indices):\r\n from torch.onnx.symbolic_opset9 import index_select, transpose\r\n data_xs = index_select(g, data, 0, x_indices)\r\n ## don't know how to do this because index_select not work for this \r\n # data_xs = transpose(g, data_xs, 0, 1)\r\n # data_ys = index_select(g, data_xs, 0, y_indices)\r\n return out\r\n\r\n```\r\nThanks in advance.", "url": "https://github.com/pytorch/pytorch/issues/50426", "state": "closed", "labels": [], "created_at": "2021-01-12T10:21:14Z", "updated_at": "2021-01-12T22:15:39Z", "user": "RunningLeon" }, { "repo": "pytorch/pytorch", "number": 50346, "title": "how to save weights when using RPC framework", "body": "Hi,\r\n\r\nI am using the RPC framework to split the model across different processes/ranks. However, I notice that calling torch.save will only save the weights of the part of the model on a single rank. I am wondering if there is a way to save the weights of all models into one file?\r\n\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @jjlilley @osalpekar @jiayisuse @mrzzd @agolynski @SciPioneer @H-Huang @cbalioglu", "url": "https://github.com/pytorch/pytorch/issues/50346", "state": "open", "labels": [ "oncall: distributed", "triaged", "module: rpc" ], "created_at": "2021-01-10T08:26:37Z", "updated_at": "2024-11-18T17:04:45Z", "user": "FrankLeeeee" }, { "repo": "pytorch/TensorRT", "number": 267, "title": "prim::ListUnpack unable to get schema", "body": "When I try to complie a model, I got such error\r\n```\r\n\u001b[1;35mDEBUG: \u001b[0mUnable to get schema for Node %b.1 : int, %nframe.1 : int, %c : int, %h.1 : int, %w.1 : int = prim::ListUnpack(%15) (NodeConverterRegistry.Convertable)\r\nterminate called after throwing an instance of 'trtorch::Error'\r\n what(): [enforce fail at core/conversion/conversion.cpp:392] Expected schema to be true but got false\r\nUnable to get schema for Node %b.1 : int, %nframe.1 : int, %c : int, %h.1 : int, %w.1 : int = prim::ListUnpack(%15) (conversion.VerifyCoverterSupportForBlock)\r\n```\r\nand the related graph definition is this\r\n```\r\n %15 : int[] = aten::size(%images.1) # <string>:7:9\r\n %b.1 : int, %nframe.1 : int, %c : int, %h.1 : int, %w.1 : int = prim::ListUnpack(%15)\r\n```\r\nInput shape is (1,1,3,672,672)\r\n\r\ndetailed log is here \r\n[listunpack.txt](https://github.com/NVIDIA/TRTorch/files/5786336/listunpack.txt)\r\nGDB backtrace\r\n```\r\n#0 0x00007fff63987438 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54\r\n#1 0x00007fff6398903a in __GI_abort () at abort.c:89\r\n#2 0x00007ffff7a8ddde in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\r\n#3 0x00007ffff7a99896 in ?? 
() from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\r\n#4 0x00007ffff7a99901 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\r\n#5 0x00007ffff7a99b55 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\r\n#6 0x000000000047b116 in trtorch::core::conversion::GetUnsupportedOpsInBlock[abi:cxx11](torch::jit::Block const*) (b=0x5d4b9d50) at core/conversion/conversion.cpp:390\r\n#7 0x000000000047b3a7 in trtorch::core::conversion::VerifyConverterSupportForBlock (b=0x5d4b9d50) at core/conversion/conversion.cpp:406\r\n#8 0x000000000045d784 in trtorch::core::CheckMethodOperatorSupport (mod=..., method_name=\"forward\") at core/compiler.cpp:136\r\n#9 0x000000000045ac55 in trtorch::CheckMethodOperatorSupport (module=..., method_name=\"forward\") at cpp/api/src/trtorch.cpp:14\r\n#10 0x000000000042178d in main (argc=5, argv=0x7fffffffdf68) at cpp/trtorchc/main.cpp:371\r\n```\r\n\r\nIn official pytorch source code, I find this\r\n```\r\n %16 : Tensor[] = aten::chunk(%gates, %7, %8)\r\n %ingate.1 : Tensor, %forgetgate.1 : Tensor, %cellgate.1 : Tensor, %outgate.1 : Tensor = prim::ListUnpack(%16)\r\n```\r\nDose this mean the aten::size is a operator rather than evaluator ?\r\n\r\nIn trtorch aten.cpp, we have\r\n```\r\n .evaluator({c10::Symbol::fromQualString(\"aten::size\"),\r\n [](const torch::jit::Node* n, kwargs& args) -> c10::optional<torch::jit::IValue> {\r\n LOG_WARNING(\"There may be undefined behavior using dynamic shape and aten::size\");\r\n auto tensor_var = args.at(n->input(0));\r\n if (n->inputs().size() == 1) {\r\n if (tensor_var.isITensor()) {\r\n auto tensor = tensor_var.ITensor();\r\n return util::toVec(tensor->getDimensions());\r\n } else {\r\n auto tensor = tensor_var.unwrapToTensor();\r\n return tensor.sizes();\r\n }\r\n } else {\r\n auto dim = args.at(n->input(1)).unwrapToInt();\r\n if (tensor_var.isITensor()) {\r\n auto tensor = tensor_var.ITensor();\r\n return util::toVec(tensor->getDimensions())[dim];\r\n } else {\r\n auto tensor = tensor_var.unwrapToTensor();\r\n return tensor.sizes()[dim];\r\n }\r\n }\r\n },\r\n EvalOptions().validSchemas(\r\n {\"aten::size(Tensor self) -> (int[])\", \"aten::size.int(Tensor self, int dim) -> (int)\"})})\r\n .evaluator({c10::Symbol::fromQualString(\"aten::__getitem__\"),\r\n```\r\n\r\nIn another graph, compiling have the same issue\r\n```\r\n %46 : Tensor[] = aten::split(%45, %6, %7) # /opt/tiger/conda/lib/python3.7/site-packages/torch/tensor.py:375:0\r\n %47 : Tensor, %48 : Tensor = prim::ListUnpack(%46)\r\n\r\n\u001b[1;35mDEBUG: \u001b[0mUnable to get schema for Node %47 : Tensor, %48 : Tensor = prim::ListUnpack(%46) (NodeConverterRegistry.Convertable)\r\nterminate called after throwing an instance of 'trtorch::Error'\r\n what(): [enforce fail at core/conversion/conversion.cpp:392] Expected schema to be true but got false\r\nUnable to get schema for Node %47 : Tensor, %48 : Tensor = prim::ListUnpack(%46) (conversion.VerifyCoverterSupportForBlock)\r\n```", "url": "https://github.com/pytorch/TensorRT/issues/267", "state": "closed", "labels": [ "question" ], "created_at": "2021-01-08T09:28:32Z", "updated_at": "2021-01-22T19:51:16Z", "user": "inocsin" }, { "repo": "pytorch/vision", "number": 3233, "title": "Which paper is torchvision.ops.deform_conv2d from?", "body": "## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. 
If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->\r\n\r\nI want to know which paper [torchvision.ops.deform_conv2d](https://pytorch.org/docs/stable/torchvision/ops.html#torchvision.ops.deform_conv2d) is from, is it DCNv1 or DCNv2?\r\n", "url": "https://github.com/pytorch/vision/issues/3233", "state": "closed", "labels": [ "question", "module: documentation" ], "created_at": "2021-01-08T09:17:08Z", "updated_at": "2021-01-08T10:11:11Z", "user": "songyuc" }, { "repo": "pytorch/pytorch", "number": 50139, "title": "How to correctly nest datasets and dataloaders?", "body": "## \u2753 Questions and Help\r\n\r\nHi, I am asking here because it seemed like the right place, if it isn't please tell me where to ask.\r\n \r\n \r\n Consider a stream of tabular data.\r\n\r\n```\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\n\r\ndef data_stream():\r\n for _ in range(1000):\r\n df = pd.DataFrame({\r\n 'a': np.arange(10000),\r\n 'b': (np.arange(10000) + 10000)\r\n })\r\n yield df\r\n```\r\n\r\nPlease assume the dataframes will be large (and different).\r\n\r\n\r\nI want to create a dataloader for data that is arranged as I stated above.\r\nbatches should be of X rows of the current dataframe, until it is done (including shuffling flexibility ect.). Can throw away the last batch if it is not full.\r\nThen, go on to the next dataframe, until StopIteration.\r\n\r\nIf it were a single dataframe, I would simply use the good old torch.utils.data.Dataset with a standard dataloader, with small configuration of the number of df rows per sample and be done.\r\n\r\nIf it were a stream of single sample per stream item, I would use torch.utils.data.IterableDataset exactly like the doc states.\r\n\r\nHowever, I have both.\r\n\r\nIf I use a torch.utils.data.IterableDataset, I have to define a DataLoader for it, and I then lose the power of the DataLoader that would operate on the df itself. The same problem would arise in the other direction.\r\n\r\n___\r\n\r\nWhat's the correct way of handling data that is arranged like this?", "url": "https://github.com/pytorch/pytorch/issues/50139", "state": "closed", "labels": [], "created_at": "2021-01-06T11:44:07Z", "updated_at": "2021-01-07T00:46:10Z", "user": "noamzilo" }, { "repo": "pytorch/tutorials", "number": 1304, "title": "NLP FROM SCRATCH: TRANSLATION WITH A SEQUENCE TO SEQUENCE NETWORK AND ATTENTION", "body": "Hi\r\nI'm exgausted... 
how to save and load model in future?", "url": "https://github.com/pytorch/tutorials/issues/1304", "state": "closed", "labels": [], "created_at": "2021-01-06T10:45:46Z", "updated_at": "2021-06-02T19:39:35Z", "comments": 1, "user": "aloska" }, { "repo": "pytorch/TensorRT", "number": 266, "title": "How to convert model from double to float", "body": "When I try to complie torchscript model, I get this log\r\n```\r\nDEBUG: [TRTorch Conversion Context] - Found IValue containing object of type Double(requires_grad=0, device=cpu)\r\nterminate called after throwing an instance of 'trtorch::Error'\r\n what(): [enforce fail at core/util/trt_util.cpp:293] Expected aten_trt_type_map.find(t) != aten_trt_type_map.end() to be true but got false\r\nUnsupported Aten datatype\r\n```\r\n\r\nSo I try to convert model to float using this\r\n```\r\nscript_model = torch.jit.load(path)\r\nscript_model = script_model.eval()\r\nscript_model = script_model.float()\r\nscript_model.save(new_path)\r\n```\r\nAnd it still throw this error", "url": "https://github.com/pytorch/TensorRT/issues/266", "state": "closed", "labels": [ "question", "component: core" ], "created_at": "2021-01-06T09:59:10Z", "updated_at": "2022-08-12T21:10:14Z", "user": "inocsin" }, { "repo": "pytorch/pytorch", "number": 50118, "title": "torch.where scalar/tensor documentation is unclear and not formatted", "body": "## \ud83d\udcda Documentation\r\n\r\nSee:\r\n`\r\nCurrently valid scalar and tensor combination are 1. Scalar of floating dtype and torch.double 2. Scalar of integral dtype and torch.long 3. Scalar of complex dtype and torch.complex128\r\n`\r\n\r\nI believe these are supposed to be on separate lines. Also this message comes before the type information, it's not clear what. \"scalar and tensor combination\" are. It should at least mention it's talking about `x` and `y` and not `condition`.\r\n\r\n\r\n<!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->\r\n\n\ncc @jlin27 @mruberry @heitorschueroff", "url": "https://github.com/pytorch/pytorch/issues/50118", "state": "open", "labels": [ "module: docs", "triaged", "module: sorting and selection" ], "created_at": "2021-01-05T22:52:49Z", "updated_at": "2021-01-07T17:14:35Z", "user": "gchanan" }, { "repo": "pytorch/pytorch", "number": 50112, "title": "need a clear guide for when and how to use torch.cuda.set_device()", "body": "## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\n\r\nI find myself quite unclear about `torch.cuda.set_device()`. The current documentation is very unsatisfactory, ambgious and confusing. e.g. the first 3 lines of code sample: https://pytorch.org/docs/stable/notes/cuda.html#cuda-semantics\r\n```\r\ncuda = torch.device('cuda') # Default CUDA device\r\ncuda0 = torch.device('cuda:0')\r\ncuda2 = torch.device('cuda:2') # GPU 2 (these are 0-indexed)\r\n```\r\nit's very ambiguous and doesn't tell me anything. 
What is the default device in that example?\r\n\r\nHow come `torch.cuda.set_device()` is not used here - as it's the latter that's supposed to set the default device.\r\n\r\nIf possible I would like to ask for a clarification of what @ngimel shared here: https://github.com/pytorch/pytorch/issues/49961#issuecomment-754319348 quote:\r\n\r\n> Default device is the device you are setting with torch.cuda.set_device(). It's possible to set device to 1 and then operate on the tensors on device 0, but for every function internally pytorch would be calling cudaSetDevice(0) - launch function kernel - cudaSetDevice(1) as part of setting device guards, and this is generally less efficient then setting device to 0 in the first place.\r\n\r\nShe suggested that unless I explicitly set `torch.cuda.set_device()` when switching to a different device (say 0->1) the code could incur a performance hit, because it'll first switch to device 0 and then 1 on every pytorch op if the default device was somehow 0 at that point.\r\n\r\nSo, say, if I'm setting up a DDP in the program. Do I have to call `torch.cuda.set_device(local_rank)` at some point after `torch.distributed.init_process_group()` since otherwise the default device will be `cpu` and the whole program will be slower because of that.\r\n\r\nShould pytorch flag to users when the default device isn't matching the device the op is run on?\r\n\r\nAnd say, I'm doing model parallelism as explained in this [tutorial](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html#apply-model-parallel-to-existing-modules) - why doesn't it do `torch.cuda.set_device()` when switching devices?\r\n\r\nWould it be possible to write a clear documentation on when to use `torch.cuda.set_device()`? Currently, it seems to be used more as a band-aid when related to device-switching bugs are encountered, since most of the time most code seems to work just fine w/o it, yet we unknowingly create a performance hit. \r\n\r\nThank you!\n\ncc @ngimel @jlin27 @mruberry", "url": "https://github.com/pytorch/pytorch/issues/50112", "state": "open", "labels": [ "module: docs", "module: cuda", "triaged", "needs design" ], "created_at": "2021-01-05T22:11:26Z", "updated_at": "2025-12-26T12:57:46Z", "user": "stas00" }, { "repo": "pytorch/examples", "number": 866, "title": "Structure of train_loader", "body": "Hi and thanks in advice for your help! I would like to upload my own set of images and to train the variational autoencoder model with my training set. I don't understand what is the structure of your train_loader. I see you use torch.utils.data.DataLoader on datasets.MNIST to obtain train_loader, but I don't understand if train_loader is the list of the images represented as numpy array or what else.", "url": "https://github.com/pytorch/examples/issues/866", "state": "closed", "labels": [], "created_at": "2021-01-04T15:57:34Z", "updated_at": "2022-03-09T21:17:33Z", "comments": 1, "user": "Silvia-Sciva" }, { "repo": "pytorch/pytorch", "number": 50030, "title": "How to realize Cross Validation using torchtext?", "body": "I want to realize cross validation using torchtext. Here is what I have done:\r\n1. First, I use TabularDataset to define a dataset from the JSON file\r\n2. Then, I use train_exs_arr = np.array(train_data.examples), d_train = train_exs_arr[train_idx].tolist() \r\n3. Then, I use Dataset to define a sub-dataset from Examples d_train\r\n4. Finally, I use BucketIterator. 
However, I can not access the data from BucketIterator", "url": "https://github.com/pytorch/pytorch/issues/50030", "state": "closed", "labels": [], "created_at": "2021-01-04T03:08:29Z", "updated_at": "2021-01-04T07:09:40Z", "user": "yipliu" }, { "repo": "pytorch/xla", "number": 2707, "title": "How to write pure Python function which can be ran on TPUs while using PyTorch-XLA?", "body": "I got existing code to train EfficientNet using PyTorch which contains custom augmentations like CutMix, MixUp etc. in my training loop. This runs perfectly on GPU. Now I want to change my code such that it can run on TPUs.\r\n\r\nI've made required changes to run my code on 8 TPU cores using PyTorch XLA but it's runs very slow when I use custom augmentations in training loop (even slower than GPU). When I remove them it runs significantly faster. So I think I have to make changes in my augmentation functions as well.\r\n\r\nHere is my training loop.\r\n```python\r\ndef train():\r\n for batch in train_loader:\r\n X, y = batch[0].to(device), batch[1].to(device) # device is xla\r\n cutmixup_prob = random.random()\r\n\r\n if cutmixup_prob > 0.4:\r\n X, y, y_shuffled, lam = cutmix(X, y, 0.4)\r\n\r\n # forward pass\r\n # calc. loss\r\n # backward pass\r\n xm.optimizer_step(optimizer)\r\n \r\n # calc. and return accuracy\r\n```\r\n\r\nAnd here is my complete `cutmix` function, which causes issues:\r\n\r\n```python\r\n# https://www.kaggle.com/c/bengaliai-cv19/discussion/126504\r\ndef rand_bbox(size, lam):\r\n W = size[2]\r\n H = size[3]\r\n cut_rat = np.sqrt(1. - lam)\r\n cut_w = np.int(W * cut_rat)\r\n cut_h = np.int(H * cut_rat)\r\n\r\n # uniform\r\n cx = np.random.randint(W)\r\n cy = np.random.randint(H)\r\n\r\n bbx1 = np.clip(cx - cut_w // 2, 0, W)\r\n bby1 = np.clip(cy - cut_h // 2, 0, H)\r\n bbx2 = np.clip(cx + cut_w // 2, 0, W)\r\n bby2 = np.clip(cy + cut_h // 2, 0, H)\r\n \r\n return bbx1, bby1, bbx2, bby2\r\n\r\ndef cutmix(images, targets, alpha):\r\n device = images.device\r\n indices = torch.randperm(images.size(0)).to(device)\r\n shuffled_targets = targets[indices].to(device)\r\n\r\n lam = np.random.beta(alpha, alpha)\r\n bbx1, bby1, bbx2, bby2 = rand_bbox(images.size(), lam)\r\n # Cutmix\r\n images[:, :, bbx1:bbx2, bby1:bby2] = images[indices, :, bbx1:bbx2, bby1:bby2]\r\n # adjust lambda to exactly match pixel ratio\r\n lam = 1 - ((bbx2 - bbx1) * (bby2 - bby1) / (images.size()[-1] * images.size()[-2]))\r\n return images, targets, shuffled_targets, lam\r\n```\r\n\r\nWhenever I'm creating tensors, I'm moving them to xla device, but still this slows down the training loop on TPUs.\r\n\r\nSo my question is how can I write pure python functions (here is `cutmix` is pure python function which just does some processing with image tensors) which can efficiently run on TPUs? What changes should I make here? 
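One change I have been considering (a sketch only, under the assumption that the augmentation is allowed to run on CPU before the batch reaches the XLA device — my guess is that the data-dependent slice bounds otherwise change the compiled graph every step) is to apply cutmix inside a `collate_fn` rather than in the device-side training loop:

```python
import random
import torch

def cutmix_collate(batch):
    # Batch items are (image_tensor, label) pairs from the dataset; cutmix
    # (as defined above) is applied on CPU here, so the slicing with
    # data-dependent bounds never runs on the XLA device.
    images = torch.stack([item[0] for item in batch])
    targets = torch.tensor([item[1] for item in batch])
    if random.random() > 0.4:
        return cutmix(images, targets, 0.4)
    return images, targets, targets, 1.0

# train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64,
#                                            collate_fn=cutmix_collate)
```

Would something like that be the expected pattern here?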
Am I supposed to create all new variables on \"xla\" device?\r\n\r\nEDIT: I tried converting everything to tensors (with xla device) in `cutmix` function, but still no speed gain.\r\n\r\nThanks.", "url": "https://github.com/pytorch/xla/issues/2707", "state": "closed", "labels": [], "created_at": "2020-12-31T14:25:56Z", "updated_at": "2021-01-08T17:34:16Z", "user": "Kaushal28" }, { "repo": "pytorch/examples", "number": 862, "title": "Why not move images onto gpu?", "body": "https://github.com/pytorch/examples/blob/792d336019a28a679e29cf174e10cee80ead8722/imagenet/main.py#L284\r\n\r\nI'm trying to training vgg on imagenet with one node DataParallel and no multiprocessing\u3002But I find 'images.device' before computation is 'cpu', and 'target.device=cuda:0'. I'm not sure why these four lines of codes move 'images' to gpu only when I choose only one gpu(args.gpu is not None) and move 'target' to gpu even with argument device=None(args.gpu=None). \r\n\r\nI would appreciate it if someone could help me understand it.", "url": "https://github.com/pytorch/examples/issues/862", "state": "closed", "labels": [ "good first issue" ], "created_at": "2020-12-29T13:52:36Z", "updated_at": "2022-04-28T14:55:08Z", "comments": 3, "user": "I-Doctor" }, { "repo": "pytorch/pytorch", "number": 49888, "title": "How to apply functions to nested modules?", "body": "## \u2753 Questions and Help\r\n\r\nHi, all,\r\n I understood when we want to apply a certain function to layers in a model, we can call self.apply(_function). For instance, apply weight norm to all convolutional layers. I checked the document of module.apply(), where its says the function will be applied to all the children.\r\n My question is, if the model is complicated, say\r\n```python\r\nBlock1=nn.Sequential(nn.Linear(10,10), nn.Linear(10,10))\r\nBlock2=nn.Sequential(nn.Linear(10,10), nn.Linear(10,10))\r\nModel=nn.Sequential([nn.Linear(2,10), Block1, Block2])\r\n```\r\nNow if I want to apply a certain function on all linear layers (say a certain weight initialization), I can not directly call Model.apply(_function), right? Is there any elegant way to do this when nested modules are presented?\r\nThanks a lot!\r\n\r\n\r\n\n\ncc @albanD @mruberry @jbschlosser", "url": "https://github.com/pytorch/pytorch/issues/49888", "state": "closed", "labels": [ "module: nn", "triaged" ], "created_at": "2020-12-28T12:34:25Z", "updated_at": "2020-12-28T17:34:15Z", "user": "121898" }, { "repo": "pytorch/pytorch", "number": 49862, "title": "How to transform the adjacency matrix into the incidence matrix\uff1f ", "body": "## \u2753 Questions and Help\r\n\r\nHow to transform the adjacency matrix into the incidence matrix using the pytorch functions provided\uff1f It's easy to implement it using for loops, but it's Inefficient.\r\n", "url": "https://github.com/pytorch/pytorch/issues/49862", "state": "closed", "labels": [], "created_at": "2020-12-26T02:34:08Z", "updated_at": "2020-12-26T03:31:19Z", "user": "zlpure" }, { "repo": "pytorch/pytorch", "number": 49855, "title": "NN.CTCloss may be something wrong?How to decode CTC results?", "body": "pytorch 1.7.0 windows python3.7.5\r\n\r\nI tried to train the ocr rec model with this code, where Nn. Ctcloss was used : https://github.com/WenmuZhou/PytorchOCR/tree/master/tools/rec_train.py\r\nLoss went down to 0.02, ACC to 0.99. 
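For reference, the decoding I expect to apply to the network output is a plain greedy collapse, roughly the sketch below (assuming blank index 0; this is my own reading, not necessarily the repo's decoder):

```python
import torch

def greedy_ctc_decode(log_probs, blank=0):
    # log_probs: (T, N, C), the same layout nn.CTCLoss expects.
    best = log_probs.argmax(dim=-1)            # (T, N) best class per step
    results = []
    for n in range(best.size(1)):
        seq, prev = [], blank
        for t in best[:, n].tolist():
            # collapse consecutive repeats, then drop blanks
            if t != blank and t != prev:
                seq.append(t)
            prev = t
        results.append(seq)
    return results
```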
And then I try to deduce the model with https://github.com/WenmuZhou/PytorchOCR/tree/master/tools/rec_infer.py .The results are all wrong, not consistent with ACC.\r\n\r\nCan you write an example of text recognition based on Nn.CTCLOSS?", "url": "https://github.com/pytorch/pytorch/issues/49855", "state": "closed", "labels": [], "created_at": "2020-12-25T15:18:50Z", "updated_at": "2020-12-29T20:43:02Z", "user": "williamlzw" }, { "repo": "pytorch/vision", "number": 3198, "title": "Boxes with negative scores in NMS input?", "body": "Hi, I found that the use of NMS in `RegionProposalNetwork` can take on boxes with negative scores as inputs. I found this when running MaskRCNN in v0.8 release.\r\n\r\nhttps://github.com/pytorch/vision/blob/90645ccd0e774ad76200245e32222a23d09f2312/torchvision/models/detection/rpn.py#L261\r\n\r\n\r\nIn other use of NMS in `ROIHeads`, scores are thresholded to keep only boxes with positive scores:\r\nhttps://github.com/pytorch/vision/blob/90645ccd0e774ad76200245e32222a23d09f2312/torchvision/models/detection/roi_heads.py#L703\r\n\r\nI'm wondering if that lack of score thresholding in RPN is intentional or not... In TVM, we expects NMS input with negative scores to be invalid. Since NMS in PyTorch doesn't have a score threshold parameter, we didn't realize that there could be boxes with negative scores. \r\n\r\nI proposed to fix TVM's NMS conversion in https://github.com/apache/tvm/pull/7137, but since it would have a big performance implication and I heard that negative boxes don't matter in the final output anyway, I'm now inclined not to fix this in TVM side.\r\n\r\ncc @fmassa @t-vi ", "url": "https://github.com/pytorch/vision/issues/3198", "state": "closed", "labels": [ "question", "topic: object detection" ], "created_at": "2020-12-21T22:53:14Z", "updated_at": "2021-01-06T13:57:38Z", "user": "masahi" }, { "repo": "pytorch/vision", "number": 3188, "title": "Cannot Build With FFmpeg Support", "body": "## \u2753 Questions and Help\r\n\r\n### Cannot Build With FFmpeg Support\r\n\r\nHi.\r\n\r\nWhile trying to build `torchvision` from source, I've seen this output:\r\n\r\n```\r\n+ python3 setup.py build\r\nBuilding wheel torchvision-0.8.2\r\nPNG found: True\r\nlibpng version: 1.6.37\r\nBuilding torchvision with PNG image support\r\nlibpng include path: /usr/include/libpng16\r\nRunning build on conda-build: False\r\nRunning build on conda: False\r\nJPEG found: True\r\nBuilding torchvision with JPEG image support\r\nFFmpeg found: False\r\nrunning build\r\nrunning build_py\r\ncreating build\r\n\r\n(omitted)\r\n```\r\n\r\nIt showed that **`FFmpeg found: False`**. 
I tried `apt install ffmpeg` and built again, it still showed FFmpeg not found.\r\n\r\nThen I tried:\r\n\r\n```shell\r\napt update\r\napt install ffmpeg \\\r\n libavformat-dev libavcodec-dev libavdevice-dev \\\r\n libavutil-dev libswscale-dev libavresample-dev libavfilter-dev\r\n# deps of python package av\r\npip3 install ffmpeg av\r\n```\r\n\r\nBut it showed `FFmpeg found: False` once again.\r\n\r\nI could not find any instructions in [README](../blob/master/README.rst) about installing `ffmpeg` dependencies for building `torchvision` yet, so how could I do that, or where could I find it?\r\n\r\nThanks.\n\ncc @bjuncek", "url": "https://github.com/pytorch/vision/issues/3188", "state": "closed", "labels": [ "question", "topic: build", "module: video" ], "created_at": "2020-12-18T15:41:06Z", "updated_at": "2021-11-16T07:26:28Z", "user": "KumaTea" }, { "repo": "pytorch/vision", "number": 3184, "title": "Are these 2 lines of code necessary?", "body": "Hi,\r\nhttps://github.com/pytorch/vision/blob/master/references/video_classification/train.py#L134\r\nhttps://github.com/pytorch/vision/blob/master/references/video_classification/train.py#L169\r\n\r\nI wonder if these two lines are necessary.\r\nWhy do we need to assign transforms to dataset after loading them from cache, whose transforms have been declared when being saved.\r\nI remove them and code seems still work.\r\nThanks.\r\n", "url": "https://github.com/pytorch/vision/issues/3184", "state": "closed", "labels": [ "question" ], "created_at": "2020-12-17T16:40:06Z", "updated_at": "2021-01-21T13:10:18Z", "user": "jc-hou" }, { "repo": "pytorch/serve", "number": 917, "title": "Implement one of the TODOs: Pass request id while loading model in model_loader.py", "body": "<!--\r\nThank you for suggesting an idea to improve torchserve model serving experience.\r\n\r\nPlease fill in as much of the template below as you're able.\r\n-->\r\n**TODO**\r\nhttps://github.com/pytorch/serve/blob/6c078d6cd1f91c1614c18abf2f94d3571be1b659/ts/model_loader.py#L71\r\n\r\n```python\r\nclass TsModelLoader(ModelLoader):\r\n \"\"\"\r\n TorchServe 1.0 Model Loader\r\n \"\"\"\r\n\r\n def load(self, model_name, model_dir, handler, gpu_id, batch_size, envelope=None):\r\n \"\"\"\r\n Load TorchServe 1.0 model from file.\r\n :param model_name:\r\n :param model_dir:\r\n :param handler:\r\n :param gpu_id:\r\n :param batch_size:\r\n :param envelope:\r\n :return:\r\n \"\"\"\r\n logging.debug(\"Loading model - working dir: %s\", os.getcwd())\r\n # TODO: Request ID is not given. UUID is a temp UUID.\r\n metrics = MetricsStore(uuid.uuid4(), model_name)\r\n manifest_file = os.path.join(model_dir, \"MAR-INF/MANIFEST.json\")\r\n manifest = None\r\n if os.path.exists(manifest_file):\r\n with open(manifest_file) as f:\r\n manifest = json.load(f)\r\n```\r\n\r\n## Is your feature request related to a problem? Please describe.\r\n<!-- Please describe the problem you are trying to solve. -->\r\nThe main aim is to connect request maker(frontend) to request processor(backend) using request-id. One of the use cases can be when there is an error and we need to debug. It will be easy if we have request-id instead of random uuid\r\n\r\n## Describe the solution\r\n<!-- Please describe the desired behavior. -->\r\nWhen encoding the `ModelLoadModelRequest` into buffer, also send request-id which was used to create that particular request\r\n\r\n## Describe alternatives solution\r\n<!-- Please describe alternative solutions or features you have considered. 
-->\r\n", "url": "https://github.com/pytorch/serve/issues/917", "state": "closed", "labels": [ "help wanted", "question" ], "created_at": "2020-12-17T04:06:19Z", "updated_at": "2021-11-16T02:42:09Z", "user": "rishabh1212" }, { "repo": "pytorch/vision", "number": 3175, "title": "error: \u2018constexpr\u2019 call flows off the end of the function", "body": "### envs\r\nlibtorch==1.7.1\r\nvision == 0.8.2\r\n\r\n### install\r\n```bash\r\ncmake _DWITH_CUDA=on ..\r\nmake\r\n```\r\n### errors\r\nlibtorch-cxx11-abi-shared-with-deps-1.7.1/libtorch/include/ATen/core/op_registration/infer_schema.h:120:16: error: \u2018constexpr\u2019 call flows off the end of the function\r\n constexpr auto returns = createReturns<ReturnType>::call();\r\n ^~~~~~~\r\nmake[2]: *** [CMakeFiles/torchvision.dir/build.make:518: CMakeFiles/torchvision.dir/torchvision/csrc/ops/cuda/deform_conv2d_kernel.cu.o] Error 1\r\nmake[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/torchvision.dir/all] Error 2\r\n\r\ni have test cxx11/cxx14/cxx17, cxx14 and cxx17 have the same error\n\ncc @seemethere", "url": "https://github.com/pytorch/vision/issues/3175", "state": "closed", "labels": [ "question", "module: c++ frontend" ], "created_at": "2020-12-16T05:18:51Z", "updated_at": "2020-12-17T15:16:04Z", "user": "onism26" }, { "repo": "pytorch/pytorch", "number": 49445, "title": "[doc] how to prevent pytorch-nightly from being replaced by a released version on pip install", "body": "## \ud83d\udcda Documentation\r\n\r\nI found an issue with pytorch-nightly and pip install of some packages depending on pytorch. \r\n\r\nIf a user installs pytorch-nightly using:\r\n```\r\npip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U\r\n```\r\nwhich allows for pre-released versions as prescribed on https://pytorch.org/get-started/locally/, e.g.:\r\n\r\ninstalling some other packages that include `torch` in their requirements with:\r\n```\r\npip install package1 package2\r\n```\r\nwill wipe out the nightly build and install the latest release instead. \r\n\r\nI'm not 100% sure yet when this happens. I think it might be the case for python pip packages that don't have a binary wheel and need to be built from source and perhaps depend on pytorch to build.\r\n\r\nFor example this happens with `fairscale` (no binary wheel provided) but doesn't happen with `fairseq` which provides a binary wheel on pypi. It happened before with other packages - I will try to identify the correct group.\r\n\r\nThe solution in such circumstances is to pass the same `--pre --f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html` used to install the nightly to `pip install package-depending-on-pytorch` to keep the pre-released version installed. 
e.g.:\r\n```\r\npip install fairscale --pre --f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html\r\n```\r\n\r\n I have no idea where this could be documented.\n\ncc @ezyang @seemethere @malfet @walterddr @jlin27 @mruberry", "url": "https://github.com/pytorch/pytorch/issues/49445", "state": "open", "labels": [ "module: binaries", "module: docs", "oncall: releng", "triaged" ], "created_at": "2020-12-16T02:42:27Z", "updated_at": "2021-05-31T17:06:32Z", "user": "stas00" }, { "repo": "pytorch/vision", "number": 3169, "title": "Width Calculation For Bounding Boxes in torchvision\\models\\detection\\_utils.py", "body": "In the function encode_boxes (line 79 of torchvision\\models\\detection\\_utils.py), it seems that the width of the ground truth proposals matched is being computed as \r\n\r\nex_widths = proposals_x2 - proposals_x1\r\nex_heights = proposals_y2 - proposals_y1\r\n\r\nBut for a bounding box from ms coco [368, 413, 368, 417]. I guess this is just a matter of opinion if this is a \"valid\" bounding box, but it seems to me that x_min = x_max is valid for a box that is 1 pixel wide, and y_max-y_min pixels high. Anyway this causes the targets_dw or targets_dh to take the torch.log of 0, giving float(-inf), which can of course be easily fixed by adding +1 to the width, or the fix:\r\n\r\nex_widths = proposals_x2 - proposals_x1 + 1\r\nex_heights = proposals_y2 - proposals_y1 + 1\r\n\r\nEither that or I could just filter out these boxes with x_min = x_max or y_min = y_max", "url": "https://github.com/pytorch/vision/issues/3169", "state": "closed", "labels": [ "question", "module: ops" ], "created_at": "2020-12-14T08:54:46Z", "updated_at": "2020-12-14T14:57:04Z", "user": "JamesMcCullochDickens" }, { "repo": "pytorch/pytorch", "number": 49304, "title": "How to save model with half precision?", "body": "## \u2753 Questions and Help\r\n\r\nMy model includes 5 resnet18, if they are saved with default precision(float32), then about 220MB space in my disk is occupied.\r\nMy idea is to reduce the storage to 110MB, so I used model.half() to apply precision 16. \r\nI used torch.save(model.state_dict(),'model.pt') to save my model, however there still is 220MB for the model storage.\r\n\r\nDoes anyone know how to deal with this? Thanks very much.", "url": "https://github.com/pytorch/pytorch/issues/49304", "state": "closed", "labels": [], "created_at": "2020-12-14T02:07:34Z", "updated_at": "2020-12-14T06:54:39Z", "user": "xinfangliu" }, { "repo": "pytorch/pytorch", "number": 49298, "title": "[question] How hard would it be to implement 4-bit precision training?", "body": "I came across the paper [Ultra-Low Precision 4-bit Training of Deep Neural Networks](https://proceedings.neurips.cc/paper/2020/file/13b919438259814cd5be8cb45877d577-Paper.pdf) on NeurIPS 2020. I think it would be cool to implement support for it in PyTorch. I think it can be done quite efficiently on CPU using the AVX2 instruction set, as all the multiplication/addition operations can be stored in a fast cache. The operations would just make a lookup in this table.\r\n\r\nI had a look how things are implemented in the library. If I am correct, there is enough of level of abstraction to make this doable. I need to implement a kernel and add it to the ATEN's DispatchStub or something like that. If I copy-paste implementation of `\r\npytorch/aten/src/ATen/quantized/` and make it work with custom `fp4` type that should work for end-to-end training, right? 
To start playing around with this, like training my own MNIST, it should be enough to just implement addition and multiplication for something like MLP with relus: all computation consists only from affine operations so + and * should be enough. \r\n\r\nI would appreciate high-level guidance / help links on this. Thank you!\n\ncc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @ailzhang", "url": "https://github.com/pytorch/pytorch/issues/49298", "state": "open", "labels": [ "module: internals", "triaged" ], "created_at": "2020-12-13T18:04:02Z", "updated_at": "2024-05-29T19:02:17Z", "user": "michalsustr" }, { "repo": "pytorch/vision", "number": 3168, "title": "Getting Error: NotADirectoryError: [WinError 267] The directory name is invalid. File and folder both are valid", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\nI am getting the following error \r\nGetting Error: NotADirectoryError: [WinError 267] The directory name is invalid. File and folder both are valid\r\nI am using the following code:\r\n\r\n# Load all image data\r\ndata_dir = os.getcwd()\r\nfolder_name = \"train\"\r\nimage_folders = os.path.join(data_dir, folder_name)\r\ntransform = transforms.Compose([transforms.Resize((512,512)), transforms.ToTensor()])\r\nimages = []\r\nfor file in os.listdir(image_folders):\r\n #print(\"1-->\"+file)\r\n images.append(ImageFolder(os.path.join(image_folders, file), transform=transform))\r\ndatasets = torch.utils.data.ConcatDataset(images)\r\n\r\n## To Reproduce\r\nI have placed the files in the D:\\MS_Program\\DR\\Code\\train \r\nFile Extension is . JPG \r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Run the piece of code\r\n1.\r\n1.\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)\r\n(or fill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n\r\n - PyTorch / torchvision Version (e.g., 1.0 / 0.4.0): 1.7.1\r\n - OS (e.g., Linux): Windows\r\n - How you installed PyTorch / torchvision (`conda`, `pip`, source): Conda\r\n - Build command you used (if compiling from source):\r\n - Python version: Python 3.7.6\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n\ncc @pmeier", "url": "https://github.com/pytorch/vision/issues/3168", "state": "closed", "labels": [ "question", "module: datasets" ], "created_at": "2020-12-13T15:33:38Z", "updated_at": "2021-02-21T16:12:31Z", "user": "manojrustagi79" }, { "repo": "pytorch/tutorials", "number": 1277, "title": "cannot import name 'extract_archive', when run seq-to-seq model in the google colab.", "body": "Why run Seq to Seq model example in the pytorch use google colab exists the problem? 
how to solution it?\r\nThe model example following :\r\n\r\nimport io\r\nimport torch\r\nfrom torchtext.utils import download_from_url, extract_archive\r\nfrom torchtext.data.utils import get_tokenizer\r\nfrom torchtext.vocab import build_vocab_from_iterator\r\n\r\nurl = 'https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip'\r\ntest_filepath, valid_filepath, train_filepath = extract_archive(download_from_url(url))\r\ntokenizer = get_tokenizer('basic_english')\r\nvocab = build_vocab_from_iterator(map(tokenizer,\r\n iter(io.open(train_filepath,\r\n encoding=\"utf8\"))))\r\n\r\ndef data_process(raw_text_iter):\r\n data = [torch.tensor([vocab[token] for token in tokenizer(item)],\r\n dtype=torch.long) for item in raw_text_iter]\r\n return torch.cat(tuple(filter(lambda t: t.numel() > 0, data)))\r\n\r\ntrain_data = data_process(iter(io.open(train_filepath, encoding=\"utf8\")))\r\nval_data = data_process(iter(io.open(valid_filepath, encoding=\"utf8\")))\r\ntest_data = data_process(iter(io.open(test_filepath, encoding=\"utf8\")))\r\n\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")", "url": "https://github.com/pytorch/tutorials/issues/1277", "state": "closed", "labels": [], "created_at": "2020-12-11T02:55:52Z", "updated_at": "2021-07-27T15:12:26Z", "comments": 4, "user": "funny000" }, { "repo": "pytorch/vision", "number": 3149, "title": "How can I install torchvision on Apple M1?", "body": "How can I install torchvision on Apple M1?", "url": "https://github.com/pytorch/vision/issues/3149", "state": "closed", "labels": [ "help wanted", "question", "topic: build" ], "created_at": "2020-12-10T09:21:41Z", "updated_at": "2021-06-06T05:59:32Z", "user": "huwei1024" }, { "repo": "pytorch/TensorRT", "number": 248, "title": "failed build trtorch", "body": "Hi,\r\n\r\nwhen run bazel build //:libtrtorch -c opt I got the following error:\r\n\r\nno such package '@platforms//os': The repository '@platforms' could not be resolved and referenced by '//:windows'", "url": "https://github.com/pytorch/TensorRT/issues/248", "state": "closed", "labels": [ "question" ], "created_at": "2020-12-09T08:29:33Z", "updated_at": "2020-12-10T05:11:34Z", "user": "pribadihcr" }, { "repo": "pytorch/tutorials", "number": 1272, "title": "AssertionError: Not equal to tolerance rtol=0.001, atol=1e-05", "body": "Recently I am converting the pytorch segmentation model to onnx model\u3002I can export the onnx model, pass the onnx.checker.check_model() and use the onnxruntime to do inference. 
But when I use np.testing.assert_allclose(to_numpy(torch_out), ort_outs[0], rtol=1e-03, atol=1e-05) to compare ONNX Runtime and PyTorch results, there is an AssertionError, like follows:\r\n\r\nAssertionError: \r\nNot equal to tolerance rtol=0.001, atol=1e-05\r\n\r\nMismatched elements: 20827169 / 20971520 (99.3%)\r\nMax absolute difference: 1.8859415\r\nMax relative difference: 1008390.8\r\n x: array([[[[ 1.165803e+01, 1.163278e+01, 1.160753e+01, ...,\r\n 1.179392e+01, 1.176985e+01, 1.174578e+01],\r\n [ 1.167064e+01, 1.164517e+01, 1.161970e+01, ...,...\r\n y: array([[[[11.636896, 11.6166 , 11.596304, ..., 12.943967, 12.909642,\r\n 12.875318],\r\n [11.656967, 11.636346, 11.615723, ..., 12.954525, 12.920053,...\r\n\r\nThe code snippet to export the model is as follows\uff1a\r\n\r\nmodel.eval()\r\nbatch_size = 1 \r\ninput_shape = (3, 512, 512) \r\n# # x = torch.autograd.Variable(torch.randn(batch_size, *input_shape))\r\nx = torch.rand(batch_size, 3, 512, 512, requires_grad=True)\r\ntorch.onnx.export(model, x, model_file_name + '.onnx', export_params=True, opset_version=11, verbose=False)\r\n\r\nIn this tutorial, https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html, it said, if the results do not match then there is an issue in the ONNX exporter. But i don't know where is the mistake.\n\ncc @BowenBao @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/1272", "state": "closed", "labels": [ "onnx", "medium", "docathon-h2-2023" ], "created_at": "2020-12-09T06:50:04Z", "updated_at": "2023-11-07T00:44:48Z", "comments": 7, "user": "GeneralJing" }, { "repo": "pytorch/examples", "number": 855, "title": "cannot find dcgan-sample-10.png", "body": "Hello, recently I learn the code from https://github.com/pytorch/examples/tree/master/cpp/dcgan. But when I want to run \r\npython display_samples.py -i dcgan-sample-10.png\r\n\r\nI didn't find the dcgan-sample-10.png.\r\n\r\ncan you tell me how to find the image correctly?\r\n\r\nAnd when I run ./dcgan to train, I got some warning:\r\n[W Resize.cpp:19] Warning: An output with one or more elements was resized since it had shape [64, 1, 1, 1], which does not match the required output shape [64, 1, 1, 64].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (function resize_output)\r\n\r\nI didn't know how to fix it? could you help me?", "url": "https://github.com/pytorch/examples/issues/855", "state": "closed", "labels": [], "created_at": "2020-12-09T02:47:37Z", "updated_at": "2022-03-09T20:42:06Z", "comments": 1, "user": "liubamboo" }, { "repo": "pytorch/pytorch", "number": 48995, "title": "How to do polymorphism on torch::nn::ModuleHolder?", "body": "The C++ frontend tutorial https://pytorch.org/tutorials/advanced/cpp_frontend.html recommends use ModuleHolder to create our own modules, but the inheritance relation does seem not translate to ModuleHolder. 
So I am wondering if there is a way to have both the benefit of ModuleHolder while having polymorphism among my customized modules.\n\ncc @yf225 @glaringlee @albanD @mruberry", "url": "https://github.com/pytorch/pytorch/issues/48995", "state": "closed", "labels": [ "module: cpp", "module: nn", "triaged" ], "created_at": "2020-12-08T03:26:39Z", "updated_at": "2020-12-22T20:23:06Z", "user": "thisisi3" }, { "repo": "pytorch/pytorch", "number": 48928, "title": "When multiple GPUs run multiple processes, it is found that any process not running in GPU 0 will have some more memory (such as 200m) in GPU 0. What is the cause of this?\uff08\u591a\u4e2aGPU\u8dd1\u591a\u8fdb\u7a0b\u65f6\u5019\uff0c\u53d1\u73b0\u53ea\u8981\u4e0d\u57280\u53f7GPU\u8dd1\u7684\u8fdb\u7a0b\u90fd\u4f1a\u57280\u53f7GPU\u591a\u51fa\u4e00\u4e9b\u5185\u5b58(\u5982200M)\uff0c\u8bf7\u95ee\u8fd9\u662f\u4ec0\u4e48\u60c5\u51b5\u5bfc\u81f4\u7684\uff1f\uff09", "body": "Hello everyone, when multiple GPUs run multiple processes, we find that a process running in GPU 0 only occupies 1000m of memory; however, running a process with GPU 1 will occupy 1000m of memory in GPU 1, and it will also occupy 200m of memory in GPU 0; GPU 2 or GPU 3 are the same; we found that as long as the processes not running in GPU 0 will have 200m more memory in GPU 0, what is the cause of this? Thank you!\r\n\r\n\u5927\u5bb6\u597d\uff0c\u5728\u591a\u4e2aGPU\u8dd1\u591a\u4e2a\u8fdb\u7a0b\u7684\u65f6\u5019\u53d1\u73b0\uff0c0\u53f7GPU\u8dd1\u7684\u4e00\u4e2a\u8fdb\u7a0b\u53ea\u5360\u663e\u5b581000M\uff1b\u4f46\u662f\u75281\u53f7GPU\u8dd1\u4e00\u4e2a\u8fdb\u7a0b\u4f1a\u57281\u53f7GPU\u5360\u663e\u5b581000M\uff0c\u800c\u4e14\u4f1a\u57280\u53f7GPU\u4e5f\u5360\u7528200M\u663e\u5b58\uff1b2\u53f7\u62163\u53f7GPU\u90fd\u4e00\u6837\uff1b\u53d1\u73b0\u53ea\u8981\u4e0d\u57280\u53f7GPU\u8dd1\u7684\u8fdb\u7a0b\u90fd\u4f1a\u57280\u53f7GPU\u591a\u51fa200M\u663e\u5b58\uff0c\u8bf7\u95ee\u8fd9\u662f\u4ec0\u4e48\u60c5\u51b5\u5bfc\u81f4\u7684\uff0c\u8c22\u8c22\uff01\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @agolynski @SciPioneer @H-Huang @mrzzd", "url": "https://github.com/pytorch/pytorch/issues/48928", "state": "open", "labels": [ "oncall: distributed" ], "created_at": "2020-12-07T11:28:55Z", "updated_at": "2021-01-21T06:46:15Z", "user": "zoufangyu1987" }, { "repo": "pytorch/pytorch", "number": 48927, "title": "how to train a \"mask keypoint r-cnn\"", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\r\n\r\n**Question**\r\nTo have a detection model predict bbox, mask and keypoints simultaneously, I wrote a script of \"mask keypoint r-cnn\", based on pytorch's indigenous implementation of mask r-cnn and keypoint r-cnn.\r\n\r\nTo test the baseline I use a dataset with 10 images of pedestrians, labeled with keypoints and masks. In training I sum the losses of each module together and optimize it. But the result is unsatisfactory after even 200 epochs. Neither the predicted bboxes nor keypoints and masks looks fine. 
Yet the loss seems already converged and no more decreasing.\r\n\r\nIn my expectation, with so few samples, it should be easy for the model to overfit the dataset.\r\n\r\nI tried ignoring one among mask loss and keypoint loss, then the model is well trained as expected, becoming a good keypoint r-cnn, or mask r-cnn. I think this proves my implementation didn't go wrong.\r\n\r\nThe question, is there advise or experience for training keypoint and mask together? Thanks in advance :)\r\n\r\n**Appendix**\r\nMy implementation of mask keypoint r-cnn:\r\n```\r\nimport torch\r\nfrom torchvision.models.utils import load_state_dict_from_url\r\nfrom torchvision.ops import MultiScaleRoIAlign\r\nfrom torchvision.models.detection.faster_rcnn import FasterRCNN\r\nfrom torchvision.models.detection.backbone_utils import resnet_fpn_backbone\r\nfrom torchvision.models.detection.mask_rcnn import MaskRCNNHeads, MaskRCNNPredictor\r\nfrom torchvision.models.detection.keypoint_rcnn import KeypointRCNNHeads, KeypointRCNNPredictor\r\nimport time\r\n\r\n\r\nclass MaskKeypointRCNN(FasterRCNN):\r\n def __init__(self, backbone, num_classes=None,\r\n # transform parameters\r\n min_size=800, max_size=1333,\r\n image_mean=None, image_std=None,\r\n # RPN parameters\r\n rpn_anchor_generator=None, rpn_head=None,\r\n rpn_pre_nms_top_n_train=2000, rpn_pre_nms_top_n_test=1000,\r\n rpn_post_nms_top_n_train=2000, rpn_post_nms_top_n_test=1000,\r\n rpn_nms_thresh=0.7,\r\n rpn_fg_iou_thresh=0.7, rpn_bg_iou_thresh=0.3,\r\n rpn_batch_size_per_image=256, rpn_positive_fraction=0.5,\r\n # Box parameters\r\n box_roi_pool=None, box_head=None, box_predictor=None,\r\n box_score_thresh=0.05, box_nms_thresh=0.5, box_detections_per_img=100,\r\n box_fg_iou_thresh=0.5, box_bg_iou_thresh=0.5,\r\n box_batch_size_per_image=512, box_positive_fraction=0.25,\r\n bbox_reg_weights=None,\r\n # Mask parameters\r\n mask_roi_pool=None, mask_head=None, mask_predictor=None,\r\n # keypoint parameters\r\n keypoint_roi_pool = None, keypoint_head = None, keypoint_predictor = None,\r\n num_keypoints = 17):\r\n\r\n out_channels = backbone.out_channels\r\n\r\n # mask predictor initialization\r\n assert isinstance(mask_roi_pool, (MultiScaleRoIAlign, type(None)))\r\n if num_classes is not None:\r\n if mask_predictor is not None:\r\n raise ValueError(\"num_classes should be None when mask_predictor is specified\")\r\n if mask_roi_pool is None:\r\n mask_roi_pool = MultiScaleRoIAlign(\r\n featmap_names=['0', '1', '2', '3'],\r\n output_size=14,\r\n sampling_ratio=2)\r\n if mask_head is None:\r\n mask_layers = (256, 256, 256, 256)\r\n mask_dilation = 1\r\n mask_head = MaskRCNNHeads(out_channels, mask_layers, mask_dilation)\r\n if mask_predictor is None:\r\n mask_predictor_in_channels = 256 # == mask_layers[-1]\r\n mask_dim_reduced = 256\r\n mask_predictor = MaskRCNNPredictor(mask_predictor_in_channels,\r\n mask_dim_reduced, num_classes)\r\n\r\n # keypoint predictor initialization\r\n assert isinstance(keypoint_roi_pool, (MultiScaleRoIAlign, type(None)))\r\n if min_size is None:\r\n min_size = (640, 672, 704, 736, 768, 800)\r\n if num_classes is not None:\r\n if keypoint_predictor is not None:\r\n raise ValueError(\"num_classes should be None when keypoint_predictor is specified\")\r\n if keypoint_roi_pool is None:\r\n keypoint_roi_pool = MultiScaleRoIAlign(\r\n featmap_names=['0', '1', '2', '3'],\r\n output_size=14,\r\n sampling_ratio=2)\r\n if keypoint_head is None:\r\n keypoint_layers = tuple(512 for _ in range(8))\r\n keypoint_head = KeypointRCNNHeads(out_channels, 
keypoint_layers)\r\n if keypoint_predi", "url": "https://github.com/pytorch/pytorch/issues/48927", "state": "closed", "labels": [], "created_at": "2020-12-07T09:04:08Z", "updated_at": "2020-12-08T01:38:01Z", "user": "feiyangsuo" }, { "repo": "pytorch/examples", "number": 854, "title": "why multiple token embedding by math.sqrt(self.ninp)?", "body": "Dear author, \r\n\r\nI am wondering why you multiple token's embedding by math.sqrt(self.ninp) in [model.py](https://github.com/pytorch/examples/blob/a3f28a26851867b314f4471ec6ca1c2c048217f1/word_language_model/model.py#L148) from the word_language_model example.\r\n\r\nBest", "url": "https://github.com/pytorch/examples/issues/854", "state": "closed", "labels": [], "created_at": "2020-12-07T07:11:14Z", "updated_at": "2022-03-09T21:05:40Z", "comments": 1, "user": "KK666-AI" }, { "repo": "pytorch/tutorials", "number": 1267, "title": "Weird results in the AUTOMATIC MIXED PRECISION tutorial.", "body": "I followed the [amp tutorial](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html#automatic-mixed-precision) (authored by @mcarilli). It's succinct and perspicuous. But the results show that mixed precision takes more memory than default precision. Can someone explain?\r\n\r\nMore details about the settings and results of my experiment are [here](https://discuss.pytorch.org/t/automatic-mixed-precision-increases-max-memory-used-by-tensors/104875).", "url": "https://github.com/pytorch/tutorials/issues/1267", "state": "closed", "labels": [ "question", "amp" ], "created_at": "2020-12-04T04:24:26Z", "updated_at": "2023-03-14T18:26:59Z", "user": "qimingyudaowenti" }, { "repo": "pytorch/pytorch", "number": 48770, "title": "How can I find a function to calculate correlation coefficient matrix like numpy.corrcoef () in pytorch?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/48770", "state": "closed", "labels": [], "created_at": "2020-12-03T05:40:21Z", "updated_at": "2020-12-03T16:49:24Z", "user": "jiangzhiwei2018" }, { "repo": "pytorch/serve", "number": 822, "title": "How to fix this problem", "body": "When I run the official example,I've got this problem,Does anyone have the same problem as Me?How can I solve it? 
thank you!", "url": "https://github.com/pytorch/serve/issues/822", "state": "closed", "labels": [], "created_at": "2020-12-02T12:22:07Z", "updated_at": "2020-12-02T15:37:40Z", "user": "shyoulala" }, { "repo": "pytorch/vision", "number": 3093, "title": "VOCSegmentation transforms.ToTensor() not working", "body": "Hi, \r\nI want to use the VOCSegmentation dataset but I always get this error:\r\n\r\n```\r\nTypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'PIL.PngImagePlugin.PngImageFile'>\r\n```\r\nThis is a code snippet to recreate the error\r\n```python\r\ntransform=transforms.Compose([\r\n transforms.Resize((256, 256)),\r\n transforms.ToTensor()\r\n ])\r\n\r\nvoc_train = VOCSegmentation(os.getcwd(), year='2012', image_set='train', transform=transform)\r\ntrain_loader = DataLoader(voc_train, batch_size=64)\r\n\r\ntrain_iter = iter(train_loader)\r\nnext(train_iter)\r\n```\r\n\r\nWhen I use the MNIST or CIFAR10 data set the code works as expected.\r\nIs there something special about the `VOCSegmentation` data set?\r\nThanks\n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/3093", "state": "closed", "labels": [ "question", "topic: semantic segmentation" ], "created_at": "2020-12-02T10:16:12Z", "updated_at": "2024-01-08T06:45:57Z", "user": "sirtris" }, { "repo": "pytorch/vision", "number": 3090, "title": "about retrain shufflenetv2 question", "body": "First of all, thanks for your perfect projects.\r\n\r\n## Environments\r\npyhton: 3.7\r\npytorch: 1.7+cpu\r\ntorchvison: 0.8.1+cpu\r\nsystem-os: ubuntu18.04\r\n\r\n## Hyperparameters\r\nlr: 0.001\r\nmomentum: 0.9\r\nweights_decay: 0.0001\r\nbatch_size: 16\r\n\r\n## Question introduction\r\nRecently, I was learning the source code your provided in torchvision about shufflenetv2.\r\nBut when I was fine-training the network(only training fc layer), I had a problem that network convergence is very slow. like this:\r\n```\r\n[epoch 0] accuracy: 0.246\r\n[epoch 1] accuracy: 0.253\r\n[epoch 2] accuracy: 0.28\r\n[epoch 3] accuracy: 0.305\r\n[epoch 4] accuracy: 0.338\r\n[epoch 5] accuracy: 0.353\r\n```\r\nI have read this document [https://pytorch.org/docs/stable/torchvision/models.html#classification](https://pytorch.org/docs/stable/torchvision/models.html#classification)\r\nAccording to this document, I downloaded the weights [https://download.pytorch.org/models/shufflenetv2_x1-5666bf0f80.pth](https://download.pytorch.org/models/shufflenetv2_x1-5666bf0f80.pth), and use same preprocessing method.\r\n```python\r\n data_transform = {\r\n \"train\": transforms.Compose([transforms.RandomResizedCrop(224),\r\n transforms.RandomHorizontalFlip(),\r\n transforms.ToTensor(),\r\n transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]),\r\n \"val\": transforms.Compose([transforms.Resize(256),\r\n transforms.CenterCrop(224),\r\n transforms.ToTensor(),\r\n transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])}\r\n```\r\nBut with conditions unchanged, I just replace the model with resnet34 your provided in torchvision, and I can get great results. like this:\r\n```\r\n[epoch 0] accuracy: 0.968\r\n```\r\n\r\nStrangely, When fine-training shfflenetv2 if I change the learning rate from 0.001 to 0.1, I can get the following results:\r\n```\r\n[epoch 0] accuracy: 0.85\r\n[epoch 1] accuracy: 0.848\r\n.....\r\n[epoch 29] accuracy: 0.899\r\n```\r\nDoes fine-training shufflenet network need such a large learning rate?\r\n\r\nI guess the preprocessing algorithm is not like that. 
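For the `VOCSegmentation` question above: the `transform` argument is applied to the input image only, so the segmentation mask is still returned as a PIL image and `default_collate` fails on it. Passing a `target_transform` (or the joint `transforms` argument) resolves it. A minimal sketch, assuming the dataset is already downloaded under the current directory; the mask goes through numpy so the class indices are preserved instead of being rescaled to [0, 1] by `ToTensor`:

```python
import numpy as np
import torch
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import VOCSegmentation

image_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

def mask_transform(mask):
    # Nearest-neighbour resize so class ids are not interpolated,
    # then convert to a LongTensor of shape (256, 256).
    mask = mask.resize((256, 256), resample=0)   # 0 == PIL.Image.NEAREST
    return torch.as_tensor(np.array(mask), dtype=torch.long)

voc_train = VOCSegmentation(".", year="2012", image_set="train",
                            transform=image_transform, target_transform=mask_transform)
train_loader = DataLoader(voc_train, batch_size=64)

images, masks = next(iter(train_loader))
print(images.shape, masks.shape)   # [64, 3, 256, 256] and [64, 256, 256]
```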
Because if I use the mobilenetv2 network, I can get better results under the same conditions. Could you help me find out what's wrong? Thank you very much.\r\n\r\n\r\n## Code\r\n[https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/blob/master/pytorch_classification/Test7_shufflenet/train.py](https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/blob/master/pytorch_classification/Test7_shufflenet/train.py)\r\n", "url": "https://github.com/pytorch/vision/issues/3090", "state": "open", "labels": [ "question" ], "created_at": "2020-12-02T02:35:51Z", "updated_at": "2021-01-25T00:55:45Z", "user": "WZMIAOMIAO" }, { "repo": "pytorch/xla", "number": 2657, "title": "Using iterative datasets with pytorch XLA is very slow on TPU, how to use it correctly ", "body": "## Environment info\r\n- Platform: TPU\r\n- Python version: 3.7\r\n\r\n## Information\r\nI am running the following codes on TPU and GPU and on TPU this is very slow. I am not sure if the way I define dataloader for iterative dsatasets is correct or not. Here is how I define the dataloader, https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/tasks/tasks.py#L496 \r\n\r\nI shard the data per-tpu core here: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/trainers/t5_trainer.py#L326 \r\n\r\nCould you point me if I am not using distributed data samplers and shard the data per core, how I can do distributed trianing properly? thanks \r\n \r\n## To reproduce\r\n```\r\ngit clone git@github.com:google-research/ruse.git\r\ngo to iter branch \r\npip install -r requirements.txt\r\npython setup.py develop\r\ncd seq2seq\r\npython xla_spawn.py finetune_t5_trainer.py configs/mrpc_adapter_tpu.json\r\n```", "url": "https://github.com/pytorch/xla/issues/2657", "state": "closed", "labels": [], "created_at": "2020-12-02T01:01:14Z", "updated_at": "2020-12-06T00:00:11Z", "user": "rabeehkarimimahabadi" }, { "repo": "pytorch/vision", "number": 3083, "title": "Getting an error when modifying the faster_rcnn model to add inception_v3 backbone model", "body": "I was following this tutorial [Modifying the model to add a different backbone](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#modifying-the-model-to-add-a-different-backbone). 
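On the ShuffleNetV2 fine-tuning question above: a freshly initialized classifier head usually tolerates, and benefits from, a much larger learning rate than pretrained layers, which is consistent with 0.1 working far better than 0.001 when only the `fc` layer is trained. One way to combine the two regimes is optimizer parameter groups. A minimal sketch (the 0.1/0.001 values are illustrative, not tuned):

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.shufflenet_v2_x1_0(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 5)          # new head for, e.g., 5 classes

head_params = list(model.fc.parameters())
head_ids = {id(p) for p in head_params}
trunk_params = [p for p in model.parameters() if id(p) not in head_ids]

optimizer = torch.optim.SGD(
    [
        {"params": trunk_params, "lr": 0.001},         # pretrained trunk: small steps
        {"params": head_params, "lr": 0.1},            # fresh classifier: larger steps
    ],
    momentum=0.9, weight_decay=1e-4,
)
```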
When I replace the mobilenet_v2 model with inception_v3, the code does not work and gives the following error:\r\n```\r\n File \"/home/gpu-user/projects/building-outline-detection/src/models/faster_rcnn/vision/engine.py\", line 46, in train_one_epoch\r\n loss_dict = model(images, targets)\r\n File \"/home/gpu-user/miniconda3/envs/faster_rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/gpu-user/miniconda3/envs/faster_rcnn/lib/python3.8/site-packages/torchvision/models/detection/generalized_rcnn.py\", line 99, in forward\r\n proposals, proposal_losses = self.rpn(images, features, targets)\r\n File \"/home/gpu-user/miniconda3/envs/faster_rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/gpu-user/miniconda3/envs/faster_rcnn/lib/python3.8/site-packages/torchvision/models/detection/rpn.py\", line 330, in forward\r\n features = list(features.values())\r\nAttributeError: 'InceptionOutputs' object has no attribute 'values'\r\n```\r\nI am using the following environment:\r\n\r\n* Ubuntu 18.04.4 LTS\r\n* CUDA Version: 10.2\r\n* Python: 3.8.6\r\n* Pytorch: 1.7.0\r\n\r\nIt will be great if someone can help me in resolving this issue.\r\nThanks", "url": "https://github.com/pytorch/vision/issues/3083", "state": "open", "labels": [ "question" ], "created_at": "2020-12-01T20:19:44Z", "updated_at": "2020-12-02T12:10:21Z", "user": "js-kalsi" }, { "repo": "pytorch/vision", "number": 3068, "title": "torchvison.ops.nms uses too much gpu memory", "body": "hi there, i have a quesetion nms operator. \r\nIf i use torchvision.ops.nms to filter bbox, about 900MB GPU memory is used, where the input box and score are put into GPU. But there is no problem if the box and score in cpu. meanwhile the time cost of gpu is 0.0007s, 0.0018s in cpu.\r\ni do not know actually why this operator uses such much GPU mem. or is there any configuration about nms to save gpu mem?\r\n\r\nmy torchvision version is 0.4.0. thanks~", "url": "https://github.com/pytorch/vision/issues/3068", "state": "closed", "labels": [ "question" ], "created_at": "2020-12-01T07:44:41Z", "updated_at": "2021-03-23T15:48:01Z", "user": "ThomsonW" }, { "repo": "pytorch/vision", "number": 3064, "title": "I cannot reach the ori accuracy by training the ResNeXt-50 on the ImageNet. ", "body": "I use the ['PyTorch ImageNet Training' example](https://github.com/pytorch/examples/tree/master/imagenet) and the ['models'](https://github.com/pytorch/vision/tree/master/torchvision/models) of TorchVision 0.4.2 to train ResNeXt-50 twice but got 23.52% and 23.57% (Top-1) on ImageNet Val set, which do not reach the ori Err. (22.2%). Besides, I find that the hyper parameters setting of ['PyTorch ImageNet Training' example](https://github.com/pytorch/examples/tree/master/imagenet) is same to the [original paper](https://arxiv.org/abs/1611.05431). Can you give me some advices for training to reach the ori Err. 
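On the `torchvision.ops.nms` GPU-memory question above: the several hundred megabytes reported by `nvidia-smi` around the first CUDA call are typically the per-process CUDA context and library initialization, not memory held by `nms` itself. `torch.cuda.memory_allocated()` counts only tensor allocations, so comparing the two separates the op's real footprint from the fixed context cost. A minimal sketch of that check (requires a CUDA-enabled build):

```python
import torch
from torchvision.ops import nms

boxes = torch.rand(1000, 4, device="cuda") * 100
boxes[:, 2:] += boxes[:, :2]                  # ensure x2 > x1 and y2 > y1
scores = torch.rand(1000, device="cuda")

torch.cuda.synchronize()
before = torch.cuda.memory_allocated()
keep = nms(boxes, scores, iou_threshold=0.5)
torch.cuda.synchronize()
after = torch.cuda.memory_allocated()

# Tensor memory used by the op is tiny; the big number in nvidia-smi is
# dominated by the CUDA context created when the GPU is first touched.
print(f"tensors held after nms: {(after - before) / 1024**2:.2f} MiB")
print(f"caching allocator reserved: {torch.cuda.memory_reserved() / 1024**2:.2f} MiB")
```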
?\r\n\r\nThe val acc alongside training is shown below: \r\n<summary>\r\nlogs\r\n<details>\r\nTop-1\r\n16.894\r\n32.488\r\n36.116\r\n40.272\r\n45.394\r\n46.328\r\n50.126\r\n50.336\r\n52.242\r\n54.414\r\n53.096\r\n54.438\r\n55.662\r\n54.972\r\n55.902\r\n57.204\r\n54.932\r\n57.068\r\n55.586\r\n56.9\r\n58.018\r\n56.67\r\n58.564\r\n57.272\r\n58.224\r\n57.736\r\n57.816\r\n58.292\r\n57.618\r\n56.664\r\n70.7\r\n71.502\r\n72.04\r\n72.452\r\n72.69\r\n72.754\r\n73.03\r\n72.996\r\n72.504\r\n72.812\r\n72.318\r\n72.294\r\n72.584\r\n72.318\r\n72.42\r\n72.528\r\n72.238\r\n72.14\r\n71.76\r\n71.91\r\n72.282\r\n72.508\r\n72.156\r\n71.424\r\n72.3\r\n72.48\r\n72.42\r\n72.61\r\n72.61\r\n72.178\r\n75.62\r\n75.86\r\n76.16\r\n76.184\r\n76.26\r\n76.252\r\n76.376\r\n76.3\r\n76.404\r\n76.48\r\n76.326\r\n76.368\r\n76.304\r\n76.386\r\n76.462\r\n76.36\r\n76.452\r\n76.396\r\n76.258\r\n76.308\r\n76.334\r\n76.228\r\n76.252\r\n76.304\r\n76.15\r\n76.298\r\n76.362\r\n76.15\r\n76.17\r\n76.058\r\n</details>\r\n</summary>\r\n\r\nEDIT: (vfdev-5) I updated the message and put the training logs into summary/details block.\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/3064", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2020-11-30T19:53:15Z", "updated_at": "2021-04-25T16:12:11Z", "user": "PoonKinWang" }, { "repo": "pytorch/pytorch", "number": 48576, "title": "how to avoid the precision loss(float32) caused by the gradient accumulation of Ring Allreduce in the case of ddp", "body": "## \u2753 Questions and Help\r\n\r\n### how to avoid the precision loss(float32) caused by the gradient accumulation of Ring Allreduce in the case of ddp.\r\n\r\n\r\nHow to avoid the precision loss(float32) caused by the gradient accumulation of Ring Allreduce in the case of ddp\r\n\r\nwhen run model in single gpu twice, the weight is always same; \r\nwhen run model in ddp twice , the weight is different in grad apply. \r\n\r\nI suspect that the gradient error is accumulated in the Ring Allreduce.\r\n\r\n\r\n\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @agolynski @SciPioneer @H-Huang @mrzzd", "url": "https://github.com/pytorch/pytorch/issues/48576", "state": "closed", "labels": [ "oncall: distributed", "triaged" ], "created_at": "2020-11-30T09:24:09Z", "updated_at": "2020-12-07T01:42:03Z", "user": "lezasantaizi" }, { "repo": "pytorch/vision", "number": 3058, "title": "How to solve this error? RuntimeError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend", "body": "## \u2753 Questions and Help\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nI'm beginner of ML and trying to use some solution based on pytorch (called detectron2)\r\nWhen the solution inferred the image, I always got the below error.\r\n\r\nRuntimeError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. 
'torchvision::nms' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].\r\n\r\nActually, I didn't get this error and couldn't search anything about this on google.\r\nIs there anybody who knows the way to handle this?\r\n\r\nInfo:\r\nI installed the CUDA v11.1 from https://developer.nvidia.com/cuda-downloads \r\ntorch version: 1.7.0\r\ntorchvision version: 0.8.0", "url": "https://github.com/pytorch/vision/issues/3058", "state": "open", "labels": [ "needs reproduction", "module: ops" ], "created_at": "2020-11-30T08:17:04Z", "updated_at": "2024-01-18T01:41:05Z", "user": "manmani3" }, { "repo": "pytorch/vision", "number": 3056, "title": "torchvision.roi_align does not support TPU", "body": "Hello. \r\nWe are using TPU in GCP.\r\n\r\nWe are currently modifying the code to allow the TPU to return to Detectron2.\r\nHowever, there is an error that roi_align in Torchvision is not supported by TPU.\r\nPlease check the bottom. Can you solve it for me?\r\n\r\n`File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torchvision/ops/roi_align.py\", line 51, in roi_align\r\n return torch.ops.torchvision.roi_align(input, rois, spatial_scale, output_size[0], output_size[1], sampling_ratio, aligned)\r\nRuntimeError: Could not run 'torchvision::roi_align' with arguments from the 'XLA' backend. 'torchvision::roi_align' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].`", "url": "https://github.com/pytorch/vision/issues/3056", "state": "open", "labels": [ "question" ], "created_at": "2020-11-28T12:16:32Z", "updated_at": "2020-11-30T10:06:01Z", "user": "CheonJiEun" }, { "repo": "pytorch/pytorch", "number": 48528, "title": "I am unable to install, How to install it?", "body": "## \u2753 Questions and Help\r\n\r\n### I am unable to install pytorch like the way they said\r\n\r\nHere are all the screen shots to describe error is most of the detail\r\n\r\n#### Website\r\n<img width=\"960\" alt=\"chrome page\" src=\"https://user-images.githubusercontent.com/71920621/100496065-07ffb580-3177-11eb-9e8d-7445d613e97f.PNG\">\r\n\r\n#### Command Prompt\r\n<img width=\"614\" alt=\"cmd\" src=\"https://user-images.githubusercontent.com/71920621/100496068-1221b400-3177-11eb-8402-856ac1d037d7.PNG\">\r\n\r\n#### System Info\r\n<img width=\"900\" alt=\"sys1\" src=\"https://user-images.githubusercontent.com/71920621/100496070-18179500-3177-11eb-8a80-6c0d1638f557.PNG\">\r\n<img width=\"900\" alt=\"sys2\" src=\"https://user-images.githubusercontent.com/71920621/100496073-1cdc4900-3177-11eb-97d7-e57f1cbf6659.PNG\">\r\n\n\ncc @ezyang @seemethere @malfet @walterddr @peterjc123 @maxluk @nbcsm @guyang3532 @gunandrose4u @mszhanyi @skyline75489", "url": "https://github.com/pytorch/pytorch/issues/48528", "state": "closed", "labels": [ "module: binaries", "module: windows", "triaged" ], "created_at": "2020-11-28T07:13:38Z", "updated_at": "2020-11-30T15:44:32Z", "user": "ghost" }, { "repo": "pytorch/vision", "number": 3049, "title": "In function `ROIPool_forward(at::Tensor const&, at::Tensor const&, double, long, long)':", "body": "https://github.com/pytorch/vision/issues/1849, i try this,but it cannot work. Please help me . 
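On the `Could not run 'torchvision::nms' ... from the 'CUDA' backend` error above: the backend list in the message contains CPU but not CUDA, which usually means the installed torchvision wheel was built without CUDA kernels, or against a different torch/CUDA combination than the torch that is installed, so only the CPU implementation of `nms` is registered. A quick sanity check before reinstalling a matching pair of wheels (this is a diagnostic sketch, not a fix in itself):

```python
import torch
import torchvision

print("torch      :", torch.__version__, "| built for CUDA", torch.version.cuda)
print("torchvision:", torchvision.__version__)
print("cuda available:", torch.cuda.is_available())

if torch.cuda.is_available():
    boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0]], device="cuda")
    scores = torch.tensor([1.0], device="cuda")
    # Raises the same backend error if the torchvision build has no CUDA kernels.
    print(torchvision.ops.nms(boxes, scores, 0.5))
```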
Thanks a lot.\r\n\r\nIn function `ROIPool_forward(at::Tensor const&, at::Tensor const&, double, long, long)':\r\nundefined reference to `ROIPool_forward_cuda(at::Tensor const&, at::Tensor const&, float, int, int)'\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/3049", "state": "closed", "labels": [ "question" ], "created_at": "2020-11-26T10:34:31Z", "updated_at": "2020-12-24T08:58:28Z", "user": "wj1017090777" }, { "repo": "pytorch/TensorRT", "number": 242, "title": "Failure when add aten::gt converter", "body": "I was trying add new conveter aten::gt.Scalar(Tensor self, Scalar other) -> Tensor, but it failed in test_case\r\n\r\n\r\nin core/conversion/conveters/impl/element_wise.cpp, I add this\r\n```\r\n .pattern({\"aten::gt.Scalar(Tensor self, Scalar other) -> (Tensor)\",\r\n [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {\r\n // TODO: Remove with functionalization\r\n auto self = args[0].ITensorOrFreeze(ctx);\r\n auto otherScalar = args[1].unwrapToScalar().to<float>();\r\n auto other = tensor_to_const(ctx, torch::tensor({otherScalar}));\r\n auto gt =\r\n add_elementwise(ctx, nvinfer1::ElementWiseOperation::kGREATER, self, other, util::node_info(n));\r\n TRTORCH_CHECK(gt, \"Unable to create Greater layer from node: \" << *n);\r\n\r\n gt->setName(util::node_info(n).c_str());\r\n auto out = ctx->AssociateValueAndTensor(n->outputs()[0], gt->getOutput(0));\r\n\r\n LOG_DEBUG(\"Output tensor shape: \" << out->getDimensions());\r\n return true;\r\n }})\r\n```\r\n\r\nin tests/core/convters/test_element_wise.cpp, I add this\r\n```\r\nTEST(Converters, ATenGtWithScalarConvertsCorrectly) {\r\n const auto graph = R\"IR(\r\n graph(%0 : Tensor):\r\n %scalar : float = prim::Constant[value=0.5]()\r\n %1 : Tensor = aten::gt(%0, %scalar)\r\n return (%1))IR\";\r\n pointwise_test_helper(graph, true);\r\n}\r\n```\r\n\r\nAnd I use following command to build and test\r\n```\r\nbazel build //:libtrtorch --compilation_mode opt --distdir third_party/dist_dir/x86_64-linux-gnu\r\nbazel build //tests/core/converters:test_converters --compilation_mode opt --distdir third_party/dist_dir/x86_64-linux-gnu\r\nbazel run //tests/core/converters:test_element_wise\r\n```\r\n\r\nAnd get this error message\r\n```\r\n[ RUN ] Converters.ATenGtWithScalarConvertsCorrectly\r\nDEBUG: [TRTorch - Debug Build] - Running JIT version\r\nDEBUG: [TRTorch - Debug Build] - Running TRT version\r\nDEBUG: [TRTorch - Debug Build] - Settings requested for TensorRT engine:\r\n Operating Precision: Float32\r\n Make Refittable Engine: 0\r\n Debuggable Engine: 0\r\n Strict Types: 0\r\n GPU ID: 0\r\n Allow GPU Fallback (if running on DLA): 0\r\n Min Timing Iterations: 2\r\n Avg Timing Iterations: 1\r\n Max Workspace Size: 1048576\r\n Max Batch Size: Not set\r\n Device Type: GPU\r\n GPU ID: 0\r\n Engine Capability: Default\r\n Calibrator Created: 0\r\nINFO: [TRTorch Conversion Context] - Converting Block\r\nINFO: [TRTorch Conversion Context] - Adding Input 0 named input_0 in engine (conversion.AddInputs)\r\nDEBUG: [TRTorch Conversion Context] - Input shape set to [5]\r\nDEBUG: [TRTorch Conversion Context] - Evaluating %1 : float = prim::Constant[value=0.5]()\r\nDEBUG: [TRTorch Conversion Context] - Found the value to be: 0.5\r\nINFO: [TRTorch Conversion Context] - Adding Layer %2 : Tensor = aten::gt(%0, %1) (ctx.AddLayer)\r\nDEBUG: [TRTorch Conversion Context] - Node input is an already converted tensor\r\nDEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value\r\nDEBUG: [TRTorch - 
Debug Build] - Frozen tensor shape: [5]\r\nDEBUG: [TRTorch - Debug Build] - Weights: [1]\r\n Number of input maps: 1\r\n Number of output maps: 1\r\n Element shape: [1]\r\nDEBUG: [TRTorch Conversion Context] - Freezing tensor 0x662238f0 as an IConstantLayer\r\nDEBUG: [TRTorch - Debug Build] - Output tensor shape: [5]\r\nINFO: [TRTorch Conversion Context] - Marking Output 2 named output_0 in engine (ctx.MarkOutput)\r\nDEBUG: [TRTorch Conversion Context] - Applying generic optimizations to the graph for inference.\r\nDEBUG: [TRTorch Conversion Context] - Original: 2 layers\r\nDEBUG: [TRTorch Conversion Context] - After dead-layer removal: 2 layers\r\nDEBUG: [TRTorch Conversion Context] - After Myelin optimization: 2 layers\r\nDEBUG: [TRTorch Conversion Context] - After scale fusion: 2 layers\r\nDEBUG: [TRTorch Conversion Context] - After vertical fusions: 2 layers\r\nDEBUG: [TRTorch Conversion Context] - After final dead-layer removal: 1 layers\r\nDEBUG: [TRTorch Conversion Context] - After tensor merging: 1 layers\r\nDEBUG: [TRTorch Conversion Context] - After concat removal: 1 layers\r\nDEBUG: [TRTorch Conversion Context] - Graph construction and optimization completed in 0.000104867 seconds.\r\nDEBUG: [TRTorch Conversion Context] - Constructing optimization profile number 0 out of 1\r\n*************** Autotuning format combination: Float(1) -> Bool(1) ***************\r\nDEBUG: [TRTorch Conversion Context] - --------------- Timing Runner: {%2 : Tensor = aten::gt(%0, %1)} (Myelin)\r\nDEBUG: [TRTorch Conversion Context] - Tactic: 0 is the only option, timing skipped\r\nDEBUG: [TRTorch Conversion Context] - Fastest Tactic: 0 Time: 0\r\nDEBUG: [TRTorch Conversion Context] - Formats and tactics selection completed in 0.0941442 seconds.\r\nDEBUG: [TRTorch Conversion Context] - After reformat layers: 1 layers\r", "url": "https://github.com/pytorch/TensorRT/issues/242", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2020-11-26T06:45:23Z", "updated_at": "2021-01-22T00:34:40Z", "user": "inocsin" }, { "repo": "pytorch/pytorch", "number": 48444, "title": "How to export to onnx with nms?", "body": "Hi, I am trying to add nms in pytorch detection model and export it to onnx so that I can convert the onnx to tensorrt7.1, so how can I export a model with nms ? Any examples or documents?\r\nThanks.", "url": "https://github.com/pytorch/pytorch/issues/48444", "state": "closed", "labels": [], "created_at": "2020-11-25T08:32:56Z", "updated_at": "2020-11-26T00:58:23Z", "user": "Edwardmark" }, { "repo": "pytorch/TensorRT", "number": 240, "title": "\u2753 [Question] How to solve aten::floor converter not found?", "body": "## \u2753 Question\r\n\r\nHow to solve aten::floor converter not found?\r\n\r\n## What you have already tried\r\n\r\n I am trying to convert a jit trace of a Fast SCNN network into TensorRT. I've confirmed that the trace was created in python3.6 using PyTorch 1.6.0. When printing the trace graph I do not even see the aten::floor operator. I also cannot locate a torch.floor operator in the original PyTorch model structure so I'm not sure what is even calling this operator? 
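On the "export to ONNX with nms" question above: torchvision's detection models already run `torchvision.ops.nms` in their postprocessing, and with `opset_version=11` the exporter maps it to the ONNX `NonMaxSuppression` op, so no custom symbolic is needed for the ONNX step (getting TensorRT 7.1 to consume the resulting graph is a separate problem, commonly handled with NMS plugins). A minimal export sketch along the lines of the torchvision documentation:

```python
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
images = [torch.rand(3, 640, 640)]   # detection models take a list of CxHxW images

# Opset 11 is needed so torchvision's nms maps to the ONNX NonMaxSuppression op.
torch.onnx.export(model, images, "fasterrcnn.onnx", opset_version=11)
```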
Here is the resulting error:\r\n```\r\nRuntimeError: [enforce fail at core/conversion/conversion.cpp:112] Expected converter to be true but got false\r\nUnable to convert node: %376 : Tensor = aten::floor(%324) # /home/nmonhollen/tensorrt/venv/lib/python3.6/site-packages/torch/nn/functional.py:3010:0 (conversion.AddLayer)\r\nSchema: aten::floor.int(int a) -> (int)\r\nConverter for aten::floor requested, but no such converter was found.\r\nIf you need a converter for this operator, you can try implementing one yourself\r\nor request a converter: https://www.github.com/NVIDIA/TRTorch/issues\r\n```\r\n\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.6.0\r\n - CPU Architecture: x86-64\r\n - OS (e.g., Linux): Ubuntu 18.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): \r\n - Are you using local sources or building from archives: local sources (cuDNN=7.6.5, TensorRT=7.0.0.11)\r\n - Python version: 3.6.9\r\n - CUDA version: 10.2\r\n - GPU models and configuration: \r\n - Any other relevant information:\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/240", "state": "closed", "labels": [ "feature request", "question" ], "created_at": "2020-11-24T16:13:34Z", "updated_at": "2021-04-22T00:54:15Z", "user": "nmonhollen" }, { "repo": "pytorch/pytorch", "number": 48390, "title": "what is the different between https://download.pytorch.org/whl/torch_stable.html and the tag", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
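On the missing `aten::floor` converter above: the cited location in `torch/nn/functional.py` is where `F.interpolate` turns a floating-point `scale_factor` into an output size, which is what injects `aten::floor` into the trace even though the model never calls `torch.floor` directly. A hedged workaround sketch: compute and pass an explicit integer `size` instead of `scale_factor`, so the floor never appears in the graph:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 32, 64)

# The traced form emits aten::floor(size * scale) when a float scale_factor is used:
up_a = F.interpolate(x, scale_factor=2.0, mode="bilinear", align_corners=False)

# Same result for an exact 2x scale, but the size is plain Python arithmetic,
# so no aten::floor node ends up in the TorchScript graph:
h, w = x.shape[-2:]
up_b = F.interpolate(x, size=(h * 2, w * 2), mode="bilinear", align_corners=False)

print(torch.allclose(up_a, up_b))   # expected: True
```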
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/48390", "state": "closed", "labels": [], "created_at": "2020-11-23T12:40:06Z", "updated_at": "2020-11-26T01:06:09Z", "user": "jihuacao" }, { "repo": "pytorch/TensorRT", "number": 235, "title": "\u2753[Question] Dynamic shape for ResNet-50", "body": "## \u2753 Question\r\n\r\nHi, I try to convert ResNet-50 with dynamic shape: \r\n```\r\n {\r\n \"min\": (1, 3, 224, 224),\r\n \"opt\": (1, 3, 224, 224),\r\n \"max\": (3, 3, 224, 224)\r\n }\r\n``` \r\n, but i get this error:\r\n```\r\nERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred\r\nERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred\r\nERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred\r\nERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred\r\nERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred\r\nERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred\r\nERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred\r\nERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred\r\nERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred\r\nERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred\r\nERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred\r\nSegmentation fault (core dumped)\r\n```\r\n\r\nCode: \r\n```\r\nimport torch\r\nimport trtorch\r\n\r\ntorch_model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet50', pretrained=False)\r\nscript_model = torch.jit.script(torch_model.eval().cuda())\r\ntrt_model = trtorch.compile(script_model, {\r\n \"input_shapes\": [{\r\n \"min\": (1, 3, 224, 224),\r\n \"opt\": (1, 3, 224, 224),\r\n \"max\": (3, 3, 224, 224)\r\n }],\r\n \"op_precision\": torch.float32,\r\n})\r\n```\r\n\r\n## What you have already tried\r\n\r\nI run this 
[code](https://github.com/NVIDIA/TRTorch/issues/193#issuecomment-718162687). It works correct.\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - docker image: nvcr.io/nvidia/tensorrt :20.03-py3\r\n - PyTorch Version (e.g., 1.0): 1.6.0, installed with pip\r\n - CPU Architecture: x86\r\n - OS (e.g., Linux): Ubuntu 18.04\r\n - How installed TRTorch: pip install https://github.com/NVIDIA/TRTorch/releases/download/v0.1.0/trtorch-0.1.0-cp36-cp36m-linux_x86_64.whl\r\n - Python version: 3.6.9\r\n - CUDA version: 10.2\r\n - GPU models and configuration: RTX 2060 SUPER", "url": "https://github.com/pytorch/TensorRT/issues/235", "state": "closed", "labels": [ "question" ], "created_at": "2020-11-23T05:02:13Z", "updated_at": "2021-02-23T23:25:09Z", "user": "gavrin-s" }, { "repo": "pytorch/vision", "number": 3040, "title": "I am not able to obtain results with custom backbone", "body": "_I am following the tutorial about FasterRCNN and I would like to test my network as backbone of the net:\r\n\r\nUCapsNet return 512 features maps\r\nI am training on VocPascal 2007_\r\n\r\n\r\nFRCN_model = FasterRCNN(backbone_model.Ucapsnet, 21, rpn_anchor_generator=backbone_model.anchor_generator, box_roi_pool=backbone_model.roi_pooler)\r\nFRCN_model = FRCN_model.to(device)\r\n\r\nparams = [p for p in FRCN_model.parameters() if p.requires_grad]\r\noptimizer = torch.optim.SGD(params, lr=0.02, momentum=0.9, weight_decay=1e-4)\r\n\r\npbar = tqdm(range(n_epochs))\r\nfor epoch in pbar:\r\n train_one_epoch(FRCN_model, optimizer, dataloaders['train'], device, epoch, print_freq=10)\r\n evaluate(FRCN_model, dataloaders['val'], device=device)\r\n\r\n\r\n**I got**:\r\nAveraged stats: model_time: 1605886336.0000 (1605886304.8101) evaluator_time: 0.0275 (0.0285)\r\nAccumulating evaluation results...\r\nDONE (t=0.06s).\r\nIoU metric: bbox\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000\r\n\r\n\r\nIn training, the loss is dropping slowly to 1.15 but in evaluation, i do not get anything. \r\n\r\nPlease help me understand\r\n\r\ncc @fmassa ", "url": "https://github.com/pytorch/vision/issues/3040", "state": "open", "labels": [ "question", "module: documentation" ], "created_at": "2020-11-20T15:45:08Z", "updated_at": "2020-11-24T08:08:56Z", "user": "Riretta" }, { "repo": "pytorch/vision", "number": 3036, "title": "Faster R-CNN raise errors when input tensor has require_grad=True", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. 
-->\r\n\r\n## To Reproduce\r\n\r\nI am using the pretrained Faster R-CNN model in torchvision as a sub-model in my own image generating model. In fact,I need the Faster R-CNN to backward properly when training my whole model.\r\nBut I found that when i feed the Faster R-CNN model in torchvision with the input having require_grad=True,it will raise following errors.\r\n```\r\nimport torch\r\nimport torchvision\r\n\r\nif __name__ == '__main__':\r\n Faster_RCNN_ins=torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True, progress=True,\r\n num_classes=91,pretrained_backbone=True)\r\n Faster_RCNN_ins.eval()\r\n Faster_RCNN_ins(torch.zeros(2,3,256,256,requires_grad=True))\r\n```\r\nOR\r\n```\r\nimport torch\r\nimport torchvision\r\n\r\nif __name__ == '__main__':\r\n Faster_RCNN_ins=torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True, progress=True,\r\n num_classes=91,pretrained_backbone=True)\r\n Faster_RCNN_ins.eval()\r\n out_tep=nn.Conv2d(3,3,3,stride=1,padding=1)(torch.zeros(2,3,256,256))\r\n Faster_RCNN_ins(out_tep)\r\n```\r\nboth code blocks will raise error:\r\n```\r\n File \"/data/gaoyan/style_transfer/scripts/tep.py\", line 23, in <module>\r\n Faster_RCNN_ins(torch.zeros(2,3,256,256,requires_grad=True))\r\n File \"/data/gaoyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/gaoyan/.local/lib/python3.6/site-packages/torchvision/models/detection/generalized_rcnn.py\", line 80, in forward\r\n images, targets = self.transform(images, targets)\r\n File \"/data/gaoyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/gaoyan/.local/lib/python3.6/site-packages/torchvision/models/detection/transform.py\", line 111, in forward\r\n images = self.batch_images(images)\r\n File \"/data/gaoyan/.local/lib/python3.6/site-packages/torchvision/models/detection/transform.py\", line 211, in batch_images\r\n pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)\r\nRuntimeError: A view was created in no_grad mode and its base or another view of its base has been modified inplace with grad mode enabled. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.\r\n```\r\n\r\n\r\n\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. 
-->\r\nIt can forward and backward wihout errors.\r\n## Environment\r\n\r\noutput of the environment script\r\n```\r\nPyTorch version: 1.7.0\r\nIs debug build: True\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: CentOS Linux 7 (Core) (x86_64)\r\nGCC version: (GCC) 4.9.2 20150212 (Red Hat 4.9.2-6)\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\n\r\nPython version: 3.6 (64-bit runtime)\r\nIs CUDA available: True\r\nCUDA runtime version: 10.1.243\r\nGPU models and configuration: \r\nGPU 0: Tesla V100-SXM2-32GB\r\nGPU 1: Tesla V100-SXM2-32GB\r\nGPU 2: Tesla V100-SXM2-32GB\r\nGPU 3: Tesla V100-SXM2-32GB\r\n\r\nNvidia driver version: 440.33.01\r\ncuDNN version: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.5\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.19.4\r\n[pip3] pytorch-model-summary==0.1.2\r\n[pip3] torch==1.7.0\r\n[pip3] torchstat==0.0.7\r\n[pip3] torchsummary==1.5.1\r\n[pip3] torchvision==0.8.1\r\n[conda] Could not collect\r\n```\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\nI think the bug may lay in torchvision/models/detection/transform.py\r\n```\r\n def batch_images(self, images, size_divisible=32):\r\n # type: (List[Tensor], int) -> Tensor\r\n if torchvision._is_tracing():\r\n # batch_images() does not export well to ONNX\r\n # call _onnx_batch_images() instead\r\n return self._onnx_batch_images(images, size_divisible)\r\n\r\n max_size = self.max_by_axis([list(img.shape) for img in images])\r\n stride = float(size_divisible)\r\n max_size = list(max_size)\r\n max_size[1] = int(math.ceil(float(max_size[1]) / stride) * stride)\r\n max_size[2] = int(math.ceil(float(max_size[2]) / stride) * stride)\r\n\r\n batch_shape = [len(images)] + max_size\r\n batched_imgs = images[0].new_full(batch_shape, 0)\r\n for img, pad_img in zip(images, batched_imgs):\r\n pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)\r\n\r\n return batched_imgs\r\n```\r\nthis funciton cannot work when images is a list of tensors with require_grad=True\r\n", "url": "https://github.com/pytorch/vision/issues/3036", "state": "closed", "labels": [ "question", "wontfix", "module: models", "topic: object detection" ], "created_at": "2020-11-20T06:38:35Z", "updated_at": "2020-11-20T09:38:14Z", "user": "EZ4NO1" }, { "repo": "pytorch/TensorRT", "number": 232, "title": "How do you activate trtorch::CompileGraph with multi inputs?", "body": "## \u2753 Question\r\n\r\nCan you provide an example of using more than one input please?\r\n\r\n\r\n## What you have already tried\r\n\r\nFor example I tried to do the following:\r\n` auto Input1= torch::randn({ 4, 24, 64, 64 }, { torch::kCUDA });\r\n auto Input2= torch::randn({ 1, 24, 1, 1 }, { torch::kCUDA });\r\n\r\nstd::vector<trtorch::CompileSpec::InputRange> inputRanges;\r\n\r\ninputRanges.push_back(Input1.sizes());\r\ninputRanges.push_back(Input2.sizes());\r\n\r\nauto trt_mod = trtorch::CompileGraph(module, inputRanges);\r\n\r\n`\r\n\r\nA std::out_of_range exception was raised.\r\n\r\nI can't be sure that the exception root cause related to the multi inputs that I used but for now I have no other suspicious.\r\n\r\n<!-- A clear and concise description of what you have already done. 
-->\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.6\r\n - CPU Architecture: Jetson Xavier AGX\r\n - OS (e.g., Linux): JetPack 4.4\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip3\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives: Local\r\n - Python version: 3.6.9\r\n - CUDA version: 10.2\r\n - GPU models and configuration: Jetson Xavier AGX\r\n - Any other relevant information: JetPack 4.4\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/232", "state": "closed", "labels": [ "question" ], "created_at": "2020-11-19T14:35:22Z", "updated_at": "2020-11-24T15:56:11Z", "user": "OronG13" }, { "repo": "pytorch/vision", "number": 3030, "title": "randomroate by some change", "body": "```\r\ndef mapper(dataset_dict):\r\n dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below\r\n image = utils.read_image(dataset_dict[\"file_name\"], format=\"BGR\") \r\n transform_list = [\r\n \r\n T.ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice')\r\n ,T.RandomRotation([10,15])\r\n \r\n ]\r\n image, transforms = T.apply_transform_gens(transform_list, image)\r\n dataset_dict[\"image\"] = torch.as_tensor(image.transpose(2, 0, 1).astype(\"float32\"))\r\n\r\n \r\n #print('image_shape->',image.shape,image.shape[:2])\r\n\r\n annos = [\r\n utils.transform_instance_annotations(obj, transforms, image.shape[:2])\r\n for obj in dataset_dict.pop(\"annotations\")\r\n if obj.get(\"iscrowd\", 0) == 0\r\n ]\r\n\r\n instances = utils.annotations_to_instances(annos, image.shape[:2])\r\n dataset_dict[\"instances\"] = instances\r\n #dataset_dict[\"instances\"] = utils.filter_empty_instances(instances)\r\n return dataset_dict\r\n```\r\nthis is my mapper for augmentation.\r\nis T.RandomRotation([10,15]) happen every image? or by some change. \r\nif it apply to every images. how should I apply it by only some percentage?\r\n\n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/3030", "state": "open", "labels": [ "question", "module: transforms" ], "created_at": "2020-11-19T14:22:27Z", "updated_at": "2020-11-20T09:39:28Z", "user": "SlowMonk" }, { "repo": "pytorch/TensorRT", "number": 231, "title": "How to solve \"Unable to get schema\" issue", "body": "I was trying to compile torchscript model, and the log says \"Unable to get schema for Node\". 
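On the rotation-augmentation question above: as written, `T.RandomRotation([10,15])` runs on every image; only the angle is sampled from that range, not whether to rotate at all. Applying it on a fraction of images means wrapping it in a probability gate; in plain torchvision that is `transforms.RandomApply`, and in detectron2 the same idea needs an equivalent wrapper around the transform list. A torchvision sketch of the pattern:

```python
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.Resize((800, 800)),
    # RandomApply runs the wrapped transforms with probability p and
    # leaves the image untouched otherwise.
    transforms.RandomApply([transforms.RandomRotation(degrees=(10, 15))], p=0.3),
    transforms.ToTensor(),
])

img = transforms.ToPILImage()(torch.rand(3, 900, 900))
out = augment(img)
print(out.shape)   # torch.Size([3, 800, 800]); rotated on roughly 30% of calls
```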
What should I do to fix this problem?\r\n\r\n\r\n```\r\n %2 : int = prim::Constant[value=2]()\r\n %3 : int = prim::Constant[value=6]()\r\n %4 : bool = prim::Constant[value=0]()\r\n %5 : None = prim::Constant()\r\n %6 : int[] = prim::Constant[value=[2]]()\r\n %7 : bool = prim::Constant[value=1]()\r\n %8 : int = prim::Constant[value=1]()\r\n %9 : Tensor = prim::Constant[value={255}]()\r\n %10 : Tensor = prim::Constant[value={0.447}]()\r\n %11 : Tensor = prim::Constant[value={0.226}]()\r\n %12 : Float(32:27, 3:9, 3:3, 3:1) = prim::Constant[value=<Tensor>]()\r\n %13 : int[] = prim::Constant[value=[2, 2]]()\r\n\r\n........\r\n\r\nDEBUG: Unable to get schema for Node %323 : Tensor = aten::mean(%3, %6, %7, %5) # tasks/moco_simclr/export/export.py:21:0 (NodeConverterRegistry.Convertable)\r\nterminate called after throwing an instance of 'trtorch::Error'\r\n what(): [enforce fail at core/conversion/conversion.cpp:392] Expected schema to be true but got false\r\nUnable to get schema for Node %323 : Tensor = aten::mean(%3, %6, %7, %5) # tasks/moco_simclr/export/export.py:21:0 (conversion.VerifyCoverterSupportForBlock)\r\n\r\n```", "url": "https://github.com/pytorch/TensorRT/issues/231", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2020-11-19T09:53:50Z", "updated_at": "2020-12-26T00:11:00Z", "user": "inocsin" }, { "repo": "pytorch/pytorch", "number": 48241, "title": "How to use torch.onnx.export with customed input datatype, like SparseTensor?", "body": "## \u2753 Questions and Help\r\nIn this repo [torchsparse](https://github.com/mit-han-lab/torchsparse), there is a customed datatype [SparseTensor`](https://github.com/mit-han-lab/torchsparse/blob/d2a5817c1b30565ffdfcd191b171a0957db408a8/torchsparse/sparse_tensor.py#L6).\r\n```python\r\nclass SparseTensor:\r\n def __init__(self, feats, coords, cur_tensor_stride=1):\r\n self.F = feats\r\n self.C = coords\r\n self.s = cur_tensor_stride\r\n self.coord_maps = {}\r\n self.kernel_maps = {}\r\n\r\n def check(self):\r\n if self.s not in self.coord_maps:\r\n self.coord_maps[self.s] = self.C\r\n\r\n def cuda(self):\r\n assert type(self.F) == torch.Tensor\r\n assert type(self.C) == torch.Tensor\r\n self.F = self.F.cuda()\r\n self.C = self.C.cuda()\r\n return self\r\n\r\n def detach(self):\r\n assert type(self.F) == torch.Tensor\r\n assert type(self.C) == torch.Tensor\r\n self.F = self.F.detach()\r\n self.C = self.C.detach()\r\n return self\r\n\r\n def to(self, device, non_blocking=True):\r\n assert type(self.F) == torch.Tensor\r\n assert type(self.C) == torch.Tensor\r\n self.F = self.F.to(device, non_blocking=non_blocking)\r\n self.C = self.C.to(device, non_blocking=non_blocking)\r\n return self\r\n\r\n def __add__(self, other):\r\n tensor = SparseTensor(self.F + other.F, self.C, self.s)\r\n tensor.coord_maps = self.coord_maps\r\n tensor.kernel_maps = self.kernel_maps\r\n return tensor\r\n```\r\n\r\nAnd I want to export to ONNX model, but when I ran `torch.onnx.export`, I got this ERROR:\r\n```\r\nRuntimeError: Only tuples, lists and Variables supported as JIT inputs/outputs. \r\nDictionaries and strings are also accepted but their usage is not recommended. \r\nBut got unsupported type SparseTensor\r\n```\r\nThis problem may be same to other custome data types. 
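On the SparseTensor export question above: `torch.onnx.export` can only trace inputs that are tensors (or tuples/lists/dicts of tensors), so a custom container cannot cross the export boundary directly. A common workaround is a thin wrapper module whose `forward` accepts the raw `feats`/`coords` tensors and rebuilds the container internally, so only plain tensors appear in the exported signature. A runnable toy sketch (`SparseTensor` and `ToySparseModel` below are simplified stand-ins for the torchsparse types and the real network; whether every op inside the real model then maps to ONNX is a separate question):

```python
import torch
import torch.nn as nn

class SparseTensor:
    """Toy stand-in for torchsparse.SparseTensor: holds features (F) and coordinates (C)."""
    def __init__(self, feats, coords):
        self.F = feats
        self.C = coords

class ToySparseModel(nn.Module):
    """Placeholder network that consumes the container and returns a plain tensor."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 8)

    def forward(self, x: SparseTensor) -> torch.Tensor:
        return self.proj(x.F) + x.C.float().mean()

class ExportWrapper(nn.Module):
    """Takes raw tensors and rebuilds the container inside, so ONNX sees only tensors."""
    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        return self.model(SparseTensor(feats, coords))

feats = torch.randn(100, 4)
coords = torch.randint(0, 64, (100, 4))
wrapper = ExportWrapper(ToySparseModel()).eval()
torch.onnx.export(wrapper, (feats, coords), "toy_sparse.onnx",
                  input_names=["feats", "coords"], output_names=["out"],
                  opset_version=12)
```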
\r\n\r\nI also noticed this line in [torch.onnx.__init__.py](https://github.com/pytorch/pytorch/blob/6da26fe79b7045fac743c81ca8d38c5340de17ab/torch/onnx/__init__.py#L45)\r\nWhat do you mean by this ?\r\n> Any non-Tensor arguments (including None) will be hard-coded into the exported model\r\n\r\n\r\nThanks in advance for any help!\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/48241", "state": "closed", "labels": [], "created_at": "2020-11-19T07:05:42Z", "updated_at": "2020-11-19T22:25:24Z", "user": "zeng-hello-world" }, { "repo": "pytorch/tutorials", "number": 1247, "title": "Training with batch size > 1 for adverserial example generation", "body": "The tutorial notebook on [Adverserial Training](https://github.com/pytorch/tutorials/blob/master/beginner_source/fgsm_tutorial.py) uses a batch size of 1. What code changes are needed if we want to train on a batch size of say 16. My understanding is, we only need to change the logic of \r\n `final_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability`\r\n ` # Now we have batch size > 1`\r\n `final_pred.squeeze_()`\r\n `indexes = final_pred == target`\r\n `correct += torch.sum(indexes).item()`\r\n\r\nIs there something else needed. With this code change, I get values that are very similar to the case of batch_size=1 although not the same values. Any help would be appreciated.", "url": "https://github.com/pytorch/tutorials/issues/1247", "state": "closed", "labels": [ "question" ], "created_at": "2020-11-18T16:35:29Z", "updated_at": "2023-03-14T21:02:30Z", "user": "chinmay5" }, { "repo": "pytorch/TensorRT", "number": 230, "title": "\u2753 [Question] We don't have an op for aten::addmm", "body": "## \u2753 Question\r\n\r\nI'm trying to convert a modified version of Yolov3 to TesnorRT, I have the model scripted to TorchScript and I'm trying to run trtorchexec on it\r\nI'm getting an error\r\n```\r\nChecking operator support\r\nterminate called after throwing an instance of 'c10::Error'\r\n what(): 0 INTERNAL ASSERT FAILED at \"../torch/csrc/jit/ir/alias_analysis.cpp\":461, please report a bug to PyTorch. We don't have an op for aten::addmm but it isn't a special case. Argument types: Tensor, int[], int[], int[], int[], bool,\r\nException raised from analyzeImpl at ../torch/csrc/jit/ir/alias_analysis.cpp:461 (most recent call first):\r\n```\r\n\r\nI'm using the `pytorch_update` branch (since I need to use pytorch 1.7.0 & cuda 11.1), and I've merge master into it to get the latest updates (https://github.com/lablabla/TRTorch/tree/pytorch_update)\r\n\r\n## What you have already tried\r\n\r\n<!-- A clear and concise description of what you have already done. -->\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version (e.g., 1.0): 1.7.0\r\n - CPU Architecture:\r\n - OS (e.g., Linux): Ubuntu 16.04\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): Built TRTorch from sources, Bazel downloads prebuilt 1.7.0\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives: local sources\r\n - Python version: 3.8.5\r\n - CUDA version: 11.1\r\n - GPU models and configuration: GeForce GTX 980\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\nI saw this commit https://github.com/NVIDIA/TRTorch/commit/c5b6202 so I figured `aten:addmm` should be supported, but I guess I'm missing something\r\n", "url": "https://github.com/pytorch/TensorRT/issues/230", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2020-11-18T15:41:45Z", "updated_at": "2021-04-20T00:02:56Z", "user": "lablabla" }, { "repo": "pytorch/vision", "number": 3022, "title": "MaskRCNN Training on Images with no Annotations", "body": "Hi all,\r\nI am working on a little MaskRCNN training program and ran into an issue. I know it is common practice to remove any images from the dataset that lack annotations upon initializing the dataset which I am doing. However, I am running a series of transforms using albumentations on my image and my mask. One of these transforms is a random crop and sometimes the resulting mask image no longer contains any instances. I was trying to find a way to pass in an empty tensor of some kind without much success. Would it be common practice just to remove it from the batch, and if so what happens if you had a batch size of 1 or an image that only had one annotation and the chances the random crop came across it are really low. I was able to create an empty tensor and pass it in but then received this error.\r\n\r\n`RuntimeError: cannot reshape tensor of 0 elements into shape [0, -1] because the unspecified dimension size -1 can be any value and is ambiguous`\r\n\r\nThis is because my box tensor had a shape of 0, 4 which is what I want since there are no instances. I read some of the other issue reports and they talked about creating a background class and just making a small bounding box and having an empty segmentation mask but this seems a little hacky and I was wondering if there would be a better solution for my specific use case.\r\n", "url": "https://github.com/pytorch/vision/issues/3022", "state": "open", "labels": [ "question", "awaiting response", "topic: object detection" ], "created_at": "2020-11-18T15:23:52Z", "updated_at": "2020-11-30T10:42:01Z", "user": "gatordevin" }, { "repo": "pytorch/TensorRT", "number": 229, "title": "Build trtorch failed in ubuntu", "body": "I try to build the project with bazel but failed.\r\n\r\nmy environment:\r\ngcc: 7.5.0\r\ng++: 7.5.0\r\ncuda: 10.2\r\ncudnn: 7.6.5\r\ntensorRT: 7.0.0.11\r\n\r\nerror log:\r\n[log.txt](https://github.com/NVIDIA/TRTorch/files/5559949/log.txt)\r\n\r\n$ bazel build //:libtrtorch --compilation_mode opt\r\n\r\nStarting local Bazel server and connecting to it...\r\nINFO: Analyzed target //:libtrtorch (39 packages loaded, 2546 targets configured).\r\nINFO: Found 1 target...\r\nERROR: /home/vincent/Projects/TRTorch/cpp/trtorchc/BUILD:10:10: Linking of rule '//cpp/trtorchc:trtorchc' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc @bazel-out/k8-opt/bin/cpp/trtorchc/trtorchc-2.params\r\n\r\nUse --sandbox_debug to see verbose messages from the sandbox\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(helpers.o):helpers.cpp:function nvinfer1::getNvrtcMajorVersion(): error: undefined reference to 'nvrtcVersion'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainSyncUserReleasing_impl_init_v3: error: undefined reference to 'dlopen'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainSyncUserReleasing_impl_init_v3: error: undefined reference to 
'dlsym'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainSyncUserReleasing_impl_init_v3: error: undefined reference to 'dlclose'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainResourceDestroy_impl_init_v3: error: undefined reference to 'dlopen'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainResourceDestroy_impl_init_v3: error: undefined reference to 'dlsym'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainResourceDestroy_impl_init_v3: error: undefined reference to 'dlclose'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainDestroy_impl_init_v3: error: undefined reference to 'dlopen'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainDestroy_impl_init_v3: error: undefined reference to 'dlsym'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainDestroy_impl_init_v3: error: undefined reference to 'dlclose'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxMarkA_impl_init_v3: error: undefined reference to 'dlopen'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxMarkA_impl_init_v3: error: undefined reference to 'dlsym'\r\nexternal/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxMarkA_impl_init_v3: error: undefined reference to 'dlclose'\r\n\r\ncould you help solve this problem, thanks a lot\r\n@narendasan", "url": "https://github.com/pytorch/TensorRT/issues/229", "state": "closed", "labels": [ "question" ], "created_at": "2020-11-18T12:23:02Z", "updated_at": "2020-11-20T02:23:10Z", "user": "inocsin" }, { "repo": "pytorch/TensorRT", "number": 226, "title": "How to build from sources on Windows", "body": "## \u2753 Question\r\n\r\nHow shall I edit the WORKSPACE file in order to build tag 0.1.0 from sources on Windows?\r\n\r\n## What you have already tried\r\n\r\n1. I successfully did the build from sources process for Jetson Xavier AGX, see:\r\n[https://github.com/NVIDIA/TRTorch/issues/222](url)\r\n\r\n1. Based on the material that I was already had from the Jetson process I tried to do the same for my Windows by editing the WORKSPACE based on my Windows setup.\r\nI changed all required new_local_repository arguments of the cuda, torch, cudnn and tensorrt based on my Windows installations \r\n1. 
Activate the following command:\r\nbazel build //:libtrtorch\r\n\r\nThe following errors report was generated:\r\n\r\nINFO: Repository rules_python instantiated at:\r\n no stack (--record_rule_instantiation_callstack not enabled)\r\nRepository rule git_repository defined at:\r\n C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18: in <toplevel>\r\nERROR: An error occurred during the fetch of repository 'rules_python':\r\n Traceback (most recent call last):\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git.bzl\", line 177\r\n _clone_or_update(ctx)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git.bzl\", line 36, in _clone_or_update\r\n git_repo(ctx, directory)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl\", line 91, in git_repo\r\n _update(ctx, git_repo)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl\", line 103, in _update\r\n fetch(ctx, git_repo)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl\", line 129, in fetch\r\n _git_maybe_shallow(ctx, <5 more arguments>)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl\", line 171, in _git_maybe_shallow\r\n _error(ctx.name, <2 more arguments>)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl\", line 181, in _error\r\n fail(<1 more arguments>)\r\nerror running 'git fetch origin refs/heads/*:refs/remotes/origin/* refs/tags/*:refs/tags/*' while working with @rules_python:\r\nBUG: run-command.c:519: disabling cancellation: Invalid argument\r\nERROR: no such package '@rules_python//python': Traceback (most recent call last):\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git.bzl\", line 177\r\n _clone_or_update(ctx)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git.bzl\", line 36, in _clone_or_update\r\n git_repo(ctx, directory)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl\", line 91, in git_repo\r\n _update(ctx, git_repo)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl\", line 103, in _update\r\n fetch(ctx, git_repo)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl\", line 129, in fetch\r\n _git_maybe_shallow(ctx, <5 more arguments>)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl\", line 171, in _git_maybe_shallow\r\n _error(ctx.name, <2 more arguments>)\r\n File \"C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl\", line 181, in _error\r\n fail(<1 more arguments>)\r\nerror running 'git fetch origin refs/heads/*:refs/remotes/origin/* refs/tags/*:refs/tags/*' while working with @rules_python:\r\nBUG: run-command.c:519: disabling cancellation: Invalid argument\r\nINFO: Elapsed time: 1.097s\r\nINFO: 0 processes.\r\nFAILED: Build did NOT complete successfully (0 packages loaded)\r\n\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch 
Version (e.g., 1.0): 1.6\r\n - CPU Architecture: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, 2592 Mhz, 4 Core(s), 8 Logical Processor(s)\r\n - OS (e.g., Linux): Windows\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip3\r\n - Build command you used (if compiling from source):\r\n - Are you using local sources or building from archives:\r\n - Python version: 3.6.8\r\n - CUDA version: 11.0\r\n - GPU models and configuration: Quadro M2000M\r\n - Any other relevant information: TensorRT 7.2.1, CuDNN 8.0.1\r\n\r\n\r\n## Additional context\r\n\r\nI have a good experience with TensorRT development on my Windows setup so I know that from NVIDIA libraries setup point of view everything should b", "url": "https://github.com/pytorch/TensorRT/issues/226", "state": "closed", "labels": [ "question", "channel: windows" ], "created_at": "2020-11-17T11:57:18Z", "updated_at": "2022-09-02T18:12:18Z", "user": "OronG13" }, { "repo": "pytorch/pytorch", "number": 48075, "title": "How to convert syncbn to batchnormND?", "body": "I want to run a model with syncBn in cpu, so I have to convert syncBN to batchNormND, how can I do that?\r\nI just found a way to convert from bn to syncbn, but how to do the opposite? Thanks in advance.\r\n[convert2syncbn](https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html?highlight=sync#torch.nn.SyncBatchNorm.convert_sync_batchnorm)\n\ncc @albanD @mruberry", "url": "https://github.com/pytorch/pytorch/issues/48075", "state": "closed", "labels": [ "module: nn", "triaged", "enhancement" ], "created_at": "2020-11-17T02:41:22Z", "updated_at": "2020-11-18T02:42:34Z", "user": "Edwardmark" }, { "repo": "pytorch/pytorch", "number": 48074, "title": "How use libtorch(or other API) to implement \"contiguous\", \"view\", \"permute\", \"transpose\" in c++?", "body": "## How use libtorch to implement \"contiguous\", \"view\", \"permute\", \"transpose\" in c++?\r\nHello~ I need to transplant python to c++, I don't know how to implement \"contiguous\", \"view\", \"permute\" in c++. 
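On the SyncBatchNorm-to-BatchNorm question above: there is no built-in inverse of `convert_sync_batchnorm`, but the conversion is mechanical because both module types carry the same affine parameters and running statistics. A hedged sketch of a recursive converter (it assumes the usual 4-D vision case, i.e. `BatchNorm2d`):

```python
import torch
import torch.nn as nn

def revert_sync_batchnorm(module: nn.Module) -> nn.Module:
    """Recursively replace nn.SyncBatchNorm with nn.BatchNorm2d, copying all state."""
    new_module = module
    if isinstance(module, nn.SyncBatchNorm):
        new_module = nn.BatchNorm2d(module.num_features, module.eps, module.momentum,
                                    module.affine, module.track_running_stats)
        if module.affine:
            new_module.weight = module.weight
            new_module.bias = module.bias
        if module.track_running_stats:
            new_module.running_mean = module.running_mean
            new_module.running_var = module.running_var
            new_module.num_batches_tracked = module.num_batches_tracked
    for name, child in module.named_children():
        new_module.add_module(name, revert_sync_batchnorm(child))
    return new_module

# Example: a checkpoint trained with SyncBatchNorm can now be evaluated on CPU.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.SyncBatchNorm(16), nn.ReLU())
cpu_model = revert_sync_batchnorm(model).eval()
print(cpu_model)
```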
I found that libtorch can help me, but I have not find all the \"Tensor operations\" which I need,such as \"contiguous\", \"view\", \"permute\", \"transpose\".\r\nThe python code shows bellow:\r\n\r\n def process_input_bmm(self, x):\r\n bsz = x.size(0) # 18 # x.shape()=[18,192]\r\n # [B x N] --> [B x g x N/g]\r\n x = x.contiguous().view(bsz, self.n_groups, -1) # [18, 2, 96]\r\n\r\n # [B x g x N/g] --> [g x B x N/g]\r\n x = x.transpose(0, 1) # transpose so that group is first # [2,18,96]\r\n\r\n # [g x B x N/g] x [g x N/g x M/g] --> [g x B x M/g]\r\n x = torch.bmm(x, self.weights) # multiply with Weights #[2,18,96]\r\n\r\n # add bias\r\n if self.use_bias:\r\n x = torch.add(x, self.bias)\r\n\r\n if self.feature_shuffle:\r\n # [g x B x M/g] --> [B x M/g x g]\r\n # [2,18,96] --> [18,96,2]\r\n x = x.permute(1, 2, 0) # permute:\u5e8f\u53f7\u6539\u53d8\u7684\u610f\u601d\u3002\r\n\r\n # [B x M/g x g] --> [B x g x M/g]\r\n # [18, 96, 2] --> [18,2,96]\r\n x = x.contiguous().view(bsz, self.n_groups, -1)\r\n\r\n else:\r\n # [g x B x M/g] --> [B x g x M/g]\r\n x = x.transpose(0, 1) # transpose so that batch is first\r\n\r\n # feature map normalization\r\n if self.normalization_fn is not None:\r\n x = self.normalization_fn(x)\r\n\r\n # feature map activation (or thresholding)\r\n if self.act_fn is not None: # self.act_fn:swish\r\n # print(\"act_fun in glt: \",self.act_fn) #Swish((sigmoid): Sigmoid())\r\n x = self.act_fn(x)\r\n\r\n return x\r\n\r\n def forward(self, x):\r\n \"\"\"\r\n :param x: Input of shape [T x B x N] (should work with [B x T x N]\r\n :return:\r\n \"\"\"\r\n if x.dim() == 2:\r\n x = self.process_input_bmm(x)\r\n elif x.dim() == 3:\r\n T, B, N = x.size() # [18,1,192]\r\n x = x.contiguous().view(B * T, -1) # [1*18,192]\r\n x = self.process_input_bmm(x)\r\n x = x.contiguous().view(T, B, -1)\r\n else:\r\n raise NotImplementedError\r\n\r\n # dropout\r\n if self.use_dropout:\r\n x = self.drop_layer(x)\r\n return x\r\n\r\nThe code I need help to write in C++ are as bellow:\r\n \r\n x = x.contiguous().view(bsz, self.n_groups, -1) \r\n x = x.transpose(0, 1) \r\n x = x.permute(1, 2, 0)\r\n\r\nIf you can help me, please answer me with C++ code which work with libtorch, or the method to implement \"contiguous\", \"view\", \"permute\", \"transpose\" by libtorch(or other API)!\r\nThank you Very much!!!\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/48074", "state": "closed", "labels": [], "created_at": "2020-11-17T02:16:24Z", "updated_at": "2020-11-17T15:44:11Z", "user": "wxyhv" }, { "repo": "pytorch/pytorch", "number": 47980, "title": "How to avoid `torch.onnx.export` use INT64?", "body": "In order to do inference in browser/JavaScript, I used `torch.onnx.export()` to get the onnx model. \r\n\r\nHowever, the exported model used INT64 which is invalid for the JavaScript environment. I tried to change the data type in ONNX manually but it brings more error.\r\nMay I know how to force the `torch.onnx.export` use INT32? 
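(One post-export workaround is to rewrite the INT64 initializers in the saved graph — a rough, untested sketch using the `onnx` Python package; the file names are placeholders, and ops whose outputs are INT64 by the ONNX spec, such as Shape, may still need separate handling.)

```python
import numpy as np
import onnx
from onnx import numpy_helper, TensorProto

model = onnx.load("model.onnx")                      # placeholder path
for init in model.graph.initializer:
    if init.data_type == TensorProto.INT64:
        # convert the stored weights/constants to int32 in place
        arr = numpy_helper.to_array(init).astype(np.int32)
        init.CopyFrom(numpy_helper.from_array(arr, init.name))
onnx.save(model, "model_int32.onnx")
```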
Or is there any way to deal with the INT64 before getting the ONNX model?\r\n\r\nThank you!\n\ncc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/47980", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2020-11-15T04:11:06Z", "updated_at": "2021-04-21T11:23:35Z", "user": "waittim" }, { "repo": "pytorch/tutorials", "number": 1237, "title": "DistributedDataParallel tutorial should use actual data", "body": "The current DistributedDataParallel tutorial feeds in data randomly generated on the spot. This is useful to a point but, since all real world applications will use a dataloader, it would be good to have a complete example with even MNIST that implements DistributedDataParallel. \"https://pytorch.org/tutorials/intermediate/dist_tuto.html\" implements the code required to use real data but also isn't using DistributedDataParallel thus leaving it up to the reader to determine which pieces they need to implement themselves and which pieces are included with DistributedDataParallel. Using real data and DistributedDataParallel would answer that question right away. One key question this would answer is how does the partitioning happen. Is the partitioning fully left to the user or is it handled by DistributedDataParallel like it is with DataParallel? I'm assuming the first one but it would be nice to have a clear example of it.", "url": "https://github.com/pytorch/tutorials/issues/1237", "state": "closed", "labels": [], "created_at": "2020-11-13T18:51:01Z", "updated_at": "2023-03-14T21:05:55Z", "comments": 1, "user": "rmcavoy" }, { "repo": "pytorch/vision", "number": 2999, "title": "CMake build failed with error: 'class c10::OperatorHandle' has no member named 'typed'", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n1. Install PyTorch that was built myself, with build information:\r\n```\r\n#python3\r\nPython 3.6.8 (default, Apr 20 2020, 14:49:33)\r\n[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import torch\r\n>>> print(torch.__config__.show())\r\nPyTorch built with:\r\n - GCC 6.3\r\n - C++ Version: 201402\r\n - Intel(R) MKL-DNN v1.2.0 (Git Hash 70f8b879ea7a0c38caedb3320b7c85e8497ff50d)\r\n - OpenMP 201511 (a.k.a. 
OpenMP 4.5)\r\n - NNPACK is enabled\r\n - CPU capability usage: AVX2\r\n - CUDA Runtime 10.0\r\n - NVCC architecture flags: -gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75\r\n - CuDNN 7.6.3\r\n - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS=-D_GLIBCXX_USE_CXX11_ABI=0 -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=0, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,\r\n```\r\n2. Resolve similar problem with the solution: https://github.com/pytorch/vision/issues/2001#issuecomment-611923412\r\n\r\n> @bmanga tow-names works.\r\n> just add these lines to the end of the CMakeLists.txt\r\n> ```\r\n> set_property(TARGET torch_cuda PROPERTY INTERFACE_COMPILE_OPTIONS \"\") \r\n> set_property(TARGET torch_cpu PROPERTY INTERFACE_COMPILE_OPTIONS \"\")\r\n> ```\r\n\r\n3. Build vision with following commands:\r\n```\r\nsource /opt/rh/devtoolset-6/enable\r\nTORCH_DIR=/usr/local/lib64/python3.6/site-packages/torch\r\nexport CUDA_HOME=/usr/local/cuda\r\nexport CUDA_NVCC_EXECUTABLE=${CUDA_HOME}/bin/nvcc\r\nexport PATH=${CUDA_HOME}/bin/:$PATH\r\nexport TORCH_CUDA_ARCH_LIST=\"6.0 6.1 7.0 7.5\"\r\n\r\nmkdir build\r\ncd build\r\ncmake .. -DCMAKE_PREFIX_PATH=${TORCH_DIR} -DCMAKE_EXPORT_COMPILE_COMMANDS=ON\r\nmake -j\r\n```\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. 
-->\r\n```\r\n[ 88%] Building CXX object CMakeFiles/torchvision.dir/torchvision/csrc/cpu/nms_cpu.cpp.o\r\n[ 94%] Building CXX object CMakeFiles/torchvision.dir/torchvision/csrc/vision.cpp.o\r\nIn file included from /home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/vision.cpp:14:0:\r\n/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h: In function 'at::Tensor roi_align(const at::Tensor&, const at::Tensor&, double, int64_t, int64_t, int64_t, bool)':\r\n/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h:29:25: error: 'class c10::OperatorHandle' has no member named 'typed'\r\n .typed<decltype(roi_align)>();\r\n ^~~~~\r\n/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h:29:31: error: expected primary-expression before 'decltype'\r\n .typed<decltype(roi_align)>();\r\n ^~~~~~~~\r\n/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h: In function 'at::Tensor _roi_align_backward(const at::Tensor&, const at::Tensor&, double, int64_t, int64_t, int64_t, int64_t, int6\r\n4_t, int64_t, int64_t, bool)':\r\n/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h:77:12: error: 'class c10::OperatorHandle' has no member named 'typed'\r\n .typed<decltype(_roi_align_backward)>();\r\n ^~~~~\r\n/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h:77:18: error: expected primary-expression before 'decltype'\r\n .typed<decltype(_roi_align_backward)>();\r\n ^~~~~~~~\r\nIn file included from /home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/vision.cpp:17:0:\r\n/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/nms.h: In function 'at::Tensor nms(const at::Tensor&, const at::Tensor&, double)':\r\n/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/nms.h:19:25: error: 'class c10::OperatorHandle' has no member named 'typed'\r\n .typed<decltype(nms)>();\r\n ", "url": "https://github.com/pytorch/vision/issues/2999", "state": "closed", "labels": [ "question", "topic: build" ], "created_at": "2020-11-13T09:47:07Z", "updated_at": "2020-11-13T13:18:19Z", "user": "tanyokwok" }, { "repo": "pytorch/vision", "number": 2994, "title": "How to dynamically split tensor", "body": "## How to split a tensor dynamically by split_sizes, not by constant shape\r\nTrying to convert mask-rcnn to onnx and run on onnxruntime.\r\nFollowing code try to split mask_pred by num_mask_roi_per_img\r\nHowever, while run in onnxruntime, num_mask_roi_per_img becomes constant value, for instance (68,) which is number of boxes while tracing.\r\n\r\n```\r\n# split batch mask prediction back to each image\r\nnum_mask_roi_per_img = [ det_bbox.shape[0] for det_bbox in det_bboxes ]\r\nmask_preds = mask_pred.split(num_mask_roi_per_img, 0)\r\n```\r\nIt got this error while run in onnxruntime with another input image.\r\n\r\n> \r\n\r\n<class 'onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument'>\", \"[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running SplitToSequence node. 
Name:'SplitToSequence_1396' Status Message: split_size_sum (68) != split_dim_size (23)\"\r\n\r\nCould anyone help me with this?\r\nMany thanks in advance.\r\n\n\ncc @neginraoof", "url": "https://github.com/pytorch/vision/issues/2994", "state": "closed", "labels": [ "topic: object detection", "module: onnx" ], "created_at": "2020-11-12T13:27:05Z", "updated_at": "2022-07-21T09:11:35Z", "user": "RunningLeon" }, { "repo": "pytorch/pytorch", "number": 47823, "title": "How to frozen weights in TorchScript IR?", "body": "Hi, i just add a pass in TorchScript IR to convert BertLayer to fastertransformer Encoder, however i find model is slow after convert to TorchScript. I get Nvprof result and find a time consuming activity:\r\n```\r\nType Time(%) Time Calls Avg Min Max Name\r\n GPU activities: 57.50% 1.49484s 25200 59.319us 3.2000us 151.55us _ZN2at6native27unrolled_elementwise_kernelIZZZNS0_21copy_device_to_deviceERNS_14TensorIteratorEbENKUlvE0_clEvENKUlvE2_clEvEUlfE_NS_6detail5ArrayIPcLi2EEE16OffsetCalculatorILi1EjESC_NS0_6memory15LoadWithoutCastENSD_16StoreWithoutCastEEEviT_T0_T1_T2_T3_T4_\r\n```\r\nI watched my final TorchScript IR, and i guess it's reason is each time it runs it will do aten::contiguous several times, like:\r\n```\r\n%1752 : Float(*, *, requires_grad=1, device=cuda:0) = aten::contiguous(%1153, %21)\r\n```\r\naten::contiguous is needed for Tensors which will be send to custom op because they will be convert by .transpose(-1, -2) first, but aten::contiguous seems time consuming. So is there any way that i can convert model weights to constant in TorchScript IR so that aten::contiguous(weights) will be convert to Constant Tensor, or if i can do something to avoid aten::contiguous? Thankyou very much!\r\n\r\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/47823", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2020-11-12T02:51:07Z", "updated_at": "2020-11-12T07:54:27Z", "user": "Sun-Knight-Soral" }, { "repo": "pytorch/pytorch", "number": 47681, "title": "How to install Pytorch on AIX7.2 without internet access?", "body": "I am trying to install Pytorch on AIX7.2 server without internet access. I have pytorch-1.0.2.tar.gz from PYPI website and run the PIP installation as ```python -m pip install Flask --no-build-isolation --no-index --find-links ./ $pkg``` where $pkg is pytorch-1.0.2.tar.gz. However, it has the following error. How to fix it? Is it possible to install pytorch on a server without internet access?\r\n\r\nThanks.\r\n```\r\nLooking in links: ./\r\nProcessing ./pytorch-1.0.2.tar.gz\r\nBuilding wheels for collected packages: pytorch\r\n Building wheel for pytorch (setup.py): started\r\n Building wheel for pytorch (setup.py): finished with status 'error'\r\n ERROR: Command errored out with exit status 1:\r\n command: /usr/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'/tmp/pip-req-build-84fugyap/setup.py'\"'\"'; __file__='\"'\"'/tmp/pip-req-build-84fugyap/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' bdist_wheel -d /tmp/pip-wheel-640v38y9\r\n cwd: /tmp/pip-req-build-84fugyap/\r\n Complete output (5 lines):\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/tmp/pip-req-build-84fugyap/setup.py\", line 15, in <module>\r\n raise Exception(message)\r\n Exception: You tried to install \"pytorch\". 
The package named for PyTorch is \"torch\"\r\n ----------------------------------------\r\n ERROR: Failed building wheel for pytorch\r\n Running setup.py clean for pytorch\r\nFailed to build pytorch\r\nInstalling collected packages: pytorch\r\n Running setup.py install for pytorch: started\r\n Running setup.py install for pytorch: finished with status 'error'\r\n ERROR: Command errored out with exit status 1:\r\n command: /usr/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'/tmp/pip-req-build-84fugyap/setup.py'\"'\"'; __file__='\"'\"'/tmp/pip-req-build-84fugyap/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' install --record /tmp/pip-record-k2dyu_63/install-record.txt --single-version-externally-managed --compile --install-headers /opt/freeware/include/python3.7m/pytorch\r\n cwd: /tmp/pip-req-build-84fugyap/\r\n Complete output (5 lines):\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/tmp/pip-req-build-84fugyap/setup.py\", line 11, in <module>\r\n raise Exception(message)\r\n Exception: You tried to install \"pytorch\". The package named for PyTorch is \"torch\"\r\n ----------------------------------------\r\nERROR: Command errored out with exit status 1: /usr/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'/tmp/pip-req-build-84fugyap/setup.py'\"'\"'; __file__='\"'\"'/tmp/pip-req-build-84fugyap/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' install --record /tmp/pip-record-k2dyu_63/install-record.txt --single-version-externally-managed --compile --install-headers /opt/freeware/include/python3.7m/pytorch Check the logs for full command output.\r\n```\n\ncc @malfet @seemethere @walterddr", "url": "https://github.com/pytorch/pytorch/issues/47681", "state": "open", "labels": [ "module: build", "triaged" ], "created_at": "2020-11-10T17:02:24Z", "updated_at": "2020-11-11T02:05:42Z", "user": "bergen288" }, { "repo": "pytorch/serve", "number": 779, "title": "Hi, any suggestion on how to serve yolov5 on torchserve ?", "body": "<!--\r\nThank you for suggesting an idea to improve torchserve model serving experience.\r\n\r\nPlease fill in as much of the template below as you're able.\r\n-->\r\n\r\n## Is your feature request related to a problem? Please describe.\r\n<!-- Please describe the problem you are trying to solve. -->\r\nI'd like to serve yolov5 model, but there is no template in the example.\r\n## Describe the solution\r\n<!-- Please describe the desired behavior. -->\r\n\r\nserve model from https://github.com/ultralytics/yolov5/\r\n\r\n## Describe alternatives solution\r\n<!-- Please describe alternative solutions or features you have considered. -->\r\n", "url": "https://github.com/pytorch/serve/issues/779", "state": "closed", "labels": [ "triaged_wait" ], "created_at": "2020-11-10T03:18:54Z", "updated_at": "2023-07-31T17:53:42Z", "user": "yuanyuangoo" }, { "repo": "pytorch/tutorials", "number": 1227, "title": "Yolov5 quantization : problem with FloatFunctional()", "body": "I'm trying quantize [Yolov5 (object detection)](https://github.com/ultralytics/yolov5). And i'm following [this tutorial](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html) to do static quantization. 
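(For context, the pattern the tutorial relies on is to create the `FloatFunctional` once in `__init__` and call it in `forward`, so that `prepare()`/`convert()` can attach observers and swap it for the quantized add — a minimal, untested sketch of that pattern; the module below is illustrative only and not yolov5's actual Bottleneck.)

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Illustrative only -- not yolov5's Bottleneck."""
    def __init__(self, cv1, cv2, shortcut=True):
        super().__init__()
        self.cv1, self.cv2, self.shortcut = cv1, cv2, shortcut
        # created once here so static quantization can observe and convert it
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        y = self.cv2(self.cv1(x))
        return self.skip_add.add(x, y) if self.shortcut else y
```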
As per tutorial I'm changing all torch.add s to torch.nn.quantized.FloatFunctional() like this.\r\n\r\n`return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))` to \r\n`return torch.nn.quantized.FloatFunctional().add(x , self.cv2(self.cv1(x))) if self.add else self.cv2(self.cv1(x))`\r\n\r\nwhen the model is calibrating it's working fine. But when it comes to evaluating the quantized model, I'm getting this error.\r\n`RuntimeError: Could not run 'aten::add.Tensor' with arguments from the 'QuantizedCPU' backend. 'aten::add.Tensor' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Met 'aten::add.Tensor' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Meta, Named, Autograd, Profiler, Tracer].`\r\n\r\nNow I changed FloatFunctional() to Qfunctional() hoping to get a result, then I got an error during the calibration stage.\r\n\r\nCan someone help me? Thanks in advance.\n\ncc @jerryzh168 @jianyuh", "url": "https://github.com/pytorch/tutorials/issues/1227", "state": "closed", "labels": [ "question", "module: quantization" ], "created_at": "2020-11-09T16:38:33Z", "updated_at": "2023-03-16T22:31:13Z", "user": "bingiflash" }, { "repo": "pytorch/tutorials", "number": 1225, "title": "Seq2Seq Transformer Tutorial", "body": "I'm having difficulty understanding a few aspects of the Seq2Seq transformer tutorial (https://pytorch.org/tutorials/beginner/transformer_tutorial.html)\r\n\r\n1. The tutorial says that it implements the architecture from Attention Is All You Need, but I don't see a TransformerDecoder used anywhere. It instead looks like only a TransformerEncoder is used. How does this example work without the decoder?\r\n2. The tutorial says that it uses a softmax to output probabilities over the dictionary, but I only see a linear output layer. Where is the softmax applied?\r\n3. Is this model learning to predict one word ahead (e.g. [hi how are you] -> [how are you doing])? I can't find the actual task described anywhere, only the inputs and targets in terms of an alphabet\r\n\r\nAppreciate any help.\r\n\r\n\n\ncc @pytorch/team-text-core @Nayef211", "url": "https://github.com/pytorch/tutorials/issues/1225", "state": "closed", "labels": [ "module: torchtext", "docathon-h1-2023", "easy" ], "created_at": "2020-11-08T20:39:19Z", "updated_at": "2023-06-09T16:32:37Z", "comments": 5, "user": "mmwebster" }, { "repo": "pytorch/pytorch", "number": 47577, "title": "How to implement Iterative dataset with multiple workers ", "body": "Hi\r\nI have a TFDS dataset, which I convert it to an iterative dataset in pytorch, this is not clear for me how to make it work with multiple-workers, here is the minimal code to show what I mean, could you help me please complete it with different workers, and provide me with how I can implement worker_init_fn(worker_id) for this case. I also need to implement distributed sampler for this data class which I also appreciate your help on this. 
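(For reference, the generic pattern for sharding an `IterableDataset` across DataLoader workers calls `torch.utils.data.get_worker_info()` inside `__iter__` — a minimal, TFDS-agnostic sketch below; extending it to distributed training means also striding over the rank and world size.)

```python
import itertools
from torch.utils.data import IterableDataset, DataLoader, get_worker_info

class ShardedIterable(IterableDataset):
    def __init__(self, items):
        self.items = list(items)

    def __iter__(self):
        info = get_worker_info()
        if info is None:                 # single-process data loading
            return iter(self.items)
        # worker i yields elements i, i + num_workers, i + 2*num_workers, ...
        return itertools.islice(iter(self.items), info.id, None, info.num_workers)

if __name__ == "__main__":
    loader = DataLoader(ShardedIterable(range(8)), batch_size=2, num_workers=2)
    for batch in loader:
        print(batch)
```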
thanks \r\n\r\n```\r\nfrom torch.utils.data import Dataset, DataLoader\r\nimport torch\r\nimport tensorflow_datasets as tfds\r\nimport tensorflow as tf\r\nimport itertools\r\nfrom itertools import cycle, islice\r\n\r\n\r\n\r\ndef get_dummy_dataset():\r\n inputs = [\"input 1\",\r\n \"input 2\",\r\n \"input 3\",\r\n \"input 4\"]\r\n target = [\"target 1\",\r\n \"target 2\",\r\n \"target 3\",\r\n \"target 4\"]\r\n features = {\"inputs\": inputs, \"targets\": target}\r\n def my_fn(features):\r\n ret = {}\r\n for k, v in features.items():\r\n ret[f'{k}_plaintext'] = v\r\n return ret\r\n dataset = tf.data.Dataset.from_tensor_slices(features)\r\n dataset = dataset.map(my_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)\r\n return dataset\r\n\r\n\r\nclass WMTDataset(torch.utils.data.IterableDataset):\r\n def __init__(self, batch_size):\r\n super(WMTDataset).__init__()\r\n dataset = get_dummy_dataset()\r\n self.dataset_size = 4\r\n self.batch_size = batch_size\r\n self.dataset = self.create_dataset(dataset)\r\n\r\n def __len__(self):\r\n return self.dataset_size\r\n\r\n def __iter__(self):\r\n return self.dataset\r\n\r\n def create_dataset(self, dataset):\r\n dataset = dataset.batch(self.batch_size, drop_remainder=False)\r\n return itertools.cycle(dataset)\r\n\r\n\r\n\r\niterable_dataset = WMTDataset(batch_size=2)\r\nloader = DataLoader(iterable_dataset, batch_size=None)\r\nfor batch in islice(loader, 2):\r\n print(\"#########batch \", batch)\r\n```\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/47577", "state": "closed", "labels": [], "created_at": "2020-11-08T13:01:52Z", "updated_at": "2020-11-09T16:03:07Z", "user": "rabeehkarimimahabadi" }, { "repo": "pytorch/pytorch", "number": 47574, "title": "How to add custom CUDA function as torchScript Node?", "body": "Hi, i want to add my CUDA function as a torchScript Node, but i can't use torchScript extention op as i can't let other people to use so file. It's there a way? Thank you very much!\n\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/47574", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2020-11-08T10:28:25Z", "updated_at": "2021-02-27T07:58:59Z", "user": "Sun-Knight-Soral" }, { "repo": "pytorch/pytorch", "number": 47573, "title": "How to add custom CUDA function as torchScript Node?", "body": "Hi, i want to add my CUDA function as a torchScript Node, but i can't use torchScript extention op as i can't let other people to use so file. It's there a way? Thank you very much!\n\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/47573", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2020-11-08T10:28:06Z", "updated_at": "2020-11-08T17:15:02Z", "user": "Sun-Knight-Soral" }, { "repo": "pytorch/pytorch", "number": 47572, "title": "How to add custom CUDA function as torchScript node?", "body": "Hi, i want to add my CUDA function to torchScript as a Node, but i don't want to use torchScript extention op as i can't let other people to load so file, is there any way? 
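(If the concern is only about distributing a prebuilt .so, one option may be to ship the C++/CUDA source and JIT-compile it on the user's machine with `torch.utils.cpp_extension.load_inline`, registering it as a custom op so it appears as a regular node in TorchScript — a rough, untested sketch; `my_ops::scale` is a made-up name and a working compiler toolchain is assumed.)

```python
import torch
from torch.utils.cpp_extension import load_inline

cpp_src = r"""
#include <torch/script.h>

torch::Tensor scale(torch::Tensor x, double alpha) {
  return x * alpha;
}

// registers my_ops::scale so TorchScript can call it as torch.ops.my_ops.scale
static auto registry = torch::RegisterOperators("my_ops::scale", &scale);
"""

# compiled at import time on the user's machine instead of shipping a binary
load_inline(name="my_ops_ext", cpp_sources=cpp_src,
            is_python_module=False, verbose=True)

@torch.jit.script
def run(x: torch.Tensor) -> torch.Tensor:
    return torch.ops.my_ops.scale(x, 2.0)

print(run.graph)   # the custom op shows up as a my_ops::scale node
```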
Thankyou very much!\n\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/47572", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2020-11-08T10:25:02Z", "updated_at": "2020-11-08T17:15:35Z", "user": "Sun-Knight-Soral" }, { "repo": "pytorch/pytorch", "number": 47548, "title": "how to extract more than two variables using default_collate from torch.utils.data.dataloader?", "body": "Kindly help as how to extract more than just two variables (x,y) using default_collate from torch.utils.data.dataloader.", "url": "https://github.com/pytorch/pytorch/issues/47548", "state": "closed", "labels": [], "created_at": "2020-11-07T05:52:49Z", "updated_at": "2020-11-09T16:00:31Z", "user": "Jayashree-Pougajendy" }, { "repo": "pytorch/xla", "number": 2613, "title": "How to get function return from xmp.spawn distributed processes", "body": "Wonder how do we get value returned from spawned functions.\r\nFor example, if accuracy is calculated in each core, and i want it to be returned to the main function\r\n\r\n```\r\ndef _mp_fn():\r\n #some training and valuation code here\r\n\r\n return accuracy\r\n```\r\n\r\n```\r\naccuracy = xmp.spawn(_mp_fn, nprocs=8)\r\n```\r\n\r\nIn multiprocessor library we can do something like\r\n\r\n```\r\nif __name__ == '__main__':\r\n p = Pool(processes=20)\r\n data = p.map(job, [i for i in range(20)])\r\n p.close()\r\n print(data)\r\n```\r\n\r\nHow do we do it with xmp.spawn?", "url": "https://github.com/pytorch/xla/issues/2613", "state": "closed", "labels": [], "created_at": "2020-11-06T19:58:07Z", "updated_at": "2020-11-11T14:51:43Z", "user": "8key" }, { "repo": "pytorch/pytorch", "number": 47491, "title": "How to get averaged loss in multi-gpu training ?", "body": "Hi,\r\n\r\nI am using multi-gpu training, following the tutorial:\r\nhttps://pytorch.org/docs/stable/notes/ddp.html\r\n\r\nI am trying to construct the curves of training and validation losses for visulization. But it seems I can only access the loss of one gpu.\r\n\r\nI know that the losses of multi-gpu will be averaged before back propagation. So how to get the averaged loss ? \r\n\r\nThank you !", "url": "https://github.com/pytorch/pytorch/issues/47491", "state": "closed", "labels": [], "created_at": "2020-11-06T05:44:59Z", "updated_at": "2020-11-06T05:58:38Z", "user": "shuuchen" }, { "repo": "pytorch/text", "number": 1071, "title": "How to get the translation results from tensor in seq2seq model", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n<!-- Please send questions or ask for help here. -->\r\nI am try to implement my own MT engine, i am following the steps in https://github.com/bentrevett/pytorch-seq2seq/blob/master/1%20-%20Sequence%20to%20Sequence%20Learning%20with%20Neural%20Networks.ipynb\r\nI also propose a question on https://stackoverflow.com/questions/64694786/pytorch-build-seq2seq-mt-model-but-how-to-get-the-translation-results-from-the\r\n``` \r\n\r\n\r\nSRC = Field(tokenize=tokenize_en,\r\n init_token='<sos>',\r\n eos_token='<eos>',\r\n lower=True)\r\n\r\nTRG = Field(tokenize=tokenize_de,\r\n init_token='<sos>',\r\n eos_token='<eos>',\r\n lower=True)\r\n```\r\nAfter training the model,the link only share a way to batch evaluate but i want to try single string and get the translation results. 
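(Something like the greedy loop below is what is meant — a rough, untested sketch; the `model.encoder` / `model.decoder` call signatures are assumptions taken from the linked notebook, so treat it as an outline only.)

```python
import torch

def translate_sentence(model, sentence, SRC, TRG, device, max_len=50):
    model.eval()
    with torch.no_grad():
        src = SRC.process([SRC.preprocess(sentence)]).to(device)   # [src_len, 1]
        hidden, cell = model.encoder(src)
        trg_indexes = [TRG.vocab.stoi[TRG.init_token]]
        for _ in range(max_len):
            prev = torch.LongTensor([trg_indexes[-1]]).to(device)
            output, hidden, cell = model.decoder(prev, hidden, cell)
            pred = output.argmax(1).item()
            if pred == TRG.vocab.stoi[TRG.eos_token]:
                break
            trg_indexes.append(pred)
    return [TRG.vocab.itos[i] for i in trg_indexes[1:]]
```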
for example i want my model to translate the input \"Boys\" and get the German translations.\r\n\r\n```\r\nsavedfilemodelpath='./pretrained_model/2020-09-27en-de.pth'\r\nmodel.load_state_dict(torch.load(savedfilemodelpath))\r\nmodel.eval()\r\ninputstring = 'Boys'\r\nprocessed=SRC.process([SRC.preprocess(inputstring)]).to(device)\r\noutput=model(processed,processed)\r\noutput_dim = output.shape[-1]\r\noutputs = output[1:].view(-1, output_dim)\r\nfor item in outputs:\r\n print('item shape is {} and item.argmax is {}, and words is {}'.format(item.shape,item.argmax(),TRG.vocab.itos[item.argmax()]))\r\n\r\n```\r\nSo my question is that it it right to get the translation results by:\r\nFirst: convert the string to tensor\r\n```\r\ninputstring = 'Boys'\r\nprocessed=SRC.process([SRC.preprocess(inputstring)]).to(device)\r\n```\r\nSecond: send the tensor to the model. As the model have a TRG param.I have to give the tensor,am i able not given the TRG tensor?\r\n```\r\noutput=model(processed,processed)\r\noutput_dim = output.shape[-1]\r\noutputs = output[1:].view(-1, output_dim)\r\n```\r\nThird\uff1athrough the return tensor, i use the argmax to get the translation results? is it right?\r\n\r\nOr how can i get the right translation results?\r\n```\r\nfor item in outputs:\r\n print('item shape is {} and item.argmax is {}, and words is {}'.format(item.shape,item.argmax(),TRG.vocab.itos[item.argmax()+1]))\r\n```\r\nThanks a lot.", "url": "https://github.com/pytorch/text/issues/1071", "state": "closed", "labels": [], "created_at": "2020-11-06T02:12:34Z", "updated_at": "2020-11-06T06:05:17Z", "user": "Oscarjia" }, { "repo": "pytorch/pytorch", "number": 47483, "title": "Update how to build PyTorch with CUDA Windows instructions", "body": "PyTorch currently could not be build using recommended `14.11.25503` minimal toolchain, see:\r\nhttps://github.com/pytorch/pytorch/blame/b4b0fa637178baf9147416b550c7db70de6a5fa3/README.md#L258\r\n\r\nBut if one tries to following this instructions using PyTorch-1.7 or newer it will fail with as shown in:\r\nhttps://github.com/pytorch/pytorch/issues/46208#issuecomment-707352250\n\ncc @malfet @seemethere @walterddr @jlin27 @mruberry @peterjc123 @maxluk @nbcsm @guyang3532 @gunandrose4u @smartcat2010 @mszhanyi", "url": "https://github.com/pytorch/pytorch/issues/47483", "state": "closed", "labels": [ "module: build", "module: windows", "module: docs", "triaged", "windows-triaged" ], "created_at": "2020-11-06T01:27:58Z", "updated_at": "2020-11-16T16:16:00Z", "user": "malfet" }, { "repo": "pytorch/xla", "number": 2606, "title": "how to make sure pytorch xla is doing data parallelism", "body": "Hi\r\nwhen I call xm.spawn to distribute a work over multiple TPU cores, how can I make sure this is actually working and getting use of all cores? 
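(A quick sanity check is to print the ordinal and world size from every spawned process — a minimal, untested sketch; each of the 8 processes should report a distinct ordinal and the same world size.)

```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    device = xm.xla_device()
    # one line per process; distinct ordinals confirm all cores are in use
    print(f"ordinal={xm.get_ordinal()} world_size={xm.xrt_world_size()} device={device}")

if __name__ == "__main__":
    xmp.spawn(_mp_fn, nprocs=8)
```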
thanks", "url": "https://github.com/pytorch/xla/issues/2606", "state": "closed", "labels": [], "created_at": "2020-11-05T16:30:15Z", "updated_at": "2020-11-30T18:18:00Z", "user": "rabeehkarimimahabadi" }, { "repo": "pytorch/pytorch", "number": 47439, "title": "how to use torch.utils.checkpoint + gru with variable length sequence?", "body": "I just want to use torch.utils.checkpoint on GRU to save gpu memory.\r\n\r\n```py\r\ndef check(self, packed):\r\n out, _ = self.rnn(packed)\r\n padded = pad_packed_sequence(out, batch_first=True)\r\n\r\n return padded\r\n\r\ndef forward(self, x, lengths):\r\n \"\"\"Handles variable size captions\r\n \"\"\"\r\n x = self.embed(x)\r\n\r\n packed = pack_padded_sequence(x, lengths, batch_first=True)\r\n padded = checkpoint(self.check, packed)\r\n```\r\nmy code is shown above.\r\n\r\ni got a warning:\r\n**UserWarning: None of the inputs have requires_grad=True. Gradients will be None**\r\nbecause packed is a PackedSequence, it has no attribute requires_grad\r\n\r\nthen, i tried another way to do it\r\n```py\r\ndef check(self, x, lengths):\r\n packed = pack_padded_sequence(x, lengths, batch_first=True)\r\n out, _ = self.rnn(packed)\r\n padded = pad_packed_sequence(out, batch_first=True)\r\n\r\n return padded\r\n\r\ndef forward(self, x, lengths):\r\n \"\"\"Handles variable size captions\r\n \"\"\"\r\n x = self.embed(x)\r\n padded = checkpoint(self.check, x, lengths)\r\n```\r\nthan i got a error. \r\n\r\nTraceback (most recent call last):\r\n File \"D:\\\u5b89\u88c5\u7a0b\u5e8f\\PyCharm 2019.2.3\\helpers\\pydev\\pydevd.py\", line 2073, in <module>\r\n main()\r\n File \"D:\\\u5b89\u88c5\u7a0b\u5e8f\\PyCharm 2019.2.3\\helpers\\pydev\\pydevd.py\", line 2067, in main\r\n globals = debugger.run(setup['file'], None, None, is_module)\r\n File \"D:\\\u5b89\u88c5\u7a0b\u5e8f\\PyCharm 2019.2.3\\helpers\\pydev\\pydevd.py\", line 1418, in run\r\n return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)\r\n File \"D:\\\u5b89\u88c5\u7a0b\u5e8f\\PyCharm 2019.2.3\\helpers\\pydev\\pydevd.py\", line 1425, in _exec\r\n pydev_imports.execfile(file, globals, locals) # execute the script\r\n File \"D:\\\u5b89\u88c5\u7a0b\u5e8f\\PyCharm 2019.2.3\\helpers\\pydev\\_pydev_imps\\_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n File \"D:/study/workspace/Python/xxxx/train.py\", line 300, in <module>\r\n main()\r\n File \"D:/study/workspace/Python/xxxx/train.py\", line 144, in main\r\n train(opt, train_loader, model, epoch, val_loader)\r\n File \"D:/study/workspace/Python/xxxx/train.py\", line 181, in train\r\n model.train_emb(*train_data)\r\n File \"D:\\study\\workspace\\Python\\SCAN\\model.py\", line 632, in train_emb\r\n loss.backward()\r\n File \"D:\\Environment\\Anaconda\\envs\\PyTorch\\lib\\site-packages\\torch\\tensor.py\", line 185, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n File \"D:\\Environment\\Anaconda\\envs\\PyTorch\\lib\\site-packages\\torch\\autograd\\__init__.py\", line 127, in backward\r\n allow_unreachable=True) # allow_unreachable flag\r\n<b>RuntimeError: element 1 of tensors does not require grad and does not have a grad_fn</b>\r\n\r\nso, i want to know how can i use torch.utils.checkpoint on gru with variable length sequence\r\n\r\nthank you \r\n\r\ncc @zou3519", "url": "https://github.com/pytorch/pytorch/issues/47439", "state": "open", "labels": [ "module: rnn", "triaged" ], "created_at": "2020-11-05T13:25:06Z", "updated_at": 
"2023-11-02T13:26:34Z", "user": "liuyyy111" }, { "repo": "pytorch/vision", "number": 2963, "title": "detector as feature extractor", "body": "Hello,\r\n\r\nI am using mask rcnn for detection. So basically fine tuning. However I want extract feature for each object that is being detected. \r\nSo possibly extracting feature vector just before last layer. How can I do that ? forward hooks ? \r\n\r\nI was also looking into https://github.com/pytorch/vision/blob/master/torchvision/models/_utils.py ? could not get it working. \r\n\r\nAlso how to use jit for the same ?\r\n\r\nAny leads would be helpful. @fmassa \r\n\r\nCheers! \r\n", "url": "https://github.com/pytorch/vision/issues/2963", "state": "open", "labels": [ "question" ], "created_at": "2020-11-04T22:49:19Z", "updated_at": "2020-11-10T12:39:51Z", "user": "gaussiangit" }, { "repo": "pytorch/vision", "number": 2959, "title": "Allow torchvision.io to pass through ToTensor()", "body": "## \ud83d\ude80 Ensure torchvision.io is a drop-in replacement with current workflows\r\n\r\nThe following snippet will fail.\r\n```\r\nimg = torchvision.io.read_image()\r\nimg = torchvision.transforms.ToTensor()(img)\r\n```\r\n\r\n## Pitch\r\nConsider making native io compatible with existing transform workflows by allowing the tensor type to pass through `ToTensor()`. This would still scale down tensor values to the range 0-1 and not impact downstream transformations.", "url": "https://github.com/pytorch/vision/issues/2959", "state": "closed", "labels": [ "question", "needs discussion" ], "created_at": "2020-11-04T03:56:27Z", "updated_at": "2020-11-20T09:46:26Z", "user": "jgbradley1" }, { "repo": "pytorch/vision", "number": 2955, "title": "[RFC] How to handle BC breaking changes on Model weights or hyper-parameters", "body": "## \ud83d\ude80 Feature\r\nIn order to fix bugs we are sometimes forced to introduce BC breaking changes. While the process of such introductions is clear when it comes to code changes, it's not when it comes to model weights or hyper-parameters. Thus we should define when, why and how to introduce BC-breaking changes when it comes to model weights or model hyper-parameters.\r\n\r\n## Motivation\r\n\r\nWe have recently bumped to a few issues that motivate this. Here are a few examples:\r\n- On #2326 we discovered a bug in the initialization of some weights of all detection models. If we fix the bug on code, we should probably retrain the models. What happens if their accuracy improves? How do we make them available to our users? \r\n- How do we handle cases such as #2599 where in order to fix a bug we need to update the hyper-parameters of the model?\r\n\r\n## Approaches\r\n\r\nThere are quite a few different approaches for this:\r\n1. Replace the old parameters and Inform the community about the BC breaking changes. Example: #2942\r\n - Reasonable approach when the accuracy improvement is substantial or the effect on the model behaviour is negligible.\r\n - Keeps the code-base clean from workarounds and minimizes the number of weights we provide.\r\n - Can potentially cause issues to users who use transfer learning.\r\n2. Write code/workarounds to minimize the effect of the changes on existing models. Example: #2940\r\n - Reasonable approach when the changes lead to slight decrease in accuracy.\r\n - Minimizes the effects on users who used pre-trained models.\r\n - Introduces ugly workarounds on the code and increases the number of weights we provide.\r\n3. 
Introduce versioning on model weights:\r\n - Appropriate when introducing significant changes on the models.\r\n - Keeps the code-base clean from workarounds.\r\n - Forces us to maintain multiple versions of weights and model config.\r\n\r\nIt's worth discussing whether we want to adapt our approach depending on the characteristics of the problem or if we want to go with one approach for all cases. Moreover it's worth investigating whether we need to handle differently changes on weights vs changes on hyper-parameters used on inference.\r\n\r\ncc @fmassa @cpuhrsch @vfdev-5 @mthrok ", "url": "https://github.com/pytorch/vision/issues/2955", "state": "open", "labels": [ "needs discussion", "version incompatibility" ], "created_at": "2020-11-03T12:10:36Z", "updated_at": "2021-09-04T16:37:54Z", "user": "datumbox" }, { "repo": "pytorch/vision", "number": 2951, "title": "Imagenet Pre-trained model for other Depth Multiplier", "body": "On the mnasnet model under mnasnet.py file, the link provided for imagenet pretrained model is only for two depth multiplier, as shown in the code below:\r\n\r\n_MODEL_URLS = {\r\n \"mnasnet0_5\":\r\n \"https://download.pytorch.org/models/mnasnet0.5_top1_67.823-3ffadce67e.pth\",\r\n \"mnasnet0_75\": None,\r\n \"mnasnet1_0\":\r\n \"https://download.pytorch.org/models/mnasnet1.0_top1_73.512-f206786ef8.pth\",\r\n \"mnasnet1_3\": None\r\n}\r\n\r\n\r\nCan you provide the link for Imagenet pre-trained model for mnasnet0_75 and mnasnet1_3?\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/2951", "state": "open", "labels": [ "question", "module: models" ], "created_at": "2020-11-03T06:58:39Z", "updated_at": "2020-11-06T00:56:35Z", "user": "NaifahNurya" }, { "repo": "pytorch/vision", "number": 2943, "title": "the divide mistake of positive and negative samples", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\nWhen dividing positive and negative samples, the gt_boxes index that anchor matches to is 0 will be mistaken as negative samples\r\nfor matched_idxs_per_image in matched_idxs: \r\n\r\n> positive = torch.nonzero(matched_idxs_per_image >= 1).squeeze(1) \r\n\r\n> negative = torch.nonzero(matched_idxs_per_image == 0).squeeze(1)", "url": "https://github.com/pytorch/vision/issues/2943", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-10-31T07:58:29Z", "updated_at": "2020-11-06T10:29:01Z", "user": "ghost" }, { "repo": "pytorch/pytorch", "number": 47147, "title": "libtorch 1.6.0: How to make the data of each batch have different sizes", "body": "libtorch 1.6.0 win10 x64 .\r\n\r\nI wrote an OCR model of the dataset.The word is encoded with different lengths as the label input.How to make the data of each batch have different sizes?\r\nexample: \r\ndata:123.png datasize:[batchsize,3,180,32] ,label: 123,labelsize:[batchsize,3]\r\ndata:3234.png datasize:[batchsize,3,180,32] ,label: 3234,lablesize:[batchsize,4]\r\n[batchsize,3]!=[batchsize,4]\r\nThe dataset in Pytorch supports different sizes, but libtorch does not.\r\n", "url": "https://github.com/pytorch/pytorch/issues/47147", "state": "closed", "labels": [], "created_at": "2020-10-31T04:30:47Z", "updated_at": "2020-11-01T03:52:41Z", "user": "williamlzw" }, { "repo": "pytorch/pytorch", "number": 47118, "title": "How to specify the instances for batches", "body": "I am trying to solve a multi-task learning problem where I want to implement a homogeneous epoch sampling strategy (i.e in a single batch, instances from only one task are present and such batches are shuffled).\r\n\r\nFor example, Bij represents ith batch during training is of jth task\r\nLet's assume tasks are A,B,C\r\nB1A, B2B, B3A, B4C, B5B, ....\r\nSo a batch contains instances of one task only.\r\n\r\nHow can this be achieved?\r\n", "url": "https://github.com/pytorch/pytorch/issues/47118", "state": "closed", "labels": [], "created_at": "2020-10-30T15:13:54Z", "updated_at": "2020-10-30T16:38:44Z", "user": "nrjvarshney" }, { "repo": "pytorch/vision", "number": 2919, "title": "How to change the num_classes from 1000 in vgg?", "body": "I use\r\n model = vgg.vgg16(pretrained=True, progress = True, num_classes=10)\r\nand use pretrained model 'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth',\r\nthen, the error happend:\r\nRuntimeError: Error(s) in loading state_dict for VGG:\r\n\tsize mismatch for classifier.6.weight: copying a param with shape torch.Size([1000, 4096]) from checkpoint, the shape in current model is torch.Size([10, 4096]).\r\n\tsize mismatch for classifier.6.bias: copying a param with shape torch.Size([1000]) from checkpoint, the shape in current model is torch.Size([10]).\r\n\r\nwhen i use model = vgg.vgg16(pretrained=True, progress = True, num_classes=1000)\r\nerror above not occur, but after seizing a long time, cuda out of memory.\r\nso how can i fix these?\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/2919", "state": "closed", "labels": [ "question" ], "created_at": "2020-10-28T07:43:00Z", "updated_at": "2020-10-28T14:53:37Z", "user": "SunJJ1996" }, { "repo": "pytorch/pytorch", "number": 46902, "title": "How to use clang as a cuda compiler instead of nvcc?", "body": "I want to ask if we can use clang as a cuda compiler instead of nvcc, such as 'TF_CUDA_CLANG', 
'CLANG_CUDA_COMPILER_PATH' options similar to tensorflow/third_party/gpus/cuda_configure.bzl?\r\n\n\ncc @malfet @seemethere @walterddr", "url": "https://github.com/pytorch/pytorch/issues/46902", "state": "open", "labels": [ "module: build", "triaged", "enhancement" ], "created_at": "2020-10-27T05:52:23Z", "updated_at": "2020-11-10T03:48:41Z", "user": "HangJie720" }, { "repo": "pytorch/vision", "number": 2894, "title": "Activation function for object proposals in RoI (test time)", "body": "## \ud83d\ude80 Feature\r\nReplace softmax with sigmoid in `postprocess_detections `method in `roi_heads`:\r\nhttps://github.com/pytorch/vision/blob/5cb77a20c3c65ca6199fdf1c1bc642af7447d311/torchvision/models/detection/roi_heads.py#L677 \r\n\r\n## Motivation\r\nIn the current implementation, score is class-dependent (softmax), but NMS is class-independent. So the question is, can/should one RoI output more than 1 prediction. \r\n## Pitch\r\n\r\nIf there are` C` classes, each RoI outputs `C` score and bounding box predictions (two tensors, size` (1, C)` and `(4,C)` resp.) at test stage (`postprocess_detections `method). Non-max suppression is done independently of the class (i.e. boxes overlapping more than NMS are kept if they are different classes). But the normalization function is not class-independent:\r\n\r\n`pred_scores = F.softmax(class_logits, -1)`\r\n\r\nSo if there are two positive classes, pred_scores vector will be, e.g. [0.9, 0.1], and at some point both of these scores will be compared to `box_score_thresh`. Obviously one of them is very likely to be rejected. Therefore, I don\u2019t quite understand this implementation. It should be either:\r\n\r\n```\r\npred_scores = F.sigmoid(class_logits, -1)\r\npreds = torch.nonzero(pred_scores.sigmoid()>box_score_thresh)\r\n```\r\n\r\nto compute the scores independently, or\r\n\r\n```\r\npreds = class_logits.max(-1)\r\npreds.values[preds.indices>0].sigmoid()>box_score_thresh\r\n``` \r\n\r\nto extract the best prediction from every RoI. Then the predictions will be independent. I think it needs to be re-implemented or at least added as an argument to choose from. 
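(To make the proposal concrete, this is roughly the class-independent variant meant here — illustrative only, not torchvision's implementation; `class_logits` is the usual [N, C] tensor with background at column 0.)

```python
import torch

def class_independent_keep(class_logits, box_score_thresh):
    scores = torch.sigmoid(class_logits)        # each class scored on its own
    scores = scores[:, 1:]                      # drop the background column
    # (roi_idx, class_idx) pairs that clear the threshold independently
    keep = (scores > box_score_thresh).nonzero(as_tuple=False)
    return keep, scores
```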
Mask predictions are done independently in this way:\r\n\r\nhttps://github.com/pytorch/vision/blob/5cb77a20c3c65ca6199fdf1c1bc642af7447d311/torchvision/models/detection/roi_heads.py#L73\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/2894", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-10-26T11:34:58Z", "updated_at": "2020-10-26T12:51:20Z", "user": "AlexTS1980" }, { "repo": "pytorch/examples", "number": 837, "title": "License of the fast-neural-style models?", "body": "Are the fast-neural-style models that are downloadable through\r\n\r\nhttps://github.com/pytorch/examples/blob/0f0c9131ca5c79d1332dce1f4c06fe942fbdc665/fast_neural_style/download_saved_models.py#L27\r\n\r\nalso licensed under the [BSD-3-Clause license](https://github.com/pytorch/examples/blob/master/LICENSE)?", "url": "https://github.com/pytorch/examples/issues/837", "state": "closed", "labels": [], "created_at": "2020-10-26T06:59:51Z", "updated_at": "2021-03-04T06:12:57Z", "comments": 3, "user": "pmeier" }, { "repo": "pytorch/vision", "number": 2884, "title": "GroupedBatchSampler related bug in vision/references/detection/train.py ", "body": "I strongly suspect that there is a bug in the detection trainer code that uses `GroupedBatchSampler` to group images by aspect ratio.\r\n\r\n```\r\nif args.distributed:\r\n train_sampler = torch.utils.data.distributed.DistributedSampler(dataset)\r\n test_sampler = torch.utils.data.distributed.DistributedSampler(dataset_test)\r\nelse:\r\n train_sampler = torch.utils.data.RandomSampler(dataset)\r\n test_sampler = torch.utils.data.SequentialSampler(dataset_test)\r\n\r\nif args.aspect_ratio_group_factor >= 0:\r\n group_ids = create_aspect_ratio_groups(dataset, k=args.aspect_ratio_group_factor)\r\n train_batch_sampler = GroupedBatchSampler(train_sampler, group_ids, args.batch_size)\r\n```\r\nhttps://github.com/pytorch/vision/blob/cffac640d703196ea9a369166fa8ae587cb5e64d/references/detection/train.py#L80\r\n\r\nDue to the random shuffle done by `DistributedSampler` and `RandomSampler`, there is an inconsistency between `train_sampler` and `group_ids`. 
Specifically: `group_ids` is with respect to the original dataset order (as dictated by `dataset`), but `GroupedBatchSampler` will index into `group_ids` using the indices output by `train_sampler`, eg:\r\n\r\n```\r\ndef __iter__(self):\r\n buffer_per_group = defaultdict(list)\r\n samples_per_group = defaultdict(list)\r\n\r\n num_batches = 0\r\n for idx in self.sampler:\r\n group_id = self.group_ids[idx]\r\n```\r\nhttps://github.com/pytorch/vision/blob/cffac640d703196ea9a369166fa8ae587cb5e64d/references/detection/group_by_aspect_ratio.py#L53\r\n\r\nThe impact is: `GroupedBatchSampler` will use retrieve the wrong aspect ratios when attempting to batch images with the same aspect ratio together, resulting in batches that are sub-optimally aspect-ratio balanced.\r\n\r\nIf my understanding is correct, then: to fix this, we'd need to change the `train.py` to ensure that the `train_sampler` and `group_ids` are consistent.\r\nI haven't yet had the time to write a small, contained test case that demonstrates the bug, but just in case I'll create this issue while it's on my mind.", "url": "https://github.com/pytorch/vision/issues/2884", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: object detection" ], "created_at": "2020-10-24T07:48:48Z", "updated_at": "2020-10-26T12:36:38Z", "user": "erickim555" }, { "repo": "pytorch/tutorials", "number": 1203, "title": "Put some more better practices in custom operator tutorial", "body": "Given our experience with internal users of custom operator registration API, there are some more important things the tutorial should cover:\r\n\r\n* Handling non-contiguous inputs\r\n* How to use TensorIterator for easy pointwise operators\r\n* (FB only) The rest of the scaffolding you need for fbcode\r\n\r\ncc @dzhulgakov ", "url": "https://github.com/pytorch/tutorials/issues/1203", "state": "open", "labels": [ "C++", "torchscript" ], "created_at": "2020-10-23T21:15:48Z", "updated_at": "2021-07-27T22:04:45Z", "comments": 0, "user": "ezyang" }, { "repo": "pytorch/vision", "number": 2878, "title": "Hello , i found a mismatch between the implemented torchvision.models.detection.backbone_utils.resnet_fpn_backbone() in github and what we get by installing via pip, the one in github is having returned_layer and extra_blocks as parameters but one we get by installign doesnt have any of these parameters,", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/vision/issues/2878", "state": "closed", "labels": [ "question" ], "created_at": "2020-10-23T10:18:55Z", "updated_at": "2020-10-23T10:38:52Z", "user": "akashprakas" }, { "repo": "pytorch/pytorch", "number": 46760, "title": "How to define a new data type in native_functions.yaml?", "body": "How to define a new data type in native_functions.yaml? 
\r\nSuch as there is exist a data type \"int[]\"\uff0cbu i want a data type \"float[]\"\uff0cwhat sould i do?\r\nLooking forward to your advice, I will be very grateful\uff01\r\n\n\ncc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @ailzhang", "url": "https://github.com/pytorch/pytorch/issues/46760", "state": "open", "labels": [ "module: internals", "triaged" ], "created_at": "2020-10-23T09:04:27Z", "updated_at": "2020-10-26T15:16:34Z", "user": "max-niu" }, { "repo": "pytorch/TensorRT", "number": 193, "title": "\u2753 [Question] How does max_batch_size work? ", "body": "## \u2753 Question\r\n\r\nHow one should use `max_batch_size` compilation option? There's not a lot said about it on [documentation](https://nvidia.github.io/TRTorch/py_api/trtorch.html) apart of that it should greater than 0.\r\n\r\n## What you have already tried\r\n\r\nHere's the toy example I'm playing with:\r\n```\r\nimport torch\r\nimport trtorch\r\n\r\ntorch.manual_seed(0)\r\n\r\nsize = (1, 1)\r\ntorch_model = torch.nn.Linear(*size)\r\nscript_model = torch.jit.script(torch_model.eval().cuda())\r\ntrt_model = trtorch.compile(script_model, {\r\n \"input_shapes\": [size],\r\n \"op_precision\": torch.half,\r\n \"max_batch_size\": 2\r\n})\r\n\r\nprint(\"Single value:\")\r\nx1 = torch.rand(size).cuda()\r\nprint(torch_model(x1).tolist(), trt_model(x1.half()).tolist())\r\n\r\nprint(\"Batch:\")\r\nx2 = torch.rand((2, 1)).cuda()\r\nprint(torch_model(x2).tolist(), trt_model(x2.half()).tolist())\r\n```\r\nI'm expecting the output to be the same for both PyTorch and TRTorch models for both `x1` and `x2`. Here's the output I'm getting (notice the error message and missing second value from TRTorch model on the last line):\r\n```\r\n$ python test.py \r\nSingle value:\r\n[[0.53578120470047]] [[0.5357810258865356]]\r\nBatch:\r\nERROR: [__torch__.torch.nn.modules.linear.Linear_trt_engine] - Parameter check failed at: engine.cpp::setBindingDimensions::948, condition: profileMaxDims.d[i] >= dimensions.d[i]\r\n[[0.5354551076889038], [0.5341419577598572]] [[0.5354547500610352]]\r\n```\r\nI expected that setting `max_batch_size=2` would do the trick but apparently it does not.\r\n## Environment\r\n - x86 CPU Architecture, 1660Ti GPU, Linux OS;\r\n - Python 3.8.6;\r\n - CUDA 10.1;\r\n - PyTorch 1.5.1, installed using pip;\r\n - TRTorch 0.0.3, installed using `pip install https://github.com/NVIDIA/TRTorch/releases/download/v0.0.3/trtorch-0.0.3-cp38-cp38-linux_x86_64.whl`\r\n", "url": "https://github.com/pytorch/TensorRT/issues/193", "state": "closed", "labels": [ "question" ], "created_at": "2020-10-21T15:49:13Z", "updated_at": "2020-10-30T08:22:33Z", "user": "ateraz" }, { "repo": "pytorch/vision", "number": 2853, "title": "How to get corresponding feature regions of final detections from feature map of backbone?", "body": "Hi, \r\n\r\nFor every output detection [x1, y1, x2, y2], I would like to extract its corresponding region in the feature map output of the backbone of Faster-RCNN. Similarly, I want to extract the corresponding region in the feature map for the target (groundtruth) bounding boxes.\r\n\r\nCan you point me to how this should be done?\r\n\r\nThank you. 
\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/2853", "state": "open", "labels": [ "question", "topic: object detection" ], "created_at": "2020-10-21T13:24:56Z", "updated_at": "2020-10-27T12:11:54Z", "user": "igygi" }, { "repo": "pytorch/vision", "number": 2850, "title": "Can pretrained resnet-50 extract feature from a higher resolution picture?", "body": "Can pre-trained ResNet-50 extract feature from a higher resolution picture?\r\n\r\nTypically, when we use Resnet to extract features, we need to crop the image into 224 x 224 then pass the image to ResNet.\r\n\r\nI want to know if we want a larger image( e.g. 720 x 720) to be processed, we have to modify the network and re-train the network? Can we directly use the original pre-train network? Is the quality of feature extraction guaranteed?\r\n\r\nThanks! \r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/2850", "state": "closed", "labels": [ "question" ], "created_at": "2020-10-21T04:45:24Z", "updated_at": "2020-10-26T12:58:27Z", "user": "Frank-Dz" }, { "repo": "pytorch/vision", "number": 2832, "title": "About the segmentation.", "body": "In the reference, I replaced the cross entropy in the semantic segmentation module with weighted cross entropy. The result was worse. The weight is calculated based on the training set. If the cross entropy is replaced by focal loss, the effect is also poor. Why is this? Still, the best loss function for semantic segmentation is cross entropy.\r\nI sincerely need your help!\r\n", "url": "https://github.com/pytorch/vision/issues/2832", "state": "closed", "labels": [ "question" ], "created_at": "2020-10-19T01:53:05Z", "updated_at": "2020-10-19T07:26:20Z", "user": "ghost" }, { "repo": "pytorch/text", "number": 1045, "title": "How to get the original sentences from train_iter object?", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\nIs there a easy way to print out the original input sentences instead of tensor objects? 
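(The usual trick is to map the index tensor back through the field's vocab — a rough sketch; `TEXT` stands for whichever Field built the vocab, and the default [seq_len, batch] layout is assumed.)

```python
def batch_to_sentences(batch_text, TEXT, pad_token="<pad>"):
    sentences = []
    for column in batch_text.t():                     # one column per example
        tokens = [TEXT.vocab.itos[i] for i in column.tolist()]
        sentences.append(" ".join(t for t in tokens if t != pad_token))
    return sentences
```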
For example:\r\n\r\n```\r\ndef eval(data_iter, model, args):\r\n model.eval()\r\n corrects, avg_loss = 0, 0\r\n for batch in data_iter:\r\n feature, target = batch.text, batch.label\r\n print (feature.original_sentence)\r\n```\r\n", "url": "https://github.com/pytorch/text/issues/1045", "state": "closed", "labels": [], "created_at": "2020-10-16T18:30:17Z", "updated_at": "2020-10-16T19:25:24Z", "user": "sunyangfu" }, { "repo": "pytorch/pytorch", "number": 46450, "title": "How to use GPU Tensor in diffrent GPUStreams with multi threads", "body": "thread task codes as follow\uff1a\r\nvoid* task_routine3(void** arg)\r\n{\r\n struct timeval time_cur;\r\n auto options = torch::TensorOptions().device(torch::kCUDA, 0);\r\n torch::Device device(torch::kCUDA, 0);\r\n\r\n pthread_t tid = pthread_self();\r\n std::cout << tid << \"Start time:\" << time_cur.tv_sec << \":\" << time_cur.tv_usec << std::endl;\r\n at::cuda::CUDAStream mystream = at::cuda::getStreamFromPool();\r\n at::cuda::setCurrentCUDAStream(mystream);\r\n\r\n {\r\n at::cuda::CUDAStreamGuard guard(mystream);\r\n std::cout << \"Stream ID: \" << mystream.id() << std::endl;\r\n\r\n torch::Tensor* pt_base_feature_cpu = (torch::Tensor*) arg[0];\r\n torch::Tensor* pt_match_feature_cpu = (torch::Tensor*) arg[1];\r\n\r\n for(int i = 0; i < 10; i++)\r\n {\r\n torch::Tensor base_feature = (pt_base_feature_cpu->slice(0, i*50000, (i+1)*50000, 1)).to(device);\r\n torch::Tensor match_feature = (*pt_match_feature_cpu).to(device);\r\n\r\n torch::Tensor tensor_tmp;\r\n torch::Tensor tensor_sum;\r\n std::tuple<torch::Tensor, torch::Tensor> sort_ret;\r\n\r\n tensor_tmp = torch::sub(base_feature, match_feature);\r\n tensor_tmp = torch::pow(tensor_tmp, 2);\r\n tensor_sum = torch::sum(tensor_tmp, 1);\r\n sort_ret = torch::topk(tensor_sum, 1);\r\n }\r\n }\r\n}\r\n\r\nI use thread pools to run the thread-func in multi-threads. I found that running time using single thread is seem to multi threads. T want to using multi threads to save running time.\r\nHow can I do it? Anyone can help me?\r\n\r\n\r\n## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/46450", "state": "closed", "labels": [], "created_at": "2020-10-16T06:44:57Z", "updated_at": "2020-10-16T22:33:05Z", "user": "litttl" }, { "repo": "pytorch/vision", "number": 2804, "title": "loss\u540e\u9762\u62ec\u53f7\u662f\u4ec0\u4e48Epoch: [0] [ 440/3560] eta: 0:22:22 lr: 0.00997769948251307 loss: 0.5050 (0.8583)", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\nDoes anyone know what the value in the parentheses after the loss is? Is it also a loss, and if so, which loss is it?\r\n", "url": "https://github.com/pytorch/vision/issues/2804", "state": "closed", "labels": [ "question" ], "created_at": "2020-10-14T06:19:14Z", "updated_at": "2020-10-14T08:19:45Z", "user": "ghost" }, { "repo": "pytorch/vision", "number": 2788, "title": "Error with torchvision.io.read_image with models", "body": "## \ud83d\udc1b Bug\r\n\r\n![image](https://user-images.githubusercontent.com/47158509/95675524-87d8bd00-0bd5-11eb-8e50-8fe1ddad0aa0.png)\r\n\r\n\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behaviour:\r\n\r\nHere is simple code to reproduce the error.\r\nNotice that I'm not passing any transforms to the image, since `torchvision.io.read_image` will read normalized images only.\r\n\r\n```\r\n# from PIL import Image, ImageDraw\r\nimport torch\r\nfrom torchvision.models.detection import fasterrcnn_resnet50_fpn\r\n# from typing import Dict\r\nfrom torchvision.io.image import read_image\r\n\r\nimg_path = \"../test/assets/grace_hopper_517x606.jpg\"\r\n\r\nif __name__ == \"__main__\":\r\n # img = torch.rand(3, 226, 226) # This Works\r\n img = read_image(img_path) # This does not.\r\n ## img = Image.open(img_path) ## This works\r\n ## img = T.ToTensor()(img) ## With this\r\n\r\n img = torch.unsqueeze(img, 0)\r\n print(img.shape)\r\n model = fasterrcnn_resnet50_fpn()\r\n model = model.eval()\r\n out = model(img)\r\n print(out)\r\n\r\n```\r\n\r\n\r\n## Expected behavior\r\n\r\nWe should get output. This works if the tensor is simply `torch.randn(1, 3, 226, 226)`, and it should be the same with `read_image`.\r\n\r\n## Environment\r\n - PyTorch / torchvision Version (e.g., 1.0 / 0.4.0): 1.6 torchvision: master \r\n - OS (e.g., Linux): Windows\r\n - How you installed PyTorch / torchvision (`conda`, `pip`, source): source\r\n - Build command you used (if compiling from source): `pip install .`\r\n - Python version: 3.6\r\n - CUDA/cuDNN version: None\r\n - GPU models and configuration: None\r\n\r\n## Additional context\r\n\r\nMaybe I have misinterpreted what `read_image` does.\r\n\n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/2788", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2020-10-11T09:51:20Z", "updated_at": "2023-12-16T16:40:18Z", "user": "oke-aditya" }, { "repo": "pytorch/pytorch", "number": 46137, "title": "How to build on Arch Linux", "body": "# How to build on Arch Linux\r\n\r\nBuild from source doco:\r\nhttps://github.com/pytorch/pytorch#from-source\r\n\r\n## Cheat sheet:\r\nCreate new environment:\r\n```\r\nconda update -n base conda\r\nconda create --name pytorch-build\r\nactivate pytorch-build\r\n```\r\n\r\nInstall dependencies listed here:\r\nhttps://github.com/pytorch/pytorch#install-dependencies\r\n\r\n```\r\ngit submodule sync --recursive\r\ngit submodule update --init --recursive\r\nmake -j4\r\n```\r\n\r\nBuild doco:\r\n\r\n cd docs\r\n pip install -r requirements.txt\r\n # If necessary: pip install --ignore-installed certifi\r\n make html\r\n\r\n### Only if necessary\r\n\r\nI was getting errors importing packages that I had explicitly installed:\r\n\r\n conda update --all\r\n\r\nThe above submodule update will likely fix the below issues that needed to be \"solved\" otherwise:\r\n\r\nIf `glog` is required:\r\n\r\n sudo pacman -S --asdeps
google-glog\r\n\r\nHave make find pthread:\r\n\r\n CMAKE_THREAD_LIBS_INIT=\"-pthread\" make\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/46137", "state": "closed", "labels": [], "created_at": "2020-10-10T07:33:32Z", "updated_at": "2020-10-12T04:37:49Z", "user": "HaleTom" }, { "repo": "pytorch/pytorch", "number": 46081, "title": "where is the Source Code of torch.mode operator?", "body": "Hi, Developers,\r\n\r\nI use PyTorch 1.5 (build from source code) and want to check the source code of the implementation of **torch.mode**.\r\nHowever, I cannot find the **THFloatTensor_mode(values_, indices_, self_, dim, keepdim)**, where is it?\r\n\r\nReally want to get your reply.\r\n", "url": "https://github.com/pytorch/pytorch/issues/46081", "state": "closed", "labels": [], "created_at": "2020-10-09T06:43:35Z", "updated_at": "2020-10-10T08:42:08Z", "user": "ddummkopfer" }, { "repo": "pytorch/serve", "number": 712, "title": "how to register model present in local file system", "body": "I have `my-model`, present in `/path/to/models`; the path is local file system path.\r\n\r\n**command to start `torchserve`**: `docker run -p 8080:8080 -p 8081:8081 --name my-serve pytorch/torchserve:0.2.0-cpu`\r\n\r\nThen when I try to register `my-model` -> `curl -X POST \"http://localhost:8081/models?url=/path/to/models/my-model.mar\"`, I get:\r\n\r\n\t{\r\n\t \"code\": 404,\r\n\t \"type\": \"ModelNotFoundException\",\r\n\t \"message\": \"Model not found in model store: /path/to/models/my-model.mar\"\r\n\t}\r\n\r\nThe _register api call_ link is broken on the [docs](https://pytorch.org/serve/server.html#arguments) and I couldn't find anywhere for local file system.", "url": "https://github.com/pytorch/serve/issues/712", "state": "closed", "labels": [ "triaged_wait" ], "created_at": "2020-10-06T10:29:33Z", "updated_at": "2020-10-06T14:01:18Z", "user": "paniabhisek" }, { "repo": "pytorch/pytorch", "number": 45856, "title": "What option(USE_NNAPI \"Use NNAPI\") is used for?", "body": "Hello all,\r\n\r\nthere is an `option(USE_NNAPI \"Use NNAPI\" OFF)` within [CMakeLists.txt#L179](https://github.com/pytorch/pytorch/blob/cf48872d28f945d47793f63e19c54dd15bf580f7/CMakeLists.txt#L179) \r\nI'd like to know if this option is on, what is in this case enabled?\r\nI do have a device with NN-API driver - is this help to use this device NN-API backend?\r\n\r\nThank you.", "url": "https://github.com/pytorch/pytorch/issues/45856", "state": "closed", "labels": [], "created_at": "2020-10-05T18:08:27Z", "updated_at": "2020-10-05T21:19:31Z", "user": "peter197321" }, { "repo": "pytorch/elastic", "number": 130, "title": "How to programmatically determine if a training job has finished using `kubectl`? ", "body": "## \u2753 Questions and Help\r\nHow to programmatically determine if a training job has finished using `kubectl`? \r\nThe field `status.replicaStatuses.Worker.succeeded` seems to indicate the number of succeeded pods.\r\nHow does one determine if the whole job has succeeded? \r\nThis is useful when the training job is part of a workflow (e.g. orchestrated by argo or airflow). \r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nBefore submitting, please ensure you have gone through our documentation. 
Here\r\nare some links that may be helpful:\r\n\r\n* [What is torchelastic?](../../README.md)\r\n* [Quickstart on AWS](../../aws/README.md)\r\n* [Usage](../../USAGE.md)\r\n* [Examples](../../examples/README.md)\r\n* API documentation\r\n * [Overview](../../USAGE.md)\r\n * [Rendezvous documentation](../../torchelastic/rendezvous/README.md)\r\n * [Checkpointing documentation](../../torchelastic/checkpoint/README.md)\r\n* [Configuring](../../USAGE.md#configuring)\r\n\r\n \r\n### Question\r\n<!-- your question here -->", "url": "https://github.com/pytorch/elastic/issues/130", "state": "closed", "labels": [], "created_at": "2020-10-03T06:12:30Z", "updated_at": "2020-10-28T08:18:54Z", "user": "darthsuogles" }, { "repo": "pytorch/pytorch", "number": 45797, "title": "How to set a correct random seed?", "body": "\r\nHi, I am using pytorch==1.3/1.1 with 4/2 GPUs to training network.\r\nI use the following code to set random seed at the beginning of program:\r\n`\r\ndef seed_torch(seed=1029):\r\n random.seed(seed)\r\n os.environ['PYTHONHASHSEED'] = str(seed)\r\n np.random.seed(seed)\r\n torch.manual_seed(seed)\r\n torch.cuda.manual_seed(seed)\r\n torch.cuda.manual_seed_all(seed) # if you are using multi-GPU.\r\n torch.backends.cudnn.benchmark = False\r\n torch.backends.cudnn.deterministic = True\r\n\r\n\r\nseed_torch()\r\n`\r\nThe only random function I called during training/testing time is torch.randint.\r\nHowever, I found that though I have set the random seed, the testing result is still different every time.\r\nIf I replace torch.randint with torch.zeros, I can get same accuracy even without setting the random seed.\r\n\r\nI do not know why?\r\nCan any one help me about this?", "url": "https://github.com/pytorch/pytorch/issues/45797", "state": "closed", "labels": [], "created_at": "2020-10-03T02:49:13Z", "updated_at": "2020-10-05T20:47:56Z", "user": "densechen" }, { "repo": "pytorch/serve", "number": 711, "title": "[Question] How to debug custom handlers?", "body": "Hi! I am very excited about torch serve, so thanks to all contributors to this awesome tool! \r\nI have already run my custom model with custom postprocessing and here is a question that I am struggling to find an answer on. Any help would be very appreciated!\r\n\r\nQuestion:\r\nHow to debug my custom handlers? In other words, how can I see what is happening with the data on each step (i.e initialize, preprocess, inference, postprocess, and my custom one) in my IDE while sending requests to running torchserve server? I was able to fix simple issues using the `ts_log.log` file and it was helpful. BUT it becomes not very comfortable for me once I want to do something more complicated.\r\n\r\nThanks for any help! \r\n", "url": "https://github.com/pytorch/serve/issues/711", "state": "open", "labels": [ "triaged_wait" ], "created_at": "2020-10-02T18:38:28Z", "updated_at": "2023-04-14T00:06:23Z", "user": "veronikayurchuk" }, { "repo": "pytorch/vision", "number": 2740, "title": "ValueError: All bounding boxes should have positive height and width. Found invaid box [500.728515625, 533.3333129882812, 231.10546875, 255.2083282470703] for target at index 0.", "body": "i am training detecto for custom object detection. anyone who can help me as soon as possible. 
i will be very grateful to you.\r\nhere is the code.\r\n from detecto import core, utils, visualize\r\n dataset = core.Dataset('content/sample_data/newdataset/car/images/')\r\n model = core.Model(['car'])\r\n model.fit(dataset)\r\n\r\nhere is the output:\r\n\r\n\r\nValueError Traceback (most recent call last)\r\n<ipython-input-8-02dc210525d1> in <module>()\r\n 4 model = core.Model(['car'])\r\n 5 \r\n----> 6 model.fit(dataset)\r\n\r\n2 frames\r\n/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py in forward(self, images, targets)\r\n 91 raise ValueError(\"All bounding boxes should have positive height and width.\"\r\n 92 \" Found invalid box {} for target at index {}.\"\r\n---> 93 .format(degen_bb, target_idx))\r\n 94 \r\n 95 features = self.backbone(images.tensors)\r\n\r\nValueError: All bounding boxes should have positive height and width. Found invaid box [500.728515625, 533.3333129882812, 231.10546875, 255.2083282470703] for target at index 0.\r\n", "url": "https://github.com/pytorch/vision/issues/2740", "state": "closed", "labels": [ "question", "topic: object detection" ], "created_at": "2020-10-02T06:11:29Z", "updated_at": "2024-05-13T09:19:30Z", "user": "kashf99" }, { "repo": "pytorch/serve", "number": 706, "title": "How to serve model trained over mmdetection framework?", "body": "I have trained my model using the MMdetection framework. After training the model, I have a checkpoint file in the .pth format and config file which helps in making inference/prediction. To draw an inference or making prediction steps take the following lines to generate predictions- \r\n\r\n from mmdet.apis import init_detector, inference_detector, show_result,show_result_pyplot\r\n import mmcv\r\n config_file = 'path to configuration file path'\r\n checkpoint_file = checkpoint path\r\n img= 'image_path'\r\n model = init_detector(config_file, checkpoint_file, device='cuda:0')\r\n result = inference_detector(model, img)\r\n\r\nCan you help me making it possible to serve mmdetection model through torch serve?", "url": "https://github.com/pytorch/serve/issues/706", "state": "closed", "labels": [ "bug", "triaged_wait" ], "created_at": "2020-09-30T10:36:39Z", "updated_at": "2022-07-30T13:40:08Z", "user": "Atul997" }, { "repo": "pytorch/tutorials", "number": 1171, "title": "Efficiency of dcgan tutorial", "body": "When I run the [dcgan_faces_tutorial.py](https://github.com/pytorch/tutorials/blob/master/beginner_source/dcgan_faces_tutorial.py) script, I have noticed that two python processes are created on the CPU according to the top command.\r\n\r\n```\r\n 27062 mahmood 20 0 9404132 1.5g 90116 D 31.9 1.6 5:55.09 python3\r\n 27004 mahmood 20 0 12.0g 3.3g 929032 S 7.3 3.5 1:56.48 python3\r\n```\r\n\r\nAlso, the GPU utilization according to nvidia-smi is pretty low\r\n\r\n```\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n|===============================+======================+======================|\r\n| 0 GeForce RTX 208... 
Off | 00000000:41:00.0 On | N/A |\r\n| 45% 54C P2 94W / 260W | 1956MiB / 10984MiB | 9% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n\r\n+-----------------------------------------------------------------------------+\r\n| Processes: GPU Memory |\r\n| GPU PID Type Process name Usage |\r\n|=============================================================================|\r\n| 0 1196 G /usr/lib/xorg/Xorg 16MiB |\r\n| 0 1226 G /usr/bin/gnome-shell 49MiB |\r\n| 0 14727 G /usr/lib/xorg/Xorg 62MiB |\r\n| 0 14898 G /usr/bin/gnome-shell 94MiB |\r\n| 0 27004 C python3 1625MiB |\r\n+-----------------------------------------------------------------------------+\r\n```\r\nIs that normal? I don't think so.\n\ncc @datumbox @nairbv @fmassa @NicolasHug @YosuaMichael", "url": "https://github.com/pytorch/tutorials/issues/1171", "state": "closed", "labels": [ "question", "module: vision", "docathon-h1-2023", "medium" ], "created_at": "2020-09-29T12:21:24Z", "updated_at": "2023-06-01T07:18:28Z", "user": "mahmoodn" }, { "repo": "pytorch/examples", "number": 830, "title": "imagenet how to download the classicication dataset?", "body": "", "url": "https://github.com/pytorch/examples/issues/830", "state": "closed", "labels": [], "created_at": "2020-09-29T07:05:24Z", "updated_at": "2022-03-09T21:31:26Z", "user": "henbucuoshanghai" }, { "repo": "pytorch/text", "number": 1013, "title": "How to pass new pre-trained embeddings while sharing the same vocabulary across torchtext.Field?", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\nI was trying to concatenate two embedding layers in my CNN from two different pre-trained embeddings, before applying my convolutions.\r\n\r\nHere's the basic workflow:\r\n\r\n # Load pre-trained embeddings (dim=100)\r\n from torchtext.vocab import Vectors\r\n vectors_amazon = Vectors(name='gensim_embeddings_amazon.txt', cache='{}/{}'.format(PROJECT_FOLDER, OUTPUT_FOLDER))\r\n vectors_imdb = Vectors(name='gensim_embeddings_imdb.txt', cache='{}/{}'.format(PROJECT_FOLDER, OUTPUT_FOLDER))\r\n\r\nThen I create my `text_field` and `label_field` as follow:\r\n\r\n # Custom_tokenizer is my tokenizer, MAX_SIZE=20000\r\n text_field = Field(sequential=True, use_vocab=True, tokenize=custom_tokenizer)\r\n label_field = LabelField()\r\n\r\n text_field.build_vocab(train_data,\r\n max_size=MAX_SIZE,\r\n vectors=vectors_amazon)\r\n label_field.build_vocab(train_data)\r\n\r\nand after creating my train/valid/test iterator with `BucketIterator`, I create my CNN `model` for sentiment analysis.\r\n\r\nSo far so good, the problem is that I'd like to create another embedding considering also the `vectors_imdb` and I'm stuck here, since for the embedding layer I'd do the following:\r\n\r\n pretrained_embeddings = text_field.vocab.vectors\r\n model.embedding.weight.data.copy_(pretrained_embeddings)\r\n\r\nbut I have no idea how I can pass to a `model.second_embedding.weight.data` the values in `vectors_imdb` while keeping the correct alignment between embeddings and sharing the same vocab (coming from my training data)\r\n\r\nI tried something like \r\n\r\n second_text_field = text_field\r\n second_text_field.vocab.set_vectors(text_field.vocab.stoi, vectors_imdb, 100)\r\n\r\nbut of course it doesn't work since changing vectors in `second_text_field` also modify `text_field` ones.\r\n\r\nHow can I modify vectors while keeping the correct mapping word:vector representation?\r\nI'm afraid the only way is to loop through `vectors_imdb` keeping only words 
that are in my vocab, sorting them so that the two embeddings match and pass the result to the second_embedding layer, right?\r\n", "url": "https://github.com/pytorch/text/issues/1013", "state": "open", "labels": [ "legacy" ], "created_at": "2020-09-28T17:27:14Z", "updated_at": "2020-10-05T13:38:07Z", "user": "jacopo-repossi" }, { "repo": "pytorch/vision", "number": 2717, "title": "What's the pretrained_settings for r2plus1d_18?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\nHi,\r\nI want to know the pretrained_settings for r2plus1d_18, which has not been implemented, including input_space,input_size,input_range,mean,std. In pytorch.pretrainedmodels, they implement resnet152, vgg19_bn, inceptionv4 with those pretrained_settings, which are necessary for TransformImage to transform the image. If include pretrained_settings, it will be more user-friendly for r3d_18,mc3_18,r2plus1d_18 models to extract vidoe frame features.Many thanks!\r\n![image](https://user-images.githubusercontent.com/33551398/94428918-1c015800-01c4-11eb-910d-bb4e39d9fd13.png)\r\n\n\ncc @bjuncek", "url": "https://github.com/pytorch/vision/issues/2717", "state": "closed", "labels": [ "question", "module: models", "module: video" ], "created_at": "2020-09-28T12:00:52Z", "updated_at": "2020-09-29T12:20:12Z", "user": "XinyuLyu" }, { "repo": "pytorch/vision", "number": 2711, "title": "Test the model with an image", "body": "Hello\r\nI compressed Mask R-CNN and finished the training, but the checkpoint (.pth) is different from the regular one, what can I do if I want to load the checkpoint to Mask R-CNN to test its speed with a new image? Thank you! 
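Editorial note on the question above (loading a compressed Mask R-CNN checkpoint and timing it on a new image): a minimal sketch of one way to do this, assuming the saved `.pth` still holds a plain `state_dict` whose keys match torchvision's `maskrcnn_resnet50_fpn`; if the compression changed layer shapes, that exact architecture has to be rebuilt before loading. The file names below are placeholders.

```python
import time
import torch
import torchvision

# Assumption: the checkpoint is a state_dict compatible with the stock architecture.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False)
state_dict = torch.load("compressed_maskrcnn.pth", map_location="cpu")  # placeholder path
model.load_state_dict(state_dict, strict=False)  # strict=False tolerates pruned/renamed keys
model.eval()

# Time one forward pass on a new image (detection models take a list of 3xHxW floats in [0, 1]).
img = torchvision.io.read_image("test.jpg").float() / 255.0  # placeholder image path
with torch.no_grad():
    start = time.time()
    prediction = model([img])[0]
    elapsed = time.time() - start  # on GPU, call torch.cuda.synchronize() before reading the clock
print(f"inference: {elapsed:.3f}s, {len(prediction['boxes'])} detections")
```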
", "url": "https://github.com/pytorch/vision/issues/2711", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-09-27T16:26:52Z", "updated_at": "2020-09-28T10:00:04Z", "user": "jiaerfei" }, { "repo": "pytorch/pytorch", "number": 45387, "title": "How to build from a release tar package ?", "body": "Hi, I download newest release tar package and find it can not build pytorch, Can anyone help meo build this release version from source?\r\n\r\n```\r\nwget https://github.com/pytorch/pytorch/archive/v1.6.0.tar.gz\r\ntar xf v1.6.0.tar.gz\r\ncd pytorch-1.6.0\r\npython3 setup.py install\r\n```\r\n\r\noutput:\r\nfatal: not a git repository (or any parent up to mount point /)\r\nStopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).\r\nBuilding wheel torch-1.6.0a0\r\n-- Building version 1.6.0a0\r\nCould not find /home/xzpeng/pytorch/pytorch-1.6.0/third_party/gloo/CMakeLists.txt\r\nDid you run 'git submodule update --init --recursive'?\r\n\r\nDoes the release tar package support building from source?\r\n\n\ncc @ezyang @seemethere @malfet @walterddr", "url": "https://github.com/pytorch/pytorch/issues/45387", "state": "closed", "labels": [ "module: binaries", "triaged" ], "created_at": "2020-09-27T03:09:03Z", "updated_at": "2020-09-29T19:11:21Z", "user": "haoren3696" }, { "repo": "pytorch/pytorch", "number": 45386, "title": "How to convert pytorch model to Nvidia faster transformer?", "body": "Hi, i want to detect Bert model structure in pytorch model and convert the structure by Nvidia faster transformer op automatically.\r\nIs there any existing project? If not, i want to develop one, so should i develop on origin pytorch or TorchScript? Should i develop a pass to detect Bert in TorchScript IR and replace it by faster transformer op?\r\nThankyou very much!\r\n", "url": "https://github.com/pytorch/pytorch/issues/45386", "state": "closed", "labels": [], "created_at": "2020-09-27T02:08:14Z", "updated_at": "2020-09-28T15:56:34Z", "user": "wangxiang2713" }, { "repo": "pytorch/examples", "number": 826, "title": "language model bug?", "body": "https://github.com/pytorch/examples/blob/master/word_language_model/data.py#L46\r\n\r\non the last iteration of the loop ids holds onto the tensor before catting idss but no other iteration of ids is.\r\nI get significantly better results after adding:\r\nids = []\r\nBefore catting idss into ids.", "url": "https://github.com/pytorch/examples/issues/826", "state": "closed", "labels": [], "created_at": "2020-09-25T18:14:49Z", "updated_at": "2020-09-25T18:44:08Z", "comments": 0, "user": "wesboyt" }, { "repo": "pytorch/tutorials", "number": 1166, "title": "Char RNN classification with batch size", "body": "I'm replicating [this example](https://github.com/pytorch/tutorials/blob/master/intermediate_source/char_rnn_classification_tutorial.py) for a **classification** with a **char-rnn**.\r\n```python\r\nfor iter in range(1, n_iters + 1):\r\n category, line, category_tensor, line_tensor = randomTrainingExample()\r\n output, loss = train(category_tensor, line_tensor)\r\n current_loss += loss\r\n```\r\nI see that every epoch only 1 example is taken and random. I would like that each epoch **all the dataset** is taken with a specific **batch size** of examples. 
I can adjust the code to do this myself but I was wondering if some flags already exist.\r\n\r\nThank you\r\n", "url": "https://github.com/pytorch/tutorials/issues/1166", "state": "closed", "labels": [ "question", "Text" ], "created_at": "2020-09-25T17:57:29Z", "updated_at": "2024-12-11T17:57:11Z", "user": "paulthemagno" }, { "repo": "pytorch/pytorch", "number": 45331, "title": "How to print C++ log like GRAPH_DEBUG?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/45331", "state": "closed", "labels": [], "created_at": "2020-09-25T06:37:24Z", "updated_at": "2020-09-25T14:36:45Z", "user": "liym27" }, { "repo": "pytorch/pytorch", "number": 45328, "title": "How long would it takes for pytorch cuda version to support RTX30 series?", "body": "## \ud83d\ude80 Feature\r\nAs the title. When would pytorch cuda version support RTX30 series?", "url": "https://github.com/pytorch/pytorch/issues/45328", "state": "closed", "labels": [], "created_at": "2020-09-25T05:55:51Z", "updated_at": "2020-10-04T10:58:05Z", "user": "GregXu247" }, { "repo": "pytorch/pytorch", "number": 45266, "title": "how to register hook on +, - ,*, / or how to get the input of them?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\nhow to register hook on +, - ,*, / or how to get the input of them ?", "url": "https://github.com/pytorch/pytorch/issues/45266", "state": "closed", "labels": [], "created_at": "2020-09-24T09:20:57Z", "updated_at": "2020-09-25T15:08:27Z", "user": "Stick-To" }, { "repo": "pytorch/tutorials", "number": 1163, "title": "How do I go back to the old view of the site", "body": "Hi, I like the old version of the Pytorch website on the tutorial that I can view everything at once. But now they changed that I can only view like 7 to 5 of it on one page. How do I go back to the old one?", "url": "https://github.com/pytorch/tutorials/issues/1163", "state": "open", "labels": [], "created_at": "2020-09-21T09:32:53Z", "updated_at": "2020-09-21T09:32:53Z", "user": "AliceSum" }, { "repo": "pytorch/pytorch", "number": 45059, "title": "How to view C++ error report stack of pytorch?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/45059", "state": "closed", "labels": [ "triaged" ], "created_at": "2020-09-21T09:09:54Z", "updated_at": "2020-09-22T16:28:17Z", "user": "liym27" }, { "repo": "pytorch/xla", "number": 2502, "title": "How to prevent cyclic computation graph in torch xla multiprocessing?", "body": "## \u2753 Questions and Help\r\nHi,\r\n\r\nI am trying to build a dynamics simulator software on the TPU. 
On the high level, it basically needs to do this (I have pre-trained the model separately elsewhere):\r\n\r\n```\r\nfor i in range(num_of_step):\r\n forces = model(positions)\r\n new_positions = update_functions(forces, positions)\r\n positions = new_positions\r\n```\r\n\r\nWhen I do this workflow on a single TPU, it is quite fast. While I am aware that there is generally an issue with slow `tensor.item()` call, the following sequence will work quite fast for me:\r\n\r\n```\r\nfor i in range(num_of_step):\r\n positions = torch.Tensor(structure.positions).to(xla_device, non_blocking=True)\r\n forces = model(positions)\r\n cpu_forces = forces.to(cpu_device).numpy()\r\n structure.positions = update_function(cpu_forces, structure.positions)\r\n```\r\n\r\nHowever, when I do xla multiprocessing on the TPU, somehow the `.to(cpu_device)` call will be extremely slow. The code snippet looks like this:\r\n\r\n```\r\ndef _map_fn(xla_index):\r\n xla_device = xm.xla_device()\r\n for i in range(num_of_step):\r\n positions = torch.Tensor(structure.positions).to(xla_device, non_blocking=True)\r\n model_indices = indexing_function(positions, xla_index)\r\n forces = model(positions, model_indices).detach()\r\n forces = xm.all_reduce(xm.REDUCE_SUM, forces).detach().clone()\r\n cpu_forces = forces.to(cpu_device).numpy()\r\n structure.positions = update_function(cpu_forces, structure.positions)\r\n xm.rendezvous('sync_step')\r\nxmp.spawn(_map_fn, nprocs=8, start_method='fork')\r\n```\r\n\r\nIf I were to modify the software to eliminate `.to(cpu_device)` call, I can get this to run fast:\r\n```\r\ndef _map_fn(xla_index):\r\n xla_device = xm.xla_device()\r\n structure.positions = torch.Tensor(structure.positions).to(xla_device, non_blocking=True)\r\n\r\n for i in range(num_of_step):\r\n positions = structure.positions.detach().clone()\r\n model_indices = indexing_function(positions, xla_index)\r\n forces = model(positions, model_indices).detach()\r\n forces = xm.all_reduce(xm.REDUCE_SUM, forces).detach().clone()\r\n structure.positions = update_function(forces, structure.positions).detach().clone()\r\n xm.rendezvous('sync_step')\r\nxmp.spawn(_map_fn, nprocs=8, start_method='fork')\r\n```\r\n\r\nHowever, the compiler seems to have decided to unfold this code snippet into one giant computation graph to execute at once. So I get a weird behavior... on a standard `structure` size, I can run 16 steps but the code won't run at all if I try to run 17 steps (no error thrown either). If I were to increase the input `structure` size by 2x, the compiler will only allow the code snippet to run & finish 8 steps (but won't start at all if I try to run 9 steps instead). So my suspicion is that this code snippet runs when the compiler can place the entire chain in a single graph in the TPU, despite my best effort to separate the computation graph from the `structure.positions` variable.\r\n\r\n\r\nDo you have any suggestion on what I should look for? 
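Editorial note (a suggestion, not something confirmed in this thread): nothing in the multiprocessing loop above forces execution between steps, so the lazy tensors can keep fusing into one large graph. `xm.mark_step()` is the usual way to cut the graph at a step boundary; a minimal sketch, reusing the same loop structure and helper names from the snippet above.

```python
import torch
import torch_xla.core.xla_model as xm

def _map_fn(xla_index):
    xla_device = xm.xla_device()
    positions = torch.Tensor(structure.positions).to(xla_device)

    for i in range(num_of_step):
        model_indices = indexing_function(positions, xla_index)
        forces = model(positions, model_indices).detach()
        forces = xm.all_reduce(xm.REDUCE_SUM, forces)
        positions = update_function(forces, positions).detach()
        # Force compilation/execution of this step's graph now, instead of letting
        # all num_of_step iterations fuse into a single giant graph.
        xm.mark_step()
        xm.rendezvous('sync_step')
```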
Is `.detach().clone()` the right method to separate the computation graph in pytorch XLA?", "url": "https://github.com/pytorch/xla/issues/2502", "state": "closed", "labels": [ "stale" ], "created_at": "2020-09-20T00:10:10Z", "updated_at": "2020-11-02T01:19:35Z", "user": "jpmailoa" }, { "repo": "pytorch/tutorials", "number": 1162, "title": "Problem of QAT Demo?", "body": "When i try PyTorch QAT Demo [GitHub File](https://github.com/pytorch/tutorials/blob/master/advanced_source/static_quantization_tutorial.py) [Webpage](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#quantization-aware-training), i want to know how to find the scale and zero_point of the activations?", "url": "https://github.com/pytorch/tutorials/issues/1162", "state": "closed", "labels": [], "created_at": "2020-09-19T13:53:34Z", "updated_at": "2020-09-20T13:05:45Z", "comments": 0, "user": "wZuck" }, { "repo": "pytorch/pytorch", "number": 44924, "title": "How to get the information like the output of Model.get_config() in keras?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n![1](https://user-images.githubusercontent.com/34546552/93542850-a825ab00-f98c-11ea-9eb2-8a1b7d06f588.png)\r\n\r\nI want to get some information like this.\r\nThe input layer of one layer and the output layer of one layer.", "url": "https://github.com/pytorch/pytorch/issues/44924", "state": "closed", "labels": [], "created_at": "2020-09-18T00:54:18Z", "updated_at": "2020-09-21T16:01:05Z", "user": "Stick-To" }, { "repo": "pytorch/java-demo", "number": 15, "title": "any instructions on how to use the downloaded libtorch in Maven project?", "body": "I am not familiar with Maven and I want to get some help on how to use the same libtorch in a Maven project, could anyone give me some hint?", "url": "https://github.com/pytorch/java-demo/issues/15", "state": "closed", "labels": [], "created_at": "2020-09-17T07:02:53Z", "updated_at": "2021-07-19T19:11:47Z", "user": "xiaonanchong" }, { "repo": "pytorch/examples", "number": 822, "title": "Gradient vanishing of G in the DCGAN example", "body": "Hello,\r\n\r\nI have trained the DCGAN with the default hyper-parameter settings on the downloaded \"img_align_celeba\" dataset (recommended in the tutorial). However, the results reveal strong gradient vanishing of G. While Loss_D keeps decreasing towards 0, Loss_G grows high (towards 100). \r\n\r\nIt seems that D is trained so well, preventing a good training on G. I didn't do any modifications on the code. 
Do you know what happened?\r\n\r\nThanks!\r\n", "url": "https://github.com/pytorch/examples/issues/822", "state": "open", "labels": [ "help wanted" ], "created_at": "2020-09-11T14:02:58Z", "updated_at": "2022-03-09T21:32:12Z", "comments": 0, "user": "zhan4817" }, { "repo": "pytorch/serve", "number": 681, "title": "how to deploy sentence transformer model which has bert-base-nli-mean-tokens weights using torchserve", "body": "i tried so many times to deploy my model using torchserve , it did not work \r\n\r\nsometimes it is coming like this\r\n\r\n2020-09-11 10:41:02,757 [INFO ] W-9004-encoder_model_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File \"/tmp/models/a7e46f396a6348deba9e844a002f7b36/handler.py\", line 61, in handle\r\n2020-09-11 10:41:02,757 [INFO ] W-9004-encoder_model_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context)\r\n2020-09-11 10:41:02,757 [INFO ] W-9004-encoder_model_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File \"/tmp/models/a7e46f396a6348deba9e844a002f7b36/handler.py\", line 23, in initialize\r\n2020-09-11 10:41:02,757 [INFO ] W-9004-encoder_model_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.model = AutoModelForSequenceClassification.from_pretrained(model_dir)\r\n2020-09-11 10:41:02,757 [WARN ] W-9004-encoder_model_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: encoder_model, error: Worker died.\r\n2020-09-11 10:41:02,757 [INFO ] W-9004-encoder_model_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File \"/home/ubuntu/anaconda3/envs/torch/lib/python3.8/site-packages/transformers/modeling_auto.py\", line 1359, in from_pretrained\r\n2020-09-11 10:41:02,758 [INFO ] W-9004-encoder_model_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)\r\n2020-09-11 10:41:02,757 [DEBUG] W-9004-encoder_model_1.0 org.pytorch.serve.wlm.WorkerThread - W-9004-encoder_model_1.0 State change WORKER_STARTED -> WORKER_STOPPED\r\n2020-09-11 10:41:02,758 [INFO ] W-9004-encoder_model_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File \"/home/ubuntu/anaconda3/envs/torch/lib/python3.8/site-packages/transformers/configuration_auto.py\", line 214, in from_pretrained\r\n2020-09-11 10:41:02,758 [INFO ] W-9004-encoder_model_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - raise ValueError(\r\n2020-09-11 10:41:02,758 [INFO ] W-9004-encoder_model_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ValueError: Unrecognized model in /tmp/models/a7e46f396a6348deba9e844a002f7b36. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder\r\n\r\n\r\ndoes torchserve needs a config.json , give me a suggestion on json file format", "url": "https://github.com/pytorch/serve/issues/681", "state": "closed", "labels": [ "triaged_wait" ], "created_at": "2020-09-11T05:16:34Z", "updated_at": "2021-11-06T07:00:17Z", "user": "mgeethabhargava" }, { "repo": "pytorch/pytorch", "number": 44460, "title": "How to package pytorch with the file build from source.", "body": "## \u2753 Questions and Help\r\n\r\nI've noticed in https://github.com/pytorch/pytorch/issues/31285 that pytorch only compile binaries for NV cards with CC 3.7 and up. Now i've build pytorch from source on my machine. 
Is there any way to package it into a new pip image please\uff1f Thanks a lot.\r\n\r\n\n\ncc @malfet @seemethere @walterddr", "url": "https://github.com/pytorch/pytorch/issues/44460", "state": "closed", "labels": [ "module: build", "triaged" ], "created_at": "2020-09-10T08:18:15Z", "updated_at": "2022-10-20T22:55:01Z", "user": "Abbyyan" }, { "repo": "pytorch/pytorch", "number": 44448, "title": "How to print optimized IR?", "body": "I read Pytorch 1.6 code: torch/csrc/jit/runtime/graph_executor.cpp, and see:\r\n```\r\nInline(*opt_graph);\r\nGRAPH_DEBUG(\"After Inline, before LowerGradOf\\n\", *opt_graph);\r\nLowerGradOf(*opt_graph);\r\nGRAPH_DEBUG(\r\n \"After LowerGradOf, before specializeAutogradZero\\n\", *opt_graph);\r\n```\r\n\r\nI want to print optimized IR, so i run: `export PYTORCH_JIT_LOG_LEVEL=\">graph_executor\"`, my pytorch code is:\r\n```\r\nimport torch\r\n\r\ndef f(x, y):\r\n a = x + y\r\n b = x - y\r\n c = a * b\r\n d = c ** 3\r\n e = d.sum()\r\n return e\r\n\r\nscript_f = torch.jit.script(f)\r\nx, h = torch.rand(3, 4), torch.rand(3, 4)\r\nprint(script_f(x, h))\r\n```\r\n\r\nHowever, i got nothing. If i use pytorch 1.4, i can get:\r\n```\r\n[DUMP graph_executor.cpp:550] Optimizing the following function:\r\n[DUMP graph_executor.cpp:550] def source_dump(x: Tensor,\r\n[DUMP graph_executor.cpp:550] y: Tensor) -> Tensor:\r\n[DUMP graph_executor.cpp:550] a = torch.add(x, y, alpha=1)\r\n[DUMP graph_executor.cpp:550] b = torch.sub(x, y, alpha=1)\r\n[DUMP graph_executor.cpp:550] c = torch.mul(a, b)\r\n[DUMP graph_executor.cpp:550] d = torch.pow(c, 3)\r\n[DUMP graph_executor.cpp:550] return torch.sum(d, dtype=None)\r\n```\r\nbut can't get GRAPH_DEBUG info.\r\n\r\nI don't know the reason, i can get a lot of logs if i set `export PYTORCH_JIT_LOG_LEVEL=dead_code_elimination:guard_elimination` when i use pytorch 1.6, but got nothing with `export PYTORCH_JIT_LOG_LEVEL=\">graph_executor\"`\r\n\n\ncc @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/44448", "state": "closed", "labels": [ "oncall: jit" ], "created_at": "2020-09-10T03:02:33Z", "updated_at": "2020-09-16T03:16:38Z", "user": "wangxiang2713" }, { "repo": "pytorch/pytorch", "number": 44377, "title": "How to get optimized IR?", "body": "## How to get optimized IR?\r\nHi, i can get torch_script IR by print(script.grah), and i know the IR will be optimized several times while running.\r\nSo how can i get all of optimized IR while running torchscript code? Thankyou.\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/44377", "state": "closed", "labels": [], "created_at": "2020-09-09T11:55:03Z", "updated_at": "2020-09-09T14:41:25Z", "user": "wangxiang2713" }, { "repo": "pytorch/pytorch", "number": 44353, "title": "DDP training with syncBatchNorm\uff0chow to catch and handle the exception in training processes?", "body": "When training with the DDP and syncBatchNorm, one process runing on one GPU, When I catch the gpu OOM exception, the training is blocked. 
What should we do?\r\nMy code is following, when OOM exception occurs in one process, I just ignore this batch, the training phase continue.\r\n\r\n```\r\nfor i, (inputs, targets) in enumerate(train_loader):\r\n try:\r\n # do forward and backprop\r\n except RuntimeError as e:\r\n if 'out of memory' in str(e):\r\n print('| WARNING: ran out of memory, skipping this batch.')\r\n if hasattr(torch.cuda, 'empty_cache'):\r\n torch.cuda.empty_cache()\r\n optimizer.zero_grad()\r\n else:\r\n raise e\r\n\r\n```\r\n\r\nwhen one process catch the exception, the others get blocked. \n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar @jiayisuse @agolynski", "url": "https://github.com/pytorch/pytorch/issues/44353", "state": "closed", "labels": [ "oncall: distributed", "triaged" ], "created_at": "2020-09-09T01:53:41Z", "updated_at": "2020-09-10T02:44:35Z", "user": "eeewhe" }, { "repo": "pytorch/vision", "number": 2652, "title": "Image to Tensor data", "body": "Library: **pytorch_java_only-1.6.0** \r\n\r\nI want to convert BufferedImage/File to Tensor data, is there some method or library for this?\r\n\r\nPython solution:\r\n\r\n```\r\nimage = Image.open(image_path)\r\nimage = image.convert('RGB') \r\ntransform = Compose([ToTensor()])\r\nimage = transform(image)\r\nimage = image.view(1, 3, 64, 64).cuda()\r\noutput = my_model(image)\r\noutput = output.view(-1, self.quantity)\r\noutput = nn.functional.softmax(output, dim=1)\r\noutput = torch.argmax(output, dim=1)\r\noutput = output.view(-1, self.size)[0]\r\n```\r\n\r\nI need something like that for pytorch_java. I'm sorry if my question is stupid i'm newbee \r\n\r\nP.S: Thanks for reply", "url": "https://github.com/pytorch/vision/issues/2652", "state": "closed", "labels": [ "question" ], "created_at": "2020-09-07T21:19:22Z", "updated_at": "2020-09-09T13:14:49Z", "user": "beeqwe" }, { "repo": "pytorch/pytorch", "number": 44279, "title": "How to use CUDA Dynamic Parallelism in PyTorch CPP extension?", "body": "I found discussions at discuss.pytorch.org . But there is still no solution now.\r\n\r\nHere is the error message:\r\n```\r\nerror: kernel launch from __device__ or __global__ functions requires separate compilation mode\r\n```\r\nand\r\n```\r\nerror: a __device__ function call cannot be configured\r\n```\r\n\r\nThanks.\n\ncc @ngimel", "url": "https://github.com/pytorch/pytorch/issues/44279", "state": "open", "labels": [ "module: cuda", "triaged" ], "created_at": "2020-09-07T10:55:28Z", "updated_at": "2021-09-18T11:30:15Z", "user": "qinjian623" }, { "repo": "pytorch/examples", "number": 821, "title": "Recommended RAM for training ResNeXt-101?", "body": "I am training a ResNeXt-101 model in an end-to-end manner on a version of ImageNet with 13k classes (using the method presented of the ImageNet Shuffle paper). This version contains around 12 M images. My machine has a single NVIDIA GeForce RTX 2080, intel i5 9400 and 16 GB of RAM. I am deploying 4 workers for this task. It seems that I don't get the best GPU utilization, since GPU-Util percentage ranges from 0% to 94%. Furthermore, each worker uses around 2GB of swap memory, which definitely degrades training speed/data fetch. This makes me wonder if my RAM is enough for this task. 
If I upgrade my RAM to 32 GB, am I expected to get a significant performance boost?\r\n\r\nThanks!!", "url": "https://github.com/pytorch/examples/issues/821", "state": "closed", "labels": [], "created_at": "2020-09-07T08:40:05Z", "updated_at": "2022-03-09T21:36:05Z", "comments": 1, "user": "AlexMetsai" }, { "repo": "pytorch/vision", "number": 2647, "title": "rgb2hsv bug in functional_tensor.py.", "body": "https://github.com/pytorch/vision/blob/bb88c4520b835e79d5d3c4423eb7ff7c26fa2043/torchvision/transforms/functional_tensor.py#L429-L442\r\n\r\nAs stated in the comments, when `r=g=b`, the calculation of `h` is expected to be value `6`. However, in the current implementation, only `hr` will be counted in, because `hg` and `hr` are just ignored with the condition `(maxc != r)`. I think this is not expected and could lead to non-zero value of `h` when `r=g=b`.\n\ncc @vfdev-5", "url": "https://github.com/pytorch/vision/issues/2647", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2020-09-07T03:34:24Z", "updated_at": "2020-09-09T10:08:37Z", "user": "yelantf" }, { "repo": "pytorch/pytorch", "number": 44265, "title": "How to run a simple benchmark test on a custom RNN?", "body": "Say I have a custom LSTM cell... how can I use this repository to run a simple benchmark test on that?", "url": "https://github.com/pytorch/pytorch/issues/44265", "state": "closed", "labels": [ "triaged" ], "created_at": "2020-09-06T20:11:40Z", "updated_at": "2020-09-09T16:37:19Z", "user": "slerman12" }, { "repo": "pytorch/benchmark", "number": 73, "title": "Add docs on how to profile benchmark models", "body": "", "url": "https://github.com/pytorch/benchmark/issues/73", "state": "closed", "labels": [], "created_at": "2020-09-02T21:10:29Z", "updated_at": "2023-07-26T18:51:23Z", "user": "wconstab" }, { "repo": "pytorch/TensorRT", "number": 181, "title": "\u2753 [Question] Is the module compiled by TRTorch thread safe?", "body": "Hi\r\nIf the native torchscript module is thread safe when its `forward` function is called from multithread, would the module compiled by TRTorch be thread safe?", "url": "https://github.com/pytorch/TensorRT/issues/181", "state": "closed", "labels": [ "feature request", "question" ], "created_at": "2020-09-02T10:34:46Z", "updated_at": "2021-11-11T01:23:36Z", "user": "uni19" }, { "repo": "pytorch/pytorch", "number": 43946, "title": "How to use torch.nn.SyncBatchNorm and torch.uitls.checkpoint together", "body": "\r\nWhen I use net = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net) to convert model,and use torch.uitls.checkpoint in model ,loss backward, It seems to be stuck in process communication", "url": "https://github.com/pytorch/pytorch/issues/43946", "state": "closed", "labels": [], "created_at": "2020-09-01T09:29:37Z", "updated_at": "2020-09-01T09:47:25Z", "user": "devilztt" }, { "repo": "pytorch/serve", "number": 654, "title": "How to change Temp Directory?", "body": "## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content in https://pytorch.org/serve/ is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->\r\n\r\nMy computer has little space to mount '/tmp'. And I can't find any helps. \r\nAre there any way to change Temp Directory? 
", "url": "https://github.com/pytorch/serve/issues/654", "state": "closed", "labels": [ "triaged_wait" ], "created_at": "2020-08-27T09:06:21Z", "updated_at": "2020-08-27T09:34:02Z", "user": "CSLujunyu" }, { "repo": "pytorch/vision", "number": 2624, "title": "RuntimeError: each element in list of batch should be of equal size", "body": "## \ud83d\udc1b Bug\r\n```python\r\n\"python3.7/site-packages/torch/utils/data/_utils/collate.py\", line 82, in default_collate\r\n raise RuntimeError('each element in list of batch should be of equal size')\r\nRuntimeError: each element in list of batch should be of equal size\r\n```\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```python\r\nmodel = models.resnet50()\r\n\r\ntransform = transforms.Compose([\r\n transforms.Resize((480, 640)),\r\n transforms.ToTensor(),\r\n])\r\n\r\ntrain_dataset = datasets.CocoDetection(\r\n root=args.train_dataset, annFile=args.train_annotation, transform=transform)\r\n\r\ntrain_loader = DataLoader(train_dataset, batch_size=64)\r\n\r\nfor (img, anno) in train_loader:\r\n out = model(img)\r\n```\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\nforward\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)\r\n(or fill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n\r\n - PyTorch Version (e.g., 1.0): 1.6\r\n - OS (e.g., Linux): Ubuntu\r\n - How you installed PyTorch (`conda`, `pip`, source): conda -c pytorch\r\n - Build command you used (if compiling from source):\r\n - Python version: 3.7\r\n - CUDA/cuDNN version: 10.2\r\n - GPU models and configuration: V100\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\nhttps://github.com/pytorch/pytorch/issues/42654\n\ncc @pmeier", "url": "https://github.com/pytorch/vision/issues/2624", "state": "closed", "labels": [ "question", "module: datasets" ], "created_at": "2020-08-27T05:53:50Z", "updated_at": "2021-03-31T06:56:03Z", "user": "ZhiyuanChen" }, { "repo": "pytorch/pytorch", "number": 43625, "title": "How to see the weight value of the quantified model?", "body": "\r\nHow to see the weight value of the quantified model?\r\n\r\nError:\r\n```\r\ntorch.nn.modules.module.ModuleAttributeError: 'LinearPackedParams' object has no attribute '_parameters'\r\n```\r\n\r\nCode:\r\n```\r\nimport torch\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\n\r\nMODLE_LOCATION = \"./models/mfi_0.97400.pth\"\r\nMODLE_LOCATION_QUAN = \"./models/quantized_1_model.pth\"\r\nTENSOR_NAME = \"fc1.weight\"\r\n\r\ndef plot_distribution(model_name, tensor_set, resolution):\r\n model = torch.load(model_name)\r\n print(model)\r\n params = model.state_dict()\r\n tensor_value = params[TENSOR_NAME]\r\n tensor_value_np = tensor_value.numpy()\r\n tensor_value_np = tensor_value_np.flatten()\r\n bins = np.arange(-1, 1, resolution)\r\n plt.hist(tensor_value_np,bins) \r\n plt.title(\"histogram\") \r\n plt.show()\r\n\r\n\r\nif __name__ == '__main__':\r\n plot_distribution(MODLE_LOCATION, TENSOR_NAME, 0.01)\r\n plot_distribution(MODLE_LOCATION_QUAN, TENSOR_NAME, 0.01)\r\n```\r\n\r\nOutput:\r\n```\r\n(bnntorch) D:\\FewShotMFI\\Python\\MFI_pytorch>python plt_distribution.py\r\nLeNet5_Improved(\r\n (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\r\n (bn1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (max_pool_1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\r\n (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))\r\n (bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (max_pool_2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\r\n (fc1): Linear(in_features=400, out_features=180, bias=True)\r\n (dropout1): Dropout(p=0.2, inplace=False)\r\n (fc2): Linear(in_features=180, out_features=100, bias=True)\r\n (dropout2): Dropout(p=0.2, inplace=False)\r\n (fc3): Linear(in_features=100, out_features=20, bias=True)\r\n)\r\nLeNet5_Improved(\r\n (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\r\n (bn1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (max_pool_1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\r\n (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))\r\n (bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n (max_pool_2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\r\n (fc1): DynamicQuantizedLinear(in_features=400, out_features=180, dtype=torch.qint8, qscheme=torch.per_tensor_affine)\r\n (dropout1): Dropout(p=0.2, inplace=False)\r\n (fc2): DynamicQuantizedLinear(in_features=180, out_features=100, dtype=torch.qint8, qscheme=torch.per_tensor_affine)\r\n (dropout2): Dropout(p=0.2, inplace=False)\r\n (fc3): DynamicQuantizedLinear(in_features=100, out_features=20, dtype=torch.qint8, qscheme=torch.per_tensor_affine)\r\n)\r\nTraceback (most recent call last):\r\n File \"plt_distribution.py\", line 26, in <module>\r\n plot_distribution(MODLE_LOCATION_QUAN, TENSOR_NAME, 0.01)\r\n File \"plt_distribution.py\", line 14, in plot_distribution\r\n params = 
model.state_dict()\r\n File \"C:\\Users\\huangyongtao\\AppData\\Local\\conda\\conda\\envs\\bnntorch\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 900, in state_dict\r\n module.state_dict(destination, prefix + name + '.', keep_vars=keep_vars)\r\n File \"C:\\Users\\huangyongtao\\AppData\\Local\\conda\\conda\\envs\\bnntorch\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 900, in state_dict\r\n module.state_dict(destination, prefix + name + '.', keep_vars=keep_vars)\r\n File \"C:\\Users\\huangyongtao\\AppData\\Local\\conda\\conda\\envs\\bnntorch\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 897, in state_dict\r\n self._save_to_state_dict(destination, prefix, keep_vars)\r\n File \"C:\\Users\\huangyongtao\\AppData\\Local\\conda\\conda\\envs\\bnntorch\\lib\\site-packages\\torch\\nn\\quantized\\modules\\linear.py\", line 62, in _save_to_state_dict\r\n super(LinearPackedParams, self)._save_to_state_dict(destination, prefix, keep_vars)\r\n File \"C:\\Users\\huangyongtao\\AppData\\Local\\conda\\conda\\envs\\bnntorch\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 856, in _save_to_state_dict\r\n for name, param in self._parameters.items():\r\n File \"C:\\Users\\huangyongtao\\AppData\\Local\\conda\\conda\\envs\\bnntorch\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 772, in __getattr__\r\n type(self).__name__, name))\r\ntorch.nn.modules.module.ModuleAttributeError: 'LinearPackedParams' object has no attribute '_parameters'\r\n\r\n```\r\n\n\ncc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a @vkuzo", "url": "https://github.com/pytorch/pytorch/issues/43625", "state": "closed", "labels": [ "oncall: quantization", "triaged" ], "created_at": "2020-08-26T15:31:53Z", "updated_at": "2020-08-26T16:23:03Z", "user": "YongtaoHuang1994" }, { "repo": "pytorch/pytorch", "number": 43536, "title": "How do I debug \"RuntimeError: trying to initialize the default process group twice!\"", "body": "## \ud83d\udc1b Bug\r\n\r\nWhen triggering distributed training in pytorch, the error `RuntimeError: trying to initialize the default process group twice!` occurs. How would one debug it?\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. on master node ip 10.163.60.19, run `local_rank=0; master_port=1303; python -m torch.distributed.launch --node_rank=$local_rank --master_addr=\"10.163.60.19\" --master_port=$master_port regression_train.py --config_file weights/cnn3_pad_32_pad_ratio_5.ini --distributed 1 --local_rank $local_rank --master_addr 10.163.60.19 --master_port $master_port --world_size 2`\r\n2. 
on slave node ip 10.163.60.18, run `local_rank=1; master_port=1303; python -m torch.distributed.launch --node_rank=$local_rank --master_addr=\"10.163.60.19\" --master_port=$master_port regression_train.py --config_file weights/cnn3_pad_32_pad_ratio_5.ini --distributed 1 --local_rank $local_rank --master_addr 10.163.60.19 --master_port $master_port --world_size 2`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"regression_train.py\", line 180, in <module>\r\n torch.distributed.init_process_group(backend=args.distributed_backend, world_size=args.world_size)\r\n File \"/home/kdang/anaconda3/envs/edge-detection/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py\", line 370, in init_process_group\r\n raise RuntimeError(\"trying to initialize the default process group \"\r\nRuntimeError: trying to initialize the default process group twice!\r\nTraceback (most recent call last):\r\n File \"/home/kdang/anaconda3/envs/edge-detection/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/home/kdang/anaconda3/envs/edge-detection/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/kdang/anaconda3/envs/edge-detection/lib/python3.7/site-packages/torch/distributed/launch.py\", line 263, in <module>\r\n main()\r\n File \"/home/kdang/anaconda3/envs/edge-detection/lib/python3.7/site-packages/torch/distributed/launch.py\", line 259, in main\r\n cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/home/kdang/anaconda3/envs/edge-detection/bin/python', '-u', 'regression_train.py', '--local_rank=0', '--config_file', 'weights/cnn3_pad_32_pad_ratio_5.ini', '--distributed', '1', '--local_rank', '0', '--master_addr', '10.163.60.19', '--master_port', '1303', '--world_size', '2']' returned non-zero exit status 1.\r\n```\r\n\r\n## Expected behavior\r\n\r\nDistributed training should start\r\n\r\n## Environment\r\n\r\n```\r\n(edge-detection) \u279c EDGE-DETECTION-TRAINER git:(master) python -m torch.utils.collect_env\r\nCollecting environment information...\r\nPyTorch version: 1.4.0\r\nIs debug build: No\r\nCUDA used to build PyTorch: 10.1\r\n\r\nOS: Ubuntu 16.04.6 LTS\r\nGCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609\r\nCMake version: version 3.5.1\r\n\r\nPython version: 3.7\r\nIs CUDA available: Yes\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: GPU 0: GeForce RTX 2080\r\nNvidia driver version: 418.152.00\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4\r\n/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.4\r\n\r\nVersions of relevant libraries:\r\n[pip] numpy==1.18.1\r\n[pip] torch==1.4.0\r\n[pip] torchvision==0.5.0\r\n[conda] mkl 2020.0 166 conda-forge\r\n[conda] pytorch 1.4.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch\r\n[conda] torchvision 0.5.0 py37_cu101 pytorch\r\n(edge-detection) \u279c EDGE-DETECTION-TRAINER git:(master)\r\n```\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar @jiayisuse @agolynski", "url": "https://github.com/pytorch/pytorch/issues/43536", "state": "open", "labels": [ "oncall: distributed", "triaged" ], "created_at": "2020-08-25T02:55:51Z", "updated_at": "2023-12-04T14:46:49Z", "user": "wakandan" }, { "repo": "pytorch/pytorch", "number": 43489, "title": "How to extract the hidden feature of a multi-submodule model", "body": "How to extract the feature in advantage (with dim=512), given a PyTorch model like 
below.\r\nDuelingCnnDQN(\r\n (cnn): Sequential(\r\n (0): Conv2d(1, 32, kernel_size=(8, 8), stride=(4, 4))\r\n (1): ReLU()\r\n (2): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2))\r\n (3): ReLU()\r\n (4): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))\r\n (5): ReLU()\r\n (6): Flatten()\r\n )\r\n (advantage): Sequential(\r\n (0): Linear(in_features=3136, out_features=512, bias=True)\r\n (1): ReLU()\r\n (2): Linear(in_features=512, out_features=6, bias=True)\r\n )\r\n (value): Sequential(\r\n (0): Linear(in_features=3136, out_features=512, bias=True)\r\n (1): ReLU()\r\n (2): Linear(in_features=512, out_features=1, bias=True)\r\n )\r\n)\r\n", "url": "https://github.com/pytorch/pytorch/issues/43489", "state": "closed", "labels": [], "created_at": "2020-08-24T11:50:57Z", "updated_at": "2020-08-24T20:51:30Z", "user": "xinghua-qu" }, { "repo": "pytorch/pytorch", "number": 43463, "title": "from sources install pytorch on ubuntu18.04,the question is this.What should I do ?", "body": "-- Generating done\r\n-- Build files have been written to: /home/wuji/pytorch/build\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 737, in <module>\r\n build_deps()\r\n File \"setup.py\", line 321, in build_deps\r\n cmake=cmake)\r\n File \"/home/wuji/pytorch/tools/build_pytorch_libs.py\", line 59, in build_caffe2\r\n rerun_cmake)\r\n File \"/home/wuji/pytorch/tools/setup_helpers/cmake.py\", line 329, in generate\r\n self.run(args, env=my_env)\r\n File \"/home/wuji/pytorch/tools/setup_helpers/cmake.py\", line 141, in run\r\n check_call(command, cwd=self.build_dir, env=env)\r\n File \"/home/wuji/anaconda3/lib/python3.7/subprocess.py\", line 363, in check_call\r\n raise CalledProcessError(retcode, cmd)\r\nsubprocess.CalledProcessError: Command '['cmake', '-GNinja', '-DBUILD_PYTHON=True', '-DBUILD_TEST=True', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX=/home/wuji/pytorch/torch', '-DCMAKE_PREFIX_PATH=/home/wuji/anaconda3', '-DNUMPY_INCLUDE_DIR=/home/wuji/anaconda3/lib/python3.7/site-packages/numpy/core/include', '-DPYTHON_EXECUTABLE=/home/wuji/anaconda3/bin/python', '-DPYTHON_INCLUDE_DIR=/home/wuji/anaconda3/include/python3.7m', '-DPYTHON_LIBRARY=/home/wuji/anaconda3/lib/libpython3.7m.so.1.0', '-DTORCH_BUILD_VERSION=1.7.0a0+7c50c2f', '-DUSE_NUMPY=True', '/home/wuji/pytorch']' returned non-zero exit status 1.\n\ncc @malfet", "url": "https://github.com/pytorch/pytorch/issues/43463", "state": "closed", "labels": [ "module: build", "triaged" ], "created_at": "2020-08-23T02:05:48Z", "updated_at": "2020-08-25T22:26:18Z", "user": "functail" }, { "repo": "pytorch/vision", "number": 2599, "title": "Change default value of eps in FrozenBatchNorm to match BatchNorm", "body": "## \u2753 Questions and Help\r\nHello\r\nLoss is nan error occurs when I learn fast rcnn with resnext101 backbone\r\nMy code is as follows\r\n```python\r\nbackbone = resnet_fpn_backbone('resnext101_32x8d', pretrained=True)\r\nmodel = FasterRCNN(backbone, num_classes)\r\nin_features = model.roi_heads.box_predictor.cls_score.in_features\r\nmodel.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)\r\n```\r\n\r\nerror message\r\n```\r\nEpoch: [0] [ 0/7208] eta: 1:27:42 lr: 0.000040 loss: 40613806080.0000 (40613806080.0000) loss_box_reg: 7979147264.0000 (7979147264.0000) loss_classifier: 11993160704.0000 (11993160704.0000) loss_objectness: 9486380032.0000 (9486380032.0000) loss_rpn_box_reg: 11155118080.0000 (11155118080.0000) time: 0.7301 data: 0.4106 max mem: 1241\r\nLoss is nan, stopping training\r\n```\r\n\r\nWhen 
I change the backbone to resnet50 or resnet152, no error occurs.\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/vision/issues/2599", "state": "closed", "labels": [ "question", "topic: object detection" ], "created_at": "2020-08-21T04:56:22Z", "updated_at": "2020-12-10T10:05:26Z", "user": "juyunsang" }, { "repo": "pytorch/pytorch_sphinx_theme", "number": 78, "title": "How to use it? Canonical way seems to not work", "body": "Dear All,\r\n\r\nI have installed the theme using\r\n\r\n```\r\npip install git+https://github.com/pytorch/pytorch_sphinx_theme.git\r\n```\r\n\r\nimported and included it in my `conf.py` file as follows:\r\n\r\n```python\r\nextensions = [\r\n \"sphinx.ext.autodoc\",\r\n \"sphinx.ext.githubpages\",\r\n 'sphinx.ext.coverage',\r\n \"sphinx.ext.napoleon\",\r\n \"pytorch_sphinx_theme\", # <- HERE!\r\n \"recommonmark\"\r\n]\r\n```\r\n\r\nHowever, the theme is still `sphinx_rtd_theme`. \r\n\r\nThank you in advance.\r\n\r\nBest Regards,\r\n\r\nFrancesco ", "url": "https://github.com/pytorch/pytorch_sphinx_theme/issues/78", "state": "closed", "labels": [], "created_at": "2020-08-18T12:50:08Z", "updated_at": "2020-08-18T12:50:50Z", "user": "FrancescoSaverioZuppichini" }, { "repo": "pytorch/pytorch", "number": 43135, "title": "How to use torch.utils.checkpoint and DistributedDataParallel together", "body": "When I use DistributedDataParallel on multiple GPUs and use checkpoint in the model forward, it does not work.\r\n\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar @jiayisuse @agolynski", "url": "https://github.com/pytorch/pytorch/issues/43135", "state": "closed", "labels": [ "oncall: distributed", "triaged" ], "created_at": "2020-08-17T06:54:13Z", "updated_at": "2023-01-20T14:40:29Z", "user": "devilztt" }, { "repo": "pytorch/xla", "number": 2433, "title": "How to speed up the compilation process for contributors to the C++ lib", "body": "## \u2753 Questions and Help\r\n\r\nWhen I only modify one C++ file, I expect the compile process to rebuild only that file and do the related linking. 
In PyTorch, it uses ninja to speed up this process.\r\n\r\nBut when use \"DEBUG=1 python setup.py develop\", it will recompile whole TensorFlow and xla code by Bazel, it take a long time.\r\nAny best practice your guy can share?\r\nWe should write down this part on the contributing.md.\r\nThanks\r\n", "url": "https://github.com/pytorch/xla/issues/2433", "state": "closed", "labels": [], "created_at": "2020-08-17T02:22:04Z", "updated_at": "2020-08-19T00:20:11Z", "user": "maxwillzq" }, { "repo": "pytorch/text", "number": 938, "title": "How to use Torchtext model in flask app with vocabulary and vectors?", "body": "## \u2753 Questions and Help\r\n\r\nI have the the following code for my model:\r\n\r\n```\r\nTEXT = data.Field(tokenize=\"spacy\", include_lengths=True)\r\nLABEL = data.LabelField(dtype=torch.float)\r\nfrom torchtext import datasets\r\ntrain_data, valid_data = train_data.split(random_state=random.seed(SEED))\r\ntrain_data, test_data = datasets.IMDB.splits(TEXT, LABEL)\r\n\r\nTEXT.build_vocab(train_data, vectors=\"glove.6B.100d\", unk_init=torch.Tensor.normal_)\r\n\r\nLABEL.build_vocab(train_data)\r\n\r\nBATCH_SIZE = 64\r\n\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n\r\ntrain_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(\r\n (train_data, valid_data, test_data),\r\n batch_size=BATCH_SIZE,\r\n sort_within_batch=True,\r\n device=device,\r\n)\r\n\r\nclass RNN(nn.Module):\r\n def __init__(\r\n self,\r\n vocab_size,\r\n embedding_dim,\r\n hidden_dim,\r\n output_dim,\r\n n_layers,\r\n bidirectional,\r\n dropout,\r\n pad_idx,\r\n ):\r\n\r\n super().__init__()\r\n\r\n self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)\r\n\r\n self.rnn = nn.LSTM(\r\n embedding_dim,\r\n hidden_dim,\r\n num_layers=n_layers,\r\n bidirectional=bidirectional,\r\n dropout=dropout,\r\n )\r\n\r\n self.fc = nn.Linear(hidden_dim * 2, output_dim)\r\n\r\n self.dropout = nn.Dropout(dropout)\r\n\r\n def forward(self, text, text_lengths):\r\n\r\n # text = [sent len, batch size]\r\n\r\n embedded = self.dropout(self.embedding(text))\r\n\r\n # embedded = [sent len, batch size, emb dim]\r\n\r\n # pack sequence\r\n packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths)\r\n\r\n packed_output, (hidden, cell) = self.rnn(packed_embedded)\r\n\r\n # unpack sequence\r\n output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output)\r\n\r\n # output = [sent len, batch size, hid dim * num directions]\r\n # output over padding tokens are zero tensors\r\n\r\n # hidden = [num layers * num directions, batch size, hid dim]\r\n # cell = [num layers * num directions, batch size, hid dim]\r\n\r\n # concat the final forward (hidden[-2,:,:]) and backward (hidden[-1,:,:]) hidden layers\r\n # and apply dropout\r\n\r\n hidden = self.dropout(torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1))\r\n\r\n # hidden = [batch size, hid dim * num directions]\r\n\r\n return self.fc(hidden)\r\n\r\nINPUT_DIM = len(TEXT.vocab)\r\nEMBEDDING_DIM = 100\r\nHIDDEN_DIM = 256\r\nOUTPUT_DIM = 1\r\nN_LAYERS = 2\r\nBIDIRECTIONAL = True\r\nDROPOUT = 0.5\r\nPAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]\r\n\r\nmodel = RNN(\r\n INPUT_DIM,\r\n EMBEDDING_DIM,\r\n HIDDEN_DIM,\r\n OUTPUT_DIM,\r\n N_LAYERS,\r\n BIDIRECTIONAL,\r\n DROPOUT,\r\n PAD_IDX,\r\n)\r\n\r\npretrained_embeddings = TEXT.vocab.vectors\r\n\r\nprint(pretrained_embeddings.shape)\r\n\r\nmodel.embedding.weight.data.copy_(pretrained_embeddings)\r\n\r\nUNK_IDX = 
TEXT.vocab.stoi[TEXT.unk_token]\r\n\r\nmodel.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)\r\nmodel.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)\r\n\r\noptimizer = optim.Adam(model.parameters())\r\ncriterion = nn.BCEWithLogitsLoss()\r\n\r\nmodel = model.to(device)\r\ncriterion = criterion.to(device)\r\n\r\n*training and evaluation functions and such*\r\n\r\nnlp = spacy.load(\"en\")\r\n\r\n\r\ndef predict_sentiment(model, sentence):\r\n model.eval()\r\n tokenized = [tok.text for tok in nlp.tokenizer(sentence)]\r\n indexed = [TEXT.vocab.stoi[t] for t in tokenized]\r\n length = [len(indexed)]\r\n tensor = torch.LongTensor(indexed).to(device)\r\n tensor = tensor.unsqueeze(1)\r\n length_tensor = torch.LongTensor(length)\r\n prediction = torch.sigmoid(model(tensor, length_tensor))\r\n return prediction.item()\r\n```\r\nsaved and loaded in the same as this:\r\n\r\n```\r\ntorch.save(model.state_dict(), \"Finished Models/Pytorch/LSTM_w_vectors.pt\")\r\n\r\nmodel.load_state_dict(torch.load(\"Finished Models/Pytorch/LSTM_w_vectors.pt\"))\r\n```\r\n\r\nHow can I use import / deploy this model? Do I need to pickle the vocab and PAD_IDX? ", "url": "https://github.com/pytorch/text/issues/938", "state": "closed", "labels": [], "created_at": "2020-08-16T19:54:22Z", "updated_at": "2020-08-24T14:16:22Z", "user": "EmreTokyuez" }, { "repo": "pytorch/audio", "number": 879, "title": "How to index subsegments of audio tensor based on time", "body": "## How to select a subsegment from a tensor\r\n\r\nHello!\r\n\r\nThis may be a very noobie question but I can't get to solve it on my own. \r\n\r\nLets say I want to select the first 10 seconds of an audio sample that I have loaded into a torch tensor with torchaudio, how should I determine the indexes of the tensor that correspond to the 10sec sample?\r\n\r\nI have done some playing with audio files with different sample rate and if I were to save a subset of the tensor to audio `.wav` the same index `[:, :600000]` (600000 elements of all channels) doesn't return.\r\n\r\nSo if I select the subset `[:, :600000]` of a waveform tensor with shape `torch.Size([1, 21091521])` and sample rate `16000` I will get the first ~38 seconds.\r\n\r\nWhile If I select the subset `[:, :600000]` of a waveform tensor with shape `torch.Size([2, 5385600])\r\n` and sample rate `44100` I will get the first ~14seconds.\r\n\r\nI suspect this is basic audio theory and how is stored in different channels and how sample rate affects but I haven't been able to find information about it.\r\n\r\nBased on the channels and sample rate of a tensor, can I select a subset of seconds of this one? If so, how can I?\r\n\r\nThanks\r\n", "url": "https://github.com/pytorch/audio/issues/879", "state": "closed", "labels": [], "created_at": "2020-08-14T11:40:28Z", "updated_at": "2020-08-14T19:12:34Z", "user": "jiwidi" }, { "repo": "pytorch/pytorch", "number": 42996, "title": "How to use and debug mixed-precision in 1.6.0 ?", "body": "## \u2753 Questions and Help\r\n\r\nHi. I've been trying to take advantage of new mixed-precision set of tools, mainly following the instructions given in https://pytorch.org/docs/stable/amp.html#autocasting. My code is running, I see that the gradients are being scaled as expected, however, my memory footprint is de facto the same (inspected by nvidia-smi), and the overall execution time is even longer. 
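\r\n\r\nFor reference, the pattern I followed is roughly the sketch below (the model, data and optimizer here are made-up placeholders for illustration, not my actual code; in my real model the autocast context wraps the body of `forward`):\r\n\r\n```python\r\n
import torch\r\n
import torch.nn as nn\r\n\r\n
device = torch.device("cuda")\r\n
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)\r\n
optimizer = torch.optim.Adam(model.parameters())\r\n
criterion = nn.CrossEntropyLoss()\r\n
scaler = torch.cuda.amp.GradScaler()  # (2) gradient scaling as in the AMP docs\r\n\r\n
for step in range(10):\r\n
    data = torch.randn(64, 512, device=device)  # placeholder batch\r\n
    target = torch.randint(0, 10, (64,), device=device)\r\n
    optimizer.zero_grad()\r\n
    with torch.cuda.amp.autocast():  # (1) autocast around the forward pass\r\n
        output = model(data)\r\n
        loss = criterion(output, target)\r\n
    scaler.scale(loss).backward()\r\n
    scaler.step(optimizer)\r\n
    scaler.update()\r\n
```\r\n\r\n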
I was wondering how I can debug this and what could possibly go wrong.\r\n\r\nThe changes to my code are minimal: (1) I added autocast context to the body of my forward method; and (2) gradient scaling updates as suggested in the tutorial. The operations in my code are standard, lots of CNN and torch.mm executions. My program is being executed on 8 GPUs, for data parallelism.\r\n\r\n## Environment\r\n\r\n - PyTorch Version: 1.6.0\r\n - OS: Linux\r\n - How you installed PyTorch: pip\r\n - Python version: 3.7\r\n - CUDA/cuDNN version: 10.1\r\n - GPU models and configuration: 8 x Tesla V100\r\n\n\ncc @mcarilli @jlin27", "url": "https://github.com/pytorch/pytorch/issues/42996", "state": "closed", "labels": [ "module: docs", "triaged", "module: amp (automated mixed precision)" ], "created_at": "2020-08-13T10:00:53Z", "updated_at": "2020-08-31T13:02:07Z", "user": "djordjemila" }, { "repo": "pytorch/TensorRT", "number": 171, "title": "\ud83d\udc1b [Bug] Encountered bug when using TRTorch 0.3.0 and torch_1.6.0_update versions on Jetson Xavier AGX", "body": "## \u2753 Question\r\n\r\nAre there some missing instructions on how to build the TRTorch 'torch_1.6.0_update' for Jetson Xavier AGX?<!-- Your question -->\r\n\r\n## What you have already tried\r\n\r\n- Downloaded two TRTorch versions: 0.3.0. tag and torch_1.6.0_update\r\n\r\n- Follow the Compilations TRTorch instructions of both versions\r\n\r\n- Both versions compilation have errors:\r\n\r\n - 0.3.0: \r\n core/lowering/lowering.cpp:10:10: fatal error: torch/csrc/jit/passes/quantization.h: No such file or directory\r\n #include \"torch/csrc/jit/passes/quantization.h\"\r\n\r\n - torch_1.6.0_update:\r\n\r\n ERROR: \r\n /home/ubuntu/.cache/bazel/_bazel_root/7f0ba44765888be019f9da1ca19341ed/external/tensorrt/BUILD.bazel:63:10: \r\n @tensorrt//:nvinfer_lib: invalid label '' in each branch in select expression of attribute 'static_library' in 'cc_import' rule \r\n (including '//conditions:default'): empty package-relative label\r\n ERROR: /home/ubuntu/Downloads/TRTorch-torch_1.6.0_update/core/util/BUILD:69:11: Target '@tensorrt//:nvinfer' contains \r\n an error and its package is in error and referenced by '//core/util:trt_util'\r\n ERROR: Analysis of target '//:libtrtorch' failed; build aborted: Analysis failed\r\n\r\n## Environment\r\n> Jetson Xavier AGX with JetPack 4.4 \r\n\r\n - PyTorch Version (e.g., 1.0): 1.6.0\r\n - CPU Architecture: Jetson Xavier AGX\r\n - OS (e.g., Linux): Jetson Xavier AGX JetPack 4.4\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): **pip**\r\n - Build command you used (if compiling from source): NA\r\n - Are you using local sources or building from archives: Build from JetPack 4.4\r\n - Python version: 3.6.9\r\n - CUDA version: 10.2\r\n - GPU models and configuration: Jetson Xavier AGX\r\n - Any other relevant information:\r\n\r\n## Additional context\r\nAttached are my edited WORKSPACE files.\r\n\r\n[TRTorchBuildErrors.zip](https://github.com/NVIDIA/TRTorch/files/5068075/TRTorchBuildErrors.zip)\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n", "url": "https://github.com/pytorch/TensorRT/issues/171", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2020-08-13T08:46:40Z", "updated_at": "2020-09-23T00:05:53Z", "user": "OronG13" }, { "repo": "pytorch/audio", "number": 876, "title": "Where is version.py if I insist on building from source?", "body": "When I tried to install by `python setup.py build`, I obtained the following **ERROR** message:\r\n\r\n```console\r\n-- Building version 0.7.0a0\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 30, in <module>\r\n with open(version_path, 'w') as f:\r\nTypeError: expected str, bytes or os.PathLike object, not PosixPath\r\n```\r\n\r\nIt looks like **version.py** is missing.\r\n\r\nCheers\r\n", "url": "https://github.com/pytorch/audio/issues/876", "state": "closed", "labels": [], "created_at": "2020-08-12T09:01:40Z", "updated_at": "2020-08-14T20:31:04Z", "user": "jiapei100" }, { "repo": "pytorch/text", "number": 919, "title": "Where is version.py if I insist on building from source?", "body": "\r\nWhen I tried to install by `python setup.py build`, I obtained the following **ERROR** message:\r\n\r\n```console\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 48, in <module>\r\n _export_version(VERSION, SHA)\r\n File \"setup.py\", line 42, in _export_version\r\n with open(version_path, 'w') as fileobj:\r\nTypeError: expected str, bytes or os.PathLike object, not PosixPath\r\n```\r\n\r\nIt looks like **version.py** is missing.\r\n\r\nCheers\r\n", "url": "https://github.com/pytorch/text/issues/919", "state": "open", "labels": [], "created_at": "2020-08-12T09:00:36Z", "updated_at": "2020-08-13T13:51:20Z", "user": "jiapei100" }, { "repo": "pytorch/vision", "number": 2578, "title": "Calculate Training Accuracy on resnet152", "body": "I'm trying to calculate training accuracy on resnet152; however,\r\n`loss_dict = model(images, targets)`\r\nonly contains loss values. \r\n\r\nUsually the model accepts only the images and the outputs are then passed to a loss function as well as used to calculate the accuracy. Calling it without the targets parameter results in:\r\n`ValueError: In training mode, targets should be passed`\r\n\r\nSorry if this is the wrong place, I'm still quite a beginner.", "url": "https://github.com/pytorch/vision/issues/2578", "state": "closed", "labels": [ "invalid", "question", "module: models" ], "created_at": "2020-08-11T22:30:00Z", "updated_at": "2020-08-21T14:11:44Z", "user": "FrostByteGER" }, { "repo": "pytorch/tutorials", "number": 1117, "title": "Bug in Spatial Transformer Network?", "body": "Hi,\r\n\r\nI am opening this issue because I noticed a weird behavior of the spatial transformer networks implementation (https://github.com/pytorch/tutorials/blob/78e91c54dd0cd4fb0d02dfcc86fe94d16ab03df6/intermediate_source/spatial_transformer_tutorial.py#L57)\r\n\r\nI summarized my findings [here](https://github.com/theRealSuperMario/pytorch_stn). In short, what is happening is that\r\nwhen the input is normalised and then fed to the STN, the `F.grid_sample` call adds zero-padding; however, the normalisation changes the background value from `0` to `-mean/std`. \r\n(https://github.com/pytorch/tutorials/blob/78e91c54dd0cd4fb0d02dfcc86fe94d16ab03df6/intermediate_source/spatial_transformer_tutorial.py#L127)\r\n\r\nThis causes the STN to collapse very early and to actually never learn the correct transformation. 
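\r\n\r\nFor illustration, here is a minimal sketch of the mismatch (assuming the MNIST normalisation values `0.1307` / `0.3081` used in the tutorial): `F.grid_sample` pads with zeros, while the normalised background sits at `-mean/std`.\r\n\r\n```python\r\n
import torch\r\n
import torch.nn.functional as F\r\n\r\n
mean, std = 0.1307, 0.3081\r\n
img = torch.zeros(1, 1, 28, 28)  # raw MNIST background is 0\r\n
img_norm = (img - mean) / std  # background becomes -mean/std ~ -0.424\r\n\r\n
# a zoomed-out affine grid forces grid_sample to sample outside the input\r\n
theta = torch.tensor([[[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]]])\r\n
grid = F.affine_grid(theta, img_norm.size(), align_corners=False)\r\n
out = F.grid_sample(img_norm, grid, align_corners=False)\r\n\r\n
print(img_norm[0, 0, 14, 14].item())  # ~ -0.4242 (normalised background)\r\n
print(out[0, 0, 0, 0].item())  # 0.0 (zero padding, not the background value)\r\n
```\r\n\r\n
So after normalisation the zero-padded regions no longer match the background value of the digit images.\r\n\r\n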
You can actually see that in the example code already (https://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html), because the learnt transformation is zooming OUT instead of zooming IN on the digits. For the original 28 x 28 images, this is not such a big problem, However, when you continue to cluttered MNIST as in the original publication, the difference is huge. Once again, please have a look [here](https://github.com/theRealSuperMario/pytorch_stn).\r\n\r\nI think the tutorial for the STN should be updated and also include the cluttered MNIST example because that is what drives the point home. I would volunteer to do so, if I get the permission to go ahead.\r\n\r\nUnfortunately, most other implementations I was able to find on the web also have this bug.\n\ncc @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/1117", "state": "open", "labels": [ "Text", "medium", "docathon-h2-2023" ], "created_at": "2020-08-11T11:37:35Z", "updated_at": "2023-11-01T16:41:21Z", "comments": 5, "user": "theRealSuperMario" }, { "repo": "pytorch/vision", "number": 2574, "title": "ValueError: bad value(s) in fds_to_keep", "body": "Traceback (most recent call last):\r\n File \"/home/sucom/hdd_1T/project/video_rec/my_video_rec/self_video_train.py\", line 161, in <module>\r\n trainer.train(task)\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/site-packages/classy_vision/trainer/local_trainer.py\", line 27, in train\r\n super().train(task)\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/site-packages/classy_vision/trainer/classy_trainer.py\", line 45, in train\r\n task.on_phase_start()\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/site-packages/classy_vision/tasks/classification_task.py\", line 945, in on_phase_start\r\n self.advance_phase()\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/site-packages/classy_vision/tasks/classification_task.py\", line 847, in advance_phase\r\n self.create_data_iterator()\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/site-packages/classy_vision/tasks/classification_task.py\", line 900, in create_data_iterator\r\n self.data_iterator = iter(self.dataloaders[self.phase_type])\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/site-packages/torch/utils/data/dataloader.py\", line 279, in __iter__\r\n return _MultiProcessingDataLoaderIter(self)\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/site-packages/torch/utils/data/dataloader.py\", line 721, in __init__\r\n w.start()\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/multiprocessing/process.py\", line 105, in start\r\n self._popen = self._Popen(self)\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/multiprocessing/context.py\", line 284, in _Popen\r\n return Popen(process_obj)\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/multiprocessing/popen_spawn_posix.py\", line 32, in __init__\r\n super().__init__(process_obj)\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/multiprocessing/popen_fork.py\", line 19, in __init__\r\n self._launch(process_obj)\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/multiprocessing/popen_spawn_posix.py\", line 59, in _launch\r\n cmd, self._fds)\r\n File \"/home/sucom/.conda/envs/classy_vision/lib/python3.6/multiprocessing/util.py\", line 417, in spawnv_passfds\r\n False, False, None)\r\nValueError: bad value(s) in fds_to_keep\r\n", "url": 
"https://github.com/pytorch/vision/issues/2574", "state": "closed", "labels": [ "invalid", "question" ], "created_at": "2020-08-11T05:18:17Z", "updated_at": "2020-08-11T17:21:26Z", "user": "siyangbing" }, { "repo": "pytorch/vision", "number": 2572, "title": "How to fill in splits_dir and metadata_file in the video classification, my ufc101 data set only has pictures, how can I get them, if I can provide any help, I would be very grateful", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/vision/issues/2572", "state": "closed", "labels": [], "created_at": "2020-08-11T02:35:10Z", "updated_at": "2020-08-11T06:16:55Z", "user": "siyangbing" }, { "repo": "pytorch/pytorch", "number": 42777, "title": "custom function too slow, how to be as fast as native?", "body": "## function/Module too slow\r\n\r\n### conext\r\nubuntu18.04.1 64 cuda11 pytorch1.6 \r\n\r\n\r\n```\r\nclass Relu0(Function):\r\n @staticmethod\r\n def forward(ctx, input,d=1):\r\n ctx.save_for_backward(input) # save input for backward pass\r\n x=torch.clone(input)\r\n x[x<0]=0\r\n return x\r\n\r\n @staticmethod\r\n def backward(ctx, grad_output,d=1):\r\n input, = ctx.saved_tensors # restore input from context\r\n grad_output[input<0]= 0\r\n return grad_output\r\n\r\n# more faster , not enough\r\ndef relu(x):\r\n return (x>0)*x\r\n\r\n# clamp fastest , any more?\r\n\r\n```\r\n\r\n### benchmark\r\nRelu0.apply cost 10 times of torch.nn.functional.relu\r\n\r\nabove way seems not use accelerate library.\r\nwhich operations can be as fast as native?\r\n\r\n\n\ncc @VitalyFedyunin @ngimel", "url": "https://github.com/pytorch/pytorch/issues/42777", "state": "closed", "labels": [ "module: performance", "triaged" ], "created_at": "2020-08-08T11:36:06Z", "updated_at": "2020-08-15T07:48:42Z", "user": "laohur" }, { "repo": "pytorch/text", "number": 912, "title": "How to combine train and test set for IMDB Dataset in Torchtext / Pytorch", "body": "## \u2753 Questions and Help\r\n\r\n\r\n\r\nI want to use the examples in the test set of the IMDB Sentiment Analysis Dataset for training, as I have built my own benchmark with which I will compare the performance of various Models (my Matura Thesis)\r\n\r\nSo after trying, I got the appending working and also managed ot split it, so that I have a validation set as well. 
The code is the following:\r\n\r\n```\r\n from torchtext import datasets\r\n \r\n train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)\r\n \r\n import random\r\n train_data, valid_data = train_data.split(random_state = random.seed(SEED))\r\n from torch.utils.data import ConcatDataset\r\n data_list = list()\r\n data_list.append(train_data)\r\n data_list.append(test_data)\r\n train_data = ConcatDataset(data_list)\r\n print(f'Number of validation examples: {len(valid_data)}')\r\n print(f'Number of training examples: {len(train_data)}')\r\n```\r\n\r\nAnd I get the following split (which is my goal):\r\n\r\n```\r\nNumber of validation examples: 7500\r\nNumber of training examples: 42500\r\n```\r\n\r\nNow when I want to built the vocab with the following code, I get this error:\r\n\r\n```\r\nMAX_VOCAB_SIZE = 25_000 \r\nLABEL.build_vocab(train_data)\r\n\r\nTEXT.build_vocab(train_data, max_size = MAX_VOCAB_SIZE)\r\n~\\.conda\\envs\\matura-ml\\lib\\collections\\__init__.py in update(*args, **kwds)\r\n 654 else:\r\n--> 655 _count_elements(self, iterable)\r\n 656 if kwds:\r\n\r\nTypeError: 'Example' object is not iterable\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTypeError Traceback (most recent call last)\r\n in \r\n 1 MAX_VOCAB_SIZE = 25_000\r\n 2 \r\n----> 3 TEXT.build_vocab(train_data, max_size = MAX_VOCAB_SIZE)\r\n 4 LABEL.build_vocab(train_data)\r\n\r\n~\\.conda\\envs\\matura-ml\\lib\\site-packages\\torchtext\\data\\field.py in build_vocab(self, *args, **kwargs)\r\n 299 counter.update(x)\r\n 300 except TypeError:\r\n--> 301 counter.update(chain.from_iterable(x))\r\n 302 specials = list(OrderedDict.fromkeys(\r\n 303 tok for tok in [self.unk_token, self.pad_token, self.init_token,\r\n\r\nTypeError: 'Example' object is not iterable\r\n```\r\n\r\nHow can I combine the train and test split in the correct way?\r\n\r\n", "url": "https://github.com/pytorch/text/issues/912", "state": "closed", "labels": [ "legacy" ], "created_at": "2020-08-07T11:10:51Z", "updated_at": "2020-08-08T22:34:43Z", "user": "EmreTokyuez" }, { "repo": "pytorch/pytorch", "number": 42722, "title": "How to build libtorch static libraries on Windows?", "body": "I'm developing the Windows program using libtorch dynamic libraries released on pytorch website. However, I find the dynamic libraries are quite large (the torch_cpu.dll is larger than 100MB). Is there the static library version, or how could I build the libtorch static library by myself?\r\n\r\nLook forward to your response. Thanks a lot!\n\ncc @ezyang @seemethere @malfet @walterddr @peterjc123 @maxluk @nbcsm @guyang3532 @gunandrose4u @smartcat2010 @mszhanyi", "url": "https://github.com/pytorch/pytorch/issues/42722", "state": "closed", "labels": [ "module: binaries", "module: build", "module: windows", "triaged", "windows-triaged" ], "created_at": "2020-08-07T02:37:24Z", "updated_at": "2024-08-12T13:36:33Z", "user": "lawlict" }, { "repo": "pytorch/tutorials", "number": 1110, "title": "How to Inference non-val images in Transfer Learning Tutorial", "body": "The transfer learning tutorial is great, however it leaves off after evaluating the accuracy of the model. My goal is to then use the model that has been created to inference other images, but have run into trouble getting the new data adhere to the correct format and shape. 
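\r\n\r\nFor context, what I have been trying follows roughly the sketch below (a stock `resnet18` from torchvision stands in for my fine-tuned model, the transforms mirror the tutorial's validation pipeline, and the image path is made up):\r\n\r\n```python\r\n
import torch\r\n
from PIL import Image\r\n
from torchvision import models, transforms\r\n\r\n
preprocess = transforms.Compose([  # same steps as the tutorial's 'val' transforms\r\n
    transforms.Resize(256),\r\n
    transforms.CenterCrop(224),\r\n
    transforms.ToTensor(),\r\n
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),\r\n
])\r\n\r\n
model = models.resnet18(pretrained=True)  # stand-in for the fine-tuned model\r\n
model.eval()\r\n\r\n
img = Image.open("new_image.jpg").convert("RGB")  # hypothetical image outside train/val\r\n
x = preprocess(img).unsqueeze(0)  # add a batch dimension -> [1, 3, 224, 224]\r\n\r\n
with torch.no_grad():\r\n
    pred = model(x).argmax(dim=1).item()  # index into the tutorial's class_names\r\n
print(pred)\r\n
```\r\n\r\n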
Do you have any insight on how I could do this?", "url": "https://github.com/pytorch/tutorials/issues/1110", "state": "closed", "labels": [ "torchvision", "docathon-h1-2023", "medium" ], "created_at": "2020-08-06T00:53:45Z", "updated_at": "2023-06-09T18:17:55Z", "user": "ScottMoffatLittle" }, { "repo": "pytorch/vision", "number": 2555, "title": "Unable to load fasterrcnn state_dict with custom num_classes", "body": "## \ud83d\udc1b Bug\r\n\r\ntorchvision.models.detection.fasterrcnn_resnet50_fpn() giving error when the parameter pretrained is set to True and num_classes parameter is also supplied(other than 91). \r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```\r\n>> from torchvision import models\r\n>> my_model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True,num_classes=50)\r\n```\r\nThe error that it gives\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/username/miniconda2/envs/form_ocr/lib/python3.7/site-packages/torchvision/models/detection/faster_rcnn.py\", line 354, in fasterrcnn_resnet50_fpn\r\n model.load_state_dict(state_dict)\r\n File \"/home/username/miniconda2/envs/form_ocr/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 847, in load_state_dict\r\n self.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\nRuntimeError: Error(s) in loading state_dict for FasterRCNN:\r\n\tsize mismatch for roi_heads.box_predictor.cls_score.weight: copying a param with shape torch.Size([91, 1024]) from checkpoint, the shape in current model is torch.Size([50, 1024]).\r\n\tsize mismatch for roi_heads.box_predictor.cls_score.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([50]).\r\n\tsize mismatch for roi_heads.box_predictor.bbox_pred.weight: copying a param with shape torch.Size([364, 1024]) from checkpoint, the shape in current model is torch.Size([200, 1024]).\r\n\tsize mismatch for roi_heads.box_predictor.bbox_pred.bias: copying a param with shape torch.Size([364]) from checkpoint, the shape in current model is torch.Size([200]).\r\n\r\n```\r\n\r\n## Expected behavior\r\n\r\nI was hoping it would load the state_dict on the base model and then change the `model.roi_heads.box_predictor.cls_score` layer.\r\nSomething like this perhaps,\r\n```\r\nfrom torchvision import models\r\nimport torch.nn as nn\r\n\r\nclass RCNN_Model(nn.Module):\r\n def __init__(self,pretrained=True,out_classes=91):\r\n super(RCNN_Model,self).__init__()\r\n self.model = models.detection.fasterrcnn_resnet50_fpn(pretrained=pretrained)#,num_classes=out_classes)\r\n if out_classes!=91:\r\n self.model.roi_heads.box_predictor.cls_score = nn.Linear(in_features=1024,out_features=out_classes,bias=True)\r\n \r\n def forward(self,x):\r\n return self.model(x)\r\n```\r\n\r\n\r\n## Environment\r\n\r\n```\r\nPyTorch version: 1.5.1\r\nIs debug build: No\r\nCUDA used to build PyTorch: 10.2\r\n\r\nOS: Ubuntu 18.04.3 LTS\r\nGCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0\r\nCMake version: version 3.10.2\r\n\r\nPython version: 3.7\r\nIs CUDA available: No\r\nCUDA runtime version: No CUDA\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.13.3\r\n[pip3] torch==1.0.1.post2\r\n[pip3] torchvision==0.4.0\r\n[conda] blas 1.0 mkl \r\n[conda] cudatoolkit 9.0 h13b8566_0 \r\n[conda] mkl 2019.4 243 \r\n[conda] mkl-service 2.3.0 py37he904b0f_0 \r\n[conda] mkl_fft 1.0.14 py37ha843d7b_0 
\r\n[conda] mkl_random 1.1.0 py37hd6b4f25_0 \r\n[conda] numpy 1.17.4 pypi_0 pypi\r\n[conda] numpy-base 1.17.2 py37hde5b4d6_0 \r\n[conda] pytorch-nightly 1.0.0.dev20190328 py3.7_cuda9.0.176_cudnn7.4.2_0 pytorch\r\n[conda] torch 1.5.1 pypi_0 pypi\r\n[conda] torchvision 0.6.1 pypi_0 pypi\r\n```\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/2555", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2020-08-05T13:09:57Z", "updated_at": "2020-08-05T15:14:34Z", "user": "devarshi16" }, { "repo": "pytorch/tutorials", "number": 1104, "title": "Error on https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html ?", "body": "In Section 6 of https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html, the following code is presented:\r\n\r\n```\r\ndef add_pr_curve_tensorboard(class_index, test_probs, test_preds, global_step=0):\r\n '''\r\n Takes in a \"class_index\" from 0 to 9 and plots the corresponding\r\n precision-recall curve\r\n '''\r\n tensorboard_preds = test_preds == class_index\r\n tensorboard_probs = test_probs[:, class_index]\r\n\r\n writer.add_pr_curve(classes[class_index],\r\n tensorboard_preds,\r\n tensorboard_probs,\r\n global_step=global_step)\r\n writer.close()\r\n```\r\n\r\n`writer.add_pr_curve` should take the _true_ labels, not a variable testing equality between the prediction and the class index (from `https://pytorch.org/docs/stable/tensorboard.html`), so I think the second argument, `tensorboard_preds`, is wrong: `tensorboard_preds = test_preds == class_index` tests whether the prediction is equal to the given class_index.\r\n\r\nI think this should be something like `tensorboard_preds = labels == class_index`, which gives you 0/1 based on the true labels and not the predicted labels", "url": "https://github.com/pytorch/tutorials/issues/1104", "state": "closed", "labels": [ "docathon-h1-2023", "medium" ], "created_at": "2020-08-04T20:43:47Z", "updated_at": "2023-10-05T16:51:09Z", "comments": 6, "user": "motiwari" }, { "repo": "pytorch/pytorch", "number": 42505, "title": "how to use get_trace_graph to get the attribute of a node in trace graph in pytorch 1.6?", "body": "In pytorch 1.1 when using get_trace_graph to trace the model, we could get the node's attibution as follows:\r\n%469 : bool = prim::Constant[value=0](), scope: Model/Backbone[backbone]/ConvBn[conv1]/BatchNorm2d[bn]\r\nThen we could get the the attribution of the Node: backbone.conv1.bn and its class BatchNorm.\r\n\r\nThen how to get the attribution name of the node in pytorch 1.6, as we can only get\r\n%23 : float = prim::Constant[value=1]() # test.py:12:23\r\n\r\nso how to get the attribution in pytorch 1.6\n\ncc @suo @gmagogsfm", "url": "https://github.com/pytorch/pytorch/issues/42505", "state": "closed", "labels": [ "oncall: jit", "triaged", "days" ], "created_at": "2020-08-04T03:02:51Z", "updated_at": "2020-08-13T11:20:07Z", "user": "ioou" }, { "repo": "pytorch/pytorch", "number": 42445, "title": "how to run SVM/Random forest/xgboost on the pytorch?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/42445", "state": "closed", "labels": [], "created_at": "2020-08-03T10:12:34Z", "updated_at": "2020-08-03T17:17:24Z", "user": "cvJie" }, { "repo": "pytorch/pytorch", "number": 42439, "title": "How to use libtorch on Jetson TX2", "body": "Refer to https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-5-0-now-available/72048\uff0cI successfully installed Pytorch 1.0.0 and torchvision0.2.2 on jetpack4.3.According to the authentication method, I also succeeded in authentication: python\u2013>import torch\u2026\r\nHowever,I want to call a scriptModel by libtorch on a QT project.but meet the error:\r\nerror:undefined reference to \u2018nvrtcGetProgramLogSize\u2019\r\nerror:undefined reference to \u2018culaunchKernel\u2019\r\nerror:undefined reference to \u2018nvrtcComplieProgram\u2019\r\nerror:undefined reference to \u2018nvrtcCreateProgram\u2019\r\nerror:undefined reference to \u2018nvrtcGetErrorString\u2019\r\nerror:undefined reference to \u2018cuModuleGetFunction\u2019\r\nand so on.\r\n\r\nAlso I can not find libnvrtc.so at ~/.local/lib/python3.6/site-packages/torch/lib.\r\n\r\nwhat is the problem?\r\n\n\ncc @ezyang @seemethere @malfet", "url": "https://github.com/pytorch/pytorch/issues/42439", "state": "open", "labels": [ "module: binaries", "triaged", "module: arm" ], "created_at": "2020-08-03T06:14:11Z", "updated_at": "2020-08-05T21:03:37Z", "user": "xifanlover" }, { "repo": "pytorch/pytorch", "number": 42404, "title": "How to deploy a \"LSTMcell\" style attention by torch.onnx?", "body": "I'm deploying a NMT network and I fail when I try to define attention module.\r\nThe first thing is that,when i put nn.LSTMcell in my init function,it would raise some errors which may caused by no inplementation in torch.onnx.\r\n\r\nAnd when I want to replace it with nn.LSTM, i still get an error with torch.onnx:\r\n```\r\nclass test(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n\r\n self.rnn = nn.LSTM(10, 20)\r\n \r\n def forward(self, x ,hx,cx):\r\n\r\n outputs = []\r\n for i in range(x.shape[0]):\r\n output ,next_step = self.rnn(x[[i]], (hx, cx))\r\n hx,cx=next_step\r\n outputs.append(output)\r\n return torch.stack(outputs).squeeze(1)\r\n\r\ninput = torch.randn(6, 1, 10)\r\nhx = torch.randn(1,1, 20)\r\ncx = torch.randn(1,1, 20)\r\ntorch_model = torch.jit.script(test())\r\na=torch_model(input,hx,cx)\r\na.shape\r\n\r\n#this turns out to be fine\r\ntorch.Size([6, 1, 20])\r\n\r\ntorch.onnx.export(torch_model, \r\n (input,hx,cx), \r\n \"torch_model.onnx\", \r\n export_params=True, \r\n opset_version=10, \r\n input_names = ['input',\"hx\",\"cx\"], \r\n output_names = ['output'], \r\n dynamic_axes={'input' : {0 : 'seq_length'},\r\n 'output' : {0 : 'seq_length'}},\r\n example_outputs=a\r\n )\r\n# this will raise\r\nRuntimeError: Unknown type bool encountered in graph lowering. 
This type is not supported in ONNX export.\r\n```\r\nAny help would be thankful!\n\ncc @suo @gmagogsfm @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/42404", "state": "closed", "labels": [ "oncall: jit", "module: onnx" ], "created_at": "2020-08-01T10:31:10Z", "updated_at": "2021-10-18T05:09:31Z", "user": "andylida" }, { "repo": "pytorch/TensorRT", "number": 163, "title": "\u2753 [Question] Could your team provide a cmake version?", "body": "## \u2753 Question\r\n\r\nI think many people block in building stage?\r\n", "url": "https://github.com/pytorch/TensorRT/issues/163", "state": "closed", "labels": [ "question" ], "created_at": "2020-07-31T14:22:24Z", "updated_at": "2020-08-01T02:29:39Z", "user": "alanzhai219" }, { "repo": "pytorch/pytorch", "number": 42349, "title": "I want to build the pytorch and link with openblas static library, so how to modify the CMakeList.txt ?", "body": "", "url": "https://github.com/pytorch/pytorch/issues/42349", "state": "closed", "labels": [], "created_at": "2020-07-31T02:01:02Z", "updated_at": "2020-08-01T08:25:07Z", "user": "lfcarol" }, { "repo": "pytorch/pytorch", "number": 42268, "title": "how to enable CUDNN_TENSOR_OP_MATH for float32 torch.mm operator ", "body": "I am wondering in pytorch, how to enable tensor cores for float32 torch.mm operations.\r\nI posted the same question at https://discuss.pytorch.org/t/using-nvidia-tensor-core-for-float-mm-computation/90901,\r\nbut got no response. Thanks.\r\n\n\ncc @ngimel", "url": "https://github.com/pytorch/pytorch/issues/42268", "state": "closed", "labels": [ "module: cuda", "triaged" ], "created_at": "2020-07-30T00:58:17Z", "updated_at": "2020-07-30T01:48:39Z", "user": "shz0116" }, { "repo": "pytorch/serve", "number": 566, "title": "How to use model archiver utility for complex projects?", "body": "I am trying to serve the model from the [NCRFpp](https://github.com/jiesutd/NCRFpp) project using torchserve. The files that are required by my custom _handler.py_ file are in multiple folders and have import statements which refer to this folder hierarchy. The model archiver zips all these files and extracts into a temporary folder at the same level without this folder hierarchy, leading to runtime errors because the import statements fail. 
How do I ensure that the import statements work, with the same folder hierarchy as used during development, while using TorchServe?", "url": "https://github.com/pytorch/serve/issues/566", "state": "closed", "labels": [ "triaged_wait" ], "created_at": "2020-07-29T13:44:24Z", "updated_at": "2023-01-12T01:56:23Z", "user": "sagjounkani" }, { "repo": "pytorch/examples", "number": 806, "title": "How to use torch.multiprocessing in Windows", "body": "The following error occurred when I used the torch.multiprocessing [demo](https://github.com/pytorch/examples/tree/master/mnist_hogwild) provided by PyTorch.\r\n\r\nC:\\Users\\user\\anaconda3\\python.exe D:/HPO-Pro/PBT3/main_demo.py\r\ncuda\r\ncuda\r\nTHCudaCheck FAIL file=..\\torch/csrc/generic/StorageSharing.cpp line=247 error=801 : operation not supported\r\nTraceback (most recent call last):\r\n File \"D:/HPO-Pro/PBT3/main_demo.py\", line 88, in <module>\r\n p.start()\r\n File \"C:\\Users\\user\\anaconda3\\lib\\multiprocessing\\process.py\", line 112, in start\r\n self._popen = self._Popen(self)\r\n File \"C:\\Users\\user\\anaconda3\\lib\\multiprocessing\\context.py\", line 322, in _Popen\r\n return Popen(process_obj)\r\n File \"C:\\Users\\user\\anaconda3\\lib\\multiprocessing\\popen_spawn_win32.py\", line 89, in __init__\r\n reduction.dump(process_obj, to_child)\r\n File \"C:\\Users\\user\\anaconda3\\lib\\multiprocessing\\reduction.py\", line 60, in dump\r\n ForkingPickler(file, protocol).dump(obj)\r\n File \"C:\\Users\\user\\anaconda3\\lib\\site-packages\\torch\\multiprocessing\\reductions.py\", line 240, in reduce_tensor\r\n event_sync_required) = storage._share_cuda_()\r\nRuntimeError: cuda runtime error (801) : operation not supported at ..\\torch/csrc/generic/StorageSharing.cpp:247\r\n\r\nCan't I use CUDA while using torch.multiprocessing in Windows?", "url": "https://github.com/pytorch/examples/issues/806", "state": "open", "labels": [ "windows" ], "created_at": "2020-07-29T09:06:18Z", "updated_at": "2022-03-09T20:46:19Z", "user": "zhong-xin" }, { "repo": "pytorch/serve", "number": 558, "title": "Is there a way to test my handler file before serving my model?", "body": "Hi, I recently started using TorchServe to serve my model. For each model I write a handler file, but I don't know how to test it. Is there a way to test my handler file before serving my model?", "url": "https://github.com/pytorch/serve/issues/558", "state": "closed", "labels": [ "question", "triaged_wait" ], "created_at": "2020-07-28T09:35:33Z", "updated_at": "2020-08-21T04:06:11Z", "user": "wangxiang2713" }, { "repo": "pytorch/TensorRT", "number": 157, "title": "\u201cerror while loading shared libraries: libnvinfer.so.7: cannot open shared object file: No such file or directory\u201d when running sample ", "body": "I'm trying to run the sample code. 
I've installed TRTorch according to the official [TRTorch](https://nvidia.github.io/TRTorch/tutorials/installation.html) instruction.\r\nWhen the sample code is run (with the below command from this link) the given error arise: <br/>\r\n>sudo bazel run //cpp/trtorchexec -- $(realpath /home/TRTorch/tests/modules/alexnet_scripted.jit.pt) \"(1,3,228,228)\"\r\n\r\n>error while loading shared libraries: libnvinfer.so.7: cannot open shared object file: No such file or directory\r\n\r\nAlso, the LD_LIBRARY_PATH is set correctly.\r\n\r\n>export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/TensorRT/TensorRT-7.0.0.11/lib\r\n\r\nMore info:\r\n\r\n>TRTorch: latest version (python package and binary) <br/>\r\n>TensorRT: 7.0.0.11 <br/>\r\n>Pytorch: 1.5.1 <br/>\r\n>CUDA: 10.2 <br/>\r\n>Python: 3.6\r\n", "url": "https://github.com/pytorch/TensorRT/issues/157", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2020-07-28T08:02:31Z", "updated_at": "2023-03-23T07:18:44Z", "user": "Soroorsh" }, { "repo": "pytorch/vision", "number": 2508, "title": "Number of anchors VS. number of aspect ratios.", "body": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/detection/faster_rcnn.py#L188\r\n\r\nThe line above seems to fetch, for each location, the number of aspect ratios of anchors (not multiplying by the number of different sized anchors).", "url": "https://github.com/pytorch/vision/issues/2508", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-07-25T16:15:42Z", "updated_at": "2020-07-30T12:30:11Z", "user": "fulkast" }, { "repo": "pytorch/vision", "number": 2505, "title": "Error when training Resnet with nn.DistributedDataParallel", "body": "When I try to train [Resnet](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) with `nn.DistributedDataParallel` using multi-gpus, the error occurs as below. But when I use only one gpu, it's just ok. 
\r\n\r\n![image](https://user-images.githubusercontent.com/40142236/88374211-ed35c280-cdcb-11ea-8f8b-e452e52c11ce.png)\r\n\r\n\r\nSome of the code:\r\n```\r\nclass Bottleneck(nn.Module):\r\n expansion = 4\r\n def __init__(self, inplanes, planes, stride=1, downsample=None):\r\n super(Bottleneck, self).__init__()\r\n self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) # decrease the channel, does't change size\r\n self.bn1 = nn.BatchNorm2d(planes)\r\n self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,\r\n padding=1, bias=False)\r\n self.bn2 = nn.BatchNorm2d(planes)\r\n self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)\r\n self.bn3 = nn.BatchNorm2d(planes * 4)\r\n self.relu = nn.ReLU(inplace=False)\r\n self.downsample = downsample\r\n self.stride = stride\r\n\r\n def forward(self, x):\r\n residual = x\r\n\r\n out = self.conv1(x)\r\n out = self.bn1(out)\r\n out = self.relu(out)\r\n\r\n out = self.conv2(out)\r\n out = self.bn2(out)\r\n out = self.relu(out)\r\n\r\n out = self.conv3(out)\r\n out = self.bn3(out)\r\n\r\n if self.downsample is not None:\r\n residual = self.downsample(x)\r\n\r\n out = out + residual\r\n out = self.relu(out)\r\n\r\n return out\r\n\r\n\r\nclass ResNet(nn.Module):\r\n\r\n def __init__(self, block, layers, num_classes=9):\r\n self.inplanes = 64\r\n super(ResNet, self).__init__()\r\n self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,\r\n bias=False) # the size become 1/2\r\n self.bn1 = nn.BatchNorm2d(64)\r\n self.relu = nn.ReLU(inplace=False)\r\n self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) # the size become 1/2\r\n self.layer1 = self._make_layer(block, 64, layers[0])\r\n self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\r\n self.layer3 = self._make_layer(block, 256, layers[2], stride=2)\r\n self.layer4 = self._make_layer(block, 512, layers[3], stride=2)\r\n self.avgpool = nn.AdaptiveAvgPool2d((1, 1))\r\n # self.fc = nn.Linear(512 * block.expansion, num_classes)\r\n self.fc1 = nn.Linear(2048, 1024)\r\n self.fc2 = nn.Linear(1024, num_classes)\r\n\r\n\r\n for m in self.modules():\r\n if isinstance(m, nn.Conv2d):\r\n n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\r\n m.weight.data.normal_(0, math.sqrt(2. 
/ n))\r\n elif isinstance(m, nn.BatchNorm2d):\r\n m.weight.data.fill_(1)\r\n m.bias.data.zero_()\r\n\r\n def _make_layer(self, block, planes, blocks, stride=1):\r\n # block: object, planes: output channel, blocks: the num of blocks\r\n downsample = None\r\n if stride != 1 or self.inplanes != planes * block.expansion:\r\n downsample = nn.Sequential(\r\n nn.Conv2d(self.inplanes, planes * block.expansion,\r\n kernel_size=1, stride=stride, bias=False),\r\n nn.BatchNorm2d(planes * block.expansion),\r\n )\r\n\r\n layers = []\r\n layers.append(block(self.inplanes, planes, stride, downsample))\r\n self.inplanes = planes * block.expansion # the input channel num become 4 times\r\n for i in range(1, blocks):\r\n layers.append(block(self.inplanes, planes))\r\n\r\n return nn.Sequential(*layers)\r\n\r\n def forward(self, x):\r\n x = self.conv1(x)\r\n x = self.bn1(x)\r\n x = self.relu(x)\r\n x = self.maxpool(x)\r\n\r\n x = self.layer1(x)\r\n x = self.layer2(x)\r\n x = self.layer3(x)\r\n x = self.layer4(x)\r\n\r\n x = self.avgpool(x)\r\n x = x.view(x.size(0), -1)\r\n x = self.fc1(x)\r\n x = nn.init.normal_(x, mean=0, std=1024 ** -0.5)\r\n # x = self.fc2(x)\r\n return x\r\n```", "url": "https://github.com/pytorch/vision/issues/2505", "state": "closed", "labels": [ "question", "module: models", "topic: classification" ], "created_at": "2020-07-24T08:38:04Z", "updated_at": "2020-08-10T11:51:33Z", "user": "alwayshjia" }, { "repo": "pytorch/serve", "number": 553, "title": "Dump system metrics in a database", "body": "How should I access system metrics to dump it in a SQL database?\r\n\r\nAlso, what are the best practices for logging inference predictions in a database?", "url": "https://github.com/pytorch/serve/issues/553", "state": "closed", "labels": [ "question", "triaged_wait" ], "created_at": "2020-07-24T07:23:39Z", "updated_at": "2020-08-07T08:25:47Z", "user": "vishal-wiai" }, { "repo": "pytorch/pytorch", "number": 41935, "title": "How to Upgrade PyTorch to 1.6 in Docker Image", "body": "## \u2753 How to Upgrade PyTorch to 1.6 in Docker Image?\r\n\r\n### Hi, I have a docker image that has pytorch 1.4, torchvision 0.5, cudnn 7.6.5 and cuda 10.1, and other tools and packages. I want to upgrade my pytorch to 1.6. Is there some way to do it without rebuild the whole image again?\r\n\r\nThanks!\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\n\ncc @ezyang @seemethere @malfet", "url": "https://github.com/pytorch/pytorch/issues/41935", "state": "closed", "labels": [ "module: binaries", "triaged", "module: docker" ], "created_at": "2020-07-23T17:56:25Z", "updated_at": "2021-01-26T20:11:26Z", "user": "zhenhuahu" }, { "repo": "pytorch/examples", "number": 802, "title": "How to use a pre-trained models weights? ", "body": "How do we specify to use the pre-trained model? 
Typing in python main.py -a mobilenet_v2 is not giving me the model with pretrained weights.", "url": "https://github.com/pytorch/examples/issues/802", "state": "closed", "labels": [], "created_at": "2020-07-22T22:36:47Z", "updated_at": "2022-03-09T21:37:31Z", "user": "stunbomb" }, { "repo": "pytorch/vision", "number": 2502, "title": "Why do we need target put .to(device)?", "body": "Consider key points detection task.\r\nThe question refers to this line \r\nhttps://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/references/detection/engine.py#L86\r\nSeems to be redundant.\r\nIn my case, I comment out this line and remove .item() in line 95, and at least evaluation starts.\r\nMoreover, COCO dataset output is a tuple({'image_id': int, 'annotations': list(...)}) and I do not get it how it could work in 86 line.", "url": "https://github.com/pytorch/vision/issues/2502", "state": "closed", "labels": [ "enhancement", "question", "module: reference scripts", "topic: object detection" ], "created_at": "2020-07-22T13:38:19Z", "updated_at": "2020-07-30T12:10:54Z", "user": "dmitrysarov" }, { "repo": "pytorch/pytorch", "number": 41847, "title": "How to fix \"Unknown IValue type for pickling: Device\" in PyTorch 1.3?", "body": "## \u2753 Questions and Help\r\nI got a model trained with PyTorch 1.4. If I script and save this model with PyTorch 1.4, it will save successfully, but I need to script and save this model with PyTorch 1.3. When I save model with 1.3, I got error message:\r\n```\r\nRuntimeError: Unknown IValue type for pickling: Device (pushIValueImpl at /pytorch/torch/csrc/jit/pickler.cpp:125)\r\n```\r\nI want to know how to fix this `Device` issue. Thx.\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)", "url": "https://github.com/pytorch/pytorch/issues/41847", "state": "closed", "labels": [], "created_at": "2020-07-22T10:48:50Z", "updated_at": "2020-07-23T01:53:53Z", "user": "kaituoxu" }, { "repo": "pytorch/vision", "number": 2497, "title": "Grayscale image mask is transformed in a performance decreasing way with ToTensor()", "body": "## \ud83d\udc1b Bug\r\n\r\nPreface: This is only in the context of my usage, where I use a custom dataset for semantic segmentation. My goal was to train different models on a semantic segmentation dataset and obviously I wanted to use the torchvision transforms.\r\nThe dataset does not matter much, but the labels are interesting: The labels are grayscale in a 2d array with values from 0 to NUM_CLASSES). (I mean in form of a PIL image when I say array for the label)\r\n\r\nI realized after training on a subset of the data, that the performance was incredibly bad and the models could not even overfit on a simple dataset.\r\n\r\nI debugged for a while and finally realized, that the ToTensor() operation changes the arrays from (W, H) to (3, W, H) and the values become some float values because of the [0,255] to [0,1] rescaling. What I did not realize is how much of a performance impact this change has. When just using torch.tensor() to create a tensor from the array, the performance was WAY better (see chart below, the gray performance was using the torch.tensor approach). 
Note that in the chart, the ONLY change I made is replace the transforms (see in reproduction explanation).\r\n\r\nCharts with MIoU and Loss, pink: HRNet with old transform, orange: DeepLabV3+ with old transform, grey: HRNet with new transform. (All pretrained on a different dataset btw)\r\n![Screenshot from 2020-07-21 14-04-06](https://user-images.githubusercontent.com/14922864/88052834-24516d00-cb5b-11ea-8d18-164190cf329e.png)\r\n\r\n![Screenshot from 2020-07-21 14-04-19](https://user-images.githubusercontent.com/14922864/88052842-27e4f400-cb5b-11ea-8296-5407c6024dba.png)\r\n\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Create labelset that does not use RGB labels but uses a grayscale array with each class having a corresponding number\r\n1. Prepare a semantic segmentation model (I used HRNet and DeepLabV3+) for training\r\n1. Load the dataset with these transforms for the labels: \r\n```\r\nlabel_transform = transforms.Compose([\r\n transforms.Resize(downsampling_size, interpolation=Image.NEAREST),\r\n transforms.ToTensor(),\r\n])\r\n```\r\n1. Train the model, plot performance (mIoU is bad even though loss keeps decreasing)\r\n1. Load the dataset with the following different transforms: \r\n```\r\ncustom_to_tensor = lambda a : torch.tensor(np.array(a))\r\nlabel_transform = transforms.Compose([\r\n # Nearest interpolation to keep valid labels\r\n transforms.Resize(downsampling_size, interpolation=Image.NEAREST),\r\n custom_to_tensor,\r\n])\r\n```\r\n1. Train again, realize the performance is way better??\r\n\r\n## Expected behavior\r\n\r\nToTensor() should not lead to such a huge performance loss when using grayscale image masks :(\r\n\r\n## Environment\r\n\r\nUsing pytorch/pytorch:1.5.1-cuda10.1-cudnn7-runtime with these pip dependencies installed:\r\n* tensorboard==2.2.0\r\n* matplotlib==3.2.2\r\n* tensorboardx==2.0\r\n* Pillow==7.2.0\r\n* numpy==1.19.0\r\n* python-box==5.0.1\r\n* pytorch-ignite==0.4.0.post1 \r\n\r\n GPU is Nvidia Quadro P6000, only used one so far\r\n\r\n\r\n## Additional context\r\n\r\nI realize this might not be a bug per definition, but it still threw me for a loop. I absolutely did not expect the simple usage of ToTensor() to hinder the performance this much.\r\n", "url": "https://github.com/pytorch/vision/issues/2497", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2020-07-21T12:07:06Z", "updated_at": "2020-07-30T12:22:02Z", "user": "Areiser" }, { "repo": "pytorch/vision", "number": 2486, "title": "Torchvision Object detection TPU Support", "body": "## \u2753 Torchvision object detection models with TPU.\r\n\r\nMy doubt lies somewhere between feature request and question. hence posting here.\r\n\r\nPyTorch supports TPU through torch_xla. 
It makes it possible to train models over TPU.\r\nI guess most torchvision classification models can be used with transfer learning/training over TPU.\r\n\r\nFor torchvision object detection models, do they support TPU?\r\nSome operations such as `NMS`, `rpn`, `roi_align` do not support TPU and hence I get an error as follows.\r\n\r\nI was trying Faster R-CNN resnet50 fpn model for object detection.\r\n\r\n```\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py\", line 70, in forward\r\n proposals, proposal_losses = self.rpn(images, features, targets)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/rpn.py\", line 493, in forward\r\n boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)\r\n File \"/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/rpn.py\", line 416, in filter_proposals\r\n keep = box_ops.batched_nms(boxes, scores, lvl, self.nms_thresh)\r\nRuntimeError: Cannot access data pointer of Tensor that doesn't have storage\r\n```\r\nMy doubts/concerns/feature request.\r\n1. Do torchvision object detection models support TPU training?\r\n2. Any Plans for TPU support in future releases for these models?\r\n3. Are these ops only CUDA native and GPU/CPU specific? Is there a work-around to train object detection / segmentation models with TPU?\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/2486", "state": "open", "labels": [ "question", "topic: object detection", "new feature" ], "created_at": "2020-07-17T18:47:07Z", "updated_at": "2021-05-02T18:03:08Z", "user": "oke-aditya" }, { "repo": "pytorch/pytorch", "number": 41592, "title": "Add SpectralOps CPU implementation for ARM/PowerPC processors (where MKL is not available)", "body": "## \ud83d\udc1b Bug\r\n\r\n`fft: ATen not compiled with MKL support` RuntimeError thrown when trying to compute Spectrogram on Jetson Nano that uses ARM64 processor.\r\n\r\n## To Reproduce\r\n\r\nCode sample:\r\n```\r\nimport torchaudio\r\n\r\nwaveform, sample_rate = torchaudio.load('test.wav')\r\nspectrogram = torchaudio.transforms.Spectrogram(sample_rate)(waveform)\r\n```\r\n\r\nStack trace:\r\n```\r\nTraceback (most recent call last):\r\n File \"spectrogram_test.py\", line 4, in <module>\r\n spectrogram = torchaudio.transforms.Spectrogram(sample_rate)(waveform)\r\n File \"/home/witty/ai-benchmark-2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/witty/ai-benchmark-2/lib/python3.6/site-packages/torchaudio-0.7.0a0+102174e-py3.6-linux-aarch64.egg/torchaudio/transforms.py\", line 84, in forward\r\n self.win_length, self.power, self.normalized)\r\n File \"/home/witty/ai-benchmark-2/lib/python3.6/site-packages/torchaudio-0.7.0a0+102174e-py3.6-linux-aarch64.egg/torchaudio/functional.py\", line 162, in spectrogram\r\n waveform, n_fft, hop_length, win_length, window, True, \"reflect\", False, True\r\n File \"/home/witty/ai-benchmark-2/lib/python3.6/site-packages/torch/functional.py\", line 465, in stft\r\n return _VF.stft(input, n_fft, hop_length, win_length, window, normalized, onesided)\r\nRuntimeError: fft: ATen not compiled 
with MKL support\r\n```\r\n\r\n## Expected behavior\r\n\r\nSpectrogram from waveform created\r\n\r\n## Environment\r\n\r\nCommands used to install PyTorch:\r\n```\r\nwget https://nvidia.box.com/shared/static/yr6sjswn25z7oankw8zy1roow9cy5ur1.whl -O torch-1.6.0rc2-cp36-cp36m-linux_aarch64.whl\r\nsudo apt-get install python-pip libopenblas-base libopenmpi-dev \r\npip install Cython\r\npip install numpy torch-1.6.0rc2-cp36-cp36m-linux_aarch64.whl\r\n```\r\n\r\nCommands used to install torchaudio:\r\nsox:\r\n```\r\nsudo apt-get update -y\r\nsudo apt-get install -y libsox-dev\r\npip install sox\r\n```\r\n\r\ntorchaudio:\r\n```\r\ngit clone https://github.com/pytorch/audio.git audio\r\ncd audio && python setup.py install\r\n```\r\n\r\n`torchaudio.__version__` output:\r\n`0.7.0a0+102174e`\r\n\r\n`collect_env.py` output:\r\n```\r\nPyTorch version: 1.6.0\r\nIs debug build: No\r\nCUDA used to build PyTorch: 10.2\r\n\r\nOS: Ubuntu 18.04.4 LTS\r\nGCC version: (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0\r\nCMake version: version 3.10.2\r\n\r\nPython version: 3.6\r\nIs CUDA available: Yes\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\ncuDNN version: Probably one of the following:\r\n/usr/lib/aarch64-linux-gnu/libcudnn.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_etc.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.0.0\r\n/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.0.0\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.16.1\r\n[pip3] pytorch-ignite==0.3.0\r\n[pip3] torch==1.6.0\r\n[pip3] torchaudio==0.7.0a0+102174e\r\n[conda] Could not collect\r\n```\r\n\r\n Other relevant information:\r\nMKL is not installed, because it is not supported on ARM processors; oneDNN installed\r\n\r\n## Additional context\r\n\r\nI did not install MKL because it is not supported on ARM processors, so building PyTorch from source with MKL support is not possible. 
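A stopgap I have been considering is to compute the STFT outside of ATen in plain NumPy and convert the result back to a tensor (a rough sketch only; it skips the centering, reflection padding and normalization that `torch.stft` applies, so it is not a drop-in replacement for `torchaudio.transforms.Spectrogram`):\r\n\r\n```python\r\nimport numpy as np\r\nimport torch\r\n\r\ndef naive_spectrogram(waveform, n_fft=400, hop_length=200):\r\n    # waveform: a (1, num_samples) CPU float tensor, assumed longer than n_fft\r\n    signal = waveform.squeeze(0).numpy()\r\n    window = np.hanning(n_fft)\r\n    frames = [\r\n        np.fft.rfft(window * signal[start:start + n_fft])\r\n        for start in range(0, len(signal) - n_fft + 1, hop_length)\r\n    ]\r\n    # magnitude-squared spectrogram, shape (n_fft // 2 + 1, num_frames)\r\n    return torch.from_numpy(np.abs(np.stack(frames, axis=1)) ** 2)\r\n```\r\n\r\n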
Is there any workaround to this problem?\n\ncc @malfet @seemethere @walterddr @mruberry @peterbell10 @ezyang", "url": "https://github.com/pytorch/pytorch/issues/41592", "state": "closed", "labels": [ "module: build", "triaged", "module: POWER", "module: arm", "module: fft", "function request" ], "created_at": "2020-07-17T13:02:43Z", "updated_at": "2021-06-30T23:29:36Z", "user": "arnasRad" }, { "repo": "pytorch/TensorRT", "number": 146, "title": "\u2753 [Question] How to convert at::tensor into nvinfer1::ITensor?", "body": "## \u2753 Question\r\n\r\nhow to convert at::tensor into nvinfer1::ITensor?\r\n\r\n## What you have already tried\r\n\r\n\r\nI tried to run resnet101 using trtorch, however, there was an error when compiling the graph.\r\nAs a result of my analysis\r\n\r\nTRTorch/core/conversion/converters/impl/element_wise.cpp\r\n\r\n```\r\n\"aten::div.Tensor(Tensor self, Tensor other) -> Tensor\",\r\n[](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {\r\n\t// Should implement self / other\r\n\tauto self = args[0].ITensor();\r\n\tauto other = args[1].ITensor();\r\n\tauto div = add_elementwise(ctx, nvinfer1::ElementWiseOperation::kDIV, self, other, util::node_info(n));\r\n\r\n\tTRTORCH_CHECK(div, \"Unable to create div layer from node: \" << *n);\r\n\r\n\tdiv->setName(util::node_info(n).c_str());\r\n\tauto out = ctx->AssociateValueAndTensor(n->outputs()[0], div->getOutput(0));\r\n\r\n\tLOG_DEBUG(\"Output tensor shape: \" << out->getDimensions());\r\n\treturn true;\r\n }\r\n```\r\n\r\nself is the ITensor type\r\nother is the IValue type\r\n\r\nThus, this program exits with an error in determining the type.\r\n\r\n`auto other = args[1].ITensor();`\r\n\r\nI know IValue can be unpacked into at::tensor, however add_elementwise requires nvinfer1::ITensor\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - CPU Architecture: x86_64\r\n - OS (e.g., Linux): Ubuntu\r\n - CUDA version: 10.2 with cudnn 8.0\r\n - GCC/G++: 7.5.0\r\n", "url": "https://github.com/pytorch/TensorRT/issues/146", "state": "closed", "labels": [ "question" ], "created_at": "2020-07-17T12:49:26Z", "updated_at": "2020-07-20T21:26:24Z", "user": "zhanjw" }, { "repo": "pytorch/vision", "number": 2481, "title": "How to use torchvision roi_align?", "body": "I'm confused about the input parameter `boxes` and output of `torchvision.ops.roi_align`. Now I have an input image and one bbox coordinate `[x1, y1, x2, y2]`. 
Does `roi_align` directly return the region determined by the coordinate?\r\n\r\nFor exampe, here is my test code:\r\n\r\n```python\r\nimport torch\r\nfrom torchvision.ops import roi_align\r\n\r\na = torch.Tensor([[i * 6 + j for j in range(6)] for i in range(6)])\r\nprint(a)\r\na = a.unsqueeze(dim=0)\r\n\r\nboxes = [torch.Tensor([[0, 2, 2, 4]])]\r\na = a.unsqueeze(dim=0)\r\n\r\naligned_rois = roi_align(input=a, boxes=boxes, output_size=2)\r\nprint(aligned_rois.shape)\r\nprint(\"aligned_rois:\", aligned_rois)\r\n```\r\n\r\nAnd the result is:\r\n\r\n```\r\ntensor([[ 0., 1., 2., 3., 4., 5.],\r\n [ 6., 7., 8., 9., 10., 11.],\r\n [12., 13., 14., 15., 16., 17.],\r\n [18., 19., 20., 21., 22., 23.],\r\n [24., 25., 26., 27., 28., 29.],\r\n [30., 31., 32., 33., 34., 35.]])\r\ntorch.Size([1, 1, 2, 2])\r\naligned_rois: tensor([[[[15.5000, 16.5000],\r\n [21.5000, 22.5000]]]])\r\n```\r\nWhat I want to know is why the returned region is `[15, 16; 21, 22]`?\r\n\r\nThanks for answering!\r\n", "url": "https://github.com/pytorch/vision/issues/2481", "state": "closed", "labels": [ "question", "module: ops" ], "created_at": "2020-07-17T04:57:15Z", "updated_at": "2021-03-09T01:56:32Z", "user": "xuantengh" }, { "repo": "pytorch/TensorRT", "number": 139, "title": "\ud83d\udc1b [Bug] Fail to build the NVIDIA TRTorch container on AGX device with JetPack 4.4", "body": "## Bug Description\r\n\r\nI was following this page of [instruction](https://github.com/NVIDIA/TRTorch/tree/master/notebooks#1-requirements).\r\n\r\nCommand:\r\n```\r\n$ sudo docker build -t trtorch -f Dockerfile.notebook .\r\n```\r\n\r\nOutput:\r\n```\r\n[sudo] password for nvidia: \r\nSending build context to Docker daemon 44.18MB\r\nStep 1/14 : FROM nvcr.io/nvidia/pytorch:20.03-py3\r\n20.03-py3: Pulling from nvidia/pytorch\r\n423ae2b273f4: Pulling fs layer \r\nde83a2304fa1: Pulling fs layer \r\nf9a83bce3af0: Pulling fs layer \r\nb6b53be908de: Waiting \r\n031ae32ea045: Waiting \r\n2e90bee95401: Waiting \r\n23b28e4930eb: Waiting \r\n440cfb09d608: Waiting \r\n6f3b05de36c6: Waiting \r\nb0444ce283f5: Waiting \r\n8326831bdd40: Waiting \r\n6cb1b0c70efa: Waiting \r\n51bcf8ebb1f7: Waiting \r\n69bbced5c7a2: Waiting \r\n5f6e40c02ff4: Waiting \r\nca7835aa5ed2: Waiting \r\n4c512b1ff8a5: Waiting \r\nd85924290896: Waiting \r\n97bb0d3f884c: Waiting \r\n56a4e3b147c2: Waiting \r\n468df4aef4c6: Waiting \r\n522d2b613df7: Pulling fs layer \r\n7d6417f56587: Pulling fs layer \r\n522d2b613df7: Waiting \r\n7d6417f56587: Waiting \r\n0ccda1e4ca15: Waiting \r\n18244f890475: Waiting \r\nc7986e09dff5: Waiting \r\n2d210642f30c: Waiting \r\nc564a113d3bd: Waiting \r\n44abac184be5: Waiting \r\n61817282129e: Waiting \r\n77b3c5340637: Waiting \r\ne7911ce14988: Waiting \r\n59bc17a4d14a: Waiting \r\n6b2f7c275865: Pull complete \r\n07c633be5574: Pull complete \r\n6d767ce36c21: Pull complete \r\n46bbec03f88b: Pull complete \r\n96da7d87df89: Pull complete \r\nd2663f680b06: Pull complete \r\n0ed7e2db20ab: Pull complete \r\nafd57a3ccf55: Pull complete \r\n19ac17f49e57: Pull complete \r\n2984c7bac0e3: Pull complete \r\ne2244eb6a8e7: Pull complete \r\n070f20eb03a3: Pull complete \r\nf6580f25c383: Pull complete \r\n7cc17e0c99d8: Pull complete \r\naaf5c91bb3d5: Pull complete \r\nc9ad85820d20: Pull complete \r\ne4aaec5cb4a5: Pull complete \r\n3965323727b2: Pull complete \r\n5d75d4272baf: Pull complete \r\n318400c074f7: Pull complete \r\nb5295904374f: Pull complete \r\nb962e5b89d31: Pull complete \r\nfe830d24a0da: Pull complete \r\nDigest: 
sha256:5f7b67b14fed35890e06f8f4907099ed4506fe0d39250aeb10b755ac6a04a0ad\r\nStatus: Downloaded newer image for nvcr.io/nvidia/pytorch:20.03-py3\r\n ---> 16c4987611fa\r\nStep 2/14 : RUN apt update && apt install curl gnupg\r\n ---> Running in 6bf12c661c88\r\nstandard_init_linux.go:211: exec user process caused \"exec format error\" \r\nThe command '/bin/sh -c apt update && apt install curl gnupg' returned a non-zero code: 1\r\n```\r\n\r\nI wonder how can I fix this error?\r\nThank you\r\n\r\nBR,\r\nChieh \r\n\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\nFollow steps from the page of [instruction](https://github.com/NVIDIA/TRTorch/tree/master/notebooks#1-requirements).\r\n\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n- PyTorch Version: 1.15.0\r\n- JetPack Version: 4.4\r\n- How you installed PyTorch: from here\r\n- Python version: 3.6\r\n- CUDA version: 10.2\r\n- GPU models and configuration: AGX jetson device\r\n- TRT version default is 7.1.0.16 on JetPack 4.4\r\n- bazel version: 3.4.0", "url": "https://github.com/pytorch/TensorRT/issues/139", "state": "closed", "labels": [ "question", "platform: aarch64" ], "created_at": "2020-07-16T02:05:18Z", "updated_at": "2020-07-20T06:13:49Z", "user": "chiehpower" }, { "repo": "pytorch/android-demo-app", "number": 76, "title": "how to get outputTensor as 3d float[][][]?", "body": "the output of my network is 3d, but getDataAsFloatArray() can only return a 1d float[]\r\nfloat[] outArr = outputTensor.getDataAsFloatArray();", "url": "https://github.com/pytorch/android-demo-app/issues/76", "state": "open", "labels": [], "created_at": "2020-07-14T06:23:26Z", "updated_at": "2020-08-25T03:46:28Z", "user": "Xiaofeng-life" }, { "repo": "pytorch/vision", "number": 2469, "title": "ImageNet pre-trained model code and hyper-parameters", "body": "Hi,\r\n\r\nIs the code used to trained the torchvision models (especially Resnet) on ImageNet available ? What are the hyper-parameters used ? Did you use specific methods (dropout, weight decay, specific augmentation such as cutout...etc) during training ? \r\n\r\nThank you very much", "url": "https://github.com/pytorch/vision/issues/2469", "state": "closed", "labels": [ "question", "module: reference scripts" ], "created_at": "2020-07-14T05:35:23Z", "updated_at": "2020-07-14T06:58:07Z", "user": "Jobanan" }, { "repo": "pytorch/TensorRT", "number": 132, "title": "Bug about native compilation on NVIDIA Jetson AGX", "body": "## \ud83d\udc1b Bug\r\nAfter I installed the bazel from scratch on AGX device, I directly build it by bazel. 
However, I got the error like below.\r\n\r\n```\r\n$ bazel build //:libtrtorch --distdir third_party/distdir/aarch64-linux-gnu \r\n\r\nStarting local Bazel server and connecting to it...\r\nINFO: Repository trtorch_py_deps instantiated at:\r\n no stack (--record_rule_instantiation_callstack not enabled)\r\nRepository rule pip_import defined at:\r\n /home/nvidia/.cache/bazel/_bazel_nvidia/d7326de2ca76e35cc08b88f9bba7ab43/external/rules_python/python/pip.bzl:51:29: in <toplevel>\r\nERROR: An error occurred during the fetch of repository 'trtorch_py_deps':\r\n pip_import failed: Collecting torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1))\r\n ( Could not find a version that satisfies the requirement torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)\r\nNo matching distribution found for torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1))\r\n)\r\nERROR: no such package '@trtorch_py_deps//': pip_import failed: Collecting torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1))\r\n ( Could not find a version that satisfies the requirement torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)\r\nNo matching distribution found for torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1))\r\n)\r\nINFO: Elapsed time: 8.428s\r\nINFO: 0 processes.\r\nFAILED: Build did NOT complete successfully (0 packages loaded)\r\n```\r\n\r\nIf I used `python3 setup.py install`, I got the error below:\r\n```\r\nrunning install\r\nbuilding libtrtorch\r\nINFO: Build options --compilation_mode, --cxxopt, --define, and 1 more have changed, discarding analysis cache.\r\nINFO: Repository tensorrt instantiated at:\r\n no stack (--record_rule_instantiation_callstack not enabled)\r\nRepository rule http_archive defined at:\r\n /home/nvidia/.cache/bazel/_bazel_nvidia/d7326de2ca76e35cc08b88f9bba7ab43/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in <toplevel>\r\nWARNING: Download from https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.1/tars/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz failed: class java.io.IOException GET returned 403 Forbidden\r\nERROR: An error occurred during the fetch of repository 'tensorrt':\r\n java.io.IOException: Error downloading [https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.1/tars/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz] to /home/nvidia/.cache/bazel/_bazel_nvidia/d7326de2ca76e35cc08b88f9bba7ab43/external/tensorrt/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz: GET returned 403 Forbidden\r\nINFO: Repository libtorch_pre_cxx11_abi instantiated at:\r\n no stack (--record_rule_instantiation_callstack not enabled)\r\nRepository rule http_archive defined at:\r\n /home/nvidia/.cache/bazel/_bazel_nvidia/d7326de2ca76e35cc08b88f9bba7ab43/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in <toplevel>\r\nERROR: /home/nvidia/ssd256/github/TRTorch/core/BUILD:10:11: //core:core depends on @tensorrt//:nvinfer in repository @tensorrt which failed to fetch. 
no such package '@tensorrt//': java.io.IOException: Error downloading [https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.1/tars/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz] to /home/nvidia/.cache/bazel/_bazel_nvidia/d7326de2ca76e35cc08b88f9bba7ab43/external/tensorrt/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz: GET returned 403 Forbidden\r\nERROR: Analysis of target '//cpp/api/lib:libtrtorch.so' failed; build aborted: Analysis failed\r\nINFO: Elapsed time: 18.044s\r\nINFO: 0 processes.\r\nFAILED: Build did NOT complete successfully (0 packages loaded, 62 targets configured)\r\n```\r\n\r\nIs there any idea about this?\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Install bazel from [here](https://github.com/chiehpower/Installation/blob/master/Bazel/README.md)\r\n2. Use this command:\r\n```\r\nbazel build //:libtrtorch --distdir third_party/distdir/aarch64-linux-gnu \r\n```\r\n\r\n## Environment\r\n\r\n> Build information about the TRTorch compiler can be found by turning on debug messages\r\n\r\n - PyTorch Version: 1.15.0\r\n - JetPack Version: 4.4\r\n - How you installed PyTorch: from [here](https://github.com/chiehpower/Installation/tree/master/AGX#install-pytorch) \r\n - Python version: 3.6\r\n - CUDA version: 10.2\r\n - GPU models and configuration: AGX jetson device\r\n- TRT version default is `7.1.0.16` on JetPack 4.4\r\n- bazel version: 3.4.0\r\n\r\nThank you\r\n\r\nBR,\r\nChieh", "url": "https://github.com/pytorch/TensorRT/issues/132", "state": "closed", "labels": [ "documentation", "question", "platform: aarch64" ], "created_at": "2020-07-14T03:30:25Z", "updated_at": "2020-07-17T18:02:12Z", "user": "chiehpower" }, { "repo": "pytorch/pytorch", "number": 41328, "title": "How to transform from input points to rendered image ", "body": "", "url": "https://github.com/pytorch/pytorch/issues/41328", "state": "closed", "labels": [], "created_at": "2020-07-13T06:15:35Z", "updated_at": "2020-07-14T02:50:59Z", "user": "Gaozhongpai" }, { "repo": "pytorch/pytorch", "number": 41309, "title": "How to make build_pytorch_android.sh running with python 3 on mac?", "body": "Hi all, I was trying to following the tutrials from https://pytorch.org/mobile/android/#building-pytorch-android-from-source.\r\n\r\nwhen I run the \r\n\r\n> git clone https://github.com/pytorch/pytorch.git\r\n> cd pytorch\r\n> sh ./scripts/build_pytorch_android.sh\r\n\r\nIt reports me the error such that \r\n\r\n> File \"/Users/huanghenglin/pytorch/tools/shared/module_loader.py\", line 12, in import_module\r\n> from importlib.machinery import SourceFileLoader\r\n> ImportError: No module named machinery\r\n\r\nI guessn this issue was caused by the scrip run the module_loader.py with python 2.\r\n\r\nI already set the python 3 as my default python by adding the following code on .zshrc\r\n\r\n> export PATH=${PATH}:/usr/local/opt/python@3.8/libexec/bin\r\n> alias python=\"/usr/local/opt/python@3.8/libexec/bin/python\"\r\n\r\nand the code \r\n\r\n> from importlib.machinery import SourceFileLoader\r\n\r\nruns ok on terminal's python.\r\n\r\nthe end part of report from the secipt:\r\n\r\n> [ 63%] Generating ../../torch/csrc/autograd/generated/Functions.cpp, ../../torch/csrc/jit/generated/generated_unboxing_wrappers_0.cpp, ../../torch/csrc/jit/generated/generated_unboxing_wrappers_1.cpp, ../../torch/csrc/jit/generated/generated_unboxing_wrappers_2.cpp, ../../torch/csrc/autograd/generated/Functions.h, 
../../torch/csrc/autograd/generated/variable_factories.h, ../../torch/csrc/autograd/generated/python_functions.cpp, ../../torch/csrc/autograd/generated/python_variable_methods.cpp, ../../torch/csrc/autograd/generated/python_torch_functions.cpp, ../../torch/csrc/autograd/generated/python_nn_functions.cpp, ../../torch/csrc/autograd/generated/python_functions.h\r\n> Traceback (most recent call last):\r\n> File \"tools/setup_helpers/generate_code.py\", line 118, in <module>\r\n> main()\r\n> File \"tools/setup_helpers/generate_code.py\", line 113, in main\r\n> options.force_schema_registration,\r\n> File \"tools/setup_helpers/generate_code.py\", line 34, in generate_code\r\n> from tools.autograd.gen_autograd import gen_autograd, gen_autograd_python\r\n> File \"/Users/huanghenglin/pytorch/tools/autograd/gen_autograd.py\", line 30, in <module>\r\n> from .utils import YamlLoader, split_name_params, signature_without_args\r\n> File \"/Users/huanghenglin/pytorch/tools/autograd/utils.py\", line 15, in <module>\r\n> CodeTemplate = import_module('code_template', 'aten/src/ATen/code_template.py').CodeTemplate\r\n> File \"/Users/huanghenglin/pytorch/tools/shared/module_loader.py\", line 12, in import_module\r\n> from importlib.machinery import SourceFileLoader\r\n> ImportError: No module named machinery\r\n> make[2]: *** [../torch/csrc/autograd/generated/Functions.cpp] Error 1\r\n> make[1]: *** [caffe2/CMakeFiles/torch_cpu.dir/all] Error 2\r\n> make: *** [all] Error 2\r\n> \r\n\r\n\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/41309", "state": "closed", "labels": [], "created_at": "2020-07-11T15:54:32Z", "updated_at": "2020-07-12T02:46:13Z", "user": "hehedaozuiteng" }, { "repo": "pytorch/TensorRT", "number": 130, "title": "Can't compile python package", "body": "I am able to compile the CXX API, but the python package fails with the error:\r\n\r\n```\r\nfatal error: NvInfer.h: No such file or directory\r\n #include \"NvInfer.h\"\r\n ^~~~~~~~~~~\r\ncompilation terminated.\r\n```\r\nA quick search confirms that ``NvInfer.h`` is not in the repo, so I assume it is part of LibTorch / cuDNN / TRT, so I suspect bazel has an issue with the location of one of these, but I find it strange that I can compile the C++ API, so I was wondering if the python package is currently building correctly and, if so, what I can do to troubleshoot this.\r\nMy OS is Ubuntu 20.04, python 3.8 inside a conda environment, with bazel 3.3.1, cuda 10.2, TensorRT 7.1.3.4 and cuDNN 8.0.1.13 \r\n", "url": "https://github.com/pytorch/TensorRT/issues/130", "state": "closed", "labels": [ "question", "component: build system", "No Activity" ], "created_at": "2020-07-10T05:28:08Z", "updated_at": "2020-08-18T00:06:25Z", "user": "IgnacioJPickering" }, { "repo": "pytorch/vision", "number": 2449, "title": "Custom Weights for Pytorch Hub for yolo v5", "body": "\r\nHello\r\nJust wanted to know if there a way of import yolo v5 model using PyTorch Hub and then loading my custom weights on top of it.", "url": "https://github.com/pytorch/vision/issues/2449", "state": "closed", "labels": [ "question", "module: models", "module: hub" ], "created_at": "2020-07-10T04:47:29Z", "updated_at": "2020-07-10T06:49:43Z", "user": "sakshamjn" }, { "repo": "pytorch/xla", "number": 2328, "title": "What is tracker.rate() and tracker.global_rate()", "body": "## \u2753 Questions and Help\r\nHello, I am still trying pytorch tpu. In the pytorch tpu mnist colab tutorial. 
It uses tracker.rate() and tracker.global_rate(). What are these two things? Thank you!", "url": "https://github.com/pytorch/xla/issues/2328", "state": "closed", "labels": [], "created_at": "2020-07-08T13:22:35Z", "updated_at": "2020-07-09T08:19:03Z", "user": "sharkdeng" }, { "repo": "pytorch/text", "number": 874, "title": "how to keep tracking the record using original id?", "body": "## \u2753 Questions and Help\r\n\r\nHi, I have a dataset and each record has its own id and some meta info. I want to keep tracking the record using id so that I know which output is for which record. I tried use Filed but it give the error, TypeError: '<' not supported between instances of 'Example' and 'Example'\r\n\r\n`\r\n\r\nsrc = data.Field(\r\n sequential=True,\r\n tokenize=tokenize_en,\r\n pad_first=True,\r\n lower=True,\r\n # fix_length=fix_length,\r\n include_lengths=True,\r\n init_token='<SOS>',\r\n eos_token='<EOS>'\r\n )\r\n raw_data = data.TabularDataset(\r\n path=data_path, format='csv',\r\n train='data.csv',\r\n fields=[\r\n ('src', src),\r\n ('id', data.Field()),\r\n ('type', data.Field())\r\n ])\r\n src.build_vocab(\r\n raw_data,\r\n #max_size=20000,\r\n # min_freq=2,\r\n #vectors=vectors\r\n )\r\n\r\n`", "url": "https://github.com/pytorch/text/issues/874", "state": "open", "labels": [], "created_at": "2020-07-08T12:26:00Z", "updated_at": "2020-07-08T12:26:00Z", "user": "Marvinmw" }, { "repo": "pytorch/pytorch", "number": 41065, "title": "How to use (torch.utils.data.DataLoader) in android? ", "body": " Now , I try running PSENet in Android . Project urls : https://github.com/whai362/PSENet \r\nIts testcode need \uff08torch.utils.data.DataLoader\uff09\u3002 you can look PSENet Project .> test_ic15.py 72 lines \r\nI have torch==1.4.0 change PSENet.pth ==> PSENet.pt and model load in Android is OK\u3002But\uff0c\r\nnext I don't know what to do.\r\n I want a little alittle translation the PSENet testcode in Android \u3002\r\n Sorry\uff0cmy English is very poor, if you can, give me some android advice\r\n", "url": "https://github.com/pytorch/pytorch/issues/41065", "state": "closed", "labels": [ "triaged", "module: android", "oncall: mobile" ], "created_at": "2020-07-07T08:28:30Z", "updated_at": "2020-07-08T20:35:26Z", "user": "Micla-SHL" }, { "repo": "pytorch/pytorch", "number": 41064, "title": "When using _MultiProcessingDataLoaderIter in Dataloader, how to add a filelock in Dataset to make the file io thread-safety? ", "body": "## \u2753 Questions and Help\r\nWhen I use DataLoader to load a dataset consisted of several files, I find when I cannot set the `num_workers > 0` because it will occurs a Error `TypeError: function takes exactly 5 arguments (1 given)`.\r\n\r\nWhen I set `shuffle = True` into DataLoader, this Error when occur randomly (e.g. 
I will train it for several epochs and the Error happens), however, when I set the `shuffle = False`, the error will appear in the first several minibatch.\r\n\r\nI'm very sure that the bug is from the _MultiProcessingDataLoaderIter in Dataloader and the data I've prepared is correct, because if I set the `num_workers = 0`, my code can finished the training process.\r\n\r\n### Code Details\r\nI offer some details here, and hope someone can help me \ud83d\ude2d\r\n\r\nThis is the Dataset:\r\n```python\r\nclass ChunkDataset(Dataset):\r\n def __init__(self, feat_scp_file, chunk_size_range=(100, 500)):\r\n super(ChunkDataset, self).__init__()\r\n self.feat_scp_file = feat_scp_file\r\n\r\n self.feature_reader = SynchronizedFeatureReader(self.feat_scp_file)\r\n self.utt_list = self.feature_reader.get_utt_list()\r\n self.min_chunk_size = chunk_size_range[0]\r\n self.max_chunk_size = chunk_size_range[1]\r\n\r\n def __len__(self):\r\n return len(self.feature_reader)\r\n\r\n def __getitem__(self, item):\r\n utt_id = self.utt_list[item]\r\n feat = self.feature_reader[utt_id]\r\n feat_len = feat.shape[0]\r\n\r\n chunk_size = random.randint(self.min_chunk_size, self.max_chunk_size)\r\n chunk_start = random.randint(0, max(0, feat_len - chunk_size))\r\n\r\n return feat[chunk_start: min(chunk_start + chunk_size, feat_len), :]\r\n```\r\nThe key part in it is the SynchronizedFeatureReader: ( I wrapper the data reader many times because other function need it ,not just for pytorch)\r\n```python\r\nclass SynchronizedFeatureReader(object):\r\n def __init__(self, scp_file):\r\n self.scp_file = scp_file\r\n self.feat_dict = ScriptReader(scp_file)\r\n\r\n def _load(self, utt_id):\r\n return self.feat_dict[utt_id]\r\n\r\n def __len__(self):\r\n return len(self.feat_dict)\r\n\r\n def __getitem__(self, item):\r\n return self.feat_dict[item]\r\n\r\n def __iter__(self):\r\n for (utt_id, feat) in self.feat_dict:\r\n yield utt_id, feat\r\n\r\n def get_utt_list(self):\r\n return self.feat_dict.index_keys\r\n```\r\n\r\nAnd finally, you can see how I read the data:\r\n```python\r\nclass ScriptReader(Reader):\r\n def __init__(self, ark_scp):\r\n self.fmgr = dict()\r\n def addr_processor(addr):\r\n addr_token = addr.split(\":\")\r\n if len(addr_token) == 1:\r\n raise ValueError(\"Unsupported scripts address format\")\r\n path, offset = \":\".join(addr_token[0:-1]), int(addr_token[-1])\r\n return (path, offset)\r\n\r\n super(ScriptReader, self).__init__(ark_scp,\r\n value_processor=addr_processor)\r\n\r\n def __del__(self):\r\n for name in self.fmgr:\r\n self.fmgr[name].close()\r\n\r\n def _open(self, obj, addr):\r\n if obj not in self.fmgr:\r\n self.fmgr[obj] = open(obj, \"rb\")\r\n arkf = self.fmgr[obj]\r\n arkf.seek(addr)\r\n return arkf\r\n\r\n def _load(self, key):\r\n path, addr = self.index_dict[key]\r\n fd = self._open(path, addr) \r\n obj = io.read_float_mat_vec(fd, direct_access=True) \r\n return obj\r\n```\r\nI have to explain here the `io.read_float_mat_vec` is writtern by myself, it will read the first two bytes to make sure the `fd.seek()` is right. The assert is like:\r\n```python\r\ndef expect_binary(fd):\r\n flags = bytes.decode(fd.read(2))\r\n throw_on_error(flags == '\\0B', f'Expect binary flag, but gets {flags}')\r\n```\r\nand you will find the the flags will be wrong when dataloader run.\r\n\r\nThe `scp_file` I used is like this , It's come from other code which is not important for this issue, I think. 
The format is like:\r\n```\r\na0001 file.vec.ark:9\r\na0002 file.vec.ark:2076\r\na0003 file.vec.ark:4143\r\na0004 file.vec.ark:6210\r\na0005 file.vec.ark:8277\r\na0006 file.vec.ark:10344\r\n......\r\n```\r\n\r\nThe Error log is:\r\n```\r\nTraceback (most recent call last):\r\n File \"TestFeatureReader.py\", line 172, in <module>\r\n main()\r\n File \"TestFeatureReader.py\", line 168, in main\r\n process.test_data(tr_dataloader)\r\n File \"/home/lycheng/workspace/corecode/Python/SRE-Pytorch-Tools/process/test_process.py\", line 31, in test_data\r\n for index, (data, label) in enumerate(data_loader):\r\n File \"/home/work_nfs2/lycheng/env/anaconda3/anaconda_py36/lib/python3.6/site-packages/torch/utils/data/dataloader.py\", line 345, in __next__\r\n data = self._next_data()\r\n File \"/home/work_nfs2/lycheng/env/anaconda3/anaconda_py36/lib/python3.6/site-packages/torch/utils/data/dataloader.py\", line 856, in _next_data\r\n return self._process_data(data)\r\n File \"/home/work_nfs2/lycheng/env/anaconda3/anaconda_py36/lib/python3.6/site-packages/torch/utils/data/datal", "url": "https://github.com/pytorch/pytorch/issues/41064", "state": "closed", "labels": [ "module: dataloader", "triaged" ], "created_at": "2020-07-07T07:18:47Z", "updated_at": "2020-07-12T03:19:32Z", "user": "GeekOrangeLuYao" }, { "repo": "pytorch/vision", "number": 2400, "title": "DownSample", "body": "https://github.com/pytorch/vision/blob/86b6c3e22e9d7d8b0fa25d08704e6a31a364973b/torchvision/models/resnet.py#L195\r\n\r\nWhy don't we need a downsample in this loop??", "url": "https://github.com/pytorch/vision/issues/2400", "state": "closed", "labels": [ "question" ], "created_at": "2020-07-07T03:17:34Z", "updated_at": "2020-07-07T09:21:55Z", "user": "jianjiandandande" }, { "repo": "pytorch/examples", "number": 799, "title": "How to use my own backbone\uff1f", "body": "", "url": "https://github.com/pytorch/examples/issues/799", "state": "closed", "labels": [], "created_at": "2020-07-06T03:31:29Z", "updated_at": "2022-03-09T21:37:54Z", "user": "wangbin2018" }, { "repo": "pytorch/vision", "number": 2393, "title": "Mask R-CNN: get all the parts and train specific ones", "body": "Hi,\r\nI would like to access all the different parts of Mask R-CNN in order to only train some of them.\r\nI learnt in the discussion forum that I can use `requires_grad` to enable/disable training, but how can I access all the `trainable` parts?\r\nThanks,\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/2393", "state": "closed", "labels": [ "question" ], "created_at": "2020-07-05T09:21:15Z", "updated_at": "2020-07-07T09:33:13Z", "user": "FiReTiTi" }, { "repo": "pytorch/vision", "number": 2391, "title": "How to Change All BN layers to GN layers?", "body": "i tried this : \r\n\r\n```\r\nimport torchvision.models as models\r\nmodel = models.resnet18()\r\n\r\n#then this : \r\n\r\nfor name, module in model.named_modules():\r\n if isinstance(module, nn.BatchNorm2d):\r\n # Get current bn layer\r\n bn = getattr(model, name)\r\n # Create new gn layer\r\n gn = nn.GroupNorm(1, bn.num_features)\r\n # Assign gn\r\n print('Swapping {} with {}'.format(bn, gn))\r\n setattr(model, name, gn)\r\n\r\nprint(model)\r\n```\r\n\r\nand it gives this error :\r\n\r\n```\r\nSwapping BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) with GroupNorm(1, 64, eps=1e-05, affine=True)\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-26-dc2f23e093cc> 
in <module>\r\n 2 if isinstance(module, nn.BatchNorm2d):\r\n 3 # Get current bn layer\r\n----> 4 bn = getattr(model, name)\r\n 5 # Create new gn layer\r\n 6 gn = nn.GroupNorm(1, bn.num_features)\r\n\r\n/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)\r\n 592 return modules[name]\r\n 593 raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n--> 594 type(self).__name__, name))\r\n 595 \r\n 596 def __setattr__(self, name, value):\r\n\r\nAttributeError: 'ResNet' object has no attribute 'layer1.0.bn1'\r\n```", "url": "https://github.com/pytorch/vision/issues/2391", "state": "closed", "labels": [ "invalid" ], "created_at": "2020-07-04T09:52:25Z", "updated_at": "2020-07-07T09:31:44Z", "user": "mobassir94" }, { "repo": "pytorch/vision", "number": 2390, "title": "Excessive memory consumption while using DistributedDataParallel ", "body": "## \ud83d\udc1b Bug\r\n\r\nTL;DR : While using `DistributedDataParallel` and multiple GPU, memory consumption on each GPU seems to be more than twice as much as what is observed when without using `DistributedDataParallel` on a single GPU.\r\n \r\n## To Reproduce\r\n\r\nI have been using FasterRCNN from Torchvision\u2019s models that uses DistributedDataParallel for training. However, I find that while using multiple GPU, the memory consumption is far more than without multiple GPU. Here is my code\r\n\r\n```\r\nkwargs = {}\r\n kwargs['min_size'] = args.min_size\r\n kwargs['max_size'] = args.max_size\r\n model = ModifiedFRCNN(cfg=cfg, custom_anchor=args.custom_anchor,\r\n use_def=args.use_def, cpm=args.cpm,\r\n default_filter=args.default_filter,\r\n soft_nms=args.soft_nms,\r\n upscale_r=args.upscale_r, **kwargs).cuda().eval()\r\n model = restore_network(model)\r\n model_without_ddp = model\r\n dataset = GenData(args.test_dataset,\r\n args.base_path,\r\n dataset_param=None,\r\n train=False)\r\n\r\n if args.n_gpu > 1:\r\n init_distributed_mode(args)\r\n model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu],\r\n find_unused_parameters=True)\r\n model_without_ddp = model.module\r\n sampler = torch.utils.data.distributed.DistributedSampler(dataset)\r\n batch_sampler = torch.utils.data.BatchSampler(sampler,\r\n args.batch_size,\r\n drop_last=True)\r\n data_loader = torch.utils.data.DataLoader(dataset,\r\n batch_sampler=batch_sampler,\r\n num_workers=args.num_workers,\r\n collate_fn=coco_collate)\r\n metric_logger = MetricLogger(delimiter=\" \")\r\n header = 'Valid:'\r\n batch_iterator = metric_logger.log_every(data_loader, 100, header)\r\n else:\r\n model = model.cuda()\r\n data_loader = iter(data.DataLoader(dataset, args.batch_size, shuffle=False,\r\n num_workers=args.num_workers,\r\n collate_fn=coco_collate))\r\n batch_iterator = iter(data_loader)\r\n```\r\n`ModifiedFRCNN` is a class that inherits `FasterRCNN` to make trivial changes, such as parameter, postprocessing etc.\r\nCase 1 : When n_gpu=1, I am able to use a batch size of upto 8.\r\nCase 2 : When n_gpu=4, I am unable to even use a batch size of 1.\r\n\r\nBoth the above mentioned cases are on same the GPU, 2080Ti. 
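\r\n\r\nTo quantify the gap, one thing that could be logged per process is the CUDA allocator statistics (a small sketch, not something from my original runs; the calls are the standard `torch.cuda` memory APIs):\r\n\r\n```python\r\nimport torch\r\n\r\ndef log_gpu_memory(tag):\r\n    # Per-process allocator statistics for the current CUDA device, in MiB.\r\n    alloc = torch.cuda.memory_allocated() / 2 ** 20\r\n    peak = torch.cuda.max_memory_allocated() / 2 ** 20\r\n    print(f'[{tag}] allocated={alloc:.1f} MiB  peak={peak:.1f} MiB', flush=True)\r\n```\r\n\r\nCalling this right after the forward pass in both the single-GPU and the DDP run should show where the extra allocations appear (one suspect would be the gradient buckets DDP allocates at wrap time; another is autograd state if the evaluation loop is not inside `torch.no_grad()`, but these are only guesses).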
\r\n\r\n## Expected behavior\r\n\r\nConsume comparable memory if not equal on each GPUs as the case of training on a single GPU.\r\n\r\n## Environment\r\n```\r\nCollecting environment information...\r\nPyTorch version: 1.2.0\r\nIs debug build: No\r\nCUDA used to build PyTorch: 10.0.130\r\n\r\nOS: Debian GNU/Linux 10 (buster)\r\nGCC version: (Debian 8.3.0-6) 8.3.0\r\nCMake version: version 3.13.4\r\n\r\nPython version: 3.7\r\nIs CUDA available: Yes\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: \r\nGPU 0: GeForce RTX 2080 Ti\r\nGPU 1: GeForce RTX 2080 Ti\r\n\r\nNvidia driver version: 430.14\r\ncuDNN version: Could not collect\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.19.0\r\n[pip3] torch==1.2.0\r\n[pip3] torchvision==0.4.0\r\n```\r\n## Additional context\r\n\r\nThe command I use to launch\r\n\r\n```\r\npython -m torch.distributed.launch --nproc_per_node=4 --use_env test.py <other_arguments> --world_size 4 --n_gpu 4\r\n```\r\n\r\n## PS\r\n\r\nI have posted this issue [here](https://discuss.pytorch.org/t/excessive-memory-consumption-while-using-distributeddataparallel/87568) and since I did not receive any response, I was not sure whether the place where I posted this was correct, hence re-posting here. Apologies if that shouldn't be done.\r\n\r\nThank you!", "url": "https://github.com/pytorch/vision/issues/2390", "state": "closed", "labels": [ "question" ], "created_at": "2020-07-04T06:46:25Z", "updated_at": "2020-07-07T09:44:35Z", "user": "Sentient07" }, { "repo": "pytorch/examples", "number": 797, "title": "train from last weight", "body": "Can this project continue training from the last saved weight\uff1fI trained one epoch with seven hours.and now I want to train on it basis", "url": "https://github.com/pytorch/examples/issues/797", "state": "open", "labels": [ "help wanted" ], "created_at": "2020-07-02T12:47:38Z", "updated_at": "2022-03-10T00:06:38Z", "comments": 1, "user": "Muxindawang" }, { "repo": "pytorch/pytorch", "number": 40855, "title": "Don't know how to translate op Conv", "body": "(sent here from https://github.com/onnx/onnx/issues/2822)\r\n\r\nI'm only seeing this error on Windows. It's working fine on Linux (Docker).\r\n\r\nI can't find any other issues or documentation, but I get the impression that the op registry is not populated fully. Is there some kind of setup that I need to go through? Prerequisite installation needed?\r\n\r\nI'm working on Windows 10, python 3.6, I've installed `onnx==1.7.0` indirectly with pip by installing pytorch according to the [getting started page instructions at pytorch's website](https://pytorch.org/get-started/locally/): `pip install torch==1.4.0 torchvision==0.5.0 -f https://download.pytorch.org/whl/torch_stable.html`\r\n\r\n```\r\n.venv\\lib\\site-packages\\caffe2\\python\\onnx\\backend.py:713: in prepare\r\n init_net, predict_net = cls._onnx_model_to_caffe2_net(model, device, opset_version, False)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\ncls = <class 'caffe2.python.onnx.backend.Caffe2Backend'>\r\nonnx_model = ir_version: 3\r\nproducer_name: \"pytorch\"\r\nproducer_version: \"0.4\"\r\ngraph {\r\n node {\r\n input: \"0\"\r\n input: \"1\"\r\n outp... dim {\r\n dim_value: 512\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\nopset_import {\r\n domain: \"\"\r\n version: 9\r\n}\r\n\r\ndevice = 'CPU', opset_version = 9, include_initializers = False\r\n\r\n# <snip>\r\n\r\nE RuntimeError: ONNX conversion failed, encountered 69 errors:\r\n\r\n# <snip>\r\n\r\nE . 
Exception: [enforce fail at ..\\caffe2\\onnx\\backend.cc:1426] . Don't know how to translate op Conv\r\nE (no backtrace available)\r\n```", "url": "https://github.com/pytorch/pytorch/issues/40855", "state": "closed", "labels": [], "created_at": "2020-07-01T08:40:42Z", "updated_at": "2020-07-01T14:42:50Z", "user": "Korijn" }, { "repo": "pytorch/examples", "number": 795, "title": "Under the Mnist-Hogwild framework, how to use multi-gpu computing\uff1f", "body": "When I execute the code example of mnist_hogwild, I find that multiple processes are running parallelly on one gpu. Question: Can multiple processes be executed in parallel on multiple GPUs?", "url": "https://github.com/pytorch/examples/issues/795", "state": "open", "labels": [ "distributed" ], "created_at": "2020-06-29T08:21:04Z", "updated_at": "2022-03-09T20:56:18Z", "user": "Wang-Zhenxing" }, { "repo": "pytorch/tutorials", "number": 1038, "title": "Simplify numpy function call in object detection tutorial", "body": "In the second code block in the tutorial, the line `pos = np.where(masks[i])` has been used to get the indices of the non zero points in the image. But [numpy documentation for `np.where()`](https://numpy.org/doc/1.18/reference/generated/numpy.where.html) advises to use [`np.nonzero()`](https://numpy.org/doc/1.18/reference/generated/numpy.nonzero.html) when there is only one argument for `np.where()`, and it also makes the code more readable.\r\n", "url": "https://github.com/pytorch/tutorials/issues/1038", "state": "closed", "labels": [ "torchvision", "docathon-h1-2023", "easy" ], "created_at": "2020-06-23T09:30:21Z", "updated_at": "2023-10-05T17:20:12Z", "comments": 4, "user": "ashok-arjun" }, { "repo": "pytorch/pytorch", "number": 40257, "title": "How to get pytorch 1.4?", "body": "Pytorch 1.4 is not in this list https://pytorch.org/get-started/previous-versions/\r\n\r\nI tried to replace the 1.2 to 1.4 as below, but still it didnt work\r\n`conda install pytorch==1.4.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch`", "url": "https://github.com/pytorch/pytorch/issues/40257", "state": "closed", "labels": [], "created_at": "2020-06-19T00:42:18Z", "updated_at": "2020-06-19T03:44:09Z", "user": "ivder" }, { "repo": "pytorch/tutorials", "number": 1033, "title": "A PR to fix typos failed build/deploy (#1001)", "body": "I corrected some typos in chatbot_tutorial.py and opened a pull request #1001 .\r\nOnly texts written in a comment of a .py file were modified, but I got build fail.\r\nIs there any guideline to cope with such case?\r\nI don't know why but some PR like #1017, which just corrects a typo, was successfully built.", "url": "https://github.com/pytorch/tutorials/issues/1033", "state": "closed", "labels": [], "created_at": "2020-06-18T14:17:47Z", "updated_at": "2021-06-07T21:51:03Z", "comments": 1, "user": "lewha0" }, { "repo": "pytorch/vision", "number": 2329, "title": "A problem of multiclassifier task with Squeezenet trained on VOC2012", "body": "I got a problem when I dealed with a multiclassifier task with squeezenent on VOC2012. I just wrote a train code, and called the '''torchversion.models.squeezenet1_1''', changed num_classes. I used '''torch.nn.MultiLabelSoftMarginLoss()''' for my loss function. However, my loss never changed when I trained my network. If there is someone having same problem like me, and having some specific soluation, please help me. please! 
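\r\n\r\nOne detail that might matter: a loss that flat-lines at ~0.693 (the value in the log below, which is exactly ln 2) with `MultiLabelSoftMarginLoss` usually means the logits are stuck near zero, i.e. predicted probabilities near 0.5 for every class. A quick sanity check (a sketch only; `model`, `images` and `targets` are placeholders from my training loop) is to confirm the targets are multi-hot float vectors and that no sigmoid is applied before the loss:\r\n\r\n```python\r\nimport torch\r\n\r\ncriterion = torch.nn.MultiLabelSoftMarginLoss()\r\nlogits = model(images)                    # raw scores; the criterion applies the sigmoid itself\r\nprint(logits.shape, targets.shape)        # both should be (batch_size, 20) for VOC2012\r\nprint(targets.dtype, targets.min().item(), targets.max().item())  # expect a float tensor of 0s and 1s\r\nloss = criterion(logits, targets)\r\n```\r\n\r\n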
Thank you~\r\n```\r\nEpoch: [ 0/2000] step: 0, Loss: 0.754, mAP 26.93%\r\nEpoch: [ 0/2000] step: 20, Loss: 0.693, mAP 7.48%\r\nEpoch: [ 0/2000] step: 40, Loss: 0.693, mAP 6.65%\r\nEpoch: [ 0/2000] step: 60, Loss: 0.693, mAP 6.43%\r\nEpoch: [ 0/2000] step: 80, Loss: 0.693, mAP 6.39%\r\nEpoch: [ 0/2000] step: 100, Loss: 0.693, mAP 6.55%\r\nEpoch: [ 0/2000] step: 120, Loss: 0.693, mAP 6.83%\r\n```\r\n", "url": "https://github.com/pytorch/vision/issues/2329", "state": "closed", "labels": [ "invalid", "question" ], "created_at": "2020-06-18T08:48:09Z", "updated_at": "2020-07-07T15:07:28Z", "user": "JiahangWu" }, { "repo": "pytorch/pytorch", "number": 40165, "title": "How to replace a parameter with other variable while keeping the backpropagation?", "body": "For example, now I have a parameter 'D' in the model.\r\n\r\nNow I want to replace the 'D' with 'C', where 'C = a+b'. Is there anyway in pytorch that can achieve that replacement while keeping the backpropagation between 'C' and 'a+b'. (e.g., training the model will update the value of 'a' and 'b'.\r\n\r\nI've tried D.data = C, but obviously that just changed the value and violate the backpropagation. Besides, '.copy' and '.clone' didn't work either.", "url": "https://github.com/pytorch/pytorch/issues/40165", "state": "closed", "labels": [], "created_at": "2020-06-17T14:41:18Z", "updated_at": "2020-06-17T22:04:06Z", "user": "kunwuz" }, { "repo": "pytorch/tutorials", "number": 1032, "title": "Adversarial example generation by FGSM: different normalization of training vs test images?", "body": "In the Adversarial example generation tutorial the classifier from https://github.com/pytorch/examples/tree/master/mnist is used. However, this classifier is trained with input normalization `transforms.Normalize((0.1307,), (0.3081,))` while in the FGSM tutorial no normalization is used and the perturbed images are clamped to [0,1] - is this not a contradiction?", "url": "https://github.com/pytorch/tutorials/issues/1032", "state": "closed", "labels": [ "docathon-h1-2023", "medium" ], "created_at": "2020-06-17T13:09:32Z", "updated_at": "2023-06-12T20:41:43Z", "comments": 3, "user": "hookxs" }, { "repo": "pytorch/text", "number": 828, "title": "How to fix the order of data in iterator during training step?", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n<!-- Please send questions or ask for help here. -->\r\n\r\nCurrently, I'm running experiments with several datasets in torchtext, and I just found that I can't reproduce my experiments although I excluded all the possible randomness as following:\r\n\r\n torch.manual_seed(seed)\r\n torch.cuda.manual_seed(seed)\r\n torch.cuda.manual_seed_all(seed)\r\n random.seed(seed)\r\n np.random.seed(seed)\r\n torch.backends.cudnn.benchmark = False\r\n torch.backends.cudnn.deterministic = True\r\n\r\nI found that, when Iterator class is initialized, `RandomShuffler()` defined in torchtext.data.utils is set as a `self.random_shuffler`, and this is used to shuffle data in training dataset. However, although one can set random state of `RandomShuffler` by feeding it as an argument of it, the line `self.random_shuffler = RandomShuffler()` doesn't let us to manually set the random state of it. Am I right? 
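One workaround I can think of (a sketch only; `train_dataset` and the batch size are placeholders, and it assumes the current `torchtext.data` Iterator API) would be to overwrite the shuffler on an already-constructed iterator with an explicitly seeded one:\r\n\r\n```python\r\nimport random\r\nfrom torchtext.data import BucketIterator\r\nfrom torchtext.data.utils import RandomShuffler\r\n\r\ntrain_iter = BucketIterator(train_dataset, batch_size=32, shuffle=True, train=True)\r\n# Replace the internally created shuffler with one whose state comes from a fixed seed\r\ntrain_iter.random_shuffler = RandomShuffler(random.Random(1234).getstate())\r\n```\r\n\r\nI am not sure whether that is intended usage, though. 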
Is there a way to fix the order of data for training step?", "url": "https://github.com/pytorch/text/issues/828", "state": "open", "labels": [ "new datasets and building blocks" ], "created_at": "2020-06-17T06:23:45Z", "updated_at": "2020-06-29T19:32:14Z", "user": "seewoo5" }, { "repo": "pytorch/vision", "number": 2325, "title": "pytorch pre-trained models preprocessing results 9 images", "body": "I am using vgg16 and for preprocessing I use transforms module (as used in the documentation)\r\nand I don't know why, but when it takes my image as input, it outputs 9 small copy of the input image and combines them into one single image (nonetheless the output is correct)\r\n\r\nis it a problem?", "url": "https://github.com/pytorch/vision/issues/2325", "state": "closed", "labels": [ "question" ], "created_at": "2020-06-16T20:24:38Z", "updated_at": "2020-06-19T10:00:17Z", "user": "aliamiri1380" }, { "repo": "pytorch/tutorials", "number": 1029, "title": "questions about \"CHATBOT TUTORIAL\", some meaningless words at the end of the genertated sentences.", "body": "n_iteration set to 8000, the results as follow:\r\n\r\nD:\\ProgramData\\Anaconda3\\envs\\pytorch\\python.exe \"some/3evaluate_use.py\"\r\nBuilding encoder and decoder ...\r\nModels built and ready to go!\r\n> \u4e0d\u660e\u767d\u4f60\u8bf4\u5565\u9ebb\u70e6\u60a8\u8001\u8bf4\u660e\u767d\u70b9\r\nBuilding prefix dict from the default dictionary ...\r\nLoading model from cache some\\jieba.cache\r\nLoading model cost 0.777 seconds.\r\nPrefix dict has been built successfully.\r\nBot: \u8fd9 \u90fd \u4f60 \u4e5f \u592a\u7b28 \u4e86 \u70b9 \u5427 \u5427 \u54e6 \u5440 \u8bf4 \u554a \uff1f\r\n> \u4f60\u5728\u5e72\u561b\r\nBot: \u9664\u4e86 \u804a\u5929 \u8fd8\u662f \u804a\u5929 \u804a\u5929 \u5440 \u7fa4\u4e3b \u5440 \u7fa4\u4e3b \u5440\r\n> \u4f60\u662f\u50bb\u903c\u5417\uff1f\r\nBot: \u4efb\u4f55 \u4eba\u5de5\u667a\u80fd \u90fd \u654c\u4e0d\u8fc7 \u9601\u4e0b \u8fd9 \u6b3e \u5929\u7136 \u50bb\u74dc \u50bb \u903c \uff0c \u8fd8\u6015 \u5f53\u56de \u50bb \u903c\r\n> \u5367\u69fd\r\nBot: \u5927\u54e5 \uff0c \u4e70\u65a4 \u7d20\u8d28 \u586b\u8865 \u4e00\u4e0b \u4f60 \u7325\u7410 \u7684 \u5927\u8111 \uff01 \u6ca1\u6cd5 \uff01 \u6253 \u4ed6 \u7684 \uff01\r\n> \u54c8\u54c8\u54c8\u54c8\r\nBot: \u522b\u7b11 \u4e86 \uff0c \u9732\u51fa \u4e00\u5634 \u7684 \u9ec4\u7259 \u3002 \u4eb2\u7231 \u7684 \u3002 \u9a82 \u4f60 \u3002\r\n> \u6211\u559c\u6b22\u4f60\r\nBot: \u4f60 \u5230\u5e95 \u559c\u6b22 \u6211 \u4ec0\u4e48 \uff0c \u6211 \u6539 \u8fd8 \u4e0d\u6210 \u5417 \uff1f \u884c \u554a \uff01 \u4e0a \u4e86\r\n> \u6211\u60f3\u4f60\r\nBot: \u4e00\u5b9a \u8981 \u591a \u60f3 \u54e6 \uff0c \u9664\u4e86 wc \u4ee5\u5916 \u7684 \u65f6\u95f4 \u90fd \u8981 \u60f3 \u6211 \u54e6 \uff01\r\n> \u6211\u4eec\u4e00\u8d77\u79c1\u5954\u5427\r\nBot: \u4f60 \u5435 \u4e0d\u8fc7 \u6211 \u7684 \uff01 \u5462 \u54df \u54df \u3002 \u4eb2\u7231 \u7684 \u3002 \u4eca\u5929\r\n> \u62dc\u62dc\r\nBot: \u55ef \u554a \uff0c \u518d\u89c1 \uff0c \u6211\u4f1a \u60f3 \u4f60 \u7684 \u3002 \u4eb2\u7231 \u7684 \uff01\r\n> \u4f60\u559c\u6b22\u6211\u5417\r\nBot: \u5f88 \u559c\u6b22 \u5f88 \u559c\u6b22 \u5566 \u6ef4 \u4e8b\u60c5 \u4e8b\u60c5 \u4e8b\u60c5 \u4e8b\u60c5 \u4e8b\u60c5 \u4e8b\u60c5\r\n> \u4f60\u559c\u6b22\u6211\u5417\uff1f\r\nBot: \u5f88 \u559c\u6b22 \u5f88 \u559c\u6b22 \u5566 \u6ef4 \uff01 \u732a\u732a \uff01 \u6253 \uff01 \u6253 \uff01 \u6253\r\n> \u4f60\u5403\u90a3\u4e48\u591a\r\nBot: \u662f \u554a \uff0c \u597d\u591a \u3002 \u559c\u6b22 \u3002 \u559c\u6b22 \u3002 \u4eb2\u7231 \u7684 \u3002 
\u3002\r\n> \u4f60\u5403\u592a\u591a\u4e86\r\nBot: \u662f \u554a \uff0c \u597d\u591a \u3002 \u559c\u6b22 \u3002 \u4eb2\u7231 \u7684 \u3002 \u3002 \u3002 \u3002\r\n> \u4f60\u957f\u592a\u80d6\u4e86\r\nBot: \u4e0d\u8981 \u7dca \uff0c \u6709 \u6211 \uff01 \u770b \uff01 \uff01 \u6253 \uff01 \u6253 \uff01\r\n> \u4f60\u653e\u5c41\r\nBot: \u4f60 \u624d \u653e\u5c41 \u5462 \u3002 \u53ef\u80fd \u5927\u6982 \u662f \u54e5 \u3002\r\n> \u4f60\u4e0d\u662f\u4eba\r\nBot: \u662f \u554a \u3002 \u5927\u65b9 3 \u3002 \u53ef\u80fd \u5927\u6982 \u662f \u5c41\u80a1 \uff01 \u6253\r\n> \u4f60\u662f\u771f\u7684\u50bb\r\nBot: \u4f60 \u624d \u50bb \u4e86 \u50bb \u903c \u5416 \u3002 \u4eb2\u7231 \u7684\r\n> \u7d2f\u4e0d\u7d2f\r\nBot: \u7d2f \u554a \uff0c \u7d2f \u5f97 \u591f\u545b \u7684 \u8bf4 \u5440 \u8bf4 \u3002 \u7325\u7410\r\n> \u7ea6\u4e48\uff1f\r\nBot: \u95ee \u5929\u95ee \u5730 \uff0c \u8bf4 \u8bf4 \u4f60 \uff0c \u8bf4 \u8bf4 \u6211 \uff0c \u6211 \u5c31 \u8bf4 \u561b\r\n> \u4f60\u592a\u53ef\u7231\u4e86\r\nBot: \u563b\u563b \uff0c \u4f60 \u4e5f \u5f88 \u53ef\u7231 \u7684 \u8bf4 \u554a \u3002 \u4eb2\u7231 \u7684\r\n> \u5927\u957f\u817f\r\nError: Encountered unknown word.\r\n> \u597d\u6f02\u4eae\r\nBot: \u5bf9 \u554a \uff0c \u5bf9 \u554a \uff0c \u6211 \u7231 \u7f8e\u5973 \uff0c \u7f8e\u5973 \u7231 \u6211 \uff01 \u7231 \u4f60 \uff01\r\n> \u4f60\u51fa\u53bb\r\nBot: \u6211\u4f1a \u7ed9 \u4f60 \u5b89\u5168 \u4e48 3 \u3002 \u68d2 \u3002\r\n> \u6eda\r\nBot: \u8be5 \u6eda \u7684 \u662f \u7f51\u53cb \u5427 \u7684 \u8bf4 \uff0c \u73b0\u5728 \u662f \u5728 \u8868\u626c \u4f60 \u4e00\u65a4\r\n> \r\n\r\n\r\n\r\n\r\nThe words marked in the red square are meaningless words.\r\n\r\n\r\n![1111](https://user-images.githubusercontent.com/3395025/84722935-0f5f4800-afb7-11ea-97db-c2b9bdfeec74.png)\r\n\r\nIn such situation, why this happened, and how to improve this?\r\n\r\nThe average loss from 9.6302 to 0.3282\r\n![2222](https://user-images.githubusercontent.com/3395025/84722941-1423fc00-afb7-11ea-9945-a39011cb12c0.png)\r\n\r\n\r\n......\r\n\r\n![333](https://user-images.githubusercontent.com/3395025/84722944-18e8b000-afb7-11ea-9eb4-29f042cc157a.png)\r\n\r\n Any big guns can do me a favor, thank you!", "url": "https://github.com/pytorch/tutorials/issues/1029", "state": "closed", "labels": [ "Text" ], "created_at": "2020-06-16T01:53:46Z", "updated_at": "2023-03-17T20:02:26Z", "comments": 3, "user": "jobsfan" }, { "repo": "pytorch/xla", "number": 2225, "title": "How to call tensor.item() on single proc after a collective op ?", "body": "## \u2753 Questions and Help\r\n\r\nHi, I'm trying to log tensor value by calling `res[0].item()` in a single process after `all_reduce` on this tensor. 
Execution seems to hang.\r\n\r\nTo reproduce:\r\n```python\r\nimport torch\r\nimport torch_xla.core.xla_model as xm\r\nimport torch_xla.distributed.xla_multiprocessing as xmp\r\n\r\n\r\ndef test_tensor_item(index):\r\n\r\n xm.rendezvous('init')\r\n print(index, \"test_tensor_item\")\r\n\r\n device = xm.xla_device()\r\n rank = xm.get_ordinal()\r\n t = torch.tensor([rank + 0.0, rank + 1.0, rank + 2.0], device=device)\r\n\r\n res = xm.all_reduce(\"sum\", t)\r\n print(index, res, flush=True)\r\n\r\n xm.rendezvous('sync')\r\n if index == 0:\r\n print(index, res[0].item(), flush=True)\r\n\r\nxmp.spawn(test_tensor_item, args=(), nprocs=8, start_method='fork')\r\n```\r\n\r\nAny hints, please.\r\nThanks ", "url": "https://github.com/pytorch/xla/issues/2225", "state": "closed", "labels": [ "stale" ], "created_at": "2020-06-15T12:15:45Z", "updated_at": "2020-07-25T08:02:46Z", "user": "vfdev-5" }, { "repo": "pytorch/serve", "number": 459, "title": "How to avoid contention between models, workers and runtime parallelism?", "body": "Hi! this is a question, not an issue\r\n\r\nI see that TorchServe can serve multiple models or multiple workers per model. For example the [AWS blog](https://aws.amazon.com/blogs/machine-learning/deploying-pytorch-models-for-inference-at-scale-using-torchserve/) says \"If your model is hosted on a CPU with many cores such as the c5.24xlarge EC2 instance with 96 vCPUs, you can easily scale the number of threads by using the method described previously\"\r\n\r\nSo I see possibly up to 3 things competing for cores:\r\n\r\n- multiple models\r\n- multiple workers per model\r\n- multiple threads of the inference runtime running in each worker\r\n\r\nIs that understanding correct? How is TorchServe handling that triple level of parallelism? 
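One lever that seems available (a sketch with a hypothetical handler class and a placeholder thread count; I have not verified it is the recommended approach) is to cap intra-op threads per worker inside a custom handler, so that workers times threads does not oversubscribe the cores:\r\n\r\n```python\r\nimport torch\r\nfrom ts.torch_handler.base_handler import BaseHandler\r\n\r\nclass ThreadCappedHandler(BaseHandler):\r\n    def initialize(self, context):\r\n        # Keep (workers per model) x (intra-op threads per worker) within the physical core count\r\n        torch.set_num_threads(4)   # 4 is a placeholder; tune per instance type\r\n        super().initialize(context)\r\n```\r\n\r\n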
Is there any best practice or settings to tune?", "url": "https://github.com/pytorch/serve/issues/459", "state": "closed", "labels": [ "question", "triaged_wait" ], "created_at": "2020-06-15T09:07:21Z", "updated_at": "2020-10-22T04:06:27Z", "user": "la-cruche" }, { "repo": "pytorch/pytorch", "number": 40016, "title": "how to load weights when using torch.nn.parallel.DistributedDataParallel?", "body": "platform: linux 16.04 ;python==3.8.2, pytorch==1.4.0-gpu\r\n\r\nStand-alone multi-card\r\n\r\nI try to load weights when using torch.nn.parallel.DistributedDataParallel to load model, There have be wrong.\r\n\r\n model = torch.nn.parallel.DistributedDataParallel(model,device_ids=[args.local_rank],output_device=args.local_rank,find_unused_parameters=True)\r\n File \"/home/jiashuaihe/anaconda2/envs/torch1.1/lib/python3.8/site-packages/torch/nn/parallel/distributed.py\", line 301, in __init__\r\n self._distributed_broadcast_coalesced(\r\n File \"/home/jiashuaihe/anaconda2/envs/torch1.1/lib/python3.8/site-packages/torch/nn/parallel/distributed.py\", line 485, in _distributed_broadcast_coalesced\r\n dist._broadcast_coalesced(self.process_group, tensors, buffer_size)\r\nRuntimeError: Broken pipe\r\nTraceback (most recent call\r\n![image](https://user-images.githubusercontent.com/50036961/84616510-c350cc80-aefe-11ea-99c5-69b18c3fc676.png)\r\n\r\n\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar", "url": "https://github.com/pytorch/pytorch/issues/40016", "state": "closed", "labels": [ "needs reproduction", "oncall: distributed", "triaged" ], "created_at": "2020-06-15T03:53:06Z", "updated_at": "2020-06-18T02:52:03Z", "user": "aboy2018" }, { "repo": "pytorch/examples", "number": 790, "title": " DCGAN kernel_size ", "body": "DCGAN kernel_size why is 4,or why not 3(Isn't odd number more common)", "url": "https://github.com/pytorch/examples/issues/790", "state": "closed", "labels": [], "created_at": "2020-06-14T02:46:12Z", "updated_at": "2022-03-09T21:38:43Z", "comments": 1, "user": "yixiyixi5" }, { "repo": "pytorch/serve", "number": 456, "title": "How to use management API in Sagemaker? e.g how to change batch size", "body": "Hi,\r\n\r\nIs it possible to use management api to a sagemaker deployed model?\r\n\r\nI am trying to increase the batch size but I don't know if it is doable in sagemaker.\r\n\r\nCan we just customise it (batch size) through config.properties so it will apply when sagemaker deploy the model ?\r\n\r\nThanks\r\n", "url": "https://github.com/pytorch/serve/issues/456", "state": "closed", "labels": [], "created_at": "2020-06-13T02:06:41Z", "updated_at": "2020-06-14T23:28:44Z", "user": "bananemure" }, { "repo": "pytorch/TensorRT", "number": 98, "title": "What does it all mean Bazel?", "body": "Please specify what version to Bazel this needs to be built with? Also please make sure you can actually compile that version for aarch64 on the Jetpacks Nvidia provides for its products. If can't be compiled on aarch64 please fix this. Nvidia should really do a better job of making sure its stuff is able to be compiled on its Jetpacks. It would be nice if Nvidia would not use build systems which cannot be easily installed on their provided Jetpacks. 
There is not simple command to install bazel.", "url": "https://github.com/pytorch/TensorRT/issues/98", "state": "closed", "labels": [ "question" ], "created_at": "2020-06-12T16:50:08Z", "updated_at": "2020-06-13T00:04:07Z", "user": "oasisgunter" }, { "repo": "pytorch/pytorch", "number": 39939, "title": "How to resolve this issue in pycharm? ERROR: Could not find a version that satisfies the requirement torch>=1.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) I have installed through command promt but still it is showing same issue as before.", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/39939", "state": "closed", "labels": [], "created_at": "2020-06-12T10:05:42Z", "updated_at": "2020-06-12T10:10:52Z", "user": "RizwanShaukat936" }, { "repo": "pytorch/pytorch", "number": 39936, "title": "How to deploy C++ LibTorch in Windows XP 32bit? ", "body": "I want to deploy a CNN model in Windows XP 32 bit, and here are my operations:\r\n\r\ncompile LibTorch-1.4.0 32bit with VS2017;\r\nfinish the C++11 code and the .exe run successfully in win10 and win7 32bit;\r\nThe code fails in XP which reports \u201cMSVCP140.dll is invalid\u201d.\r\nI want to use VS2015_XP to compile and avoid the error in XP. However, the LibTorch uses C++11/14 which is not supported by VS2015. So, what should I do to run successfully in XP 32bit? \r\n\r\nNeed help! TAT", "url": "https://github.com/pytorch/pytorch/issues/39936", "state": "closed", "labels": [], "created_at": "2020-06-12T07:37:04Z", "updated_at": "2020-06-12T14:29:35Z", "user": "SakuraRiven" }, { "repo": "pytorch/TensorRT", "number": 96, "title": "Error when trying to build with compiled binaries", "body": "I am building an application with TRTorch precompiled binaries, and I am able to compile full precision and half precision graphs successfully.\r\n\r\nI run into build errors while trying to compile the int8 graph \r\nas long as I include this line\r\n```\r\nauto calibrator = trtorch::ptq::make_int8_calibrator(std::move(calibration_dataloader), calibration_cache_file, true);\r\n```\r\nthe build error\r\n```\r\nIn file included from /home/tsai/TRTorchSample/../trtorch/include/trtorch/trtorch.h:38,\r\n from /home/tsai/TRTorchSample/main.cpp:4:\r\n/home/tsai/TRTorchSample/../trtorch/include/trtorch/ptq.h: In instantiation of \u2018trtorch::ptq::Int8Calibrator<Algorithm, DataLoaderUniquePtr>::Int8Calibrator(DataLoaderUniquePtr, const string&, bool) [with Algorithm = nvinfer1::IInt8EntropyCalibrator2; DataLoaderUniquePtr = std::unique_ptr<torch::data::StatelessDataLoader<torch::data::datasets::MapDataset<torch::data::datasets::MapDataset<datasets::CIFAR10, Resize>, torch::data::transforms::Normalize<> >, torch::data::samplers::RandomSampler>, std::default_delete<torch::data::StatelessDataLoader<torch::data::datasets::MapDataset<torch::data::datasets::MapDataset<datasets::CIFAR10, Resize>, torch::data::transforms::Normalize<> >, torch::data::samplers::RandomSampler> > >; std::string = std::__cxx11::basic_string<char>]\u2019:\r\n/home/tsai/TRTorchSample/../trtorch/include/trtorch/trtorch.h:430:12: required from \u2018trtorch::ptq::Int8Calibrator<Algorithm, DataLoader> 
trtorch::ptq::make_int8_calibrator(DataLoader, const string&, bool) [with Algorithm = nvinfer1::IInt8EntropyCalibrator2; DataLoader = std::unique_ptr<torch::data::StatelessDataLoader<torch::data::datasets::MapDataset<torch::data::datasets::MapDataset<datasets::CIFAR10, Resize>, torch::data::transforms::Normalize<> >, torch::data::samplers::RandomSampler>, std::default_delete<torch::data::StatelessDataLoader<torch::data::datasets::MapDataset<torch::data::datasets::MapDataset<datasets::CIFAR10, Resize>, torch::data::transforms::Normalize<> >, torch::data::samplers::RandomSampler> > >; std::string = std::__cxx11::basic_string<char>]\u2019\r\n/home/tsai/TRTorchSample/main.cpp:77:121: required from here\r\n/home/tsai/TRTorchSample/../trtorch/include/trtorch/ptq.h:55:13: error: no matching function for call to \u2018std::vector<at::Tensor>::push_back(<unresolved overloaded function type>)\u2019\r\n 55 | batched_data_.push_back(batch.data);\r\n | ^~~~~~~~~~~~~\r\nIn file included from /usr/include/c++/9/vector:67,\r\n from /home/tsai/libtorch/include/c10/util/StringUtil.h:11,\r\n from /home/tsai/libtorch/include/c10/util/Exception.h:5,\r\n from /home/tsai/libtorch/include/c10/core/Device.h:5,\r\n from /home/tsai/libtorch/include/c10/core/Allocator.h:6,\r\n from /home/tsai/libtorch/include/ATen/ATen.h:3,\r\n from /home/tsai/libtorch/include/torch/csrc/api/include/torch/types.h:3,\r\n from /home/tsai/libtorch/include/torch/script.h:3,\r\n from /home/tsai/TRTorchSample/main.cpp:1:\r\n/usr/include/c++/9/bits/stl_vector.h:1184:7: note: candidate: \u2018void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = at::Tensor; _Alloc = std::allocator<at::Tensor>; std::vector<_Tp, _Alloc>::value_type = at::Tensor]\u2019\r\n 1184 | push_back(const value_type& __x)\r\n | ^~~~~~~~~\r\n/usr/include/c++/9/bits/stl_vector.h:1184:35: note: no known conversion for argument 1 from \u2018<unresolved overloaded function type>\u2019 to \u2018const value_type&\u2019 {aka \u2018const at::Tensor&\u2019}\r\n 1184 | push_back(const value_type& __x)\r\n | ~~~~~~~~~~~~~~~~~~^~~\r\n/usr/include/c++/9/bits/stl_vector.h:1200:7: note: candidate: \u2018void std::vector<_Tp, _Alloc>::push_back(std::vector<_Tp, _Alloc>::value_type&&) [with _Tp = at::Tensor; _Alloc = std::allocator<at::Tensor>; std::vector<_Tp, _Alloc>::value_type = at::Tensor]\u2019\r\n 1200 | push_back(value_type&& __x)\r\n | ^~~~~~~~~\r\n/usr/include/c++/9/bits/stl_vector.h:1200:30: note: no known conversion for argument 1 from \u2018<unresolved overloaded function type>\u2019 to \u2018std::vector<at::Tensor>::value_type&&\u2019 {aka \u2018at::Tensor&&\u2019}\r\n 1200 | push_back(value_type&& __x)\r\n | ~~~~~~~~~~~~~^~~\r\nmake[2]: *** [CMakeFiles/TRTorchSample.dir/build.make:63: CMakeFiles/TRTorchSample.dir/main.cpp.o] Error 1\r\nmake[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/TRTorchSample.dir/all] Error 2\r\nmake: *** [Makefile:84: all] Error 2\r\n\r\n```\r\n\r\nCMakeLists.txt\r\n\r\n```\r\ncmake_minimum_required(VERSION 3.10)\r\n\r\nproject(TRTorchSample)\r\nenable_language(CUDA)\r\n\r\nfind_package(Torch REQUIRED)\r\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}\")\r\nset(CUDA_TOOLKIT_ROOT_DIR \"/usr/local/cuda\")\r\nset(CUDA_INCLUDE_DIRS \"/usr/local/cuda/include\")\r\n\r\nadd_executable(TRTorchSample main.cpp cifar10.cpp cifar10.h)\r\n\r\ntarget_link_libraries(TRTorchSample \"${TORCH_LIBRARIES}\")\r\ntarget_link_libraries(TRTorchSample \"${PROJECT_SOURCE_DIR", "url": "https://github.com/pytorch/TensorRT/issues/96", "state": 
"closed", "labels": [ "question", "No Activity" ], "created_at": "2020-06-12T03:17:18Z", "updated_at": "2020-07-19T00:03:52Z", "user": "tsaizhenling" }, { "repo": "pytorch/tutorials", "number": 1022, "title": "math text size is too small", "body": "![BlitzMathTooSmall](https://user-images.githubusercontent.com/20859781/84378673-2c37fc00-ac02-11ea-9cc6-80764e5ea082.png) \r\nI cannot easily read what's written in the equations. The Math is rendering very small on Blitz page. \r\n[Here](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html). How can I help correct this?\r\n", "url": "https://github.com/pytorch/tutorials/issues/1022", "state": "closed", "labels": [], "created_at": "2020-06-11T11:14:34Z", "updated_at": "2021-06-07T22:28:11Z", "comments": 7, "user": "PradeepSinghMakwana" }, { "repo": "pytorch/pytorch", "number": 39778, "title": "How to Build Stable PyTorch (not from master) from source and output Wheel? ", "body": "Currently, in the PyTorch docs the suggested way of building from source includes the following main commands:\r\n\r\n```bash\r\ngit clone --recursive https://github.com/pytorch/pytorch\r\ncd pytorch\r\nexport CMAKE_PREFIX_PATH=${CONDA_PREFIX:-\"$(dirname $(which conda))/../\"}\r\npython setup.py install\r\n```\r\nThe clone command gets the PyTorch repo and the next command builds PyTorch in a Conda environment.\r\n\r\nThe problem here is that when the build starts it uses the `master` branch, which is not the stable (currently builds unstable `torch-1.6.0a0+8a6914d`). How do I checkout and build a stable version? The docs don't mention how to do this\r\n\r\nSecondly, `python setup.py install` builds in conda environment, it works fine, but there is no way to port that build to other machines, I would like to create a **pip wheel**, instead, how should I do that?\r\n\r\nAny help will be greatly appreciated, thanks. 
", "url": "https://github.com/pytorch/pytorch/issues/39778", "state": "closed", "labels": [], "created_at": "2020-06-10T13:15:19Z", "updated_at": "2020-06-10T13:43:16Z", "user": "RafayAK" }, { "repo": "pytorch/xla", "number": 2191, "title": "How to aggregate per-process statistics in xmp.spawn?", "body": "## \u2753 Questions and Help\r\n\r\nI am using the idiom:\r\n\r\n```\r\nxmp.spawn(_mp_fn, args=(), nprocs=1, start_method='fork')\r\n```\r\nwhere my `_mp_fn` calls `xm.optimizer_step(optim)`\r\n\r\nIs there any way to combine the other statistics (loss, various stats on gradients etc) across processes and report just the aggregate for each minibatch?\r\n\r\nAt the moment, the `_mp_fn` just prints out its local values for loss etc, which don't reflect the merged gradients used to update the model.\r\n\r\nThanks!\r\n\r\nHenry\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/2191", "state": "closed", "labels": [], "created_at": "2020-06-10T06:01:33Z", "updated_at": "2020-06-12T00:21:38Z", "user": "hrbigelow" }, { "repo": "pytorch/TensorRT", "number": 90, "title": "Issues When Using Compiled Binaries", "body": "After compiling TRTorch on an x86 machine, and copying the outputted binaries to another machine, then using them in an include directory, I get the following error when compiling my code:\r\n```\r\n/usr/bin/ld: warning: libnvinfer.so.7, needed by /home/caelin/Github/br-core/ros2_ws/src/br-detection/include/trtorch/lib/libtrtorch.so, not found (try using -rpath or -rpath-link)\r\n/usr/bin/ld: warning: libopencv_imgcodecs.so.3.2, needed by /opt/ros/eloquent/lib/libcv_bridge.so, may conflict with libopencv_imgcodecs.so.4.2\r\n/usr/bin/ld: warning: libopencv_imgproc.so.3.2, needed by /opt/ros/eloquent/lib/libcv_bridge.so, may conflict with libopencv_imgproc.so.4.2\r\n/usr/bin/ld: warning: libopencv_core.so.3.2, needed by /opt/ros/eloquent/lib/libcv_bridge.so, may conflict with libopencv_core.so.4.2\r\n/usr/bin/ld: warning: libopencv_calib3d.so.3.2, needed by /opt/ros/eloquent/lib/libimage_geometry.so, may conflict with libopencv_calib3d.so.4.2\r\n/home/caelin/Github/br-core/ros2_ws/src/br-detection/include/trtorch/lib/libtrtorch.so: undefined reference to `createInferBuilder_INTERNAL'\r\n/home/caelin/Github/br-core/ros2_ws/src/br-detection/include/trtorch/lib/libtrtorch.so: undefined reference to `createInferRuntime_INTERNAL'\r\n```", "url": "https://github.com/pytorch/TensorRT/issues/90", "state": "closed", "labels": [ "question", "No Activity" ], "created_at": "2020-06-10T00:39:26Z", "updated_at": "2020-09-11T00:05:16Z", "user": "caelinsutch" }, { "repo": "pytorch/pytorch", "number": 39641, "title": "What is the difference between torch.mean and caffe2 ReduceMean?", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nI manually convert the model from Caffe2 to Pytorch. I built the full architecture of the model in Pytorch using weights from a Caffe2. In Caffe2, the model has a ReduceMean layer, which in Pytorch I replaced with torch.mean. As a result of the replacement, the difference in the calculations turned out to be too large, which does not allow to complete the conversion successfully.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. 
[input_data.txt](https://github.com/pytorch/pytorch/files/4742801/input_data.txt)\r\n1.\r\n ```\r\nimport numpy as np\r\nimport torch\r\nfrom caffe2.python import workspace, core\r\n\r\ndata = np.load(\"input_data.npy\")\r\n\r\n#caffe2\r\nop_reduce_mean = core.CreateOperator(\"ReduceMean\", [\"X_reduce\"], [\"Y_reduce\"], axes=(3,), keepdims=1)\r\nworkspace.ResetWorkspace()\r\nworkspace.FeedBlob(\"X_reduce\", data)\r\nworkspace.RunOperatorOnce(op_reduce_mean)\r\nshape_reduce_mean_caffe2 = workspace.FetchBlob(\"Y_reduce\").shape\r\ndata_reduce_mean_caffe2 = workspace.FetchBlob(\"Y_reduce\")\r\ndata_reduce_mean_caffe2 = np.array(data_reduce_mean_caffe2, dtype=np.float32).reshape(data_reduce_mean_caffe2.shape)\r\nprint(data_reduce_mean_caffe2) # -0.4089698, -0.5118571, -0.5328341, -0.50671, ... , -0.5756652, -0.38777262, -0.43768662, -0.49657446\r\n\r\n#pytorch\r\ntorch_data = torch.from_numpy(data)\r\ndata_mean_torch = torch.mean(torch_data, dim=(3,), keepdim=True)\r\nprint(data_mean_torch) # -0.4089695, -0.5118583, -0.532835, -0.50670993, ... , -0.57566583, -0.38777304, -0.43768588, -0.49657464\r\n\r\n#numpy\r\ndata_mean_numpy = np.mean(data, asix=(3,), keepdims=True) # -0.40896943, -0.5118579, -0.53283495, -0.50670964, ..., -0.5756654, -0.38777274, -0.43768603, -0.49657455\r\nprint(data_mean_numpy)\r\n```\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\nThe same results.\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): 1.5\r\n - OS (e.g., Linux): Windows 10\r\n - How you installed PyTorch (`conda`, `pip`, source): conda\r\n - Build command you used (if compiling from source):\r\n - Python version: 3.7\r\n - CUDA/cuDNN version: No\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/pytorch/issues/39641", "state": "closed", "labels": [], "created_at": "2020-06-07T19:58:45Z", "updated_at": "2020-06-08T18:15:33Z", "user": "dryarullin" }, { "repo": "pytorch/TensorRT", "number": 84, "title": "TRTorch on torchvision ResNet152", "body": "Hi, \r\n\r\nI tried the script below and get error messages and a segmentation fault. 
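On the `torch.mean` vs Caffe2 `ReduceMean` comparison just above: the reported values differ only at the sixth or seventh significant digit, which is what float32 reductions with different accumulation orders typically produce (note also that the NumPy call in the snippet passes `asix=` where `axis=` is intended). A small sketch of a tolerance-based check, using random stand-in data since the attached `input_data.npy` isn't reproduced here:

```python
import numpy as np
import torch

data = np.random.rand(1, 8, 16, 32).astype(np.float32)  # stand-in for input_data.npy

# High-precision reference computed in float64.
ref = np.mean(data.astype(np.float64), axis=3, keepdims=True)

torch_mean = torch.mean(torch.from_numpy(data), dim=3, keepdim=True).numpy()
np_mean = np.mean(data, axis=3, keepdims=True)

# Bit-for-bit equality between backends is not expected for float32 reductions;
# agreement within a small tolerance is the realistic criterion.
print(np.max(np.abs(torch_mean - ref)), np.max(np.abs(np_mean - ref)))
print(np.allclose(torch_mean, np_mean, rtol=1e-5, atol=1e-6))
```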
Is this a bug or am I doing it wrong?\r\n\r\n`import copy\r\nimport itertools\r\nimport logging\r\nimport numpy as np\r\nimport os\r\nimport sys\r\n\r\nimport torch\r\nimport torchvision.models\r\nimport trtorch\r\n\r\ndef torchvision_benchmark(): \r\n os.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\"\r\n os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0,1,2\"\r\n\r\n B, C, H, W = 1, 3, 224, 224\r\n\r\n pytorch_model = torchvision.models.resnet152(pretrained=True)\r\n print(f\"Started torch.jit.script() ...\", end='')\r\n torchscript_model = torch.jit.script(copy.deepcopy(pytorch_model))\r\n print(f\" done.\")\r\n\r\n compile_settings = {\r\n \"input_shapes\": [[1, 3, 224, 224]],\r\n \"op_precision\": torch.float\r\n }\r\n\r\n for i, (k, v) in enumerate(torchscript_model.named_parameters()):\r\n print(k, v.shape)\r\n if i > 10:\r\n break \r\n\r\n _ = torch.jit.script(copy.deepcopy(pytorch_model).eval())\r\n graph_lines = str(_.inlined_graph).split('\\n')\r\n ls, le = 0, 20\r\n for l in graph_lines[ls:le]:\r\n print(l)\r\n print(f\"Started trtorch.compile() ...\", end='')\r\n trt_ts_module = trtorch.compile(_, compile_settings)\r\n print(f\" done.\")\r\n`\r\n\r\nI get following results:\r\n\r\n> Started torch.jit.script() ... done.\r\nconv1.weight torch.Size([64, 3, 7, 7])\r\nbn1.weight torch.Size([64])\r\nbn1.bias torch.Size([64])\r\nlayer1.0.conv1.weight torch.Size([64, 64, 1, 1])\r\nlayer1.0.bn1.weight torch.Size([64])\r\nlayer1.0.bn1.bias torch.Size([64])\r\nlayer1.0.conv2.weight torch.Size([64, 64, 3, 3])\r\nlayer1.0.bn2.weight torch.Size([64])\r\nlayer1.0.bn2.bias torch.Size([64])\r\nlayer1.0.conv3.weight torch.Size([256, 64, 1, 1])\r\nlayer1.0.bn3.weight torch.Size([256])\r\nlayer1.0.bn3.bias torch.Size([256])\r\ngraph(%self : __torch__.torchvision.models.resnet.ResNet,\r\n %x.1 : Tensor):\r\n %3 : int = prim::Constant[value=-1]()\r\n %4 : int = prim::Constant[value=1]() # /opt/conda/lib/python3.7/site-packages/torchvision-0.7.0a0+34810c0-py3.7-linux-x86_64.egg/torchvision/models/resnet.py:214:29\r\n %5 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name=\"conv1\"](%self)\r\n %6 : Tensor = prim::GetAttr[name=\"weight\"](%5)\r\n %7 : int = prim::Constant[value=3]() # /opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py:346:24\r\n %8 : int = prim::Constant[value=1]() # /opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py:344:38\r\n %9 : int = prim::Constant[value=2]() # /opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py:343:47\r\n %10 : Tensor? 
= prim::GetAttr[name=\"bias\"](%5)\r\n %11 : int[] = prim::ListConstruct(%9, %9)\r\n %12 : int[] = prim::ListConstruct(%7, %7)\r\n %13 : int[] = prim::ListConstruct(%8, %8)\r\n %x.3 : Tensor = aten::conv2d(%x.1, %6, %10, %11, %12, %13, %8) # /opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py:345:15\r\n %15 : __torch__.torch.nn.modules.batchnorm.BatchNorm2d = prim::GetAttr[name=\"bn1\"](%self)\r\n %16 : Function = prim::Constant[name=\"batch_norm\"]()\r\n %17 : float = prim::Constant[value=1.0000000000000001e-05]() # /opt/conda/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py:106:40\r\n %18 : bool = prim::Constant[value=0]() # /opt/conda/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py:94:11\r\n %19 : bool = prim::Constant[value=1]() # /opt/conda/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py:94:29\r\n %exponential_average_factor.126 : float = prim::Constant[value=0.10000000000000001]() # /opt/conda/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py:92:41\r\nERROR: [TRTorch Conversion Context] - %791 : Tensor = aten::_convolution(%x.1, %self.conv1.weight, %self.conv1.bias, %10, %9, %8, %788, %789, %12, %788, %788, %790): kernel weights has count 9408 but 441 was expected\r\nERROR: [TRTorch Conversion Context] - %791 : Tensor = aten::_convolution(%x.1, %self.conv1.weight, %self.conv1.bias, %10, %9, %8, %788, %789, %12, %788, %788, %790): count of 9408 weights in kernel, but kernel dimensions (7,7) with 3 input channels, 3 output channels and 1 groups were specified. Expected Weights count is 3 * 7*7 * 3 / 1 = 441\r\nERROR: [TRTorch Conversion Context] - %791 : Tensor = aten::_convolution(%x.1, %self.conv1.weight, %self.conv1.bias, %10, %9, %8, %788, %789, %12, %788, %788, %790): kernel weights has coun\r\nt 9408 but 441 was expected\r\nERROR: [TRTorch Conversion Context] - %791 : Tensor = aten::_convolution(%x.1, %self.conv1.weight, %self.conv1.bias, %10, %9, %8, %788, %789, %12, %788, %788, %790): count of 9408 weights i\r\nn kernel, but kernel dimensions (7,7) with 3 input channels, 3 output channels and 1 groups were specified. Expected Weights count is 3 * 7*7 * 3 / 1 = 441\r\nERROR: [TRTorch Conversion Context] - %791 : Tensor = aten::_convolution(%x.1, %self.conv1.weight, %self.conv1.bias, %10, %9, %8, %788, %789, %12, %788, %788, %790): kernel weights has coun\r\nt 9408 but 441 was expected\r\nERROR: [TRTorch Conversion Context] - %791 : Tensor = aten::_convolution(%x.1, %se", "url": "https://github.com/pytorch/TensorRT/issues/84", "state": "closed", "labels": [ "question" ], "created_at": "2020-06-05T21:23:22Z", "updated_at": "2020-07-03T20:04:26Z", "user": "esghif" }, { "repo": "pytorch/pytorch", "number": 39573, "title": "What is the difference between 0.4.1 and 0.4.1.post2?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/39573", "state": "closed", "labels": [], "created_at": "2020-06-05T10:52:13Z", "updated_at": "2020-06-06T00:22:18Z", "user": "Lucksong" }, { "repo": "pytorch/pytorch", "number": 39561, "title": "How to replace the original model's classes with new class", "body": "## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\n\r\n## Motivation\r\n\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\n\r\n## Pitch\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\n## Alternatives\r\n\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context or screenshots about the feature request here. -->\r\n", "url": "https://github.com/pytorch/pytorch/issues/39561", "state": "closed", "labels": [], "created_at": "2020-06-05T04:59:14Z", "updated_at": "2020-06-06T00:14:52Z", "user": "Aiswariyasugavanam" }, { "repo": "pytorch/vision", "number": 2286, "title": "Architecture differences in Zoo Models ?", "body": "Hi, I am comparing few zoo models implementation in DL4J with Pytorch zoo models and found\r\nthat the padding in Convolution layers does not match most of time ?\r\n\r\nFor **Resnet50 and SqueezeNet** :\r\n\r\nIn DL4J, they **do not** apply padding : [0, 0] ; while in PyTorch they have padding [1, 1].\r\nThis results in different output in layers\r\n\r\nIn DL4J, they apply **Bias** in Conv layers while in PyTorch, they do not.\r\n\r\nWhy such irregularities in Network structure across frameworks ?\r\n", "url": "https://github.com/pytorch/vision/issues/2286", "state": "closed", "labels": [ "question", "module: models", "topic: classification" ], "created_at": "2020-06-04T06:22:57Z", "updated_at": "2020-06-05T09:19:08Z", "user": "nitin2212" }, { "repo": "pytorch/text", "number": 806, "title": "torchtext and training of a Transformer", "body": "Hello\r\n\r\nI am a bit confused about training my pytorch Transformer.\r\nI am using the code below to pre-process the Penn Treebank corpus before I analyze it with my (non pre-trained) Transformer:\r\n\r\n```python\r\n# define the English text field\r\nTEXT_ch2 = Field(init_token = '<sos>',\r\n eos_token = '<eos>',\r\n unk_token = '<unk>',\r\n pad_token = '<pad>',\r\n fix_length = bptt,\r\n lower = True)\r\n\r\n# split the PennTreeBank corpus into a train, val, and test set.\r\ntrain_penn, val_penn, test_penn = torchtext.datasets.PennTreebank.splits(TEXT_ch2)\r\n\r\n# build vocabulary based on the field that we just definTVD.\r\n# (building vocabulary over all language datasets)\r\nTEXT_ch2.build_vocab(train_penn, val_penn, test_penn,\r\n specials=['<sos>','<eos>','<unk>','<pad>'])\r\n\r\n# BPTTIterator\r\ntrain_penn_iter, val_penn_iter, test_penn_iter = BPTTIterator.splits(\r\n (train_penn, val_penn, test_penn),\r\n batch_size = batch_size,\r\n bptt_len= bptt,\r\n sort_key=lambda x: len(x.text),\r\n sort_within_batch = True,\r\n shuffle = False,\r\n device= device,\r\n repeat=False)\r\n```\r\n\r\nMy question is, since my Penn Treebank corpus are separated into train, validation, and test sets, when I train my Transformer on the Penn Treebank corpus, I would 
only be training the model on the Penn Treebank train and validation sets (`train_penn_iter`), am I right?\r\n\r\nBut then if I train my Transformer only on the train and validation portions of the corpus, wouldn't this mean that my Transformer will not be trained to properly handle those tokens that are contained only in the test set? so does this mean that I need to train my Transformer on the entire Penn Treebank corpus, instead of just on the training and validation sets? But if this is the case, what then is the point of having the `split` function to separate the corpus into training, validation, and test sets? To me, this contradicts how the test sets are normally used in machine learning.\r\n\r\nDoes it make sense to \"test\" a Transformer on the sequences that contain the tokens which it was not trained on?\r\n\r\nThank you,", "url": "https://github.com/pytorch/text/issues/806", "state": "closed", "labels": [ "question" ], "created_at": "2020-06-04T02:22:17Z", "updated_at": "2020-06-04T14:14:12Z", "user": "h56cho" }, { "repo": "pytorch/vision", "number": 2285, "title": "Finetuning deeplab/FCN", "body": "How do I fine tune deeplabv3 ? ", "url": "https://github.com/pytorch/vision/issues/2285", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: semantic segmentation" ], "created_at": "2020-06-03T20:53:38Z", "updated_at": "2020-06-05T09:16:56Z", "user": "gaussiangit" }, { "repo": "pytorch/tutorials", "number": 1010, "title": "Tutorial on custom dataloaders (NOT datasets)", "body": "I really like [this](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#iterating-through-the-dataset) tutorial on custom datasets. However, the `torch.utils.data.DataLoader` class is only briefly mentioned in it:\r\n\r\n>However, we are losing a lot of features by using a simple for loop to iterate over the data. In particular, we are missing out on:\r\n\r\n> * Batching the data\r\n> * Shuffling the data\r\n> * Load the data in parallel using multiprocessing workers.\r\n\r\n> `torch.utils.data.DataLoader` is an iterator which provides all these features. Parameters used below should be clear. One parameter of interest is collate_fn . You can specify how exactly the samples need to be batched using collate_fn . However, default collate should work fine for most use cases.\r\n\r\nI am aware of this [issue](https://github.com/pytorch/tutorials/issues/78) and this [issue](https://github.com/pytorch/tutorials/issues/735) but neither have led to a tutorial.\r\n\r\nI am happy to make a tutorial on custom dataloaders using the `torch.utils.data.DataLoader` class, focusing on how to interface with its parameters, especially the `num_workers` and `collate_fn` parameters. Also, I am not sure if it is possible to inherit from the `torch.utils.data.DataLoader` class, similar to the `torch.utils.data.Dataset`, so I would appreciate some guidance on this.\r\n\r\nThis would be my first ever tutorial, so some guidance on formatting would be greatly helpful.\n\ncc @suraj813 @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen", "url": "https://github.com/pytorch/tutorials/issues/1010", "state": "open", "labels": [ "enhancement", "60_min_blitz", "advanced", "docathon-h2-2023" ], "created_at": "2020-06-03T13:18:03Z", "updated_at": "2025-05-20T10:14:47Z", "comments": 11, "user": "mhdadk" }, { "repo": "pytorch/examples", "number": 784, "title": "Difference between src_mask and src_key_padding_mask", "body": "I am having a difficult time in understanding transformers. 
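Back on the Penn Treebank question (pytorch/text#806): the usual practice is to fit the model, and normally the vocabulary too, on the training split only, tune on validation, and report on test; tokens seen only at test time are meant to fall back to `<unk>`, and handling them is part of what the evaluation measures. A sketch against the legacy `Field` API used above, assuming the defaultdict-based `Vocab` so that unseen tokens look up to the `<unk>` index (the probe token is made up for illustration):

```python
# Build the vocabulary from the training split only.
TEXT_ch2.build_vocab(train_penn, specials=['<sos>', '<eos>', '<unk>', '<pad>'])

# Unseen tokens resolve to the <unk> index at numericalization time.
unk_idx = TEXT_ch2.vocab.stoi[TEXT_ch2.unk_token]
print(TEXT_ch2.vocab.stoi['completely-made-up-token'] == unk_idx)  # expected: True
```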
Everything is getting clear bit by bit but one thing that makes my head scratch is what is the difference between src_mask and src_key_padding_mask which is passed as an argument in forward function in both encoder layer and decoder layer.\r\n\r\nhttps://pytorch.org/docs/master/_modules/torch/nn/modules/transformer.html#Transformer", "url": "https://github.com/pytorch/examples/issues/784", "state": "open", "labels": [ "nlp" ], "created_at": "2020-06-03T10:53:19Z", "updated_at": "2022-03-09T21:39:01Z", "comments": 0, "user": "saahiluppal" }, { "repo": "pytorch/TensorRT", "number": 79, "title": "Does TRTorch support bail-out mechanism?", "body": "A neural network model may have some operators which aren't supported by TensorRT.\r\nWhen TensorRT cannot compile a subgraph, can the execution of the subgraph invoke torch operators again? Can a model execution mix TensorRT and vanilla torch operators?", "url": "https://github.com/pytorch/TensorRT/issues/79", "state": "closed", "labels": [ "question", "component: execution", "No Activity" ], "created_at": "2020-06-03T10:21:27Z", "updated_at": "2020-07-10T00:03:57Z", "user": "shiwenloong" }, { "repo": "pytorch/pytorch", "number": 39427, "title": "How to save tensor to 16-bit image?", "body": "So, I have a Tensor, which represents my 16-bit 3-channel image and I wanna save it. How could I do this? I was trying \r\n\r\n```\r\ntorchvision.utils.save_image(gt[j], f\"hdr_{j+1}.tiff\")\r\n```\r\nBut seems like it works only for 8-bit images... Could someone help me to save my tensor to 16-bit 3-channel image(any format would be good. I think .tiff is fine)?", "url": "https://github.com/pytorch/pytorch/issues/39427", "state": "closed", "labels": [], "created_at": "2020-06-03T02:54:45Z", "updated_at": "2020-06-03T15:16:12Z", "user": "wh75er" }, { "repo": "pytorch/TensorRT", "number": 78, "title": "Support Tuple Inputs", "body": "Hello. I am working with a model that takes in a tuple of inputs of different sizes. Is there a way to handle this within the existing TRTorch framework? I am attempting to compile the model with the following settings but get the accompanied error. I am running form source version on commit 247c748.\r\n\r\nCompile settings:\r\n```\r\ncompile_settings = {\r\n \"input_shapes\": [\r\n {\r\n \"min\": ([1, 3, 180, 320], [1,49,2]),\r\n \"opt\": ([1, 3, 180, 320], [1,49,2]),\r\n \"max\": ([1, 3, 180, 320], [1,49,2])\r\n }, # For static size [1, 3, 224, 224]\r\n ],\r\n# \"op_precision\": torch.half # Run with FP16\r\n}\r\n```\r\n\r\nError:\r\n```\r\nTypeError: (): incompatible function arguments. The following argument types are supported:\r\n 1. 
(self: trtorch._C.InputRange, arg0: List[int]) -> None\r\n\r\nInvoked with: <trtorch._C.InputRange object at 0x7fda486f67b0>, ([1, 3, 180, 320], [1, 49, 2])\r\n```\r\n\r\n\r\nThe forward function of my model is the following:\r\n```\r\n def forward(self,x,seq):\r\n \r\n x = self.input_conv(x)\r\n x = self.input_fc(x).unsqueeze(0)\r\n \r\n actions = self.action_fc(seq)\r\n x, hidden = self.lstm(actions, (x, x))\r\n \r\n output = self.output_fc(x)\r\n \r\n return output\r\n```\r\n\r\nThanks", "url": "https://github.com/pytorch/TensorRT/issues/78", "state": "closed", "labels": [ "question", "component: api [Python]", "No Activity" ], "created_at": "2020-06-01T19:35:26Z", "updated_at": "2020-07-05T00:06:46Z", "user": "Michael-Equi" }, { "repo": "pytorch/pytorch", "number": 38976, "title": "How to load the trained weights in libtorch to continue the training?", "body": "How to load the trained weights in libtorch to continue the training?I can't find an example.\r\nlibtorch 1.5\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/38976", "state": "closed", "labels": [], "created_at": "2020-05-25T08:39:26Z", "updated_at": "2020-05-26T18:52:27Z", "user": "williamlzw" }, { "repo": "pytorch/pytorch", "number": 38965, "title": "what is the difference of 'torch.onnx._export()' and 'torch.onnx.export()'?", "body": "## \u2753 Questions and Help\r\nSorry, I can not understand.\r\nWhen the inputs are same, their output files(onnx file) are difference .\r\n\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/38965", "state": "closed", "labels": [], "created_at": "2020-05-24T15:12:42Z", "updated_at": "2024-05-16T06:29:50Z", "user": "cs-xiao" }, { "repo": "pytorch/TensorRT", "number": 68, "title": "TRTorch with CUDA 10.0", "body": "Hi, \r\n\r\nI tried installing TRTorch on my Ubuntu 16.04, PyTorch 1.5 compiled from source, CUDA 10.0, CUDNN 7.6.\r\n\r\nI am getting a symbol error in all configurations I tried. 
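Circling back to the 16-bit image question (pytorch/pytorch#39427): `torchvision.utils.save_image` goes through an 8-bit PIL path, so one workaround is to convert the tensor to a `uint16` NumPy array and write it with a library that supports 16-bit TIFF/PNG. A sketch assuming `imageio` is installed and the tensor holds values in [0, 1]:

```python
import numpy as np
import torch
import imageio

def save_uint16(tensor, path):
    # tensor: [3, H, W] float in [0, 1] -> HWC uint16 array written as 16-bit TIFF.
    arr = tensor.clamp(0, 1).mul(65535).round().cpu().numpy().astype(np.uint16)
    imageio.imwrite(path, arr.transpose(1, 2, 0))

save_uint16(torch.rand(3, 64, 64), "hdr_1.tiff")
```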
I am grateful for any help.\r\n\r\n`seh2bp@trtorch:/workspace$ python -c \"import torch; import trtorch\"\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/__init__.py\", line 165, in <module>\r\n from torch._C import *\r\nImportError: /opt/conda/lib/python3.7/site-packages/torch/lib/libshm.so: undefined symbol: _Z8_THErrorPKciS0_z`\r\n\r\nldd output is here:\r\n\r\n`seh2bp@trtorch:/workspace$ ldd /opt/conda/lib/python3.7/site-packages/trtorch/lib/libtrtorch.so\r\n linux-vdso.so.1 (0x00007ffdff3c7000)\r\n libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f338cc4d000)\r\n libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f338c8af000)\r\n libnvinfer.so.7 => /opt/tensorrt/TensorRT-7.0.0.11/lib/libnvinfer.so.7 (0x00007f337ec34000)\r\n libcublas.so.10.0 => /usr/local/cuda-10.0/targets/x86_64-linux/lib/libcublas.so.10.0 (0x00007f337a69e000)\r\n libcudnn.so.7 => /usr/lib/x86_64-linux-gnu/libcudnn.so.7 (0x00007f3362e8d000)\r\n libtorch.so => /workspace/scer-docker/trtorch/files/libtorch/lib/libtorch.so (0x00007f3362c8b000)\r\n libtorch_cuda.so => /workspace/scer-docker/trtorch/files/libtorch/lib/libtorch_cuda.so (0x00007f33211cc000)\r\n libtorch_cpu.so => /workspace/scer-docker/trtorch/files/libtorch/lib/libtorch_cpu.so (0x00007f3311fae000)\r\n libtorch_global_deps.so => /workspace/scer-docker/trtorch/files/libtorch/lib/libtorch_global_deps.so (0x00007f3311da9000)\r\n libc10_cuda.so => /workspace/scer-docker/trtorch/files/libtorch/lib/libc10_cuda.so (0x00007f3311b73000)\r\n libc10.so => /workspace/scer-docker/trtorch/files/libtorch/lib/libc10.so (0x00007f3311923000)\r\n libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f331170b000)\r\n libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f331131a000)\r\n /lib64/ld-linux-x86-64.so.2 (0x00007f338cfd6000)\r\n libcudart.so.10.0 => /usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudart.so.10.0 (0x00007f33110a0000)\r\n libmyelin.so.1 => /opt/tensorrt/TensorRT-7.0.0.11/lib/libmyelin.so.1 (0x00007f331088f000)\r\n libnvrtc.so.10.0 => /usr/local/cuda-10.0/targets/x86_64-linux/lib/libnvrtc.so.10.0 (0x00007f330f273000)\r\n libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f330f06f000)\r\n libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f330ee50000)\r\n librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f330ec48000)\r\n libcudart-80664282.so.10.2 => /workspace/scer-docker/trtorch/files/libtorch/lib/libcudart-80664282.so.10.2 (0x00007f330e9c7000)\r\n libnvToolsExt-3965bdd0.so.1 => /workspace/scer-docker/trtorch/files/libtorch/lib/libnvToolsExt-3965bdd0.so.1 (0x00007f330e7bd000)\r\n libgomp-75eea7e8.so.1 => /workspace/scer-docker/trtorch/files/libtorch/lib/libgomp-75eea7e8.so.1 (0x00007f330e598000)`\r\n\r\nI tried different configurations:\r\n\r\n- use the downloaded libtorch built for cuda 10.2, I get the same error as above\r\n- use the installed libtorch (for a reason I don't understand I have 2 versions on in \r\n`.../site-packages/torch` and one in `.../site-packages/torch-1.5.0-py3.7-linux-x86_64.egg/torch`. Build with `.../site-packages/torch` fails because it is missing optimizer.h. I will not give further detail as I believe it will just complicate the issue unnecessarily. 
Any help is welcome.", "url": "https://github.com/pytorch/TensorRT/issues/68", "state": "closed", "labels": [ "question" ], "created_at": "2020-05-23T08:48:38Z", "updated_at": "2020-06-05T21:16:24Z", "user": "esghif" }, { "repo": "pytorch/vision", "number": 2254, "title": "Using `vision.references`", "body": "Hi,\r\n\r\nI was wondering if there was a way by which I can use the modules inside `vision.references`, especially `vision.references.detection.engine`'s `train_one_epoch` method. At the moment, I am unable to import and use it rather would have to copy-paste or download. Could this be simplified into an import? (by adding an `__init__.py`) ? Or, perhaps there is a way to do this more elegantly and I'm unaware? \r\n\r\nThanks and Regards,", "url": "https://github.com/pytorch/vision/issues/2254", "state": "closed", "labels": [ "question", "module: reference scripts" ], "created_at": "2020-05-23T06:19:33Z", "updated_at": "2020-05-29T10:32:56Z", "user": "Sentient07" }, { "repo": "pytorch/examples", "number": 778, "title": "Runtime error when trying to run dcgan.cpp example", "body": "I am new to libtorch and I was completing the featured example to learn Pytoch C++ frontend.\r\nI downloaded the cmake and dcgan.cpp files from git and was able to cmake it on a cluster using the clang-llvm/11 compilers. I am using libtorch files downloaded from https://download.pytorch.org/libtorch/nightly/cu100/libtorch-cxx11-abi-shared-with-deps-latest.zip\r\nI am using cuda 10.2.89 and pytorch v1.5.0-gpu.\r\nWhen I try running the executable, I get the following error\r\n`./dcgan: symbol lookup error: ./dcgan: undefined symbol: _ZN5torch5optim6detail13OptimizerBase15add_param_groupERKNS0_19OptimizerParamGroupE\r\n`\r\nIt appears that the following lines are responsible for this error:\r\n` torch::optim::Adam generator_optimizer(generator->parameters(), torch::optim::AdamOptions(2e-4).betas(std::make_tuple (0.5, 0.5)));\r\ntorch::optim::Adam discriminator_optimizer(discriminator->parameters(), torch::optim::AdamOptions(2e-4).betas(std::make_tuple (0.5, 0.5)));`\r\n\r\nIs there anyone who has previously encountered this error? May I please request your help regarding the same?\r\nThank you", "url": "https://github.com/pytorch/examples/issues/778", "state": "closed", "labels": [], "created_at": "2020-05-23T02:17:54Z", "updated_at": "2020-05-23T02:51:29Z", "comments": 0, "user": "namehta4" }, { "repo": "pytorch/vision", "number": 2250, "title": "cuda10.0 support for torchvision6", "body": "I try to install torchvision6-cu100 with pip but failed, I can only find the cu92 and cu101 version, is there no support for cuda10.0 ?", "url": "https://github.com/pytorch/vision/issues/2250", "state": "closed", "labels": [ "question", "topic: binaries" ], "created_at": "2020-05-21T17:13:19Z", "updated_at": "2020-05-21T18:20:42Z", "user": "feihuidiqiu" }, { "repo": "pytorch/TensorRT", "number": 62, "title": "How to use local pytorch instead of installing again.", "body": "Hi Naren,\r\nglad to see that you check-in py binding and test. \r\nTRTorch needs to install pytorch and torchvision again and I know it is easy to build trt from scratch.\r\nBut as a developer, I always build and set pytorch env locally and do need to install it again. Could you help provide options to call local pytorch instead of installing again. 
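On the `vision.references` question above (pytorch/vision#2254): the reference scripts aren't shipped as an importable package, so the usual workaround is to copy (or add to `sys.path`) the files under `references/detection` and import them as local modules. A sketch, where the checkout path is hypothetical and the data loaders are assumed to yield `(images, targets)` batches as in the detection finetuning tutorial:

```python
import sys
sys.path.append("/path/to/vision/references/detection")  # hypothetical local checkout

import torch
import torchvision
from engine import train_one_epoch, evaluate  # provided by the copied reference scripts

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).to(device)
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)

# data_loader / data_loader_test are assumed to be defined elsewhere.
for epoch in range(10):
    train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
    evaluate(model, data_loader_test, device=device)
```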
@narendasan \r\n\r\nThanks,\r\nAlan", "url": "https://github.com/pytorch/TensorRT/issues/62", "state": "closed", "labels": [ "question", "component: build system", "component: api [Python]", "No Activity" ], "created_at": "2020-05-19T09:52:44Z", "updated_at": "2020-06-27T00:03:18Z", "user": "alanzhai219" }, { "repo": "pytorch/vision", "number": 2237, "title": "fresh installation of pytorch 1.5 and torchvision .6 yields error with docs ", "body": "## \ud83d\udc1b Bug\r\n\r\nusing the latest installations from the pytorch recommended conda line, along with the following required libraries\r\n\r\n```\r\ncython\r\npycocotools\r\nmatplotlib\r\n```\r\n\r\nI was able to hit an error in the line given under https://github.com/pytorch/vision/blob/master/references/detection/README.md\r\nfor performing Faster R CNN\r\n\r\nI would also wonder if I can improve the docs by mentioning the fact that, in order to run that example you must pip install cython, pycocotools, and matplotlib ?\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. copy the `references/detection/` folder somewhere\r\n2. create a conda environment and install latest stable pytorch and torchvision\r\n3. attempt to run the `README.md` provided command\r\n\r\n```\r\n(clone_reference_torchvision) emcp@2600k:~/Dev/git/clone_reference_torchvision$ python -m torch.distributed.launch --nproc_per_node=8 --use_env train.py --dataset coco --model fasterrcnn_resnet50_fpn --epochs 26 --lr-steps 16 22 --aspect-ratio-group-factor 3\r\n*****************************************\r\nSetting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. \r\n*****************************************\r\nTHCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1587428207430/work/torch/csrc/cuda/Module.cpp line=59 error=101 : invalid device ordinal\r\nTHCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1587428207430/work/torch/csrc/cuda/Module.cpp line=59 error=101 : invalid device ordinal\r\n| distributed init (rank 0): env://\r\nTraceback (most recent call last):\r\n File \"train.py\", line 201, in <module>\r\n main(args)\r\n File \"train.py\", line 60, in main\r\nTraceback (most recent call last):\r\n File \"train.py\", line 201, in <module>\r\n main(args)\r\n File \"train.py\", line 60, in main\r\n utils.init_distributed_mode(args)\r\n File \"/home/emcp/Dev/git/clone_reference_torchvision/utils.py\", line 317, in init_distributed_mode\r\nutils.init_distributed_mode(args)\r\n torch.cuda.set_device(args.gpu)\r\n File \"/home/emcp/anaconda3/envs/clone_reference_torchvision/lib/python3.8/site-packages/torch/cuda/__init__.py\", line 245, in set_device\r\n File \"/home/emcp/Dev/git/clone_reference_torchvision/utils.py\", line 317, in init_distributed_mode\r\n torch._C._cuda_setDevice(device)\r\n torch.cuda.set_device(args.gpu)\r\n File \"/home/emcp/anaconda3/envs/clone_reference_torchvision/lib/python3.8/site-packages/torch/cuda/__init__.py\", line 245, in set_device\r\nRuntimeError torch._C._cuda_setDevice(device)\r\nRuntimeError: cuda runtime error (101) : invalid device ordinal at /opt/conda/conda-bld/pytorch_1587428207430/work/torch/csrc/cuda/Module.cpp:59\r\n: cuda runtime error (101) : invalid device ordinal at /opt/conda/conda-bld/pytorch_1587428207430/work/torch/csrc/cuda/Module.cpp:59\r\nTHCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1587428207430/work/torch/csrc/cuda/Module.cpp line=59 error=101 
: invalid device ordinal\r\nTraceback (most recent call last):\r\n File \"train.py\", line 201, in <module>\r\n main(args)\r\n File \"train.py\", line 60, in main\r\n utils.init_distributed_mode(args)\r\n File \"/home/emcp/Dev/git/clone_reference_torchvision/utils.py\", line 317, in init_distributed_mode\r\n torch.cuda.set_device(args.gpu)\r\n File \"/home/emcp/anaconda3/envs/clone_reference_torchvision/lib/python3.8/site-packages/torch/cuda/__init__.py\", line 245, in set_device\r\n torch._C._cuda_setDevice(device)\r\nRuntimeError: cuda runtime error (101) : invalid device ordinal at /opt/conda/conda-bld/pytorch_1587428207430/work/torch/csrc/cuda/Module.cpp:59\r\nTHCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1587428207430/work/torch/csrc/cuda/Module.cpp line=59 error=101 : invalid device ordinal\r\nTraceback (most recent call last):\r\n File \"train.py\", line 201, in <module>\r\n main(args)\r\n File \"train.py\", line 60, in main\r\n utils.init_distributed_mode(args)\r\n File \"/home/emcp/Dev/git/clone_reference_torchvision/utils.py\", line 317, in init_distributed_mode\r\n torch.cuda.set_device(args.gpu)\r\n File \"/home/emcp/anaconda3/envs/clone_reference_torchvision/lib/python3.8/site-packages/torch/cuda/__init__.py\", line 245, in set_device\r\n torch._C._cuda_setDevice(device)\r\nRuntimeError: cuda runtime error (101) : invalid device ordinal at /opt/conda/conda-bld/pytorch_1587428207430/work/torch/csrc/cuda/Module.cpp:59\r\nTHCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1587428207430/work/torch/csrc/cuda/Module.cpp line=59 error=101 : invalid device ordinal\r\nTraceback (most recent call last):\r\n File \"train.py\", line 201, in <module>\r\n main(args)\r\n File \"train.py\", line 60, in main\r\n utils.init_distributed_mode(args)\r\n File \"/home/emcp/Dev/git/clone_reference_torchvision/utils.py\", line 317, in init_distributed_mode\r\n torch.cuda.set_device(args.gpu)\r\n File \"/home/emcp/anaconda3/envs/clone_reference_torchvision/lib/python3.8/site-packages/torch/cuda/__init__", "url": "https://github.com/pytorch/vision/issues/2237", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: object detection" ], "created_at": "2020-05-19T07:15:13Z", "updated_at": "2020-05-20T10:32:24Z", "user": "EMCP" }, { "repo": "pytorch/serve", "number": 363, "title": "Best practice question - how to chain multiple models together for pipeline process?", "body": "Hi all - I couldn't find anything in the documentation so wondering if there is a recommendation for how to chain multiple models together in an internal pipeline?\r\nExample - we need to take an incoming image, do obj detection , then do a seperate mdoel classification from items cropped and zoomed, return the result...i.e.:\r\n\r\n1 - Model A - Run obj detector on incoming image\r\n1a - post process to determine cropped subset of image where object is (inside model handler)\r\n\r\n2 - Model B - run inference on subset image from step 1a to classify the detected item\r\n3 - return final result\r\n\r\nIs our only option to force the client to do two calls to the /serve and effectively remote control this pipeline? \r\nIdeally we want to pass in the large image, and just return the final result all from one client http POST call since its expensive in our setup to pass/ do multiple remote calls (i.e. post to model 1, post to model 2). \r\nBut where would one control this two step process from within torch server or is there any built in 'controller' concept? 
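For the `invalid device ordinal` failure above (pytorch/vision#2237): the README command launches eight processes and each rank calls `torch.cuda.set_device(rank)`, so it fails on any machine with fewer than eight GPUs. A small check, assuming the launch command is then adjusted to the detected count:

```python
import torch

n_gpus = torch.cuda.device_count()
print(f"{n_gpus} visible GPU(s); launch with --nproc_per_node={n_gpus}")
# e.g. python -m torch.distributed.launch --nproc_per_node=<n_gpus> --use_env train.py ...
```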
\r\nIt seems the current design model is based around hosting independent models each doing their own work without any reference to how to internally bind them together into a pipeline. \r\nCould you provide any recommendations for how to best structure a pipeline like the above or is it not supported/possible/recommended?\r\nThanks very much!\r\n\r\n\r\n", "url": "https://github.com/pytorch/serve/issues/363", "state": "closed", "labels": [], "created_at": "2020-05-19T05:16:35Z", "updated_at": "2020-05-19T19:37:08Z", "user": "lessw2020" }, { "repo": "pytorch/xla", "number": 2092, "title": "How to set XLA random seed", "body": "## \u2753 Questions and Help\r\n\r\nOn CPU (same from run to run):\r\n```python\r\ntorch.manual_seed(0)\r\ntorch.zeros(5).uniform_()\r\n# tensor([0.4963, 0.7682, 0.0885, 0.1320, 0.3074])\r\n\r\ntorch.manual_seed(0)\r\ntorch.zeros(5).uniform_()\r\n# tensor([0.4963, 0.7682, 0.0885, 0.1320, 0.3074])\r\n```\r\n\r\nOn XLA (different from run to run):\r\n```python\r\ntorch.manual_seed(0)\r\ntorch.zeros(5, device=xm.xla_device()).uniform_()\r\n# tensor([0.9650, 0.4818, 0.2164, 0.2308, 0.8543], device='xla:1')\r\n\r\ntorch.manual_seed(0)\r\ntorch.zeros(5, device=xm.xla_device()).uniform_()\r\n# tensor([0.3197, 0.6271, 0.0868, 0.2445, 0.3315], device='xla:1')\r\n```", "url": "https://github.com/pytorch/xla/issues/2092", "state": "closed", "labels": [], "created_at": "2020-05-18T19:13:07Z", "updated_at": "2020-05-18T19:21:22Z", "user": "myleott" }, { "repo": "pytorch/tutorials", "number": 998, "title": "Per-tutorial dependencies/build", "body": "Current documented instructions say how to build tutorials tell you how to install dependencies and the build ALL of the tutorials at once. If you want to work on a single tutorial, this is not great, since many of the tutorials are quite involved (in terms of the dependencies they need, what external resources they need, and also how long they take to download). There should be clearer instructions about how to develop a single tutorial at a time.", "url": "https://github.com/pytorch/tutorials/issues/998", "state": "open", "labels": [ "build issue" ], "created_at": "2020-05-15T14:16:25Z", "updated_at": "2021-07-27T23:25:51Z", "comments": 1, "user": "ezyang" }, { "repo": "pytorch/pytorch", "number": 38542, "title": "How to use torch::where?", "body": "libtorch 1.5\r\nHow to use torch::where? for emample:\r\n-------------------------------------------------\r\nimport torch\r\nimport numpy as np\r\ncc=np.array(range(0,24)).reshape(-1,4)\r\nvalidIndex=np.where( ((cc[:,:2]>=0) & (cc[:,2:]>(1,2))).all(axis=1) )[0]\r\nprint(validIndex)\r\n>>[0 1 2 3 4 5]\r\n---------------------------------------------------\r\ntorch::Tensor cc = torch::range(0, 23, 1).view({ 4,6 }).view({-1,4});\r\n\ttorch::Tensor c = cc.index({ torch::indexing::Slice(),torch::indexing::Slice(2,torch::indexing::None) });\r\n?????\r\n", "url": "https://github.com/pytorch/pytorch/issues/38542", "state": "closed", "labels": [], "created_at": "2020-05-15T07:09:05Z", "updated_at": "2020-05-15T21:48:30Z", "user": "williamlzw" }, { "repo": "pytorch/serve", "number": 343, "title": "Docker: How to add your .mar files?", "body": "Dear all,\r\n\r\nCould you please update the doc showing how to use your `.mar` files and `model-store` dir with docker? Assuming I have a stored `.mar` and `model-store` dir locally on my pc and I want to run `torchserve` on docker with them, what should I do? Is there an option to add the `mar` file? 
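On the XLA seeding question above (pytorch/xla#2092): `torch.manual_seed` only seeds the CPU generator, so draws made directly on the XLA device keep changing. A sketch using the per-device RNG state helper; treat the exact helper name and signature (`xm.set_rng_state(seed, device=...)`) as an assumption to verify against your torch_xla version:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

xm.set_rng_state(0, device=device)          # assumed helper: seeds this XLA device's RNG
a = torch.zeros(5, device=device).uniform_()

xm.set_rng_state(0, device=device)
b = torch.zeros(5, device=device).uniform_()

print(torch.allclose(a.cpu(), b.cpu()))     # expected: True once the device RNG is reseeded
```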
The docker page of `torchserve` doesn't contain information.\r\n\r\nThank you.\r\n\r\nBe safe and best regards,\r\n\r\nFrancesco Saverio", "url": "https://github.com/pytorch/serve/issues/343", "state": "closed", "labels": [ "documentation", "duplicate", "triaged_wait" ], "created_at": "2020-05-13T14:37:23Z", "updated_at": "2020-06-09T23:39:52Z", "user": "FrancescoSaverioZuppichini" }, { "repo": "pytorch/examples", "number": 770, "title": "Building cpp example with libtorch fails with LibTorch downloaded from the website", "body": "I have tried to build some of the examples and the codes which have been tested before. But When updated the libtorch version I keep getting the following error. I downloaded the libtorch from website. \r\n\r\nUbuntu 18.04\r\nc++14\r\nmake 10.0\r\n\r\nOne of the tested Example: [https://github.com/dendisuhubdy/libtorch_examples.git](url)\r\nlog file :\r\n`-- The C compiler identification is GNU 7.5.0\r\n-- The CXX compiler identification is GNU 7.5.0\r\n-- Check for working C compiler: /usr/bin/cc\r\n-- Check for working C compiler: /usr/bin/cc -- works\r\n-- Detecting C compiler ABI info\r\n-- Detecting C compiler ABI info - done\r\n-- Detecting C compile features\r\n-- Detecting C compile features - done\r\n-- Check for working CXX compiler: /usr/bin/c++\r\n-- Check for working CXX compiler: /usr/bin/c++ -- works\r\n-- Detecting CXX compiler ABI info\r\n-- Detecting CXX compiler ABI info - done\r\n-- Detecting CXX compile features\r\n-- Detecting CXX compile features - done\r\n-- Looking for pthread.h\r\n-- Looking for pthread.h - found\r\n-- Looking for pthread_create\r\n-- Looking for pthread_create - not found\r\n-- Looking for pthread_create in pthreads\r\n-- Looking for pthread_create in pthreads - not found\r\n-- Looking for pthread_create in pthread\r\n-- Looking for pthread_create in pthread - found\r\n-- Found Threads: TRUE \r\n-- Found CUDA: /usr/local/cuda-10.2 (found version \"10.2\") \r\n-- Caffe2: CUDA detected: 10.2\r\n-- Caffe2: CUDA nvcc is: /usr/local/cuda-10.2/bin/nvcc\r\n-- Caffe2: CUDA toolkit directory: /usr/local/cuda-10.2\r\n-- Caffe2: Header version is: 10.2\r\n-- Found CUDNN: /usr/lib/x86_64-linux-gnu/libcudnn.so \r\n-- Found cuDNN: v7.6.5 (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libcudnn.so)\r\n-- Autodetected CUDA architecture(s): 5.2\r\n-- Added CUDA NVCC flags for: -gencode;arch=compute_52,code=sm_52\r\n-- Found Torch: /home/ubuntu/libtorch/libtorch/lib/libtorch.so \r\n-- Found OpenCV: /usr/local (found version \"3.4.9\") \r\n-- OpenCV library status:\r\n-- config: /usr/local/share/OpenCV\r\n-- version: 3.4.9\r\n-- libraries: opencv_calib3d;opencv_core;opencv_dnn;opencv_features2d;opencv_flann;opencv_highgui;opencv_imgcodecs;opencv_imgproc;opencv_ml;opencv_objdetect;opencv_photo;opencv_shape;opencv_stitching;opencv_superres;opencv_video;opencv_videoio;opencv_videostab\r\n-- include path: /usr/local/include;/usr/local/include/opencv\r\n-- Downloading MNIST dataset\r\n/home/ubuntu/libtorch_examples/build/../data/mnist/train-images-idx3-ubyte.gz already exists, skipping ...\r\n/home/ubuntu/libtorch_examples/build/../data/mnist/train-images-idx3-ubyte already exists, skipping ... \r\n/home/ubuntu/libtorch_examples/build/../data/mnist/train-labels-idx1-ubyte.gz already exists, skipping ...\r\n/home/ubuntu/libtorch_examples/build/../data/mnist/train-labels-idx1-ubyte already exists, skipping ... 
\r\n/home/ubuntu/libtorch_examples/build/../data/mnist/t10k-images-idx3-ubyte.gz already exists, skipping ...\r\n/home/ubuntu/libtorch_examples/build/../data/mnist/t10k-images-idx3-ubyte already exists, skipping ... \r\n/home/ubuntu/libtorch_examples/build/../data/mnist/t10k-labels-idx1-ubyte.gz already exists, skipping ...\r\n/home/ubuntu/libtorch_examples/build/../data/mnist/t10k-labels-idx1-ubyte already exists, skipping ... \r\n-- Configuring done\r\n-- Generating done\r\n-- Build files have been written to: /home/ubuntu/libtorch_examples/build\r\nScanning dependencies of target mnist\r\n[ 14%] Building CXX object src/CMakeFiles/mnist.dir/mnist.cpp.o\r\nScanning dependencies of target dcgan\r\n[ 28%] Building CXX object src/CMakeFiles/dcgan.dir/dcgan.cpp.o\r\nScanning dependencies of target yolov3\r\n[ 42%] Building CXX object src/CMakeFiles/yolov3.dir/darknet.cpp.o\r\n[ 57%] Building CXX object src/CMakeFiles/yolov3.dir/yolov3.cpp.o\r\nIn file included from /home/ubuntu/libtorch_examples/src/mnist.cpp:8:0:\r\n/home/ubuntu/libtorch_examples/include/mnist.h:55:13: error: \u2018FeatureDropout\u2019 in namespace \u2018torch::nn\u2019 does not name a type\r\n torch::nn::FeatureDropout conv2_drop;\r\n ^~~~~~~~~~~~~~\r\n/home/ubuntu/libtorch_examples/include/mnist.h: In constructor \u2018Net::Net()\u2019:\r\n/home/ubuntu/libtorch_examples/include/mnist.h:37:33: error: \u2018conv2_drop\u2019 was not declared in this scope\r\n register_module(\"conv2_drop\", conv2_drop);\r\n ^~~~~~~~~~\r\n/home/ubuntu/libtorch_examples/include/mnist.h:37:33: note: suggested alternative: \u2018conv2\u2019\r\n register_module(\"conv2_drop\", conv2_drop);\r\n ^~~~~~~~~~\r\n conv2\r\n/home/ubuntu/libtorch_examples/include/mnist.h: In member function \u2018at::Tensor Net::forward(at::Tensor)\u2019:\r\n/home/ubuntu/libtorch_examples/include/mnist.h:45:22: error: \u2018conv2_drop\u2019 was not declared in this scope\r\n torch::max_pool2d(conv2_drop->forward(conv2->forward(x)), 2));\r\n ^~~~~~~~~~\r\n/home/ubuntu/libtorch_examples/include/mnist.h:45:22: note: suggested alternative: \u2018conv2\u2019\r\n torch::max_pool2d(conv2_drop->forward(conv2->forward(x)), 2));\r\n ^~~~", "url": "https://github.com/pytorch/examples/issues/770", "state": "closed", "labels": [], "created_at": "2020-05-13T11:53:26Z", "updated_at": "2020-05-17T16:28:00Z", "comments": 1, "user": "Gfuse" }, { "repo": "pytorch/tutorials", "number": 995, "title": "data_loading_tutorial.py iterators", "body": "I like this tutorial but I think it would be better if it included an example of how to define __next__() and __iter__() methods so that the dataset can be used with `enumerate`.", "url": "https://github.com/pytorch/tutorials/issues/995", "state": "closed", "labels": [ "data loading", "docathon-h1-2023", "medium" ], "created_at": "2020-05-12T21:55:37Z", "updated_at": "2023-06-02T15:45:12Z", "comments": 3, "user": "patricknaughton01" }, { "repo": "pytorch/vision", "number": 2205, "title": "Some issues in conv_transpose2d. ", "body": "Recently I met this problem bothering me.\r\nIn TensorFlow, there is a funciton:\r\n`tf.nn.conv2d_transpose(\r\n input, filters, output_shape, strides, padding='SAME', data_format='NHWC',\r\n dilations=None, name=None\r\n)`\r\nBut in PyTorch:\r\n`torch.nn.functional.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) \u2192 Tensor`\r\nAs you can see there is no parameter named output_shape. But my project need this parameter to recitify the size of output. 
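For the `conv_transpose2d` question just above (pytorch/vision#2205): the functional form has no `output_shape` argument, but the module form accepts a desired spatial size at call time and resolves it into the appropriate `output_padding`. A small sketch:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 12, 12)

deconv = nn.ConvTranspose2d(16, 8, kernel_size=3, stride=2, padding=1)

# Analogous to TensorFlow's output_shape: request the exact spatial size; the
# ambiguity introduced by the stride is absorbed internally as output_padding.
y = deconv(x, output_size=(24, 24))
print(y.shape)  # torch.Size([1, 8, 24, 24])
```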
Anyone can help me? Thanks~", "url": "https://github.com/pytorch/vision/issues/2205", "state": "closed", "labels": [ "invalid", "question" ], "created_at": "2020-05-12T04:30:00Z", "updated_at": "2020-05-12T13:21:51Z", "user": "dhiyu" }, { "repo": "pytorch/pytorch", "number": 38129, "title": "how to get the libtorch code from the python?", "body": "![image](https://user-images.githubusercontent.com/31852119/81431554-ea93db80-9193-11ea-96f4-d2f1df0982ce.png)\r\n\r\nin the python, it's easy to slice, but in the libtorch i don't find any information about it? so please tell me how to get the code from the above python code? thanks", "url": "https://github.com/pytorch/pytorch/issues/38129", "state": "closed", "labels": [], "created_at": "2020-05-08T17:26:41Z", "updated_at": "2020-05-08T17:52:13Z", "user": "Peterisfar" }, { "repo": "pytorch/tutorials", "number": 987, "title": "where is folder \"reference\"?", "body": "At toturial [https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#putting-everything-together),\r\nin Putting Everything Together, a couple of files under folder `reference` is mentioned, yet this folder has never shown up before. I didn't find it in either `torch`(version 1.5.0) or `torchvision`(version 0.6.0) pakage in my anaconda environment.", "url": "https://github.com/pytorch/tutorials/issues/987", "state": "closed", "labels": [], "created_at": "2020-05-08T04:34:31Z", "updated_at": "2020-05-09T02:18:05Z", "user": "feiyangsuo" }, { "repo": "pytorch/text", "number": 758, "title": "How to apply Torchtext convenience classes to prepare data for a Transformer", "body": "Hello,\r\nReading the tutorial [Language Translation with torchText](https://pytorch.org/tutorials/beginner/torchtext_translation_tutorial.html) I wondered how someone could use those convenience classes (`Field, BucketIterator`) to train/fine-tune a `Transformer` such as those available at [Huggingface](https://github.com/huggingface/transformers).\r\n\r\nFor instance, I'm currently working with a large dataset distributed in jsonl files which looks like:\r\n```python\r\n{ \"query\": \"this is a query 1\", \"doc\": \"relevant document regarding query 1\" },\r\n{ \"query\": \"this is a query 2\", \"doc\": \"relevant document regarding query 2\" },\r\n ...\r\n \r\n```\r\n\r\nNow, to forward this data into a transformer like Bert, it is necessary to convert this dataset into the format:\r\n\r\n```python3\r\n(\r\n#queries\r\n {\r\n 'input_ids': tensor([\r\n [ 101, 2023, 2003, 1037, 23032, 1015, 102, 0],\r\n [ 101, 2023, 2003, 1037, 23032, 1016, 102, 0]]), \r\n 'attention_mask': tensor([\r\n [1, 1, 1, 1, 1, 1, 1, 0],\r\n [1, 1, 1, 1, 1, 1, 1, 0]])\r\n }, \r\n\r\n #docs\r\n {\r\n 'input_ids': tensor([\r\n [ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102],\r\n [ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102]]), \r\n 'attention_mask': 'input_ids': tensor([\r\n [1, 1, 1, 1, 1, 1, 1, 1],\r\n [1, 1, 1, 1, 1, 1, 1, 1]])\r\n}\r\n```\r\nSo, what would be a clear and efficient approach to apply those convenience classes to tokenize a text dataset to fit it in the required format of a transformer?\r\n", "url": "https://github.com/pytorch/text/issues/758", "state": "closed", "labels": [], "created_at": "2020-05-06T23:47:51Z", "updated_at": "2020-05-07T13:54:37Z", "user": "celsofranssa" }, { "repo": "pytorch/text", "number": 757, "title": "How to apply torchtext to prepare data for a transformer", "body": "Hello,\r\nReading the tutorial [Language 
Translation with torchText](https://pytorch.org/tutorials/beginner/torchtext_translation_tutorial.html) I wondered how someone could use those convenience classes (`Field, BucketIterator`) to train/fine-tune a `Transformer` such as those available at [Huggingface](https://github.com/huggingface/transformers).\r\n\r\nFor instance, I'm currently working with a large dataset distributed in jsonl files which looks like:\r\n```python\r\n{ \"query\": \"this is a query 1\", \"doc\": \"relevant document regarding query 1\" },\r\n{ \"query\": \"this is a query 2\", \"doc\": \"relevant document regarding query 2\" },\r\n ...\r\n \r\n```\r\n\r\nNow, to forward this data into a transformer like Bert, it is necessary to convert this dataset into the format:\r\n\r\n```python3\r\n(\r\n#queries\r\n {\r\n 'input_ids': tensor([\r\n [ 101, 2023, 2003, 1037, 23032, 1015, 102, 0],\r\n [ 101, 2023, 2003, 1037, 23032, 1016, 102, 0]]), \r\n 'attention_mask': tensor([\r\n [1, 1, 1, 1, 1, 1, 1, 0],\r\n [1, 1, 1, 1, 1, 1, 1, 0]])\r\n }, \r\n\r\n #docs\r\n {\r\n 'input_ids': tensor([\r\n [ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102],\r\n [ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102]]), \r\n 'attention_mask': 'input_ids': tensor([\r\n [1, 1, 1, 1, 1, 1, 1, 1],\r\n [1, 1, 1, 1, 1, 1, 1, 1]])\r\n}\r\n```\r\nSo, what would be a clear and efficient approach to apply those convenience classes to tokenize a text dataset to fit it in the required format of a transformer?\r\n", "url": "https://github.com/pytorch/text/issues/757", "state": "closed", "labels": [ "legacy" ], "created_at": "2020-05-06T23:47:18Z", "updated_at": "2022-06-23T21:46:27Z", "user": "celsofranssa" }, { "repo": "pytorch/tutorials", "number": 977, "title": "is the log softmax function missing in the Transformer Example?", "body": "Hello,\r\nThe tutorial (https://pytorch.org/tutorials/beginner/transformer_tutorial.html) describes the Transformer paper and says that \"... .To have the actual words, the output of nn.TransformerEncoder model is sent to the final Linear layer, which is followed by a log-Softmax function.\" The code, however, does return the output directly from the last Linear layer and does not use a (log) softmax anywhere. Do I fail to see something or is it actually missing?", "url": "https://github.com/pytorch/tutorials/issues/977", "state": "closed", "labels": [], "created_at": "2020-05-04T19:41:07Z", "updated_at": "2020-09-13T13:52:55Z", "comments": 2, "user": "hilfe123" }, { "repo": "pytorch/xla", "number": 2026, "title": "How are kernel implementations registered to PyTorch", "body": "## \u2753 Questions and Help\r\nHi,\r\n\r\nI was wondering how the PyTorch dispatcher finds the kernel implementations for functions defined in `aten_xla_type_default.h`\r\n\r\nand what is the purpose of `RegisterAtenTypeFunctions `", "url": "https://github.com/pytorch/xla/issues/2026", "state": "closed", "labels": [], "created_at": "2020-05-04T16:32:41Z", "updated_at": "2020-05-08T19:14:16Z", "user": "a2bhulla" }, { "repo": "pytorch/vision", "number": 2175, "title": "ModuleNotFoundError: No module named 'torchvision.models.detection'", "body": "I have pytorch1.1.0 and torchvision0.2.2 installed in my anaconda environment.\r\nI can: 1. 
`import torch`; 2.`import torchvision` (following the toturial) Yet when `from torchvision.models.detection.faster_rcnn import FastRCNNPredictor`, error raised as:\r\n```\r\nTraceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n File \"D:\\Applications\\PyCharm 2019.2.3\\helpers\\pydev\\_pydev_bundle\\pydev_import_hook.py\", line 21, in do_import\r\n module = self._system_import(name, *args, **kwargs)\r\nModuleNotFoundError: No module named 'torchvision.models.detection'\r\n```\r\nI suspect that my version of torchvision is somewhat low. But my GPU driver only support cudatoolkit9.0, and version 0.2.2 is automatically chosen when I install torchvision.\r\n", "url": "https://github.com/pytorch/vision/issues/2175", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-05-03T06:51:40Z", "updated_at": "2020-05-04T12:33:56Z", "user": "feiyangsuo" }, { "repo": "pytorch/xla", "number": 2021, "title": "How to run nprocs > 1 on local CPU using the XRT client ?", "body": "## \u2753 Questions and Help\r\n\r\nI would like to test locally some code with `xmp.spawn(..., nprocs=8)`. When I use suggested env vars in https://github.com/pytorch/xla/blob/master/CONTRIBUTING.md#running-the-tests, my tests pass with nprocs=1 and fail with nprocs > 1 complaining about gRPC\r\n```\r\n2020-05-02 23:51:20.896158: E 1654 tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:509] Unknown: Could not start gRPC server\r\n```\r\nAny suggestions on how to setup properly XRT_DEVICE_MAP and XRT_WORKERS ?\r\nThanks \r\n\r\nPS: \r\nI test the code with docker : gcr.io/tpu-pytorch/xla r1.5\r\n", "url": "https://github.com/pytorch/xla/issues/2021", "state": "closed", "labels": [], "created_at": "2020-05-03T00:05:20Z", "updated_at": "2020-05-03T00:44:06Z", "user": "vfdev-5" }, { "repo": "pytorch/xla", "number": 2000, "title": "How are operations recorded once tensors are dispatched to XLA", "body": "## \u2753 Questions and Help\r\nIn the pytorch/xla docs it states \"XLA tensors, on the other hand, are lazy. They record operations in a graph until the results are needed\"\r\n\r\nI was wondering once pytorch dispatches to XLA how this recording occurs. Does the creation of an XLATensor also create a node for this operations which is added to an XLA graph? \r\n\r\nThanks", "url": "https://github.com/pytorch/xla/issues/2000", "state": "closed", "labels": [ "stale" ], "created_at": "2020-04-30T20:13:08Z", "updated_at": "2020-06-06T21:04:18Z", "user": "a2bhulla" }, { "repo": "pytorch/vision", "number": 2167, "title": "engine.py error while following tutorial", "body": "## \ud83d\udcda Documentation\r\n\r\nI have found this library via the examples at https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html\r\n\r\nI ran the google colab and that successfully finished.. 
however when I go to copy `master` from here and use it locally I am getting an error..\r\n\r\nEngine.py line \r\n![image](https://user-images.githubusercontent.com/3691722/80714967-f9c0bc80-8af5-11ea-8d17-1d8f1278a69d.png)\r\n\r\n![image](https://user-images.githubusercontent.com/3691722/80715039-14933100-8af6-11ea-89ee-894b769723f7.png)\r\n\r\nError\r\n\r\n```\r\n train_one_epoch(model, optimizer, training_data_loader, device, epoch, print_freq=model_conf[\"hyperParameters\"][\"display\"])\r\n File \"/home/emcp/Dev/git/EMCP/faster-rcnn-torchvision/model_components/model/engine.py\", line 27, in train_one_epoch\r\n images = list(image.to(device) for image in images)\r\n File \"/home/emcp/Dev/git/EMCP/faster-rcnn-torchvision/model_components/model/engine.py\", line 27, in <genexpr>\r\n images = list(image.to(device) for image in images)\r\nAttributeError: 'Image' object has no attribute 'to'\r\n``\r\nseems to be an image did not load perhaps ?", "url": "https://github.com/pytorch/vision/issues/2167", "state": "closed", "labels": [ "question", "module: models", "module: reference scripts", "topic: object detection" ], "created_at": "2020-04-30T13:21:38Z", "updated_at": "2022-09-15T11:04:05Z", "user": "EMCP" }, { "repo": "pytorch/pytorch", "number": 37478, "title": "How to do indexing in one-dimensional tensor?", "body": "How to do indexing in one-dimensional tensor?\r\n```\r\ntorch::Tensor keep = nms(c_bboxes, c_scores.index({Slice(), 1}), 0.3);\r\ncout << keep.sizes() << endl;\r\nint keep_end = min(750, (int)keep.size(0));\r\ncout << keep_end << endl;\r\nkeep = keep.index({Slice(), keep_end});\r\n```\r\nHow to index 1 dimension tensor in libtorch?\r\n\r\nThe output is :\r\n```\r\n[775]\r\n750\r\nterminate called after throwing an instance of 'c10::IndexError'\r\n what(): too many indices for tensor of dimension 1 (applySlicing at ../aten/src/ATen/TensorIndexing.h:422)\r\n\r\n```\r\n\r\nSo why is that? And how to do the index correctly?", "url": "https://github.com/pytorch/pytorch/issues/37478", "state": "closed", "labels": [], "created_at": "2020-04-29T03:15:08Z", "updated_at": "2020-04-29T03:28:45Z", "user": "Edwardmark" }, { "repo": "pytorch/vision", "number": 2151, "title": "Cannot train deeplabv3_resnet50 with batch size of 1", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. train a `deeplabv3_resnet50`\r\n1. call `forward` with tensor of shape: `torch.Size([1, 3, 240, 320])` (batch with single colour image)\r\n1. receive error message: `ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 256, 1, 1])`\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. 
-->\r\n\r\nTraining should conduct as with `torch.Size([2, 3, 240, 320])` (batch size 2 and up).\r\n\r\n## Environment\r\n```\r\nCollecting environment information...\r\nPyTorch version: 1.5.0\r\nIs debug build: No\r\nCUDA used to build PyTorch: 10.2\r\n\r\nOS: Ubuntu 18.04.4 LTS\r\nGCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0\r\nCMake version: version 3.10.2\r\n\r\nPython version: 3.6\r\nIs CUDA available: No\r\nCUDA runtime version: No CUDA\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.18.2\r\n[pip3] torch==1.5.0\r\n[pip3] torchsummary==1.5.1\r\n[pip3] torchvision==0.6.0\r\n[conda] Could not collect\r\n```\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\r\nTraining with batch size of 1 is probably uncommon but there is no technical reason why this should not be possible and the error message is rather cryptic and unhelpful.", "url": "https://github.com/pytorch/vision/issues/2151", "state": "closed", "labels": [ "question", "module: models", "topic: semantic segmentation" ], "created_at": "2020-04-28T17:05:52Z", "updated_at": "2020-04-28T17:47:30Z", "user": "christian-rauch" }, { "repo": "pytorch/elastic", "number": 97, "title": "How to run elastically on kubernetes (nnodes vs worker replicas)", "body": "### Question\r\n- On the frontpage README.md of the repo it says to run Elastic on 1 ~ 4 nodes, 8 trainers/node, total 8 ~ 32 trainers. Job starts as soon as 1 node is healthy, you may add up to 4 nodes.\r\n\r\n```\r\npython -m torchelastic.distributed.launch\r\n --nnodes=1:4\r\n --nproc_per_node=8\r\n --rdzv_id=JOB_ID\r\n --rdzv_backend=etcd\r\n --rdzv_endpoint=ETCD_HOST:ETCD_PORT\r\n YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)\r\n```\r\n\r\n- In the docs for the kube example it says:\r\n```\r\nset Worker.replicas to the number of nodes to start with (you may modify this later to scale the job in/out)\r\n```\r\n\r\n- When I try to run with a minReplicas of 1, maxReplicas of 2, and replicas of 2 and **the autoscaling group for my training nodes only has one node available** training starts with the one available node, and then the second one joins in when it can, but it seems to reset progress \ud83d\udc47, is this expected because we haven't hit a checkpoint yet? Is this desired? 
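Since the log above suggests the reset happens because no checkpoint had been taken yet, a common mitigation (not specific to torchelastic) is to checkpoint periodically to storage every worker can reach and reload the latest checkpoint after each re-rendezvous. Below is a minimal, hypothetical sketch; `CKPT_PATH`, the local file name, and the once-per-epoch cadence are assumptions for illustration, not part of this report:

```python
# Minimal checkpointing sketch. Every worker reloads the latest checkpoint
# right after a (re-)rendezvous, so a node joining mid-training resumes from
# the last saved epoch instead of starting over at epoch 0.
import os
import torch

CKPT_PATH = "checkpoint.pt"  # in practice: a path on shared storage all nodes can reach

def save_checkpoint(model, optimizer, epoch):
    tmp = CKPT_PATH + ".tmp"
    torch.save(
        {"epoch": epoch,
         "model": model.state_dict(),
         "optimizer": optimizer.state_dict()},
        tmp,
    )
    os.replace(tmp, CKPT_PATH)  # atomic rename so readers never see a partial file

def load_checkpoint(model, optimizer):
    if not os.path.exists(CKPT_PATH):
        return 0  # nothing saved yet, start from epoch 0
    state = torch.load(CKPT_PATH, map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["epoch"] + 1  # resume from the next epoch
```

Saving once per epoch (or every N steps) bounds how much work gets repeated whenever cluster membership changes.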
Especially in a world where we're using spot instances, how can I make sure I don't get stuck in a loop similar to this redoing the same epoch?\r\n```\r\nInstance: [i-015067026ed8f10a3] Epoch: [0][1830/3125] Time 0.139 ( 0.137) Data 0.034 ( 0.034) Loss 4.8472e+00 (5.0661e+00) Acc@1 3.12 ( 3.16) Acc@5 9.38 ( 11.34)\r\nInstance: [i-015067026ed8f10a3] Epoch: [0][1840/3125] Time 0.139 ( 0.137) Data 0.034 ( 0.034) Loss 4.7636e+00 (5.0644e+00) Acc@1 9.38 ( 3.17) Acc@5 18.75 ( 11.37)\r\nINFO 2020-04-27 18:52:52,967 Etcd machines: ['http://0.0.0.0:2379']\r\nINFO 2020-04-27 18:52:53,585 Attempting to join next rendezvous\r\nInstance: [i-015067026ed8f10a3] Epoch: [0][1850/3125] Time 0.139 ( 0.137) Data 0.034 ( 0.034) Loss 4.6797e+00 (5.0630e+00) Acc@1 3.12 ( 3.17) Acc@5 18.75 ( 11.39)\r\nInstance: [i-015067026ed8f10a3] Epoch: [0][1860/3125] Time 0.139 ( 0.137) Data 0.034 ( 0.034) Loss 5.0609e+00 (5.0614e+00) Acc@1 6.25 ( 3.17) Acc@5 15.62 ( 11.42)\r\nINFO 2020-04-27 18:52:53,587 Observed existing rendezvous state: {'status': 'final', 'version': '10', 'participants': [0], 'keep_alives': ['/torchelastic/p2p/run_imagenet/rdzv/v_10/rank_0'], 'num_workers_waiting': 0}\r\nInstance: [i-015067026ed8f10a3] Epoch: [0][1870/3125] Time 0.139 ( 0.137) Data 0.034 ( 0.034) Loss 4.1825e+00 (5.0594e+00) Acc@1 6.25 ( 3.18) Acc@5 28.12 ( 11.46)\r\nINFO 2020-04-27 18:52:53,628 Added self to waiting list. Rendezvous full state: {\"status\": \"final\", \"version\": \"10\", \"participants\": [0], \"keep_alives\": [\"/torchelastic/p2p/run_imagenet/rdzv/v_10/rank_0\"], \"num_workers_waiting\": 1}\r\nInstance: [i-015067026ed8f10a3] Epoch: [0][1880/3125] Time 0.139 ( 0.137) Data 0.034 ( 0.034) Loss 5.0155e+00 (5.0574e+00) Acc@1 0.00 ( 3.18) Acc@5 6.25 ( 11.47)\r\nInstance: [i-015067026ed8f10a3] Epoch: [0][1890/3125] Time 0.139 ( 0.137) Data 0.034 ( 0.034) Loss 4.8805e+00 (5.0552e+00) Acc@1 9.38 ( 3.21) Acc@5 18.75 ( 11.53)\r\nINFO 2020-04-27 18:52:58,719 Attempting to join next rendezvous\r\nINFO 2020-04-27 18:52:58,722 Observed existing rendezvous state: {'status': 'final', 'version': '10', 'participants': [0], 'keep_alives': ['/torchelastic/p2p/run_imagenet/rdzv/v_10/rank_0'], 'num_workers_waiting': 1}\r\nINFO 2020-04-27 18:52:58,782 Added self to waiting list. Rendezvous full state: {\"status\": \"final\", \"version\": \"10\", \"participants\": [0], \"keep_alives\": [\"/torchelastic/p2p/run_imagenet/rdzv/v_10/rank_0\"], \"num_workers_waiting\": 2}\r\nINFO 2020-04-27 18:53:08,501 Keep-alive key /torchelastic/p2p/run_imagenet/rdzv/v_10/rank_0 is not renewed.\r\nINFO 2020-04-27 18:53:08,501 Rendevous version 10 is incomplete. \r\nINFO 2020-04-27 18:53:08,501 Attempting to destroy it.\r\nINFO 2020-04-27 18:53:08,502 Keep-alive key /torchelastic/p2p/run_imagenet/rdzv/v_10/rank_0 is not renewed.\r\nINFO 2020-04-27 18:53:08,502 Destroyed rendezvous version 10 successfully.\r\nINFO 2020-04-27 18:53:08,502 Previously existing rendezvous state changed. Will re-try joining.\r\nINFO 2020-04-27 18:53:08,502 Rendevous version 10 is incomplete. \r\nINFO 2020-04-27 18:53:08,502 Attempting to destroy it.\r\nINFO 2020-04-27 18:53:08,503 Rendezvous attempt failed, will retry. Reason: Key not found : /torchelastic/p2p/run_imagenet/rdzv/active_version\r\nINFO 2020-04-27 18:53:08,502 Attempting to join next rendezvous\r\nINFO 2020-04-27 18:53:08,506 New rendezvous state created: {'status': 'joinable', 'version': '11', 'participants': []}\r\nINFO 2020-04-27 18:53:08,541 Joined rendezvous version 11 as rank 0. 
Full state: {'status': 'joinable', 'version': '11', 'participants': [0]}\r\nINFO 2020-04-27 18:53:08,541 Rank 0 is responsible for join last call.\r\nINFO 2020-04-27 18:53:09,504 Attempting to join next rendezvous\r\nINFO 2020-04-27 18:53:09,507 Observed existing rendezvous state: {'status': 'joinable', 'version': '11', 'participants': [0]}\r\nINFO 2020-04-27 18:53:09,540 Joined rendezvous version 11 as rank ", "url": "https://github.com/pytorch/elastic/issues/97", "state": "closed", "labels": [], "created_at": "2020-04-27T18:58:03Z", "updated_at": "2020-04-30T16:29:37Z", "user": "mttcnnff" }, { "repo": "pytorch/pytorch", "number": 37341, "title": "how to covert libtorch model to onnx model in libtorch1.5", "body": "how to convert libtorch model to onnx model in libtorch1.5 . how to convert libtorch model to tensorrt model in libtorch \uff1f\r\n\n\ncc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/37341", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2020-04-27T11:03:44Z", "updated_at": "2020-04-27T15:13:44Z", "user": "williamlzw" }, { "repo": "pytorch/java-demo", "number": 10, "title": "how to build demo with javac ", "body": "Thanks for the tutorial. \r\n\r\nWell, could you pls write a guide start from javac for the users who are not familiar with Gradle. \r\n\r\nThanks again.\r\n---\r\nwhen running the app, I had always got this.\r\n![image](https://user-images.githubusercontent.com/12872935/80546590-55842c00-89b6-11ea-937f-53363d729906.png)\r\n", "url": "https://github.com/pytorch/java-demo/issues/10", "state": "closed", "labels": [], "created_at": "2020-04-26T19:57:31Z", "updated_at": "2020-05-03T10:08:38Z", "user": "fzwqq" }, { "repo": "pytorch/TensorRT", "number": 45, "title": "Build Project failed with Bazel", "body": "Hi,\r\n\r\nI configure the env and try to build such project. \r\nINFO: Analyzed target //:libtrtorch (34 packages loaded, 1830 targets configured).\r\nINFO: Found 1 target...\r\nINFO: Deleting stale sandbox base /home/xxx/.cache/bazel/_bazel_xxx/2ed6247d0d5238dab6f58f41a8e8ad4b/sandbox\r\nERROR: missing input file 'external/tensorrt/lib/x86_64-linux-gnu/libnvinfer.so', owner: '@tensorrt//:lib/x86_64-linux-gnu/libnvinfer.so'\r\nERROR: /home/xxx/2ed6247d0d5238dab6f58f41a8e8ad4b/external/tensorrt/BUILD.bazel:15:1: @tensorrt//:nvinfer_lib: missing input file '@tensorrt//:lib/x86_64-linux-gnu/libnvinfer.so'\r\nTarget //:libtrtorch failed to build\r\nUse --verbose_failures to see the command lines of failed build steps.\r\nERROR: /home/xxx/.cache/bazel/_bazel_xxxd5238dab6f58f41a8e8ad4b/external/tensorrt/BUILD.bazel:15:1 1 input file(s) do not exist\r\nINFO: Elapsed time: 2.964s, Critical Path: 0.13s\r\nINFO: 0 processes.\r\nFAILED: Build did NOT complete successfully\r\n\r\ncould you help solve such errors or provide more env setting details?\r\nI have email @narendasan and please check it.\r\nThanks.", "url": "https://github.com/pytorch/TensorRT/issues/45", "state": "closed", "labels": [ "question", "component: build system" ], "created_at": "2020-04-26T04:02:19Z", "updated_at": "2020-04-30T03:59:00Z", "user": "alanzhai219" }, { "repo": "pytorch/vision", "number": 2144, "title": "Negative Samples in Faster RCNN training results in NaN RPN_BOX_REG Loss", "body": "Overview:\r\nI updated torch and torchvision to the latest builds. A cool update was that now negative samples could be included in RCNN training. 
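For reference on that negative-sample support: as far as the 0.6-era torchvision API goes, an image with no objects appears to be encoded with empty target tensors rather than a dummy all-zero box, since a zero-width/zero-height box can blow up the box-regression targets. A minimal sketch of such a target (the field names follow the usual target-dict convention, not this exact dataset):

```python
# Sketch of a background-only (negative) target for torchvision Faster R-CNN:
# an empty (0, 4) float tensor signals "no objects at all", whereas a dummy
# [0, 0, 0, 0] box has zero width/height and can produce NaN regression targets.
import torch

negative_target = {
    "boxes": torch.zeros((0, 4), dtype=torch.float32),  # no boxes
    "labels": torch.zeros((0,), dtype=torch.int64),     # no labels
    "image_id": torch.tensor([0]),
    "area": torch.zeros((0,), dtype=torch.float32),
    "iscrowd": torch.zeros((0,), dtype=torch.int64),
}
```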
However, I end up getting a NaN value for loss_rpn_box_reg when I provide negative samples.\r\n\r\nI was training a Pedestrian Detector. Based on my custom dataset input, if a label wasn't provided, I would use it as a negative sample. This is the code snippet I used.\r\n```\r\n def __getitem__(self, idx):\r\n img_path , x1 , y1 , x2 ,y2 , label = self.imgs[idx].split(\",\")\r\n img = Image.open(img_path).convert(\"RGB\")\r\n boxes = []\r\n if label:\r\n pos = np.asarray([[y1,y2],[x1,x2]]).astype(np.float)\r\n xmin = np.min(pos[1])\r\n xmax = np.max(pos[1])\r\n ymin = np.min(pos[0])\r\n ymax = np.max(pos[0])\r\n boxes.append([xmin, ymin, xmax, ymax])\r\n labels = torch.ones((1,), dtype=torch.int64)\r\n iscrowd = torch.zeros((1,), dtype=torch.int64)\r\n else:\r\n boxes.append([0.0,0.0,0.0,0.0])\r\n labels = torch.zeros((1,), dtype=torch.int64)\r\n iscrowd = torch.zeros((0,), dtype=torch.int64)\r\n # convert everything into a torch.Tensor\r\n boxes = torch.as_tensor(boxes, dtype=torch.float32)\r\n image_id = torch.tensor([idx])\r\n area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])\r\n target = {}\r\n target[\"boxes\"] = boxes\r\n target[\"labels\"] = labels\r\n target[\"image_id\"] = image_id\r\n target[\"area\"] = area\r\n target[\"iscrowd\"] = iscrowd\r\n if self.transforms is not None:\r\n img, target = self.transforms(img, target)\r\n return img, target\r\n```\r\n\r\nThe training seems to work fine if I replace the following line:\r\n```\r\nboxes.append([0.0,0.0,0.0,0.0])\r\n```\r\nwith \r\n```\r\nboxes.append([0.0,0.0,0.1,0.1])\r\n```\r\nSo i'm guessing it's because both xmin/ymin and xmax/ymax are equal.\r\n\r\nSetup:\r\nTorch : 1.5.0 \r\nTorchvision: 0.6.0\r\n Nvidia - 440.33 \r\n Cuda-10.2\r\n\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/2144", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-04-24T23:00:57Z", "updated_at": "2021-08-13T13:19:57Z", "user": "praneet195" }, { "repo": "pytorch/examples", "number": 759, "title": "torch::Tensor can't be use with std::tuple or std::vector?", "body": "#include <torch/torch.h>\r\n#include <Windows.h>\r\n#include <iostream>\r\n#include <string>\r\n#include <vector>\r\n\r\nauto ReadRsv(const std::string path) {\r\n\tHANDLE filea= CreateFileA((LPCSTR)path.c_str(), GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN,NULL);\r\n\tint cout;\r\n\tint length;\r\n\tReadFile(filea, &cout,4,NULL,NULL);\r\n\tstd::vector<std::tuple<torch::Tensor, torch::Tensor>> rsv;\r\n\tbyte* dataa = new byte[784];\r\n\tbyte* datab = new byte[1];\r\n\tDWORD hasread;\r\n\tfor (int i = 0; i<cout; ++i) {\r\n\t\tReadFile(filea, &length, 4, &hasread, NULL);\r\n\t\tReadFile(filea, &dataa, 784, &hasread, NULL);\r\n\t\ttorch::Tensor line = torch::from_blob(&dataa, { 784 },torch::kByte);\r\n\t\tReadFile(filea, &datab, 1, &hasread, NULL);\r\n\t\ttorch::Tensor label = torch::from_blob(&datab, { 1 }, torch::kByte);\r\n\t\trsv.push_back(std::make_tuple(line,label)); //wrong?\r\n\t}\r\n\tdelete []dataa;\r\n\tdelete []datab;\r\n\tCloseHandle(filea);\r\n\treturn rsv;\r\n}\r\n\r\n--------------------------------------------------\r\nwin10 x64;libtorch 1.5 release x64;\r\n------------------------\r\ndownload rsv file: https://share.weiyun.com/5DYsiDe\r\n-------------\r\nwhen i=0,it run success,but when i=1,it run wrong. 
\r\n0x00007FFD618EF7E4 (torch_cpu.dll) (in consoleapplication1.exe) throws an exception: 0xC0000005: an access conflict occurs while writing to location 0x0000000000000000.\r\nRemove this sentence and it will run successfully ->rsv.push_back(std::make_tuple(line,label)); ", "url": "https://github.com/pytorch/examples/issues/759", "state": "open", "labels": [ "c++" ], "created_at": "2020-04-24T04:46:38Z", "updated_at": "2022-03-09T20:49:36Z", "comments": 1, "user": "williamlzw" }, { "repo": "pytorch/pytorch", "number": 37201, "title": "Libtorch:how to create tensor from tensorRT fp16 cuda half type pointer?", "body": "how to create tensor from tensorRT fp16 half type pointer in libtorch?\r\nI am working on a detection model. I change the backbone of it to tensorRT to do FP16 inference, and the detection code such as decode boxes and nms is done in libtorch and torchvisoin, so how to create fp16 tensor from tensorRT half type pointers?\r\nThe important code is to illustrate the issue:\r\n```\r\n// tensorRT code to get half type outpus\r\nhalf_float::half* outputs[18];\r\ndoInference(*engine, data, outputs, 1);\r\n// to get the final outputs with libtorch\r\nvector<torch::Tensor> output;\r\n//???? how to feed the date in outpus to output????\r\n// get the result with libtorch method detect_trt->forward\r\n auto res = detect_trt->forward(output); \r\n```\r\nThanks in advance.\n\ncc @yf225 @glaringlee", "url": "https://github.com/pytorch/pytorch/issues/37201", "state": "closed", "labels": [ "module: cpp", "triaged" ], "created_at": "2020-04-24T02:19:45Z", "updated_at": "2020-04-29T03:28:20Z", "user": "Edwardmark" }, { "repo": "pytorch/pytorch", "number": 37134, "title": "C++ model output is a List, how to get each item?", "body": "## \u2753 Questions and Help\r\nI'm using PyTorch1.3 and libtorch1.3.\r\n\r\nIn python, my scripted model's returned type is `List[List[Dict[str, Tensor]]]`\r\n\r\nIn C++, I get model output from `auto output = model.forward(inputs);`, I find that output is a `GenericList`. I don't know how to access each item of GenericList, and I want to know how to get each Tensor from Dict[str, Tensor].\r\n\r\nThx.\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)", "url": "https://github.com/pytorch/pytorch/issues/37134", "state": "closed", "labels": [], "created_at": "2020-04-23T07:53:56Z", "updated_at": "2020-04-23T12:07:01Z", "user": "kaituoxu" }, { "repo": "pytorch/pytorch", "number": 37132, "title": "How to rebuild the libtorch to get the lib.so after download the libtorch from https://download.pytorch.org/libtorch/nightly/cu92/libtorch-win-shared-with-deps-latest.zip?", "body": "how to build and make libtorch after I change the code in https://download.pytorch.org/libtorch/nightly/cu92/libtorch-win-shared-with-deps-latest.zip?\r\nAny guide please?\r\nThanks in advance.", "url": "https://github.com/pytorch/pytorch/issues/37132", "state": "closed", "labels": [], "created_at": "2020-04-23T06:08:40Z", "updated_at": "2020-04-23T07:22:20Z", "user": "Edwardmark" }, { "repo": "pytorch/examples", "number": 757, "title": "the learning rate of word_language_model", "body": "Hi, I have a question about the learning rate in the example \"word_language_model\", \r\nthe init lr = 20, which seems very large, can you tell me why lr is set to equal 20?\r\nThanks a lot!\r\nIf you have some advices about improving the performance, please let me know and thanks", "url": "https://github.com/pytorch/examples/issues/757", "state": "open", "labels": [ "help wanted", "nlp" ], "created_at": "2020-04-22T09:07:29Z", "updated_at": "2024-04-02T21:27:56Z", "comments": 3, "user": "zhangyingbit" }, { "repo": "pytorch/pytorch", "number": 36991, "title": "how to convert quantization_ware_training model to onnx", "body": "## \u2753 Questions and Help\r\npython 3.6\r\npytorch version: 1.4.0\r\nonnx 1.6.0\r\nIn most issues, some of them mentioned about this question. but I still don't know how to convert a int8 model to onnx. I followed the tutorial (https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#quantization-aware-training) to try to train a quantization model, the pretrained model is got from model zoo. I already got the int8 model, but how to convert it to onnx??\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/36991", "state": "closed", "labels": [], "created_at": "2020-04-21T08:31:29Z", "updated_at": "2020-04-21T20:19:36Z", "user": "onesnow123q" }, { "repo": "pytorch/examples", "number": 755, "title": "Process doesn't exit properly for single-node distributed setting.", "body": "Hello, I trained an ImageNet using the following arguments, \r\n```\r\nCUDA_VISIBLE_DEVICES=0,2,3,4 python main.py /media/ramdisk/images --arch resnet18 -j 16 --multiprocessing-distributed --dist-url 'tcp://127.0.0.1:52038' --dist-backend 'nccl' --world-size 1 --rank 0 --print-freq 2500\r\n```\r\nThe visible devices were set to 0,2,3,4 since I had to leave it empty for another use at the time, and print-freq was set at 2500 to avoid generating too much std outputs. The training runs well, but its termination is not so smooth.\r\n\r\nHere is the last few lines of [log](https://github.com/pytorch/examples/files/4498140/log.txt), and a capture of the nvidia-smi at the time. \r\n\r\n![image](https://user-images.githubusercontent.com/19501347/79676978-ce50ee80-8226-11ea-948a-26a7eea72cb6.png)\r\n\r\nOne of the gpu shows an ERR! on GPU Fan and Power usage. And even after killing the processes manually, the error remains. (I had to restart the server in order to get out of the ERR state)\r\n\r\n1. Why does the processes remain?\r\n2. 
What is the proper way to terminate them?\r\n", "url": "https://github.com/pytorch/examples/issues/755", "state": "open", "labels": [ "distributed" ], "created_at": "2020-04-19T01:28:55Z", "updated_at": "2022-03-09T20:52:47Z", "comments": 0, "user": "inventor71" }, { "repo": "pytorch/examples", "number": 754, "title": "Why the kernel size of discriminator & generator is 4 in dcgan", "body": "I don't understand, is there any special role? or cited other model?\r\nthanks\uff01", "url": "https://github.com/pytorch/examples/issues/754", "state": "closed", "labels": [], "created_at": "2020-04-18T00:33:19Z", "updated_at": "2022-03-09T21:44:47Z", "comments": 2, "user": "mltloveyy" }, { "repo": "pytorch/examples", "number": 753, "title": "Imagenet data?", "body": "I'd like to use the imagenet example to train a resnet on the whole imagenet dataset... The problem is I can't seem to actually find the entire dataset anywhere (14 million images). The URLs link on the imagenet website is dead. Does anyone know the standard way to get the classification dataset? i.e. how were the pretrained models in pytorch trained?", "url": "https://github.com/pytorch/examples/issues/753", "state": "closed", "labels": [], "created_at": "2020-04-18T00:01:24Z", "updated_at": "2021-08-11T23:05:25Z", "comments": 3, "user": "justinblaber" }, { "repo": "pytorch/examples", "number": 751, "title": "Example of MNIST using RNN", "body": "Hi @osalpekar ,\r\n\r\nI would like to implement an example of MNIST using RNN.\r\n\r\n**Motivation:** Create pytorch example similar to Official Tensorflow Keras RNN example using MNIST [here](https://www.tensorflow.org/guide/keras/rnn)\r\n\r\nI have written and tested the code by modifying following example on MNIST [here](https://github.com/pytorch/examples/tree/master/mnist) . Please let me know if I can raise a PR for this. \r\n\r\nThanks and regards,\r\nRakesh", "url": "https://github.com/pytorch/examples/issues/751", "state": "closed", "labels": [], "created_at": "2020-04-16T11:52:49Z", "updated_at": "2022-03-10T00:30:41Z", "comments": 6, "user": "rakesh-malviya" }, { "repo": "pytorch/vision", "number": 2109, "title": "development plan of \"functional_tensor\"", "body": "## \u2753 Questions and Help\r\n\r\nHi torchvision team,\r\n\r\nThis is Nic from NVIDIA, thanks for sharing your great work on data processing solutions!\r\n1. I saw you developed \"functional_tensor.py\" to support Tensor type data but didn't find where it is used in transforms, may I know the reason?\r\n2. And what's your future plan of transforms for Numpy and Tensor data type?\r\nActually, I found 2 Tensor only transforms, others are for PIL or numpy.\r\n3. If you want to support both Tensor and Numpy for all transforms, explicitly ask users to select transform for Numpy or for Tensor?\r\nOr implicitly detect data type in transforms and use \"function.py\" or \"function_tensor.py\"?\r\n\r\nThanks in advance.\r\n", "url": "https://github.com/pytorch/vision/issues/2109", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2020-04-16T02:10:03Z", "updated_at": "2020-10-21T08:23:16Z", "user": "Nic-Ma" }, { "repo": "pytorch/vision", "number": 2108, "title": "Not getting proper mask as instance.", "body": "Hi guys\r\n\r\nUse pretrained weights=yes\r\nno. of epoch=400\r\nno. of class=1\r\n\r\nAt the time of prediction i am not getting individual masks for individual object. i am getting some extra mask but those are empty or partial . what can be the reason? 
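On the empty or partial extra masks: Mask R-CNN returns one soft mask per detection, including low-confidence ones, so filtering by score and thresholding the mask probabilities is the usual clean-up step. A small illustrative sketch follows; the pretrained model, the random placeholder image, and the 0.7/0.5 thresholds are assumptions for the example, not details from this report:

```python
# Post-processing sketch for torchvision Mask R-CNN outputs: keep only
# high-scoring detections and binarize their soft masks.
import torch
import torchvision

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).to(device)
model.eval()

image = torch.rand(3, 480, 640, device=device)  # placeholder for a real CxHxW image in [0, 1]
with torch.no_grad():
    prediction = model([image])[0]

keep = prediction["scores"] > 0.7          # drop low-confidence detections
masks = prediction["masks"][keep]          # (N, 1, H, W) soft masks in [0, 1]
binary_masks = (masks > 0.5).squeeze(1)    # (N, H, W) boolean instance masks
```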
", "url": "https://github.com/pytorch/vision/issues/2108", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-04-15T11:41:30Z", "updated_at": "2020-04-15T15:11:43Z", "user": "vivekdeepquanty" }, { "repo": "pytorch/vision", "number": 2106, "title": "I can't load mobilenet under version 0.5.0", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1.a =models.mobilenet()\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nTypeError: 'module' object is not callable\r\n## Expected behavior\r\nload the mobilenet model, but I can find mobilenet module in dir(torchvision.models)\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\nubuntu16.04\r\ntorchvision version 0.5.0\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)\r\n(or fill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n\r\n - PyTorch / torchvision Version (e.g., 1.0 / 0.4.0):\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch / torchvision (`conda`, `pip`, source):pip\r\n - Build command you used (if compiling from source):\r\n - Python version:\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/vision/issues/2106", "state": "closed", "labels": [ "question", "module: models", "topic: classification" ], "created_at": "2020-04-15T06:08:59Z", "updated_at": "2020-04-15T10:04:09Z", "user": "lunasdejavu" }, { "repo": "pytorch/pytorch", "number": 36644, "title": "I had build pytourch from source. But how to install after making a build?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\r\nCan anybody let me know how to install after build from source?", "url": "https://github.com/pytorch/pytorch/issues/36644", "state": "closed", "labels": [], "created_at": "2020-04-15T06:04:05Z", "updated_at": "2020-04-18T05:34:15Z", "user": "tnavadiya" }, { "repo": "pytorch/vision", "number": 2103, "title": "size -> size() ?", "body": "traceback\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/maksim/Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/193.6911.25/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_exec2.py\", line 3, in Exec\r\n exec(exp, global_vars, local_vars)\r\n File \"<string>\", line 3, in <module>\r\n File \"/Users/maksim/dev_projects/denoising-fluorescence/denoising/venv/lib/python3.7/site-packages/torchvision/transforms/transforms.py\", line 247, in __call__\r\n return F.center_crop(img, self.size)\r\n File \"/Users/maksim/dev_projects/denoising-fluorescence/denoising/venv/lib/python3.7/site-packages/torchvision/transforms/functional.py\", line 382, in center_crop\r\n image_width, image_height = img.size\r\nTypeError: cannot unpack non-iterable builtin_function_or_method object\r\n>>> img.size()\r\ntorch.Size([1, 1, 2160, 2560])\r\n```\r\n\r\nline with bug\r\n\r\nhttps://github.com/pytorch/vision/blob/master/torchvision/transforms/functional.py#L374\r\n\r\nmy version numbers\r\n\r\n```\r\ntorch==1.4.0\r\ntorchvision==0.5.0\r\n```", "url": "https://github.com/pytorch/vision/issues/2103", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2020-04-14T14:15:57Z", "updated_at": "2020-04-14T15:09:30Z", "user": "makslevental" }, { "repo": "pytorch/pytorch", "number": 36574, "title": "how to remove ios dependency", "body": "I have written pytorch c++ app.\r\nI clone pytorch code and build it as per guidelines in pytorch mobile\r\nwhen i compile the app for x86_64 i am getting ios dependencies as below.\r\nplease help me avoid these ios errors. 
I want to run app in x86 linux pc.\r\n\r\n-- The C compiler identification is GNU 6.5.0\r\n-- The CXX compiler identification is GNU 6.5.0\r\n-- Check for working C compiler: /usr/bin/cc\r\n-- Check for working C compiler: /usr/bin/cc -- works\r\n-- Detecting C compiler ABI info\r\n-- Detecting C compiler ABI info - done\r\n-- Detecting C compile features\r\n-- Detecting C compile features - done\r\n-- Check for working CXX compiler: /usr/bin/c++\r\n-- Check for working CXX compiler: /usr/bin/c++ -- works\r\n-- Detecting CXX compiler ABI info\r\n-- Detecting CXX compiler ABI info - done\r\n-- Detecting CXX compile features\r\n-- Detecting CXX compile features - done\r\n-- Found torch: /home/anilkumar.av/pytorch-mobile/pytorch/build_android/install/lib/libtorch.a\r\n-- Configuring done\r\n-- Generating done\r\n-- Build files have been written to: /home/anilkumar.av/pytorch-mobile/helloworld/build\r\nScanning dependencies of target pythExec\r\n[ 50%] Building CXX object CMakeFiles/pythExec.dir/pythExec.cpp.o\r\n[100%] Linking CXX executable pythExec\r\n/home/anilkumar.av/pytorch-mobile/pytorch/build_android/install/lib/libc10.a(TensorImpl.cpp.o): In function `std::__ndk1::basic_ios<char, std::__ndk1::char_traits<char> >::init(std::__ndk1::basic_streambuf<char, std::__ndk1::char_traits<char> >*)':\r\n/home/anilkumar.av/Android/Sdk/ndk/21.0.6113669/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/c++/v1/ios:711: undefined reference to `std::__ndk1::ios_base::init(void*)'\r\n/home/anilkumar.av/pytorch-mobile/pytorch/build_android/install/lib/libc10.a(TensorImpl.cpp.o): In function `basic_streambuf':\r\n/home/anilkumar.av/Android/Sdk/ndk/21.0.6113669/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/c++/v1/streambuf:232: undefined reference to `std::__ndk1::locale::locale()'\r\n", "url": "https://github.com/pytorch/pytorch/issues/36574", "state": "closed", "labels": [], "created_at": "2020-04-14T09:54:47Z", "updated_at": "2020-04-14T15:19:48Z", "user": "avanilkumar" }, { "repo": "pytorch/vision", "number": 2101, "title": "Where to download torchvision0.5.1 .whl files", "body": "Hi,\r\nI would like to download the torchvision 0.5.1 version but I cannot find a source to download the .whl file.\r\nIt is not in pypi.org nor in pytorch.org, nor anaconda.org.\r\n\r\nI have proxy issues on my computer so I can not use pip command, I need to download the .whl file first.\r\nCan you help me by giving an address of the 0.5.1 version? or some other hint to solve this? 
Thank you.", "url": "https://github.com/pytorch/vision/issues/2101", "state": "closed", "labels": [ "question", "topic: binaries" ], "created_at": "2020-04-14T09:12:26Z", "updated_at": "2020-04-14T13:32:57Z", "user": "300LiterPropofol" }, { "repo": "pytorch/pytorch", "number": 36554, "title": "How to save sth in python-api pytorch, but load it in libtorch?", "body": "How can I save some tensor in python, but load it in libtorch:\r\n\r\nI save tensor named piror using python, using the code:\r\n```\r\ntorch.save(prior, 'prior.pth')\r\n```\r\nAnd I load the tensor in libtorch using C++, by the following code:\r\n```\r\nstd::vector<torch::Tensor> tensorVec;\r\ntorch::load(tensorVec, \"/app/model/prior.pth\");\r\ntorch::Tensor priors = tensorVec[0];\r\n```\r\nBut I got the error:\r\nterminate called after throwing an instance of 'c10::Error'\r\n what(): `torch::jit::load()` received a file from `torch.save()`, but `torch::jit::load()` can only load files produced by `torch.jit.save()` (load at ../torch/csrc/jit/serialization/import.cpp:285)\r\n\r\nWhy is that? And what should I do to solve the issue? Thanks in advance.\n\ncc @suo @yf225", "url": "https://github.com/pytorch/pytorch/issues/36554", "state": "closed", "labels": [ "oncall: jit", "module: cpp", "module: serialization" ], "created_at": "2020-04-14T02:14:09Z", "updated_at": "2020-04-14T14:31:07Z", "user": "Edwardmark" }, { "repo": "pytorch/serve", "number": 192, "title": "Steps for how to preserve model store state across docker container restarts", "body": "When running torchserve in docker containers, provide steps for how to preserve state across container restarts", "url": "https://github.com/pytorch/serve/issues/192", "state": "closed", "labels": [ "enhancement" ], "created_at": "2020-04-12T21:37:27Z", "updated_at": "2022-02-09T23:49:05Z", "user": "chauhang" }, { "repo": "pytorch/serve", "number": 191, "title": "Add steps for how to run gpu docker container", "body": "Please add the steps for running GPU docker container in the docker readme. Steps should describe how to specify the gpus to be used on a multi-gpu machine \r\n\r\neg \r\n\r\n`docker run --rm -it --gpus device=0 -p 8080:8080 -p 8081:8081 torchserve:v0.1-gpu-latest`\r\n\r\nwhere device=0,1,2,3 selects GPUs indexed by ordinals 0,1,2 and 3, respectively. torchserve will see only these GPUs. 
If you specify device=all, then the torchserve will see all the available GPUs.\r\n", "url": "https://github.com/pytorch/serve/issues/191", "state": "closed", "labels": [ "documentation" ], "created_at": "2020-04-12T20:05:15Z", "updated_at": "2020-06-09T23:47:47Z", "user": "chauhang" }, { "repo": "pytorch/vision", "number": 2095, "title": "Unable to reproduce Faster RCNN evaluation metrics on pascal voc 2010 for Object Detection ", "body": "Hi Everyone,\r\n\r\nI am training the **pretrained Faster RCNN model** on PASCAL VOC 2010 dataset for Object Detection by following this pyTorch finetuning tutorial: [pytorch.org/tutorials/intermediate/torchvision_tutorial.html](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html)\r\n\r\n```\r\nmodel = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)\r\n \r\nin_features = model.roi_heads.box_predictor.cls_score.in_features\r\nmodel.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=21)\r\n```\r\nI also tried changing backbone to mobilenet_v2 as described in the above tutorial but results were much worse.\r\n\r\nI am using this dataset loading code with a batch size of 2: [https://github.com/pytorch/vision/issues/1097#issuecomment-508917489](https://github.com/pytorch/vision/issues/1097#issuecomment-508917489). I am also using RandomHorizontalFlip transformation while training. I train the models using the code in the tutorial ([github.com/pytorch/vision/blob/master/references/detection/engine.py](https://github.com/pytorch/vision/blob/master/references/detection/engine.py)).\r\n\r\nThe model performance on val dataset degrades after 5th epoch and the best **mAP** I could get is about **47%** which is much less than the expected performance (69.9%). Please note that I train on train split and evaluate on val split whereas in the paper, the model is trained on trainval and tested on test split but I don't think this can lead to such a reduction of performance.\r\n```\r\nparams = [p for p in model.parameters() if p.requires_grad]\r\noptimizer = torch.optim.SGD(params, lr=0.0001, momentum=0.9, weight_decay=0.005)\r\n\r\n# optimizer = torch.optim.Adam(params, lr=0.0001, weight_decay=0.005)\r\n# Adam gives much worse results (< 10% mAP!) for some reason!\r\n\r\nfor epoch in range(30):\r\n train_one_epoch(model, optimizer, train_loader, device, epoch, print_freq=1000)\r\n evaluate(model, val_loader, device=device)\r\n```\r\n\r\n```\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.472\r\n Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.768\r\n Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.522\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.188\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.402\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.518\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.412\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.599\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.607\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.318\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.535\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.650\r\n```\r\nCan anyone please help me resolve the issue? 
Do I have to make any changes to the default parameters in torchvision's faster_rcnn implementation?\r\n\r\n**Specifications:**\r\nPython - v3.7.3\r\npyTorch - v1.3.0\r\ntorchvision - v0.4.1\r\nCUDA - v10.1\r\nGPU - NVIDIA GeForce RTX 2080 8GB\r\n\r\nThanks for your time!", "url": "https://github.com/pytorch/vision/issues/2095", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: object detection" ], "created_at": "2020-04-12T03:26:34Z", "updated_at": "2020-04-25T00:31:08Z", "user": "kevalmorabia97" }, { "repo": "pytorch/pytorch", "number": 36384, "title": "[quantization] how to quantize model which include not support to quantize layer", "body": "Hi, I have a model which include `prelu` layer, not support to quantize in current pytorch version, how to quantize this model for x86 CPU now? I try to define this model with the following format: \r\n```python\r\nself.convbn1 = QuantizableConvBNBlock(xxx) (has defined)\r\nself.prelu = nn.PReLU()\r\nself.convbn2 = QuantizableConvBNBlock(xxx) \r\nself.quant = torch.quantization.QuantStub()\r\nself.dequant = torch.quantization.DeQuantStub()\r\n \r\ndef forward(self, x):\r\n x = self.quant(x)\r\n x = self.convbn1(x) \r\n x = self.dequant(x)\r\n x = self.prelu(x)\r\n x = self.quant(x)\r\n x = self.convbn2(x)\r\n ...\r\n```\r\nbut I found after perform the quantization-aware training following tutorial, eval result is very terrible, what is the reason and how to solve it ?\r\nThanks ", "url": "https://github.com/pytorch/pytorch/issues/36384", "state": "closed", "labels": [], "created_at": "2020-04-10T13:24:18Z", "updated_at": "2020-04-13T04:19:47Z", "user": "zhangyongwei93" }, { "repo": "pytorch/vision", "number": 2089, "title": "How to use torchvision.ops.nms in cpp?", "body": "How to use torchvision.ops.nms in cpp?\r\nWhat should I include and how to call the funciton? Any doc?", "url": "https://github.com/pytorch/vision/issues/2089", "state": "closed", "labels": [ "help wanted", "module: documentation", "module: c++ frontend" ], "created_at": "2020-04-10T09:57:56Z", "updated_at": "2021-02-08T13:19:28Z", "user": "Edwardmark" }, { "repo": "pytorch/ELF", "number": 165, "title": "How to parse SGF files analyzed by ELF GO", "body": "Hi, I want to ask for more detailed information about SGF files provided in the Facebook elf-go tools.\r\n\r\nhttps://ai.facebook.com/tools/elf-opengo\r\nIn the above link, SGF files analyzed by elf-go are provided and I want to analyze those files.\r\nMore specifically SGF files in the link below.\r\nhttps://dl.fbaipublicfiles.com/elfopengo/analysis/data/gogod_commentary_sgfs.gzip\r\n\r\nThe format is slightly different from typical SGF files. Each line in the recorded move includes additional tree structured information generated by elf-go.\r\nBut I cannot find detailed information about the format of the file nor how to parse them.\r\nCan I get a parser for these files? 
Or any detailed instructions on how to parse them correctly?\r\n\r\nThank you.", "url": "https://github.com/pytorch/ELF/issues/165", "state": "open", "labels": [], "created_at": "2020-04-10T08:32:52Z", "updated_at": "2020-04-10T08:32:52Z", "user": "mibastro" }, { "repo": "pytorch/pytorch", "number": 36367, "title": "how to ensure the quality of pytorch framework?", "body": "Hi, I am a postgrad student, and I am developing my own deep-learning framework inside my lab, I just curious about how you guys maintained your framwork?Besides unit tests, is there any methods that can guarantee qulity?", "url": "https://github.com/pytorch/pytorch/issues/36367", "state": "closed", "labels": [], "created_at": "2020-04-10T05:02:35Z", "updated_at": "2020-04-13T04:14:45Z", "user": "MountainHil" }, { "repo": "pytorch/ios-demo-app", "number": 14, "title": "how to quantize the mobilenet", "body": "would you please provide the steps to quantize the mobilenet?\r\n\r\nhttps://github.com/pytorch/ios-demo-app/blob/master/PyTorchDemo/PyTorchDemo/ImageClassification/model/mobilenet_quantized.pt", "url": "https://github.com/pytorch/ios-demo-app/issues/14", "state": "closed", "labels": [], "created_at": "2020-04-09T10:32:56Z", "updated_at": "2020-12-16T07:38:59Z", "user": "ronjian" }, { "repo": "pytorch/vision", "number": 2083, "title": "COCO AP of FPN with ResNet-50 backbone for object detection", "body": "Hi @fmassa, thanks for the great codes.\r\nI am confused about COCO AP of `Faster R-CNN ResNet-50 FPN`,\r\nfrom [Document](https://pytorch.org/docs/stable/torchvision/models.html) and #925 and [Source Code](https://github.com/pytorch/vision/blob/master/references/detection/train.py#L156,L173),\r\nI guess that the model `Faster R-CNN ResNet-50 FPN` was trained with following hyperparameters and got AP 37.0, am I right?\r\n\r\n| Repo | Network | box AP | scheduler | epochs | lr-steps | batch size | lr |\r\n|:-----------------------------:|:-------------:|:----------:|:-------------:|:---------:|:----------------:|:--------------:|:--------:|\r\n| vision | R-50 FPN | 37.0 | **2x** | 26 | 16, 22 | 16 | 0.02 |\r\n\r\n> batch_size = 2 * 8 (NUM_GPU) = 16\r\n\r\nHowever, I noticed that the box AP in [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/MODEL_ZOO.md#end-to-end-faster-and-mask-r-cnn-baselines) and [Detectron](https://github.com/facebookresearch/Detectron/blob/master/MODEL_ZOO.md#end-to-end-faster--mask-r-cnn-baselines) seems to have better performance as below:\r\n\r\n| Repo | Network | box AP | scheduler | epochs | lr-steps | batch size | lr |\r\n|:-----------------------------:|:-------------:|:----------:|:-------------:|:---------:|:-----------------:|:--------------:|:--------:|\r\n| maskrcnn-benchmark | R-50 FPN | 36.8 | **1x** | 12.28 | 8.19, 10.92 | 16 | 0.02 |\r\n| Detectron | R-50 FPN | 36.7 | **1x** | 12.28 | 8.19, 10.92 | 16 | 0.02 |\r\n| Detectron | R-50 FPN | 37.9 | **2x** | 24.56 | 16.37, 21.83 | 16 | 0.02 |\r\n\r\n> from [maskrcnn-benchmark 1x config](https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/configs/e2e_faster_rcnn_R_50_FPN_1x.yaml)\r\n> epochs = 90000 (steps) * 16 (batch size) / 117266 (training images per epoch) = 12.28\r\n> btw, COCO2017 has 118287 training images but only 117266 training images contain at least one object\r\n\r\nI would like to know what causes this gap?\r\n\r\n- 37.0 (torchvision 2x) vs 36.8 (maskrcnn-benchmark 1x)\r\n- 37.0 (torchvision 2x) vs 37.9 (Detectron 2x)\r\n\r\nBesides, could I have the result which 
trained with scheduler 1x?\r\n\r\n| Repo | Network | box AP | scheduler | epochs | lr-steps | batch size | lr |\r\n|:-----------------------------:|:-------------:|:----------:|:-------------:|:---------:|:----------------:|:--------------:|:--------:|\r\n| vision | R-50 FPN | ?? | **1x** | 13 | 8, 11 | 16 | 0.02 |\r\n\r\nThank you!", "url": "https://github.com/pytorch/vision/issues/2083", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-04-09T03:44:06Z", "updated_at": "2020-04-27T00:59:56Z", "user": "potterhsu" }, { "repo": "pytorch/vision", "number": 2082, "title": "Does vision cpp api support half cuda precision ?", "body": "Does vision cpp api support half cuda precision ?\r\nI see that in the CMakelist.txt, it used flags as -D__CUDA_NO_HALF_OPERATORS_.\r\nhttps://github.com/pytorch/vision/blob/master/CMakeLists.txt#L10", "url": "https://github.com/pytorch/vision/issues/2082", "state": "closed", "labels": [ "question", "module: c++ frontend" ], "created_at": "2020-04-09T03:29:21Z", "updated_at": "2020-04-16T06:49:46Z", "user": "Edwardmark" }, { "repo": "pytorch/vision", "number": 2079, "title": "batch normalization affects model.eval's prediction", "body": "## \ud83d\udc1b Bug\r\nI'm not entirely sure if I maybe do not miss something VERY obvious here, feel free to tell me if that is the case, however I think it might be a bug: Batch normalization should only affect the input during training. However, I find with an easy experiment, that this is not the case. Note that dropout is not applied.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n0. generate Input \"input_\"\r\n1. initialize model densenet121\r\n2. set model to eval mode and predict using prediction1 = model(input_)\r\n3. set model to training mode\r\n4. predict something (without model update!)\r\n5. set model to eval mode again\r\n6. predict class from the same input prediction2 = model(input_)\r\n7. note that there was no weight update and the prediction1 != prediction2\r\n\r\n```python\r\nfrom torchvision import models\r\nimport torch\r\n\r\ndef set_parameter_requires_grad(model):\r\n for param in model.parameters():\r\n param.requires_grad = False\r\n\r\nif __name__ == \"__main__\":\r\n model = models.densenet121(pretrained=True)\r\n set_parameter_requires_grad(model)\r\n input_ = torch.zeros((1,3, 224, 224))\r\n model.eval()\r\n eval_value = model(input_)\r\n model.train()\r\n another_variable = model(input_)\r\n model.eval()\r\n eval_value_2 = model(input_)\r\n\r\n print(eval_value[0,0:3])\r\n print(eval_value_2[0,0:3])\r\n\r\n###### RETURNS######\r\ntensor([-0.3295, 0.2166, -0.6806])\r\ntensor([-0.5839, 0.4981, -0.4104])\r\n```\r\n\r\n\r\n## Expected behavior\r\n\r\nI expected the model to be independent from batch normalization during model.eval(), i.e. prediction1 == prediction2\r\n\r\n## Environment\r\n\r\ntested on ubuntu 1804 and windows 10 using python 3.7, torchvision 0.5.0 and torch 1.4.0\r\n\r\n__\r\nEdit: I'm stupid. The batch normalization layers apply the statistics seen in the training during the evaluation. 
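A tiny self-contained sketch of that effect, using a bare BatchNorm2d instead of the densenet, in case it helps the next reader:

```python
# A forward pass in train() mode updates BatchNorm running statistics, which is
# why a later eval() pass gives different outputs even without any weight update.
import torch

bn = torch.nn.BatchNorm2d(3)
x = torch.randn(4, 3, 8, 8)

print(bn.running_mean)   # all zeros right after construction

bn.train()
_ = bn(x)                # updates running_mean / running_var via momentum

print(bn.running_mean)   # now moved toward the batch statistics

bn.eval()
_ = bn(x)                # eval() uses the (updated) running stats, no further change
print(bn.running_mean)
```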
I closed this issue.", "url": "https://github.com/pytorch/vision/issues/2079", "state": "closed", "labels": [ "question" ], "created_at": "2020-04-08T19:27:33Z", "updated_at": "2020-04-09T10:09:54Z", "user": "dnns92" }, { "repo": "pytorch/TensorRT", "number": 37, "title": "Should the compiler check to see if modules are in eval mode?", "body": "", "url": "https://github.com/pytorch/TensorRT/issues/37", "state": "closed", "labels": [ "question", "component: core" ], "created_at": "2020-04-07T22:03:59Z", "updated_at": "2020-05-28T20:33:41Z", "user": "narendasan" }, { "repo": "pytorch/pytorch", "number": 36132, "title": "how to get cuda stream in torch 1.5?", "body": "I previous using this get cuda stream:\r\n\r\n```\r\nmodulated_deformable_col2im_coord_cuda(THCState_getCurrentStream(state),\r\n```\r\n\r\nI found this API gone `THCState_getCurrentStream` without even a deprecation warning, what's the altinate of this API?\r\nin torch 1.5?\n\ncc @yf225", "url": "https://github.com/pytorch/pytorch/issues/36132", "state": "closed", "labels": [ "module: cpp", "triaged" ], "created_at": "2020-04-07T06:59:27Z", "updated_at": "2020-04-09T03:21:48Z", "user": "lucasjinreal" }, { "repo": "pytorch/text", "number": 723, "title": "How to use custom parsers in Torchtext", "body": " I would like to use custom parser like nltk in torchtext, how to do that? \r\n", "url": "https://github.com/pytorch/text/issues/723", "state": "closed", "labels": [], "created_at": "2020-04-05T07:56:22Z", "updated_at": "2022-06-24T00:15:05Z", "user": "nawshad" }, { "repo": "pytorch/vision", "number": 2063, "title": "Wrong lr schedule in semantic segmentation sample?", "body": "Hi! I am using the semantic segmentation reference training scripts and I think I found an issue with the lr scheduler.\r\n \r\nIn the documentation of [torch.optim.lr_scheduler.LambdaLR](https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.LambdaLR) it says that the lambda function receives an integer parameter epoch, but in the training reference script it looks that the parameter `x` is used as if it was the global step: https://github.com/pytorch/vision/blob/e61538cba036c42bab23ce8f9d205da9889977ae/references/segmentation/train.py#L158\r\nIf I understand it correctly, this is the poly learning rate policy used in [DeepLab](https://arxiv.org/pdf/1606.00915.pdf), so I think that instead it should be:\r\n```python\r\nlambda x: (1 - x / args.epochs) ** 0.9)\r\n```\r\nAlso, it'd be interesting to have a way of changing the learning rate at the finer resolution of iterations, instead of epochs.\r\n\r\n@fmassa what do you think?", "url": "https://github.com/pytorch/vision/issues/2063", "state": "closed", "labels": [ "question", "module: reference scripts" ], "created_at": "2020-04-04T17:56:02Z", "updated_at": "2020-04-06T13:26:57Z", "user": "oscmansan" }, { "repo": "pytorch/text", "number": 722, "title": "How to load a word embedding dictionary using torchtext", "body": "Hi,\r\n\r\nI have tried to write that to a gensim word2vec format then load, but it throws error about string to float conversion. 
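One workaround that seems to fit the 0.5/0.6-era torchtext API is to write the dictionary out in the plain text format `token v1 v2 ... vn` and point `torchtext.vocab.Vectors` at it; a gensim-style header line or tokens containing spaces are frequent causes of string-to-float parsing failures. A hedged sketch (`embedding_dict` is an assumed `{str: list[float]}` mapping, not something from this post):

```python
# Dump a python dict of embeddings to the plain "token v1 v2 ... vn" format,
# then load it as a torchtext Vectors object that can back a Field vocabulary.
from torchtext.vocab import Vectors

embedding_dict = {"hello": [0.1, 0.2, 0.3], "world": [0.4, 0.5, 0.6]}  # stand-in data

with open("custom_embeddings.txt", "w", encoding="utf-8") as f:
    for token, vector in embedding_dict.items():
        f.write(token + " " + " ".join(str(v) for v in vector) + "\n")

vectors = Vectors(name="custom_embeddings.txt", cache=".")
# later: TEXT.build_vocab(train_data, vectors=vectors)
```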
Is there a standard way to use custom pre-trained embedding (not created through gensim) which is a python dictionary to load using torchtext?\r\n\r\nThanks,\r\n", "url": "https://github.com/pytorch/text/issues/722", "state": "open", "labels": [], "created_at": "2020-04-04T01:05:08Z", "updated_at": "2020-04-06T15:57:47Z", "user": "nawshad" }, { "repo": "pytorch/pytorch", "number": 35877, "title": "How to use a network B to obtain a tensor and use it to replace the weight of a layer of network A, and this back propagation process will train A and B", "body": "## \u2753 Questions and Help\r\n\r\n### How to use a network B to obtain a tensor and use it to replace the weight of a layer of network A and this backpropagation process will train A and B\u3002\r\n\r\nI make a code, which uses the weight of a layer of network A as input into network B. then B output a tensor and I use it to replace the weight of a layer of the network A. But I find the weight only needs the type of nn.Parameter() and I convert the tensor to Parameter type, but I find the weight of network B does not get an update. Can you help me please! \r\n\r\nIt's noted that I want to train the weight of network B by the loss of the network A.\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/35877", "state": "closed", "labels": [], "created_at": "2020-04-02T13:28:13Z", "updated_at": "2020-04-06T17:09:36Z", "user": "syiswell" }, { "repo": "pytorch/tutorials", "number": 924, "title": "5x5 kernel size instead of 3x3?", "body": "Hi, I just read this tutorial on your official website [NEURAL NETWORKS](https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py)\r\nand think according to the image and the following code, maybe the kernel size of the first convolution layer should be 5x5 instead of 3x3.\r\nIf we follow [this formula](https://stackoverflow.com/questions/44193270/how-to-calculate-the-output-size-after-convolving-and-pooling-to-the-input-image) and by default the argument of **conv2d** is **padding = 0** and **stride = 1**, we have\r\n* **1st conv2d with 5x5 kernel**: 32x32 -> 28x28\r\n* **1st max pooling**: 28x28 -> 14x14\r\n* **2nd conv2d with 3x3 kernel**: 14x14 -> 12x12\r\n* **2nd max pooling**: 12x12 -> 6x6\r\n\r\nWhich will explain both the image and the following linear layer (6x6 image dimension) in your code.\r\n\r\n\r\n", "url": "https://github.com/pytorch/tutorials/issues/924", "state": "closed", "labels": [], "created_at": "2020-04-02T08:36:52Z", "updated_at": "2021-04-26T20:13:45Z", "comments": 1, "user": "sudo-bcli" }, { "repo": "pytorch/TensorRT", "number": 34, "title": "What the advantages of TRTorch? ", "body": "I used to use torch2trt to convert pytorch module, could you explain the advatage over torch2trt? \r\n\r\nIf the model contain op that tensorrt don't support, can trtorch convert it to engine? \r\nOtherwise run the op supported by tensorrt with tensorrt, and other use libtorch?\r\n\r\nI really appreciate for your great works, if you can answer my doubts, I will be very grateful.", "url": "https://github.com/pytorch/TensorRT/issues/34", "state": "closed", "labels": [ "question" ], "created_at": "2020-04-01T10:07:45Z", "updated_at": "2020-05-28T20:33:19Z", "user": "dancingpipi" }, { "repo": "pytorch/pytorch", "number": 35759, "title": "how to do 3d data augmentation in parallel on the gpu?", "body": "I have a lot of 3d data and need to do various data augmentation. 
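One pattern that sidesteps the no-CUDA-in-dataloader-worker restriction is to keep the DataLoader workers for loading only and run the random augmentation as batched tensor ops on the GPU inside the training loop. A hedged sketch (the volume shapes and the flip/intensity transforms are illustrative placeholders):

```python
# Load raw 3D volumes with CPU dataloader workers, then augment whole batches
# on the GPU with ordinary tensor ops inside the training loop.
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

volumes = torch.randn(16, 1, 32, 64, 64)          # (N, C, D, H, W) placeholder data
loader = DataLoader(TensorDataset(volumes), batch_size=4, num_workers=2)

def augment_on_gpu(batch):
    # random flip along depth / height / width, applied to the whole batch at once
    for dim in (2, 3, 4):
        if torch.rand(1, device=batch.device) < 0.5:
            batch = torch.flip(batch, dims=[dim])
    # random per-sample intensity scaling as a stand-in for other augmentations
    scale = 1.0 + 0.1 * torch.randn(batch.shape[0], 1, 1, 1, 1, device=batch.device)
    return batch * scale

for (batch,) in loader:
    batch = augment_on_gpu(batch.to(device, non_blocking=True))
    # ... forward / backward pass on the augmented batch ...
```

Because the augmentation is expressed as ordinary tensor ops, it runs on full batches at once and stays out of the multiprocessing workers entirely.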
I want to do data augmentation in parallel on the gpu, but it seems that pytorch does not allow gpu operation in the dataloader. Is there any good way?\r\n\n\ncc @ngimel @SsnL", "url": "https://github.com/pytorch/pytorch/issues/35759", "state": "open", "labels": [ "module: dataloader", "module: cuda", "triaged" ], "created_at": "2020-03-31T15:25:22Z", "updated_at": "2020-04-01T13:23:34Z", "user": "chuxiang93" }, { "repo": "pytorch/tutorials", "number": 918, "title": "Saving the weights", "body": "After training for certain iterations. How to save the weights, Which can be used for further analysis", "url": "https://github.com/pytorch/tutorials/issues/918", "state": "closed", "labels": [], "created_at": "2020-03-31T08:39:49Z", "updated_at": "2021-06-08T21:29:42Z", "comments": 1, "user": "SRIKARHI" }, { "repo": "pytorch/TensorRT", "number": 28, "title": "How can I build TRTorch without network?", "body": "as the title.", "url": "https://github.com/pytorch/TensorRT/issues/28", "state": "closed", "labels": [ "question", "component: build system" ], "created_at": "2020-03-30T08:15:49Z", "updated_at": "2020-04-24T17:29:30Z", "user": "dancingpipi" }, { "repo": "pytorch/ELF", "number": 163, "title": "How to use ELF in Sabaki or gogui?", "body": "Could anybody help tell me how to use ELF OpenGo with Sabaki or gogui?\r\ndon't use weight of leelazero-elf.", "url": "https://github.com/pytorch/ELF/issues/163", "state": "closed", "labels": [], "created_at": "2020-03-28T11:37:49Z", "updated_at": "2020-05-21T06:43:16Z", "user": "herogan2017" }, { "repo": "pytorch/vision", "number": 2021, "title": "IndexError: list index out of range", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Run the [TorchVision Object Detection Finetuning Tutorial](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html). \r\n2. For the model, I used the instructions for \r\n\r\n> 2. 
Modifying the model to add a different backbone\r\n\r\nBut I keep getting the following error:\r\n\r\n`---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n<ipython-input-16-159df024665a> in <module>\r\n 4 for epoch in range(num_epochs):\r\n 5 # train for one epoch, printing every 10 iterations\r\n----> 6 train_one_epoch(model, optimizer, train_loader, device, epoch, print_freq=10)\r\n 7 # update the learning rate\r\n 8 lr_scheduler.step()\r\n\r\n/Volumes/Samsung_T5/OneDrive - Coventry University/detector/faster_rcnn_v23/engine.py in train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq)\r\n 28 targets = [{k: v.to(device) for k, v in t.items()} for t in targets]\r\n 29 \r\n---> 30 loss_dict = model(imgs1, targets)\r\n 31 \r\n 32 losses = sum(loss for loss in loss_dict.values())\r\n\r\n~/opt/miniconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 530 result = self._slow_forward(*input, **kwargs)\r\n 531 else:\r\n--> 532 result = self.forward(*input, **kwargs)\r\n 533 for hook in self._forward_hooks.values():\r\n 534 hook_result = hook(self, input, result)\r\n\r\n~/opt/miniconda3/envs/torch/lib/python3.8/site-packages/torchvision/models/detection/generalized_rcnn.py in forward(self, images, targets)\r\n 69 features = OrderedDict([('0', features)])\r\n 70 proposals, proposal_losses = self.rpn(images, features, targets)\r\n---> 71 detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)\r\n 72 detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)\r\n 73 \r\n\r\n~/opt/miniconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 530 result = self._slow_forward(*input, **kwargs)\r\n 531 else:\r\n--> 532 result = self.forward(*input, **kwargs)\r\n 533 for hook in self._forward_hooks.values():\r\n 534 hook_result = hook(self, input, result)\r\n\r\n~/opt/miniconda3/envs/torch/lib/python3.8/site-packages/torchvision/models/detection/roi_heads.py in forward(self, features, proposals, image_shapes, targets)\r\n 754 matched_idxs = None\r\n 755 \r\n--> 756 box_features = self.box_roi_pool(features, proposals, image_shapes)\r\n 757 box_features = self.box_head(box_features)\r\n 758 class_logits, box_regression = self.box_predictor(box_features)\r\n\r\n~/opt/miniconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 530 result = self._slow_forward(*input, **kwargs)\r\n 531 else:\r\n--> 532 result = self.forward(*input, **kwargs)\r\n 533 for hook in self._forward_hooks.values():\r\n 534 hook_result = hook(self, input, result)\r\n\r\n~/opt/miniconda3/envs/torch/lib/python3.8/site-packages/torchvision/ops/poolers.py in forward(self, x, boxes, image_shapes)\r\n 186 rois = self.convert_to_roi_format(boxes)\r\n 187 if self.scales is None:\r\n--> 188 self.setup_scales(x_filtered, image_shapes)\r\n 189 \r\n 190 scales = self.scales\r\n\r\n~/opt/miniconda3/envs/torch/lib/python3.8/site-packages/torchvision/ops/poolers.py in setup_scales(self, features, image_shapes)\r\n 159 # get the levels in the feature map by leveraging the fact that the network always\r\n 160 # downsamples by a factor of 2 at each level.\r\n--> 161 lvl_min = -torch.log2(torch.tensor(scales[0], dtype=torch.float32)).item()\r\n 162 lvl_max = -torch.log2(torch.tensor(scales[-1], 
dtype=torch.float32)).item()\r\n 163 self.scales = scales\r\n\r\nIndexError: list index out of range`\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\nI tried different models and adjusted the actors and scales, but keep getting this error. \r\n\r\n## Environment\r\n\r\n```\r\nPyTorch version: 1.4.0\r\nIs debug build: No\r\nCUDA used to build PyTorch: None\r\n\r\nOS: Mac OSX 10.15.3\r\nGCC version: Could not collect\r\nCMake version: Could not collect\r\n\r\nPython version: 3.8\r\nIs CUDA available: No\r\nCUDA runtime version: No CUDA\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\n\r\nVersions of relevant libraries:\r\n[pip] numpy==1.18.1\r\n[pip] torch==1.4.0\r\n[pip] torchvision==0.5.0\r\n[conda] blas ", "url": "https://github.com/pytorch/vision/issues/2021", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-03-26T16:27:03Z", "updated_at": "2024-03-26T17:41:15Z", "user": "17sarf" }, { "repo": "pytorch/vision", "number": 2019, "title": "How to plot masks of maskrcnn? ", "body": "Hello,\r\n\r\nDoes someone know how to plot masks of maskrcnn? In the output of maskrcnn_inference in roi_heads.py mask_logits pass through sigmoid to become mask_probs and its output_size is generally very small to correctly see anything (28*28 by default, which is defined in the roi_align parameters). I tried to binarize this mask_probs using cv2.threshold (with threshold value equals to np.median of mask_probs) then convert to polygon with cv2.findcontours and finally resize it in the image shape but the results are not good. \r\n\r\nThanks", "url": "https://github.com/pytorch/vision/issues/2019", "state": "closed", "labels": [ "question", "topic: object detection", "module: utils" ], "created_at": "2020-03-26T15:19:47Z", "updated_at": "2021-05-06T13:27:06Z", "user": "leglandudu69" }, { "repo": "pytorch/pytorch", "number": 35372, "title": "How to support single-process-multiple-devices in DistributedDataParallel other than CUDA device", "body": "Hi,\r\n\r\nI am investigating to extend the DistributedDataParallel to other accelerator devices than CUDA devices.\r\nNot only to support single-process-single-device but also to support the single-process-multiple-devices and multple-processes-multiple-devices.\r\n\r\nThere are a lot of CUDA dependency in the DistributedDataParallel.\r\n\r\nMy question is:\r\n1. How to override CUDA logical dependency and dispatch the gather and scatter (and other APIs used) to the c10d backend without modifying the distributed.py ? [https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/distributed.py](url)\r\n \r\n\r\n\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @agolynski @SciPioneer @H-Huang @mrzzd @cbalioglu @gcramer23", "url": "https://github.com/pytorch/pytorch/issues/35372", "state": "open", "labels": [ "oncall: distributed" ], "created_at": "2020-03-25T08:52:08Z", "updated_at": "2021-06-04T13:57:34Z", "user": "JohnLLLL" }, { "repo": "pytorch/tutorials", "number": 905, "title": "prune: model sparity increase,but inference time doesn't cut down", "body": "`prune.l1_unstructured(conv_module, name='weight', amount=0.8)<br>prune.remove(conv_module, 'weight')`\r\n\r\nwith these two function, I process all module with convolution,and their sparsity become 80%.\r\nbut the model inference time incease. 
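On the pruning question around this point (pytorch/tutorials#905), a small sketch illustrating why `l1_unstructured` does not speed up inference: the pruned weights are merely zeroed, the tensors stay dense, and the same dense convolution kernel runs. Calling `prune.remove` bakes the zeros into `weight`, so a normal `state_dict` save keeps the sparsity. A hedged illustration, not a fix:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(64, 64, 3, padding=1)
prune.l1_unstructured(conv, name="weight", amount=0.8)
prune.remove(conv, "weight")          # make the zeros permanent in conv.weight

sparsity = (conv.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.2%}")    # ~80%, but the tensor is still dense
print(conv.weight.shape)              # [64, 64, 3, 3]: same FLOPs at inference

# After prune.remove, the zeroed weight is an ordinary tensor, so it survives
# a round trip through state_dict (without remove you would see weight_orig
# and weight_mask keys instead of weight).
torch.save(conv.state_dict(), "pruned_conv.pth")
```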
Is it expected ?\r\nand another question is that after prune,I save model with:\r\n\r\n`torch.save(model.state_dict(), 'new_model.pth')`\r\nand then ,load the save model,it's module sparsity go back to 0 , How to save pruned model correctly ?\r\nThank you !", "url": "https://github.com/pytorch/tutorials/issues/905", "state": "closed", "labels": [], "created_at": "2020-03-25T02:26:48Z", "updated_at": "2021-06-08T22:05:51Z", "comments": 1, "user": "gyc-code" }, { "repo": "pytorch/examples", "number": 742, "title": "No saved *.png in checkpoint in dcgan.cpp", "body": "In the README file of \"DCGAN Example with the PyTorch C++ Frontend\" it says that:\r\n\r\n_The training script periodically generates image samples. Use the display_samples.py script situated in this folder to generate a plot image. For example:_\r\n\r\nBut the dcgan.cpp file just saves the model in *.pt format, doesnt save any picture. Then, the command stated in the README:\r\n```\r\n$ python display_samples.py -i dcgan-sample-10.png\r\nSaved out.png\r\n```\r\nGives an error as ```dcgan-sample-10.png``` doesnt exist", "url": "https://github.com/pytorch/examples/issues/742", "state": "open", "labels": [ "bug", "help wanted" ], "created_at": "2020-03-24T15:15:32Z", "updated_at": "2023-04-20T20:59:19Z", "comments": 3, "user": "hect1995" }, { "repo": "pytorch/vision", "number": 2007, "title": "Learning rate become 0", "body": "my lr was 0.0001\r\n\r\nBut after some epoch it become zero.\r\n\r\nEpoch: [15] [ 0/209] eta: 0:03:06 lr: 0.000000 loss: 0.5737 (0.5737) loss_classifier: 0.0601 (0.0601) loss_box_reg: 0.0831 (0.0831) loss_mask: 0.4023 (0.4023) loss_objectness: 0.0062 (0.0062) loss_rpn_box_reg: 0.0221 (0.0221) time: 0.8938 data: 0.2370 max mem: 6450\r\nEpoch: [15] [ 10/209] eta: 0:02:13 lr: 0.000000 loss: 0.5818 (0.6080) loss_classifier: 0.0609 (0.0621) loss_box_reg: 0.0782 (0.0759) loss_mask: 0.4273 (0.4496) loss_objectness: 0.0061 (0.0073) loss_rpn_box_reg: 0.0119 (0.0132) time: 0.6731 data: 0.0303 max mem: 6450\r\nEpoch: [15] [ 20/209] eta: 0:02:05 lr: 0.000000 loss: 0.5848 (0.5937) loss_classifier: 0.0595 (0.0620) loss_box_reg: 0.0693 (0.0756) loss_mask: 0.4273 (0.4355) loss_objectness: 0.0060 (0.0068) loss_rpn_box_reg: 0.0118 (0.0138) time: 0.6527 data: 0.0096 max mem: 6450\r\nEpoch: [15] [ 30/209] eta: 0:01:59 lr: 0.000000 loss: 0.5848 (0.5950) loss_classifier: 0.0616 (0.0626) loss_box_reg: 0.0710 (0.0762) loss_mask: 0.4182 (0.4338) loss_objectness: 0.0065 (0.0087) loss_rpn_box_reg: 0.0106 (0.0137) time: 0.6611 data: 0.0098 max mem: 6450\r\nEpoch: [15] [ 40/209] eta: 0:01:50 lr: 0.000000 loss: 0.5718 (0.5921) loss_classifier: 0.0639 (0.0642) loss_box_reg: 0.0767 (0.0768) loss_mask: 0.4173 (0.4295) loss_objectness: 0.0072 (0.0086) loss_rpn_box_reg: 0.0101 (0.0130) time: 0.6396 data: 0.0092 max mem: 6450\r\nEpoch: [15] [ 50/209] eta: 0:01:43 lr: 0.000000 loss: 0.5703 (0.5907) loss_classifier: 0.0640 (0.0655) loss_box_reg: 0.0798 (0.0764) loss_mask: 0.4035 (0.4259) loss_objectness: 0.0062 (0.0098) loss_rpn_box_reg: 0.0109 (0.0131) time: 0.6363 data: 0.0088 max mem: 6450\r\n\r\ni am training on custom data with 2(1class+background) class.", "url": "https://github.com/pytorch/vision/issues/2007", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: object detection" ], "created_at": "2020-03-24T13:01:26Z", "updated_at": "2020-03-25T14:28:20Z", "user": "vivekdeepquanty" }, { "repo": "pytorch/vision", "number": 2004, "title": "Suspicious results", "body": "## Questions about suspicious results 
\u2753\r\n\r\nI've trained MaskRCNN with a pre-trained ResNet50 to segment nuclei in immunofluorescence images. The results are really good, so thanks again for this terrific implementation.\r\n\r\nHowever, I've noticed in some cases that an object (here a nucleus) might be cut in multiple small parts (see bottom right part of the attached image).\r\n\r\n<img width=\"531\" alt=\"Capture d\u2019\u00e9cran 2020-03-23 \u00e0 15 27 27\" src=\"https://user-images.githubusercontent.com/6014800/77369519-0fcfa600-6d1c-11ea-820d-3186fc1bc037.png\">\r\n\r\nWe can observe that the nucleus labeled 247 is cut in multiple parts.\r\nI get that in most detection/segmentation applications, an object can partially obfuscate another one that is further in a scene. But, is it a normal behavior for this implementation?\r\n", "url": "https://github.com/pytorch/vision/issues/2004", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-03-23T22:38:10Z", "updated_at": "2020-03-24T18:18:58Z", "user": "FiReTiTi" }, { "repo": "pytorch/FBGEMM", "number": 328, "title": "quantized matrix multiplication question", "body": "Could you please point me to a quantized matrix-matrix multiplication example in the test or benchmark directory ? That is to say,\r\n\r\nC = A * B // A, B, C are single-precision floating-point matrices \r\nC' = dequant (quant(A) * quant (B) ) // quant(A) and quant(B) are int8 matrices\r\n\r\nThanks", "url": "https://github.com/pytorch/FBGEMM/issues/328", "state": "closed", "labels": [ "question" ], "created_at": "2020-03-23T04:03:40Z", "updated_at": "2022-03-18T06:43:26Z", "user": "jinz2014" }, { "repo": "pytorch/examples", "number": 740, "title": "Question: How to cite your work", "body": "Hi,\r\n\r\nI am writing a paper that modifies the codes for the example of MNIST dataset in your repository. May I ask how you would prefer that I cite your work?\r\n\r\nThank you.\r\n\r\n ", "url": "https://github.com/pytorch/examples/issues/740", "state": "closed", "labels": [], "created_at": "2020-03-22T21:30:47Z", "updated_at": "2020-04-11T18:13:40Z", "user": "hql5143" }, { "repo": "pytorch/pytorch", "number": 35159, "title": "How to implement bmm between two sparse tensor", "body": "## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\n\r\n## Motivation\r\n\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\n\r\n## Pitch\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\n## Alternatives\r\n\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context or screenshots about the feature request here. -->\r\n", "url": "https://github.com/pytorch/pytorch/issues/35159", "state": "closed", "labels": [], "created_at": "2020-03-21T16:57:58Z", "updated_at": "2020-03-23T18:46:54Z", "user": "xhcgit" }, { "repo": "pytorch/pytorch", "number": 35153, "title": "How to impove conv2d performance in cpu mode ", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
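For the quantized matmul question above (pytorch/FBGEMM#328), the following is not FBGEMM's own API, just a plain PyTorch illustration of C' = dequant(quant(A) @ quant(B)) with symmetric per-tensor int8 quantization, which is the computation the question describes:

```python
import torch

def quantize_int8(x):
    """Symmetric per-tensor quantization to int8; returns (q, scale)."""
    scale = x.abs().max() / 127.0
    q = torch.clamp((x / scale).round(), -127, 127).to(torch.int8)
    return q, scale

A, B = torch.randn(64, 32), torch.randn(32, 48)
qA, sA = quantize_int8(A)
qB, sB = quantize_int8(B)

# int8 operands are accumulated in int32, then dequantized with both scales.
C_quantized = (qA.to(torch.int32) @ qB.to(torch.int32)).float() * (sA * sB)

print((C_quantized - A @ B).abs().max())  # quantization error vs. fp32 reference
```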
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\r\n1. when i use PIP to install pytorch, con2d perforamance is better\r\n\r\n2. when i down pytorch 1.0.0 source code in gitlab, bad performance\r\n\r\n3.USE_MKL or OPENMP or some other optimizition results in the performance difference?\r\n\r\n\r\nHere is con2d op shape:\r\nConv2d(64, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\r\nway 1:\r\npytorch install:\r\nconda install pytorch-cpu==1.0.0 torchvision-cpu==0.2.1 cpuonly -c pytorch\r\nlog: the conv2D infer time :3.85 s\r\nway 2:\r\ndown pytorch 1.0.0 source code and compile with \"python setup.py install\"\r\nlog: the conv2D infer time :13.63 s\r\n", "url": "https://github.com/pytorch/pytorch/issues/35153", "state": "closed", "labels": [], "created_at": "2020-03-21T07:33:23Z", "updated_at": "2020-03-23T09:04:07Z", "user": "daydayfun" }, { "repo": "pytorch/examples", "number": 738, "title": "Neural Style fails if style image has an alpha channel", "body": "In `fast_neural_style/neural_style/neural_style.py` line 55, if the style image has an alpha channel, then the generated tensor has 4 dimensions and this causes `utils.normalize_batch` to throw due to a tensor dimension mismatch a few lines down.\r\n\r\nI've _fixed_ this by appending `.convert('RGB')` so line 55 now reads\r\n```\r\nstyle = utils.load_image(args.style_image, size=args.style_size).convert('RGB')\r\n```\r\nThe `ImageFolder` data loader does the same transformation, however, maybe a warning should be issued since it is the key file.", "url": "https://github.com/pytorch/examples/issues/738", "state": "open", "labels": [ "help wanted" ], "created_at": "2020-03-20T17:00:20Z", "updated_at": "2022-03-09T21:48:53Z", "comments": 0, "user": "hackf5" }, { "repo": "pytorch/tutorials", "number": 899, "title": "How to apply torch.quantization.quantize_dynamic for conv2d layer?", "body": "I am working on quantizing resnet50 model. I tried to use the following command.\r\n\r\n```\r\nquantized_model = torch.quantization.quantize_dynamic(\r\n resnet18, {torch.nn.Conv2d,torch.nn.Linear}, dtype=torch.qint8\r\n)\r\n```\r\n\r\nBut only the linear layer has quaantized but not the convolutional layer. Can anyone help me how to dynamically quantize the convolutional layer?", "url": "https://github.com/pytorch/tutorials/issues/899", "state": "open", "labels": [ "quantization" ], "created_at": "2020-03-20T09:02:12Z", "updated_at": "2021-07-30T20:28:16Z", "user": "Midhilesh29" }, { "repo": "pytorch/vision", "number": 1999, "title": "Request Mobilenet fpn", "body": "## \ud83d\ude80 Feature\r\nHi I want to write mobilenet fpn.\r\n## Motivation\r\n\r\nImprove MaskRCNN speed and accuracy.\r\n\r\n## Pitch\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\n## Alternatives\r\n\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context or screenshots about the feature request here. -->\r\n\r\n## Code:\r\n**/torchvision/models/detection/backbone_utils.py**\r\n```\r\nfrom collections import OrderedDict\r\nfrom torch import nn\r\nfrom torchvision.ops.feature_pyramid_network import FeaturePyramidNetwork, LastLevelMaxPool\r\n\r\nfrom torchvision.ops import misc as misc_nn_ops\r\nfrom .._utils import IntermediateLayerGetter\r\nfrom .. import resnet\r\nfrom .. 
import mobilenet_v2\r\n\r\nfrom torchvision.models import mobilenet_v2 as MobileNetV2\r\nclass BackboneWithFPN(nn.Sequential):\r\n \r\n def __init__(self, backbone, return_layers, in_channels_list, out_channels):\r\n body = IntermediateLayerGetter(backbone, return_layers=return_layers)\r\n fpn = FeaturePyramidNetwork(\r\n in_channels_list=in_channels_list,\r\n out_channels=out_channels,\r\n extra_blocks=LastLevelMaxPool(),\r\n )\r\n super(BackboneWithFPN, self).__init__(OrderedDict(\r\n [(\"body\", body), (\"fpn\", fpn)]))\r\n self.out_channels = out_channels\r\n\r\n\r\ndef resnet_fpn_backbone(backbone_name, pretrained):\r\n backbone = resnet.__dict__[backbone_name](\r\n pretrained=pretrained,\r\n norm_layer=misc_nn_ops.FrozenBatchNorm2d)\r\n # freeze layers\r\n for name, parameter in backbone.named_parameters():\r\n if 'layer2' not in name and 'layer3' not in name and 'layer4' not in name:\r\n parameter.requires_grad_(False)\r\n\r\n return_layers = {'layer1': 0, 'layer2': 1, 'layer3': 2, 'layer4': 3}\r\n in_channels_stage2 = backbone.inplanes // 8\r\n in_channels_list = [\r\n in_channels_stage2,\r\n in_channels_stage2 * 2,\r\n in_channels_stage2 * 4,\r\n in_channels_stage2 * 8,\r\n ]\r\n out_channels = 256\r\n return BackboneWithFPN(backbone, return_layers, in_channels_list, out_channels)\r\n\r\n\r\nclass FPNMobileNet(nn.Module):\r\n def __init__(self, pretrained=True):\r\n super().__init__()\r\n net = MobileNetV2(pretrained)\r\n self.features = net.features\r\n self.layer1= nn.Sequential(*self.features[0:4])\r\n self.layer2 = nn.Sequential(*self.features[4:7])\r\n self.layer3 = nn.Sequential(*self.features[7:11])\r\n self.layer4 = nn.Sequential(*self.features[11:19])\r\n for param in self.features.parameters():\r\n param.requires_grad = False\r\n\r\n\r\n def forward(self, x):\r\n\r\n # Bottom-up pathway, from ResNet\r\n enc0 = self.layer1(x)\r\n\r\n enc1 = self.layer2(enc0) # 256\r\n\r\n enc2 = self.layer3(enc1) # 512\r\n\r\n enc3 = self.layer4(enc2) # 1024\r\n\r\n return enc3\r\n\r\ndef mobilenet_fpn_backbone(pretrained):\r\n backbone = FPNMobileNet(pretrained)\r\n print(backbone)\r\n # freeze layers\r\n for name, parameter in backbone.named_parameters():\r\n if 'layer2' not in name and 'layer3' not in name and 'layer4' not in name:\r\n parameter.requires_grad_(False)\r\n\r\n return_layers = {'layer1': 0, 'layer2': 1, 'layer3': 2, 'layer4': 3}\r\n\r\n in_channels_stage2 =1280 // 8\r\n in_channels_list = [\r\n in_channels_stage2,\r\n in_channels_stage2 * 2,\r\n in_channels_stage2 * 4,\r\n in_channels_stage2 * 8,\r\n ]\r\n \r\n out_channels = 256\r\n return BackboneWithFPN(backbone, return_layers, in_channels_list, out_channels)\r\n\r\n```\r\n**/torchvision/models/detection/mobilenet_fpn.py**\r\n```\r\nfrom .backbone_utils import mobilenet_fpn_backbone\r\n\r\ndef fpn(pretrained = True):\r\n\tbackbone = mobilenet_fpn_backbone( pretrained)\r\n\treturn backbone\r\n\t\r\n```\r\n**demo.py**\r\n```from torchvision.models.detection import mobilenet_fpn\r\n backbone = mobilenet_fpn.fpn(True)\r\n backbone.eval()\r\n\r\n x = torch.rand(1,3, 100, 100)\r\n out = backbone(x)\r\n print(out)\r\n\r\n\r\n```\r\n\r\n\r\n## Bug:\r\n\"RuntimeError: Given groups=1, weight of size 32 3 3 3, expected input[1, 1280, 4, 4] to have 3 channels, but got 1280 channels instead\"\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/1999", "state": "closed", "labels": [ "question", "module: models", "topic: object detection", "topic: feature extraction" ], "created_at": "2020-03-20T08:34:56Z", "updated_at": 
"2020-11-30T07:22:50Z", "user": "finnickniu" }, { "repo": "pytorch/android-demo-app", "number": 68, "title": "How to create a new nlp model?", "body": "Thanks for the project.\r\nThe example successful run on Android.\r\nHowever, I want to create my our model for other nlp tasks.\r\nSo, can you show me the way to create the nlp model? Or the source of creating model-reddit16-f140225004_2.pt1?\r\n", "url": "https://github.com/pytorch/android-demo-app/issues/68", "state": "open", "labels": [], "created_at": "2020-03-20T07:45:01Z", "updated_at": "2020-05-20T01:29:55Z", "user": "anbo724" }, { "repo": "pytorch/vision", "number": 1986, "title": "Training scheme of the pretrained imagenet models?", "body": "Hi,\r\n\r\nAre the pretrained models reported by torchvision using the same hyper-parameters as https://github.com/pytorch/examples/blob/master/imagenet/main.py? I used the default hyper-parameters to train mobilenet_v2, but the results were much worse than reported.\r\n\r\nThanks\r\n", "url": "https://github.com/pytorch/vision/issues/1986", "state": "closed", "labels": [ "question", "module: models", "module: reference scripts" ], "created_at": "2020-03-15T09:16:29Z", "updated_at": "2020-03-19T18:43:05Z", "user": "tzm1003306213" }, { "repo": "pytorch/pytorch", "number": 34775, "title": "How to do a split operation for dataset, not random split. I mean just like dataset[0:100] and dataset[100:200]]", "body": "How to do a split operation for dataset, not random split. I mean just like dataset[0:100] and dataset[100:200]]", "url": "https://github.com/pytorch/pytorch/issues/34775", "state": "closed", "labels": [], "created_at": "2020-03-15T02:28:33Z", "updated_at": "2020-03-15T02:37:14Z", "user": "HymEric" }, { "repo": "pytorch/pytorch", "number": 34773, "title": "How to write codes to support second order derivative (double backward) for custom CUDA extension", "body": "Hi,\r\n\r\nI am lost in figuring out how to compute second order derivatives for custom CUDA extensions after reading the [Extend Torch With Cpp and CUDA](https://pytorch.org/tutorials/advanced/cpp_extension.html). \r\n\r\nCould somebody tell me how to do this? 
Many thanks!", "url": "https://github.com/pytorch/pytorch/issues/34773", "state": "closed", "labels": [], "created_at": "2020-03-15T01:17:24Z", "updated_at": "2020-03-18T15:03:39Z", "user": "xieshuqin" }, { "repo": "pytorch/pytorch", "number": 34720, "title": "Where is dd.h ?", "body": "\r\n```console\r\n....../pytorch/third_party/sleef/src/quad/sleefsimdqp.c:111:10: fatal error: dd.h: No such file or directory\r\n #include \"dd.h\"\r\n ^~~~~~\r\ncompilation terminated.\r\nsleef/src/quad/CMakeFiles/sleefquadavx512f_obj.dir/build.make:70: recipe for target 'sleef/src/quad/CMakeFiles/sleefquadavx512f_obj.dir/sleefsimdqp.c.o' failed\r\nmake[2]: *** [sleef/src/quad/CMakeFiles/sleefquadavx512f_obj.dir/sleefsimdqp.c.o] Error 1\r\nmake[2]: Leaving directory '....../pytorch/build_18.04'\r\nCMakeFiles/Makefile2:4939: recipe for target 'sleef/src/quad/CMakeFiles/sleefquadavx512f_obj.dir/all' failed\r\nmake[1]: *** [sleef/src/quad/CMakeFiles/sleefquadavx512f_obj.dir/all] Error 2\r\nmake[1]: *** Waiting for unfinished jobs....\r\n....../pytorch/third_party/sleef/src/quad/sleefsimdqp.c:111:10: fatal error: dd.h: No such file or directory\r\n #include \"dd.h\"\r\n ^~~~~~\r\ncompilation terminated.\r\nsleef/src/quad/CMakeFiles/sleefquadavx2_obj.dir/build.make:70: recipe for target 'sleef/src/quad/CMakeFiles/sleefquadavx2_obj.dir/sleefsimdqp.c.o' failed\r\nmake[2]: *** [sleef/src/quad/CMakeFiles/sleefquadavx2_obj.dir/sleefsimdqp.c.o] Error 1\r\nmake[2]: Leaving directory '....../pytorch/build_18.04'\r\nCMakeFiles/Makefile2:5329: recipe for target 'sleef/src/quad/CMakeFiles/sleefquadavx2_obj.dir/all' failed\r\nmake[1]: *** [sleef/src/quad/CMakeFiles/sleefquadavx2_obj.dir/all] Error 2\r\nmake[2]: Leaving directory '....../pytorch/build_18.04'\r\n[ 63%] Built target ATEN_CPU_FILES_GEN_TARGET\r\nmake[2]: Leaving directory '....../pytorch/build_18.04'\r\n[ 63%] Built target generate-torch-sources\r\nmake[1]: Leaving directory '....../pytorch/build_18.04'\r\nMakefile:165: recipe for target 'all' failed\r\nmake: *** [all] Error 2\r\n```", "url": "https://github.com/pytorch/pytorch/issues/34720", "state": "closed", "labels": [], "created_at": "2020-03-13T17:31:27Z", "updated_at": "2020-03-14T00:15:45Z", "user": "jiapei100" }, { "repo": "pytorch/TensorRT", "number": 13, "title": "RFC: Converter API", "body": "Right now the Converter API expects lambdas of the type: `(ConversionCtx* ctx, torch::jit::Node* n, kwargs* args) -> bool`\r\n\r\nQuestions:\r\n1. The bool return is a quick way to signal success or failure in converting the op. This could be something more descriptive\r\n\r\n2. Right now it is the responsibility of converters to log associations between `torch::jit::Value`s and `nvinfer1::ITensors`s so that later is significantly easier to assemble the arguments to a node. It may be nice if you could return a vector of unions of IValues and ITensors and have the converter executor do the insertions. 
This would probably need to rely on some guarantee that order of return is easy to determine and constant \r\n ", "url": "https://github.com/pytorch/TensorRT/issues/13", "state": "closed", "labels": [ "question", "priority: low", "component: converters", "No Activity" ], "created_at": "2020-03-13T01:07:55Z", "updated_at": "2020-06-10T00:02:51Z", "user": "narendasan" }, { "repo": "pytorch/TensorRT", "number": 7, "title": "RFC: How should engines be integrated into the JIT Interpreter?", "body": "Right now as a side effect of registering an engine in the execution manager, a new op specifically for the engine is registered in the op registry. For instance running a ResNet backbone will be implemented with a new op with schema `trt::execute_engine_55d1de7b7b50(Tensor in_input_38) -> (Tensor)`. We could also have a generic op like `trt::execute_engine(int id, Tensor in_input_38, ...) -> (Tensor, ...)` and rely on information in the engine manager to run the correct engine, as long as variadic arguments (and returns) work. ", "url": "https://github.com/pytorch/TensorRT/issues/7", "state": "closed", "labels": [ "question", "component: execution" ], "created_at": "2020-03-11T20:08:12Z", "updated_at": "2020-05-28T20:34:13Z", "user": "narendasan" }, { "repo": "pytorch/TensorRT", "number": 6, "title": "Verify that engine runs in the correct stream", "body": "This is the stream that is used right now \r\n`c10::cuda::CUDAStream stream = c10::cuda::getCurrentCUDAStream(inputs[0].device().index());`\r\n\r\nWill this always be correct? What are the cases where this will give an incorrect stream? ", "url": "https://github.com/pytorch/TensorRT/issues/6", "state": "closed", "labels": [ "question", "component: execution", "No Activity" ], "created_at": "2020-03-11T19:59:20Z", "updated_at": "2020-07-09T00:17:53Z", "user": "narendasan" }, { "repo": "pytorch/vision", "number": 1964, "title": "The simplest way to use checkpoint to maximize the GPU memory usage", "body": "## \u2753 The simplest way to use checkpoint to maximize the GPU memory usage\r\n\r\nHi, guys,\r\nI am learning about how to use the checkpoint to optimize the GPU memory usage, and I see there is a example in [densenet.py](https://github.com/pytorch/vision/blob/216035315185edec747dca8879d7197e7fb22c7d/torchvision/models/densenet.py#L53) as\r\n```python\r\n @torch.jit.unused # noqa: T484\r\n def call_checkpoint_bottleneck(self, input):\r\n # type: (List[Tensor]) -> Tensor\r\n def closure(*inputs):\r\n return self.bn_function(*inputs)\r\n\r\n return cp.checkpoint(closure, input)\r\n```\r\nFirstly, I think using checkpoint to maximize the GPU memory usage **only apply to activation modules, such as ReLU**. So, why not just create an new Module like:\r\n```python\r\nclass cp_ReLU(nn.Module):\r\n def __init__(self, inplace) -> None:\r\n super(cp_ReLU, self).__init__()\r\n \r\n relu = nn.ReLU(inplace = inplace)\r\n \r\n def forward(self, x):\r\n y=cp.checkpoint(relu, x)\r\n return y\r\n```\r\nAnd use cp_ReLU instead of original ReLU in all the places where a ReLU is need as:\r\n```python\r\nif self.memory_efficient:\r\n self.add_module('relu2', nn.ReLU(inplace=True)),\r\nelse \r\n self.add_module('relu2', cp_ReLU(inplace=True)),\r\n```\r\nI think this kind of implementation will make best use of the checkpoint.\r\nAm I right?\r\nOr would checkpoint also have effect to other kinds of modules like, Conv2d or BatchNorm2d?\r\n\r\nYour suggestion and answer will be appreciated! 
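Regarding the checkpoint question just above (pytorch/vision#1964), `torch.utils.checkpoint` is not limited to activation layers; any block (Conv2d, BatchNorm2d, and so on) can be wrapped, and wrapping a larger segment usually saves more memory than checkpointing a single ReLU, at the cost of recomputing that segment in the backward pass. A rough sketch with made-up sizes:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Sequential(
    nn.Conv2d(64, 64, 3, padding=1),
    nn.BatchNorm2d(64),   # note: running stats are updated again on recompute
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, padding=1),
)

x = torch.randn(8, 64, 56, 56, requires_grad=True)
y = checkpoint(block, x)   # activations inside `block` are not stored;
y.sum().backward()         # they are recomputed during backward instead
```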
\r\n \r\n", "url": "https://github.com/pytorch/vision/issues/1964", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2020-03-11T10:37:12Z", "updated_at": "2020-03-12T18:03:53Z", "user": "songyuc" }, { "repo": "pytorch/vision", "number": 1952, "title": "FastRCNNPredictor doesn't return prediction in evaluation", "body": "## \ud83d\udc1b Bug\r\n\r\nDear all,\r\n\r\nI am doing object detection in an image with one class. After training, `FastRCNNPredictor` does not return anything in validation mode. I have followed this official tutorial https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html.\r\n\r\nThanks.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\nI have created a custom dataset, this is one of the output:\r\n\r\n```\r\ntensor([[[0.0549, 0.0549, 0.0549, ..., 0.1647, 0.1569, 0.1569],\r\n [0.0549, 0.0549, 0.0549, ..., 0.1686, 0.1569, 0.1569],\r\n [0.0549, 0.0549, 0.0549, ..., 0.1647, 0.1569, 0.1529],\r\n ...,\r\n [0.0471, 0.0471, 0.0471, ..., 0.1490, 0.1490, 0.1490],\r\n [0.0471, 0.0471, 0.0471, ..., 0.1490, 0.1490, 0.1490],\r\n [0.0471, 0.0471, 0.0471, ..., 0.1490, 0.1490, 0.1490]],\r\n \r\n [[0.0471, 0.0471, 0.0471, ..., 0.1255, 0.1176, 0.1176],\r\n [0.0471, 0.0471, 0.0471, ..., 0.1294, 0.1176, 0.1176],\r\n [0.0471, 0.0471, 0.0471, ..., 0.1255, 0.1176, 0.1137],\r\n ...,\r\n [0.0235, 0.0235, 0.0235, ..., 0.1098, 0.1098, 0.1098],\r\n [0.0235, 0.0235, 0.0235, ..., 0.1098, 0.1098, 0.1098],\r\n [0.0235, 0.0235, 0.0235, ..., 0.1098, 0.1098, 0.1098]],\r\n \r\n [[0.0510, 0.0510, 0.0510, ..., 0.1176, 0.1098, 0.1098],\r\n [0.0510, 0.0510, 0.0510, ..., 0.1216, 0.1098, 0.1098],\r\n [0.0510, 0.0510, 0.0510, ..., 0.1176, 0.1098, 0.1059],\r\n ...,\r\n [0.0314, 0.0314, 0.0314, ..., 0.1059, 0.1059, 0.1059],\r\n [0.0314, 0.0314, 0.0314, ..., 0.1059, 0.1059, 0.1059],\r\n [0.0314, 0.0314, 0.0314, ..., 0.1059, 0.1059, 0.1059]]]),\r\n {'boxes': tensor([[315.0003, 213.5002, 626.0004, 329.5002]]),\r\n 'labels': tensor([0]),\r\n 'image_id': tensor([1]),\r\n 'area': tensor([36503.9961]),\r\n 'iscrowd': tensor([0])})\r\n```\r\nTo prove its correctness I have also visualized the bbox on the image:\r\n\r\n![image](https://user-images.githubusercontent.com/15908060/76199447-2e984d80-61f0-11ea-931c-bd2cb0687ed3.png)\r\n\r\nThen I create a `Dataloader`:\r\n\r\n```python\r\n\r\ndl = DataLoader(ds, batch_size=8, num_workers=4, collate_fn=lambda x: tuple(zip(*x)))\r\n\r\nmodel = fasterrcnn_resnet50_fpn(num_classes=1).to(device)\r\n\r\nparams = [p for p in model.parameters() if p.requires_grad]\r\noptimizer = torch.optim.SGD(params, lr=0.005,\r\n momentum=0.9, weight_decay=0.0005)\r\n```\r\n\r\nTraining works:\r\n\r\n```python\r\nmodel.train()\r\nfor i in range(5):\r\n\r\n for images, targets in dl:\r\n images = list(image.to(device) for image in images)\r\n targets = [{k: v.to(device) for k,v in t.items()} for t in targets]\r\n loss_dict = model(images, targets)\r\n losses = sum(loss for loss in loss_dict.values())\r\n optimizer.zero_grad()\r\n losses.backward()\r\n optimizer.step()\r\n\r\n print(losses)\r\n```\r\n\r\nOutput:\r\n\r\n```\r\ntensor(0.6391, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.6329, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.6139, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.5965, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.5814, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.5468, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.5049, device='cuda:0', 
grad_fn=<AddBackward0>)\r\ntensor(0.4502, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.3787, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.2502, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.1605, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.0940, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.0558, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.0507, device='cuda:0', grad_fn=<AddBackward0>)\r\ntensor(0.0413, device='cuda:0', grad_fn=<AddBackward0>)\r\n```\r\n\r\nBut, when I try to get a prediction I have no output:\r\n\r\n```python\r\nmodel = model.eval()\r\nwith torch.no_grad():\r\n model = model.cuda()\r\n pred = model([ds[2][0].cuda()])\r\n```\r\n\r\n`pred` is\r\n\r\n```\r\n[{'boxes': tensor([], size=(0, 4)),\r\n 'labels': tensor([], dtype=torch.int64),\r\n 'scores': tensor([])}]\r\n```\r\n\r\nThank you in advance\r\n## Expected behavior\r\n\r\nThe model should return a valid prediction.\r\n\r\n## Environment\r\n```\r\nPyTorch version: 1.4.0\r\nIs debug build: No\r\nCUDA used to build PyTorch: 10.1\r\n\r\nOS: Ubuntu 18.04.4 LTS\r\nGCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0\r\nCMake version: Could not collect\r\n\r\nPython version: 3.7\r\nIs CUDA available: Yes\r\nCUDA runtime version: 10.1.243\r\nGPU models and configuration: GPU 0: GeForce GTX 1080 Ti\r\nNvidia driver version: 430.50\r\ncuDNN version: Could not collect\r\n\r\nVersions of relevant libraries:\r\n[pip] efficientnet-pytorch==0.5.1\r\n[pip] msgpack-numpy==0.4.3.2\r\n[pip] numpy==1.17.4\r\n[pip] PytorchStorage==0.0.0\r\n[pip] torch==1.4.0\r\n[pip] torchbearer==0.5.3\r\n[pip] torchlego==0.0.0\r\n[pip] torchsummary==1.5.1\r\n[pip] torchvision==0.5.0\r\n[conda] _pytorch_select 0.2 gpu_0 \r\n[conda] blas 1.0 mkl \r\n[conda] efficie", "url": "https://github.com/pytorch/vision/issues/1952", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-03-09T09:27:19Z", "updated_at": "2024-12-25T07:03:50Z", "user": "FrancescoSaverioZuppichini" }, { "repo": "pytorch/pytorch", "number": 34437, "title": "How to quantize CNN in pytorch 1.3? 
", "body": "I tried to quantize CNN refer to https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html\r\n\r\nbut I got this error:\r\nRuntimeError: Didn't find engine for operation quantized::linear_prepack NoQEngine\r\n\r\nHow can I solve it?\r\n\r\nmy environment:\r\n\r\npytorch1.3.0+cpu\r\nwindows 10\r\npython3.7", "url": "https://github.com/pytorch/pytorch/issues/34437", "state": "closed", "labels": [], "created_at": "2020-03-08T03:12:25Z", "updated_at": "2020-03-08T06:41:59Z", "user": "zhanyike" }, { "repo": "pytorch/vision", "number": 1944, "title": "Use the pretrained model of ResNet50 on ImageNet, the val acc is 4% less than reported.", "body": "As the title mentioned, anyone have met the same issue?Thx!", "url": "https://github.com/pytorch/vision/issues/1944", "state": "closed", "labels": [ "question", "awaiting response", "needs discussion", "module: models", "topic: classification" ], "created_at": "2020-03-05T14:17:18Z", "updated_at": "2020-10-21T08:25:11Z", "user": "liu-zhenhua" }, { "repo": "pytorch/examples", "number": 726, "title": "How can I sue freeze_support() in the train.py?", "body": "\r\n", "url": "https://github.com/pytorch/examples/issues/726", "state": "closed", "labels": [], "created_at": "2020-03-05T03:00:07Z", "updated_at": "2020-03-05T03:01:18Z", "comments": 0, "user": "K-M-Ibrahim-Khalilullah" }, { "repo": "pytorch/tutorials", "number": 873, "title": "Regarding torch.utils.data.DataLoader and batch_size", "body": "Hi all the high-level engineers! \r\n\r\nI am very new to pytorch and even to pythone.\r\nNonetheless, I am trying to understand pytorch by the documentation of the tutorial of pytorch.\r\nNow I am going through TRAINING A CLASSIFIER using dataset, CIFAR10.\r\n\r\nCode Link: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html\r\n\r\nIn the code, there is the lines as follows;\r\n\r\nfor i, data in enumerate(trainloader, 0):\r\n inputs, labels=data\r\n\r\nI have checked the size of data[0] and input[0], and they are different and it was [4, 3, 32, 32] and [3, 32, 32] respectively.\r\nI understand that data[0] has the size of [4, 3, 32, 32] as it is the first batch that contains 4 images.\r\n\r\nQuestion\r\n1. But why is the size of input[0] [3, 32, 32]? As I checked, input[0] is the first image of the data[0].\r\n Why is input[0] only taking the first image of data[0]? According to the code, input[0]=data[0], shouldn't it ?\r\n\r\n2. According to the tutorial, \"data is a list of [inputs, labels]\". 
But I don't see labels[0] value in data[0].\r\n Why is so?\r\n\r\nSorry if it is too basic, but please help.\r\n\r\n\r\n", "url": "https://github.com/pytorch/tutorials/issues/873", "state": "closed", "labels": [], "created_at": "2020-03-03T18:48:08Z", "updated_at": "2021-07-30T21:15:54Z", "comments": 2, "user": "jjong2ya" }, { "repo": "pytorch/vision", "number": 1934, "title": "torchvision.models.detection.fasterrcnn_resnet50_fpn has loss_rpn_box_reg exploding to nan after evaluation", "body": "## \ud83d\udc1b Bug / Misuse?\r\nI'm attempting to use `torchvision.models.detection.fasterrcnn_resnet50_fpn` on a custom dataset, following along with the [Detection Finetuning Tutorial](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html) where applicable.\r\n\r\nHowever, I consistently have an issue where the loss of `loss_rpn_box_reg` appears to rapidly explode, but _only_ after the first epoch has completed **(edit: the culprit now appears to be the call to the evaluation function).**\r\n\r\nI've ruled out every annotation / dataloading issue I can think of (no boxes w/ non-positive dims or out-of-bounds coords, etc).\r\nI've also (hopefully) ruled out issues with the RPN by following the steps to replace the `AnchorGenerator` and `RPNHead` outlined in #978 \r\n```\r\n rpn_anchor_generator = AnchorGenerator(sizes=config.RPN_ANCHOR_SIZES,\r\n aspect_ratios=config.RPN_ANCHOR_ASPECT_RATIOS)\r\n rpn_head = RPNHead(in_channels=model.backbone.out_channels,\r\n num_anchors=rpn_anchor_generator.num_anchors_per_location()[0])\r\n model.rpn.anchor_generator = rpn_anchor_generator\r\n model.rpn.head = rpn_head\r\n```\r\n\r\nAnd, if it's relevant, replaced the `RoIHeads` box predictor:\r\n```\r\n box_predictor = FastRCNNPredictor(in_channels=model.roi_heads.box_predictor.cls_score.in_features,\r\n num_classes=len(dataset_util.ClassLabelEnum) + 1)\r\n model.roi_heads.box_predictor = box_predictor\r\n```\r\nI've also been tinkering with my choice of optimizer & scheduler as well as trying to drastically lower LR or change batch size, just to see if anything seems to impact it (to no avail).\r\n\r\nBelow is a stack trace of this behavior on a small dummy subset. 
The behavior is similar on the full dataset, with `loss_rpn_box_reg` apparently exploding to `nan` shortly after the first epoch.\r\n```\r\nEpoch: [0] [0/5] eta: 0:00:03 lr: 0.001254 loss: 4.8643 (4.8643) loss_classifier: 3.7656 (3.7656) loss_box_reg: 0.0778 (0.0778) loss_objectness: 0.6919 (0.6919) loss_rpn_box_reg: 0.3290 (0.3290) time: 0.7243 data: 0.1862 max mem: 2508\r\nEpoch: [0] [4/5] eta: 0:00:00 lr: 0.005000 loss: 2.9177 (3.2553) loss_classifier: 1.8250 (2.1365) loss_box_reg: 0.1284 (0.1319) loss_objectness: 0.6914 (0.6905) loss_rpn_box_reg: 0.3255 (0.2964) time: 0.4570 data: 0.0405 max mem: 2776\r\nEpoch: [0] Total time: 0:00:02 (0.4618 s / it)\r\ncreating index...\r\nindex created!\r\nTest: [0/5] eta: 0:00:01 model_time: 0.0841 (0.0841) evaluator_time: 0.0014 (0.0014) time: 0.2295 data: 0.1396 max mem: 2776\r\nTest: [4/5] eta: 0:00:00 model_time: 0.0819 (0.0821) evaluator_time: 0.0007 (0.0008) time: 0.1168 data: 0.0296 max mem: 2776\r\nTest: Total time: 0:00:00 (0.1209 s / it)\r\nAveraged stats: model_time: 0.0819 (0.0821) evaluator_time: 0.0007 (0.0008)\r\nAccumulating evaluation results...\r\nDONE (t=0.01s).\r\nIoU metric: bbox\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000\r\n Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000\r\n Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000\r\nEpoch: [1] [0/5] eta: 0:00:03 lr: 0.005000 loss: 17.6448 (17.6448) loss_classifier: 0.4119 (0.4119) loss_box_reg: 0.0663 (0.0663) loss_objectness: 0.6992 (0.6992) loss_rpn_box_reg: 16.4674 (16.4674) time: 0.6068 data: 0.1749 max mem: 2776\r\nLoss is nan, stopping training\r\n{'loss_classifier': tensor(0.4476, device='cuda:0', grad_fn=<NllLossBackward>), 'loss_box_reg': tensor(0.0805, device='cuda:0', grad_fn=<DivBackward0>), 'loss_objectness': tensor(0.6914, device='cuda:0', grad_fn=<BinaryCrossEntropyWithLogitsBackward>), 'loss_rpn_box_reg': tensor(nan, device='cuda:0', grad_fn=<DivBackward0>)}\r\n```\r\n\r\nCould I be horribly misusing the model API or incorrectly setting any hparams? 
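Not a root-cause diagnosis for the exploding `loss_rpn_box_reg` report (pytorch/vision#1934), but a commonly used mitigation is gradient clipping in the training loop, which keeps one bad batch from blowing up the RPN regression loss. A hedged sketch; variable names follow the detection finetuning tutorial and `max_norm` is an arbitrary choice:

```python
import torch

def train_one_epoch_clipped(model, optimizer, data_loader, device, max_norm=5.0):
    model.train()
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

        loss_dict = model(images, targets)
        loss = sum(loss_dict.values())

        optimizer.zero_grad()
        loss.backward()
        # Clip before the optimizer step so a single outlier batch cannot
        # push loss_rpn_box_reg (and the weights) to inf/nan.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
        optimizer.step()
```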
This is my first time really working with `torch` & `torchvision` in-depth, so any and all suggestions are hugely appreciated.\r\n\r\n## Environment\r\nBare-metal + conda environment, single GPU set-up.\r\n - PyTorch / torchvision Version (e.g., 1.0 / 0.4.0): **1.4.0 / 0.5.0**\r\n - OS (e.g., Linux): **Ubuntu 18.04**\r\n - How you installed PyTorch / torchvision (`conda`, `pip`, source): **pip (from pypi) i", "url": "https://github.com/pytorch/vision/issues/1934", "state": "closed", "labels": [ "question", "module: models", "module: reference scripts", "topic: object detection" ], "created_at": "2020-03-03T16:40:50Z", "updated_at": "2020-03-24T15:43:09Z", "user": "dsandii" }, { "repo": "pytorch/vision", "number": 1953, "title": "roi_pool is seem to be different from which in FasterRCNN", "body": "```python\r\nimport torchvision\r\nimport torch\r\n\r\na = torch.linspace(1,8*8,8*8).reshape(1, 1, 8, 8)\r\nboxes = torch.tensor([[0,0,3,3]],dtype=a.dtype)\r\nout = torchvision.ops.roi_pool(a, boxes, output_size=(2,2))\r\n```\r\ni expect that out would be [[10 12],[26 28]]\r\nbut it's [[26 26],[26 28]]\r\nand, i try more\r\n```python\r\nimport torchvision\r\nimport torch\r\na = torch.linspace(1,8*8,8*8).reshape(1, 1, 8, 8)\r\nboxes = torch.tensor([[1,1,3,3]],dtype=a.dtype)\r\nout = torchvision.ops.roi_pool(a, boxes, output_size=(2,2))\r\n```\r\nout is \r\n[[4.373779945881600000e+15\t4.373779945881600000e+15],\r\n[4.373779945881600000e+15\t4.373779945881600000e+15]]\r\nthat is so werid, i think roi_pool should be like [this](https://deepsense.ai/region-of-interest-pooling-explained/)\r\n\n\ncc @fmassa", "url": "https://github.com/pytorch/vision/issues/1953", "state": "closed", "labels": [ "question", "module: ops" ], "created_at": "2020-03-03T11:16:38Z", "updated_at": "2020-03-10T12:34:38Z", "user": "SeniorCtrlPlayer" }, { "repo": "pytorch/vision", "number": 1933, "title": "error", "body": "Sorry to bother you here,I saw your comment at maskrcnn-benchmark, and I encountered the following error, even using boxlist[[0]] is the same error, can you help me please? Thank you!\r\n![image](https://user-images.githubusercontent.com/53242456/75760743-bc98b200-5d72-11ea-800f-a6cc055db20b.png)\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/1933", "state": "closed", "labels": [ "invalid", "question" ], "created_at": "2020-03-03T09:19:16Z", "updated_at": "2020-03-10T14:47:23Z", "user": "buxpeng" }, { "repo": "pytorch/vision", "number": 1930, "title": "Summary of TorchScript and ONNX support", "body": "Hi,\r\nIs there a summary somewhere of the current state of support for TorchScript and ONNX by the models in this repo? A\u00a0complete summary would include the version of support for both CPU and GPU.\r\n\r\nI'm mostly interested in FasterRCNN but I\u00a0got confused looking around the related issues. I suppose a general summary could save time to other end users.", "url": "https://github.com/pytorch/vision/issues/1930", "state": "closed", "labels": [ "question", "module: models", "module: onnx", "torchscript" ], "created_at": "2020-03-02T22:30:31Z", "updated_at": "2020-03-10T16:12:12Z", "user": "MLaurenceFournier" }, { "repo": "pytorch/vision", "number": 1926, "title": "Support for different pooling options with Faster R-CNN", "body": "I've been trying to setup a Faster R-CNN network to detect lesions on CT images.\r\n\r\nTherefor I wanted to use the FasterRCNN class which is provided by torchvision. 
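On the `roi_pool` surprise a little above (pytorch/vision#1953), a likely explanation: `torchvision.ops.roi_pool` expects boxes either as a list of `[L, 4]` tensors (one per image) or as a single `[K, 5]` tensor whose first column is the batch index, so a bare `[1, 4]` tensor can be misread and produce odd values. A small sketch using the list form:

```python
import torch
import torchvision

a = torch.linspace(1, 64, 64).reshape(1, 1, 8, 8)
boxes = [torch.tensor([[0.0, 0.0, 3.0, 3.0]])]   # list form: one [L, 4] tensor
out = torchvision.ops.roi_pool(a, boxes, output_size=(2, 2))
print(out)   # max-pool over the 4x4 region covered by the box
```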
However, I've noticed that the framework is very great except for the supported pooling (roi_box_pooling). The pooling only allows a [MultiScaleRoIAlign](https://github.com/pytorch/vision/blob/master/torchvision/models/detection/faster_rcnn.py#L168). Is it possible to use the MultiScaleRoIAlign for any use-case?\r\n\r\nI wanted to setup my network like it is described in this [project](https://github.com/chenyuntc/simple-faster-rcnn-pytorch):\r\n![Diagram](https://github.com/chenyuntc/simple-faster-rcnn-pytorch/blob/master/imgs/model_all.png?raw=true)\r\n\r\nThat's my idea so far:\r\n```python\r\nclass BoxHead(torch.nn.Module):\r\n def __init__(self, vgg):\r\n super(BoxHead, self).__init__()\r\n self.classifier = torch.nn.Sequential(*list(vgg.classifier._modules.values())[:-1])\r\n\r\n def forward(self, x):\r\n x = x.flatten(start_dim=1)\r\n x = self.classifier(x)\r\n return x\r\n\r\n# VGG16 backbone\r\nvgg = vgg16(pretrained=True)\r\nbackbone = vgg.features[:-1]\r\nfor layer in backbone[:10]:\r\n for p in layer.parameters():\r\n p.requires_grad = False\r\nbackbone.out_channels = 512\r\n\r\nbox_head = BoxHead(vgg)\r\n\r\n# RPN - Anchor Generator\r\nanchor_generator = AnchorGenerator(sizes=((8, 16, 32),), aspect_ratios=((0.5, 1.0, 1.5),))\r\n\r\n# Head - Box RoI pooling\r\nroi_pooler = MultiScaleRoIAlign(featmap_names=['0'], output_size=7, sampling_ratio=2)\r\n\r\n# Faster RCNN - Model\r\nmodel = FasterRCNN(\r\n backbone=backbone,\r\n min_size=224, max_size=224,\r\n rpn_anchor_generator=anchor_generator,\r\n box_roi_pool=roi_pooler,\r\n box_head=box_head,\r\n box_predictor=FastRCNNPredictor(4096, num_classes=2)\r\n)\r\n```", "url": "https://github.com/pytorch/vision/issues/1926", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-03-01T16:31:14Z", "updated_at": "2020-03-10T14:57:42Z", "user": "FHellmann" }, { "repo": "pytorch/vision", "number": 1929, "title": "cannot use group norm in torchvision API", "body": "## \ud83d\udc1b Bug\r\n\r\nCannot use nn.GroupNorm for \"norm_layer\" in torchvision.models.resnet50. However, nn.GroupNorm is supposed to be OK since there are codes to initialize the weights of nn.GroupNorm in the resnet50 file. \r\n\r\n## To Reproduce\r\n```\r\nimport torch.nn as nn\r\nimport torchvision\r\nnet = torchvision.models.resnet50(norm_layer=nn.BatchNorm2d)\r\nnet = torchvision.models.resnet50(norm_layer=nn.GroupNorm)\r\n```\r\n## Expected behavior\r\n\r\nBoth nets should be OK, however the second raised an error in torchvision/models/resnet.py line 143. 
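A possible workaround sketch for the GroupNorm question being discussed here (pytorch/vision#1929): the resnet builder calls `norm_layer(num_channels)` with a single argument, so fixing the number of groups with `functools.partial` makes `nn.GroupNorm` fit that calling convention (32 groups divides every channel count resnet50 uses):

```python
from functools import partial

import torch.nn as nn
import torchvision

norm_layer = partial(nn.GroupNorm, 32)   # called as GroupNorm(32, num_channels)
net = torchvision.models.resnet50(norm_layer=norm_layer)
```

Note that the published pretrained weights were trained with BatchNorm, so a GroupNorm variant built this way needs its own training.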
nn.GroupNorm should have two arguments.\r\n\r\n## Environment\r\n\r\nPyTorch version: 1.4.0\r\nIs debug build: No\r\nCUDA used to build PyTorch: None\r\n\r\nOS: Mac OSX 10.13.6\r\nGCC version: Could not collect\r\nCMake version: version 3.14.5\r\n\r\nPython version: 3.7\r\nIs CUDA available: No\r\nCUDA runtime version: No CUDA\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.18.1\r\n[pip3] torch==1.4.0\r\n[pip3] torchvision==0.5.0\r\n[conda] torch 1.4.0 pypi_0 pypi\r\n[conda] torchvision 0.5.0 pypi_0 pypi\r\n\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/1929", "state": "closed", "labels": [ "question", "module: models", "topic: classification" ], "created_at": "2020-03-01T11:32:15Z", "updated_at": "2020-03-04T12:57:31Z", "user": "hukkai" }, { "repo": "pytorch/vision", "number": 1925, "title": "module 'torchvision' has no attribute '__version__' [0.4.2]", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. `print(\"Torchvision Version: \",torchvision.__version__)`\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\nAttributeError: module 'torchvision' has no attribute '__version__'\r\n\r\n## Environment\r\n\r\n - PyTorch / torchvision Version (e.g., 1.0 / 0.4.0): pytorch: 1.3.1, torchvision: 0.4.2\r\n - OS (e.g., Linux): Ubuntu 19.10\r\n - How you installed PyTorch / torchvision (`conda`, `pip`, source): conda\r\n - Python version: 3.7\r\n - CUDA/cuDNN version: 1.01\r\n", "url": "https://github.com/pytorch/vision/issues/1925", "state": "closed", "labels": [ "question", "topic: binaries" ], "created_at": "2020-03-01T09:37:32Z", "updated_at": "2020-03-04T11:59:03Z", "user": "duskybomb" }, { "repo": "pytorch/xla", "number": 1708, "title": "How to properly clip gradients with data parallel training?", "body": "The XLA API recommends that we call `xm.optimizer_step(optimizer)`, which [allreduces the gradients and performs the optimization step](https://github.com/pytorch/xla/blob/ffde50813f01b57d6e782a63aac6453bfa12ffdf/torch_xla/core/xla_model.py#L429-L455).\r\n\r\nGradient clipping is typically performed on the reduced gradients, however it seems the current API doesn't give us access to the reduced gradients before taking the optimization step.\r\n\r\nThe code for `optimizer_step` is pretty simple, so I'm also wondering why we have `xm.optimizer_step` interface in the first place, compared to the more standard DistributedDataParallel interface that PyTorch uses. For example, in fairseq we have a [very simple DistributedDataParallel wrapper](https://github.com/pytorch/fairseq/blob/master/fairseq/legacy_distributed_data_parallel.py) that just calls allreduce, so it seems something like that could be a drop-in replacement here.\r\n\r\nIs there something else special happening with `xm.optimizer_step` that I'm missing?", "url": "https://github.com/pytorch/xla/issues/1708", "state": "closed", "labels": [], "created_at": "2020-02-29T14:13:03Z", "updated_at": "2023-08-31T12:27:07Z", "user": "myleott" }, { "repo": "pytorch/android-demo-app", "number": 62, "title": "How to apply quantization to models?", "body": "The iOS demo has to set a quantization backend before loading the model, is this operation necessary on Android? and how? 
", "url": "https://github.com/pytorch/android-demo-app/issues/62", "state": "open", "labels": [], "created_at": "2020-02-27T21:54:17Z", "updated_at": "2020-05-15T08:47:13Z", "user": "himajin2045" }, { "repo": "pytorch/pytorch", "number": 33809, "title": "How to release the gpu memory after use interpolate\uff1f", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\r\n After used the interpolate \uff0cthe gpu memory can not be released during inference\u3002I try to use torch.cuda.empty_cache() after, but it can work only in feding the images one by one\u3002when theimages is concurrently fedding \uff0cthe GPU still increase\u3002Can you tell me how to do\uff1f \r\n", "url": "https://github.com/pytorch/pytorch/issues/33809", "state": "closed", "labels": [], "created_at": "2020-02-26T10:04:47Z", "updated_at": "2020-02-26T18:21:30Z", "user": "lyc6749" }, { "repo": "pytorch/vision", "number": 1912, "title": "A weird problem: \"No module named 'torchvision.models'; 'torchvision' is not a package\"", "body": "", "url": "https://github.com/pytorch/vision/issues/1912", "state": "closed", "labels": [ "question" ], "created_at": "2020-02-24T03:09:41Z", "updated_at": "2020-02-26T02:48:55Z", "user": "TengFeiHan0" }, { "repo": "pytorch/pytorch", "number": 33673, "title": "python int type how to convert to at::IntArrayRef", "body": "## \u2753 Questions and Help\r\nWhen I write C++/CUDA extension, I use at::IntArrayRef in my C++ source code. But I build custom layer using python, I find that I do not use my C++ extension. Because int of python does not convert to at::IntArrayRef of C++. How to solve this problem?\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/33673", "state": "closed", "labels": [], "created_at": "2020-02-23T16:56:54Z", "updated_at": "2020-02-23T19:34:10Z", "user": "ghost" }, { "repo": "pytorch/vision", "number": 1904, "title": "DenseNet generated image embeddings have very small variance and are sparse", "body": "I'm working with pretrained DenseNet161 model to learn and generate image embeddings.\r\nI consider output of adaptive_avg_pool2d layer image embedding. I noticed that it's very sparse and has small variance. If we consider a batch of 64 images, variance of the same feature over all images in a batch is ~2e-6. Also the feature values are just very small (~1e-3). This is for pretrained model, if I start fine-tuning the model, some features quickly become 0 across all images in the batch.\r\nSurprisingly, this is not the case for resnet. Running the same training for resnet50 results in expected features of size ~[0, 2] and variance around 1.\r\nThis is stunning difference for me. Not sure if there's some problem with implementation or this is just DenseNet works. 
Please let me know if you have similar experiences.", "url": "https://github.com/pytorch/vision/issues/1904", "state": "closed", "labels": [ "question", "awaiting response", "module: models", "topic: classification" ], "created_at": "2020-02-21T22:44:45Z", "updated_at": "2021-01-06T17:42:28Z", "user": "zlenyk" }, { "repo": "pytorch/xla", "number": 1675, "title": "What is \"Lowering\" referring to?", "body": "## \u2753 Questions and Help\r\nSorry for the stupid question, but what is meant by \"Lowering\", as it applies to LowerContext class, which appears to convert IR->XLA? \r\n\r\nsuch as:\r\n```cpp\r\nXlaOpVector LoweringContext::LowerNode(const Node* node) {\r\n...\r\n}\r\n```", "url": "https://github.com/pytorch/xla/issues/1675", "state": "closed", "labels": [], "created_at": "2020-02-21T22:29:36Z", "updated_at": "2020-02-21T23:48:31Z", "user": "cjolivier01" }, { "repo": "pytorch/xla", "number": 1672, "title": "How to join a model from distributed replicas?", "body": "## \u2753 Questions and Help\r\n\r\nFrom what I understand, a deepcopy model replica is deployed on each XLA device and then individually trained on non-overlapping data (via ParallelLoader). Each model can then be also evaluated via the same approach.\r\n\r\nI would like to save a model for production use. But now I assume I have 8 different models (from each XLA core) trained on different data batches. Are they all synced after training and have the same weights (so I just choose random XLA model) or do I need to somehow manually merge them?", "url": "https://github.com/pytorch/xla/issues/1672", "state": "closed", "labels": [], "created_at": "2020-02-21T04:45:16Z", "updated_at": "2020-02-21T05:56:23Z", "user": "AVancans" }, { "repo": "pytorch/vision", "number": 1901, "title": "DeeplapV3 accuracy result for person ", "body": "Can I know the accuracy of the Deeplap v3-Resnet-101 pre-trained model evaluated on COCO val2017 dataset\r\nfor **person** class only?\r\nI searched about it but I did not find the accuracy for each class?", "url": "https://github.com/pytorch/vision/issues/1901", "state": "closed", "labels": [ "question", "module: models", "topic: semantic segmentation" ], "created_at": "2020-02-20T12:56:41Z", "updated_at": "2020-02-25T15:50:39Z", "user": "muna-cs" }, { "repo": "pytorch/vision", "number": 1900, "title": "Why the inplace in the ReLU in deeplabv3.py is not set True?", "body": "Hi guys,\r\nI am reproducing DeepLabV3+ these days, and I learning about the code of [deeplabv3.py](https://github.com/pytorch/vision/blob/2f64dd90e14fe5463b4e5bd152d56e4a6f0419de/torchvision/models/segmentation/deeplabv3.py).\r\nAnd I found that some ReLUs in this code don't use \"inplace = True\", like,\r\n```python\r\ndef __init__(self, in_channels, atrous_rates):\r\n super(ASPP, self).__init__()\r\n out_channels = 256\r\n modules = []\r\n modules.append(nn.Sequential(\r\n nn.Conv2d(in_channels, out_channels, 1, bias=False),\r\n nn.BatchNorm2d(out_channels),\r\n nn.ReLU()))\r\n\r\n rate1, rate2, rate3 = tuple(atrous_rates)\r\n modules.append(ASPPConv(in_channels, out_channels, rate1))\r\n modules.append(ASPPConv(in_channels, out_channels, rate2))\r\n modules.append(ASPPConv(in_channels, out_channels, rate3))\r\n modules.append(ASPPPooling(in_channels, out_channels))\r\n\r\n self.convs = nn.ModuleList(modules)\r\n\r\n self.project = nn.Sequential(\r\n nn.Conv2d(5 * out_channels, out_channels, 1, bias=False),\r\n nn.BatchNorm2d(out_channels),\r\n nn.ReLU(),\r\n nn.Dropout(0.5))\r\n```\r\nSo what is the consideration of not 
using \"inplace = True\" here?\r\n\r\nAny answer or idea will be appreciated!\r\n", "url": "https://github.com/pytorch/vision/issues/1900", "state": "closed", "labels": [ "question", "module: models", "topic: semantic segmentation" ], "created_at": "2020-02-20T06:20:33Z", "updated_at": "2020-02-25T15:52:10Z", "user": "songyuc" }, { "repo": "pytorch/vision", "number": 1896, "title": "How can I get the intermediate layers if I used the nn.Sequential to make a Module", "body": "Hi guys,\r\nI am reproducing the DeepLabV3+ these days,\r\nand I write a Module like this,\r\n```python\r\n self.entry_flow = nn.Sequential()\r\n # entry_flow\u7684\u7b2c\u4e00\u4e2a\u5377\u79ef\u5c42\r\n self.entry_flow.add_module(\"conv1\", nn.Conv2d(3, 32, 3, 2, 0, bias=False))\r\n self.entry_flow.add_module(\"bn_relu1\", BNReLU(32))\r\n self.entry_flow.add_module(\"conv2\", nn.Conv2d(32, 64, 3, bias=False))\r\n self.entry_flow.add_module(\"bn_relu2\", BNReLU(64))\r\n # \u6dfb\u52a0\u4e09\u4e2aBlock\u6a21\u5757\r\n self.entry_flow.add_module(\"block1\", XceptionBlock(64, 64, [1, 1, 2]))\r\n self.entry_flow.add_module(\"block2\", XceptionBlock(128, 128, [1, 1, 2]))\r\n self.entry_flow.add_module(\"block2\", XceptionBlock(256, 256, [1, 1, 2]))\r\n```\r\nand I am wondering if I can get the the intermediate output of inner module \"block1\" and \"block2\"?\r\n\r\nAny answer or suggestion will be appreciated!", "url": "https://github.com/pytorch/vision/issues/1896", "state": "closed", "labels": [ "question", "module: models", "topic: feature extraction" ], "created_at": "2020-02-18T04:14:26Z", "updated_at": "2020-02-25T15:55:06Z", "user": "songyuc" }, { "repo": "pytorch/tutorials", "number": 853, "title": "happy to contribute my intuitive visual guide to how convolutions and transposed convenient work", "body": "Unlike fully connected linear layers, convolutions layers need a bit of work to calculate the size of the data as it passes through them. \r\n\r\nThere aren't many easy to understand guides on how convolution works. Many report that transposed convolution is particularly difficult to understand.\r\n\r\nAs part of my upcoming book on GANs with PyTorch I include an appendix with worked examples of convolutions and transposed convolutions, which I've also published a version of online for free access.\r\n\r\n[https://makeyourownneuralnetwork.blogspot.com/2020/02/calculating-output-size-of-convolutions.html](https://makeyourownneuralnetwork.blogspot.com/2020/02/calculating-output-size-of-convolutions.html)\r\n\r\nI'd be happy if you thought these should be included in the Pytorch tutorials, or linked to from there.\r\n\r\nAlso happy to receive feedback on improving them.\r\n\r\nThe main point of the guide is to develop an intuitive understanding, avoiding too much mathematical jargon. 
\r\n\r\nAn example of the friendly style of diagrams used ...\r\n\r\n![appendix_C_eg_7](https://user-images.githubusercontent.com/17411198/74676158-49f1d900-51ad-11ea-897f-74a089d07320.png)\r\n", "url": "https://github.com/pytorch/tutorials/issues/853", "state": "open", "labels": [], "created_at": "2020-02-17T17:46:06Z", "updated_at": "2020-02-20T13:56:34Z", "user": "makeyourownneuralnetwork" }, { "repo": "pytorch/vision", "number": 1895, "title": "It seems the IntermediateLayerGetter will not use the forward function in the original model", "body": "Hi guys,\r\nI am working on reproducing DeepLabV3+ model these days,\r\nand I need to get some intermediate layers from the backbone.\r\nAnd I found a class of `IntermediateLayerGetter` with similar effect.\r\nWhen I read the code, I found that in the `forward` function of `IntermediateLayerGetter`, that the `IntermediateLayerGetter` would do the inference like,\r\n```python\r\nfor name, module in self.named_children():\r\n x = module(x)\r\n # rest of the code\r\n```\r\nBut this may mean that, the `IntermediateLayerGetter` class would not use the `forward` function in the original wrapped model, which seems unsuual.\r\nI don't think this would let the original model behave in a right way.\r\n\r\nAny explanation or idea would be appreciated!\r\n\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/1895", "state": "closed", "labels": [ "question", "module: models", "topic: feature extraction" ], "created_at": "2020-02-17T12:02:27Z", "updated_at": "2020-02-25T15:22:30Z", "user": "songyuc" }, { "repo": "pytorch/vision", "number": 1894, "title": "Can I use the IntermediateLayerGetter function for my customized backbone network?", "body": "Hi, guys,\r\nI am reimplementing DeepLabV3+ model these days,\r\nand I need to return some of the intermediate layers from my customized backbone network, \r\nand I found the `IntermediateLayerGetter` function with the similar effect.\r\nSo I am wondering if I can use the `IntermediateLayerGetter` function to get the intermediate layers from my customized backbone?\r\n\r\nAny idea or answer will be appreciated!", "url": "https://github.com/pytorch/vision/issues/1894", "state": "closed", "labels": [ "question", "module: models", "topic: feature extraction" ], "created_at": "2020-02-17T11:33:10Z", "updated_at": "2020-02-27T19:40:34Z", "user": "songyuc" }, { "repo": "pytorch/vision", "number": 1892, "title": "How to look at bbox predictions after training?", "body": "Hi, I finetuned a pretrained Faster RCNN model.\r\nI used the instance segmentation Mask RCNN pytorch tutorial as a guide.\r\n\r\nI finished training and can't figure out how to look at bbox predictions.\r\n\r\nFor segmentation prediction, the guide used the following to display the mask\r\nImage.fromarray(prediction[0]['masks'][0, 0].mul(255).byte().cpu().numpy())\r\n\r\nI tried \r\nImage.fromarray(prediction[0]['boxes'].mul(255).byte().cpu().numpy()) \r\nBut this doesn't work.\r\n\r\nlink to tutorial I followed : https://colab.research.google.com/github/pytorch/vision/blob/temp-tutorial/tutorials/torchvision_finetuning_instance_segmentation.ipynb#scrollTo=5v5S3bm07SO1\r\n\r\nI used the following to train on a custom dataset. 
\r\n \r\n\r\n- def get_faster_rcnn_model(num_classes):\r\n- model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)\r\n- num_classes = 2 # 1 class (person) + background\r\n- in_features = model.roi_heads.box_predictor.cls_score.in_features\r\n- model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) \r\n- \r\n- return model\r\n", "url": "https://github.com/pytorch/vision/issues/1892", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-02-17T04:36:23Z", "updated_at": "2020-02-25T15:45:56Z", "user": "alareza619" }, { "repo": "pytorch/vision", "number": 1891, "title": "Pre-trained segmentation models can't load state dicts", "body": "<img width=\"1024\" alt=\"Screen Shot 2020-02-16 at 7 51 57 PM\" src=\"https://user-images.githubusercontent.com/37163544/74622526-07b99080-50f6-11ea-8483-bc891aeb5f3a.png\">\r\n", "url": "https://github.com/pytorch/vision/issues/1891", "state": "closed", "labels": [ "question", "module: models", "topic: semantic segmentation" ], "created_at": "2020-02-17T03:53:49Z", "updated_at": "2020-02-27T19:34:26Z", "user": "devanshuDesai" }, { "repo": "pytorch/vision", "number": 1888, "title": "convert_to_coco_api so slow", "body": "i found that it costs about 20min for the convert_to_coco_api to process 670 images when i evaluate my model per epoch. But WHY so slow?", "url": "https://github.com/pytorch/vision/issues/1888", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: object detection" ], "created_at": "2020-02-15T05:14:15Z", "updated_at": "2020-02-27T19:43:04Z", "user": "cl2227619761" }, { "repo": "pytorch/pytorch", "number": 33343, "title": "How to convert the model to onnx in libtorch? ", "body": "struct Net : torch::nn::Module {\r\n\tNet()\r\n\t\t: conv1(torch::nn::Conv2dOptions(1, 20, /*kernel_size=*/5).stride(1)),\r\n\t\tconv2(torch::nn::Conv2dOptions(20, 40, /*kernel_size=*/5)),\r\n\t\tfc1(640, 120),\r\n\t\tfc2(120, 10) {\r\n\t\tregister_module(\"conv1\", conv1);\r\n\t\tregister_module(\"conv2\", conv2);\r\n\t\tregister_module(\"conv2_drop\", conv2_drop);\r\n\t\tregister_module(\"fc1\", fc1);\r\n\t\tregister_module(\"fc2\", fc2);\r\n\t}\r\n\ttorch::Tensor forward(torch::Tensor x) {\r\n\t\tx = torch::relu(torch::max_pool2d(conv1->forward(x), 2));//(28-5)+1=24,12 x 12 x 10\r\n\t\tx = torch::relu(torch::max_pool2d(conv2_drop->forward(conv2->forward(x)), 2));//(12-5)+1=8,4 x 4 x 20\r\n\t\t//x = torch::relu(torch::avg_pool2d(conv2_drop->forward(conv2->forward(x)), 2));//(12-5)+1=8,4 x 4 x 20\r\n\r\n\t\tx = x.view({ -1, 640 });\r\n\t\tx = torch::relu(fc1->forward(x));\r\n\t\tx = torch::dropout(x, /*p=*/0.5, /*training=*/is_training());\r\n\t\tx = fc2->forward(x);\r\n\t\treturn torch::log_softmax(x, /*dim=*/1);\r\n\t}\r\n\ttorch::nn::Conv2d conv1;\r\n\ttorch::nn::Conv2d conv2;\r\n\ttorch::nn::Dropout2d conv2_drop;\r\n\ttorch::nn::Linear fc1;\r\n\ttorch::nn::Linear fc2;\r\n};\n\ncc @yf225 @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/33343", "state": "closed", "labels": [ "module: onnx", "module: cpp", "triaged" ], "created_at": "2020-02-14T13:14:23Z", "updated_at": "2021-11-08T22:01:30Z", "user": "bjliuzp" }, { "repo": "pytorch/pytorch", "number": 33341, "title": "how-to-adjust-learning-rate using libtorch", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed 
resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/33341", "state": "open", "labels": [ "triaged" ], "created_at": "2020-02-14T11:25:57Z", "updated_at": "2020-02-14T17:57:33Z", "user": "w1005444804" }, { "repo": "pytorch/examples", "number": 715, "title": "C++ tutorial on sentence classification", "body": "@soumith \r\nCurrently, all the examples in C++ are related to image classification/ GAN. There are not many examples on text/nlp. I would like to include a starter example on sentence classification in c++. Can I go ahead and work on this??", "url": "https://github.com/pytorch/examples/issues/715", "state": "open", "labels": [ "c++" ], "created_at": "2020-02-13T17:05:24Z", "updated_at": "2024-03-16T23:09:13Z", "comments": 4, "user": "avinashsai" }, { "repo": "pytorch/vision", "number": 1883, "title": "Torchvision NMS description", "body": "I think here should be `boxes with IoU >= iou_threshold`. Is this only a documentation typo and the cuda function called here is actually correctly implemented?\r\n\r\nhttps://github.com/pytorch/vision/blob/bf8595798eaccbaffb6c04db11406426eb1b3800/torchvision/ops/boxes.py#L22", "url": "https://github.com/pytorch/vision/issues/1883", "state": "closed", "labels": [ "question", "module: documentation" ], "created_at": "2020-02-13T14:53:30Z", "updated_at": "2020-02-13T18:03:20Z", "user": "sharifza" }, { "repo": "pytorch/vision", "number": 1882, "title": "How to modify the loss function of models in torchvison?", "body": "Excuse me if this question is a little stupid, for I just recently got access to this extraordinary field and cannot find the answer after some researching. \r\nI invoked the pretrained mrcnn model in torchvison however its output wasn't so ideal. So I wonder if I can modify the loss function to improve its performance without rewriting the whole framework?\r\nThanks a lot for any advice.", "url": "https://github.com/pytorch/vision/issues/1882", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-02-13T13:23:31Z", "updated_at": "2023-06-28T15:01:18Z", "user": "Michael-J98" }, { "repo": "pytorch/tutorials", "number": 850, "title": "Why is the pytorch sphinx theme included as a submodule?", "body": "I'm not an expert in sphinx, but after a lot of testing and headache while trying to improve a tutorial I really wonder why the sphinx theme under `./src` is included at all (as a submodule on github).\r\nIf you clone the repo with `git clone ...` it doesn't get downloaded.\r\nThe theme gets downloaded with `pip install -e git+git://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme` as it is defined in the `requirements.txt`. If the dir `src/pytorch-sphinx-theme` already exists you get ask if you want to wipe it, no matter if it is empty or not.\r\nAnd if you cloned the repo with `--recurse-submodules` you'd download an old version of the theme. 
\r\nSo why not drop the submodule and just include an empty `src` dir where the theme will be installed w/o error messages during installation from `requirements.txt`?", "url": "https://github.com/pytorch/tutorials/issues/850", "state": "closed", "labels": [ "build issue" ], "created_at": "2020-02-13T13:02:16Z", "updated_at": "2024-09-06T21:25:48Z", "comments": 1, "user": "wAuner" }, { "repo": "pytorch/vision", "number": 1878, "title": "So, what is the meaning for DeepLabHead in deeplabv3.py", "body": "Hi guys,\r\nI am implementing the deeplabv3+, imitating the pattern of deeplabv3.py,\r\nbut I don't quite understand the meaning for DeepLabHead,\r\nso do I need to put the upsampling operations in the DeepLabHead?\r\n\r\nAny answer and idea will be appreciated!", "url": "https://github.com/pytorch/vision/issues/1878", "state": "closed", "labels": [ "question", "module: models", "topic: semantic segmentation" ], "created_at": "2020-02-12T09:45:51Z", "updated_at": "2020-02-14T05:57:44Z", "user": "songyuc" }, { "repo": "pytorch/vision", "number": 1875, "title": "[Bug?] roialign operation returning incorrect numerics", "body": "torchvision.ops.roialign is returning incorrect results for a simple test case-\r\n\r\n```\r\n# x: tensor of size (1,1,3,3)\r\nx= torch.tensor([[[[1,2,3],[4,5,6],[7,8,9]]]], dtype=torch.float)\r\nboxes = torch.tensor(([[0, 0, 2, 2, 0]]), dtype=torch.float)\r\nz = torchvision.ops.roi_align(x, boxes, (2,2),sampling_ratio=1)\r\n\r\n\r\n```\r\n\r\nreturns z as -\r\n```\r\ntensor([[[[7.5000, 8.5000],\r\n [7.5000, 8.5000]]]])\r\n```\r\n\r\nshouldn't this be\r\n```\r\ntensor([[[[3.0000 4.0000],\r\n [6.0000, 7.0000]]]])\r\n```\r\n", "url": "https://github.com/pytorch/vision/issues/1875", "state": "closed", "labels": [ "question", "module: ops" ], "created_at": "2020-02-11T21:06:04Z", "updated_at": "2020-02-14T13:24:48Z", "user": "coderAddy" }, { "repo": "pytorch/vision", "number": 1872, "title": "Shouldn't have a `+1` in the NMS implementation for the boxes width/height computation ?", "body": "The standard is to have a bounding box defined as quoted [here](https://github.com/facebookresearch/Detectron/blob/master/detectron/utils/boxes.py#L23).\r\n\r\nBut in the NMS [source code](https://github.com/pytorch/vision/blob/e2a8b4185e2b668b50039c91cdcf81eb4175d765/torchvision/csrc/cpu/nms_cpu.cpp), there is no `+1` when computing the areas and intersection values. This also leaves a bug in the case of getting `union = 0`, raising a `NaN` error when computing the `iou`.\r\n\r\nIf the code is correct, what am I missing ? 
Shouldn't the [documentation](https://pytorch.org/docs/stable/torchvision/ops.html#torchvision.ops.nms) explain this better ?\r\n\r\nThanks.", "url": "https://github.com/pytorch/vision/issues/1872", "state": "closed", "labels": [ "question", "module: ops" ], "created_at": "2020-02-11T15:11:17Z", "updated_at": "2020-02-14T13:59:38Z", "user": "viniciusarruda" }, { "repo": "pytorch/vision", "number": 1870, "title": "Unexpected behavior of torchvision.ops.nms", "body": "Following the example below and looking the nms [source code](https://github.com/pytorch/vision/blob/e2a8b4185e2b668b50039c91cdcf81eb4175d765/torchvision/csrc/cpu/nms_cpu.cpp), I expected a `NaN` error, as the intersection and union will be zero.\r\n\r\n import torchvision # torchvision==0.5.0+cpu\r\n import torch # torch==1.4.0+cpu\r\n\r\n boxes = [[0.0, 0.0, 1.0, 1.0],\r\n [2.0, 1.0, 1.0, 2.0]]\r\n\r\n boxes = torch.tensor(boxes)\r\n scores = torch.tensor([1., 0.5])\r\n\r\n keep = torchvision.ops.nms(boxes, scores, 0.7)\r\n\r\nIf this same example is used with [this](https://github.com/rbgirshick/fast-rcnn/blob/master/lib/utils/nms.py) nms implementation (removing the +1 from the source code to be equivalent to the torchvision implementation), it raises a `NaN` error as expected.\r\n\r\nAm I missing something ?\r\nThanks.", "url": "https://github.com/pytorch/vision/issues/1870", "state": "closed", "labels": [ "question", "module: ops" ], "created_at": "2020-02-11T12:09:02Z", "updated_at": "2020-02-27T19:57:35Z", "user": "viniciusarruda" }, { "repo": "pytorch/vision", "number": 1869, "title": "It seems there is no upsampling operations in the implementation of Deeplabv3?", "body": "Hi, guys,\r\nI am learning about the the implementation of Deeplabv3 today,\r\nand I find that it seems, there is no upsampling operations in deeplabv3.py,\r\nso where is the upsampling operations of Deeplabv3 model?\r\n\r\nAny answer or idea will be appreciated!", "url": "https://github.com/pytorch/vision/issues/1869", "state": "closed", "labels": [ "question", "module: models", "topic: semantic segmentation" ], "created_at": "2020-02-11T11:12:13Z", "updated_at": "2020-02-13T18:23:26Z", "user": "songyuc" }, { "repo": "pytorch/vision", "number": 1860, "title": "Is there a backbone implementation of Xception?", "body": "Hi, guys,\r\nI want to know if there is a backbone implementation of Xception?\r\n\r\nAny answer or idea will be appreciated!", "url": "https://github.com/pytorch/vision/issues/1860", "state": "closed", "labels": [ "question", "module: models", "topic: classification" ], "created_at": "2020-02-10T10:06:27Z", "updated_at": "2020-02-10T13:46:21Z", "user": "songyuc" }, { "repo": "pytorch/vision", "number": 1859, "title": "Is there an implementation of Deeplabv3+?", "body": "Hi, guys,\r\nI want to know if there is an implementation of Deeplabv3+?\r\n\r\nAny answer will be appreciated!", "url": "https://github.com/pytorch/vision/issues/1859", "state": "closed", "labels": [ "question", "module: models", "topic: semantic segmentation" ], "created_at": "2020-02-10T07:24:51Z", "updated_at": "2020-02-10T14:10:28Z", "user": "songyuc" }, { "repo": "pytorch/vision", "number": 1856, "title": "FasterRCNN ground truth boxes reference system", "body": "Hi,\r\nI'm trying to train a FasterRCNN on a custom dataset.\r\nI have the ground truth bounding boxes in the [x1, y1, x2, y2] format, where:\r\n- 0 <= x1 <= x2 <= H\r\n- 0 <= y1 <= y2 <= W\r\n- `H, W = img.shape` with img being loaded with cv2\r\nWith numpy, if I extract `img[x1:x2, y1:y2]`, it's the 
correct portion of the image.\r\nNow, this seems to me the right way of formatting the boxes, since the documentation says:\r\n> boxes (``FloatTensor[N, 4]``): the ground-truth boxes in ``[x1, y1, x2, y2]`` format, with values\r\n between ``0`` and ``H`` and ``0`` and ``W``\r\n\r\nHowever, the network doesn't seem to be learning anything during training.\r\nInstead, if I switch x1 with y1, x2 with y2, the network starts working properly.\r\nIt seems to be a reference system problem.\r\nWhat am I missing? It feels like there is an easy explanation to this problem.\r\n\r\nThanks in advance!", "url": "https://github.com/pytorch/vision/issues/1856", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-02-07T12:55:44Z", "updated_at": "2020-02-11T07:54:48Z", "user": "Robylyon93" }, { "repo": "pytorch/vision", "number": 1854, "title": "Clarify the quantization bits in the pretrained models?", "body": "Thanks for the great work, and quantized pretrained models had been added in torchvision 0.5.\r\nhttps://github.com/pytorch/vision/releases\r\n\r\n>Quantized models\r\ntorchvision now provides quantized models for ResNet, ResNext, MobileNetV2, GoogleNet, InceptionV3 and ShuffleNetV2, as well as reference scripts for quantizing your own model in references/classification/train_quantization.py (https://github.com/pytorch/vision/blob/master/references/classification/train_quantization.py). \r\n\r\nHowever, I was confused what is the quantized bits this models are in.\r\nIs it in FP16 or INT8? I think this should be clarified to lessen confusion.\r\n", "url": "https://github.com/pytorch/vision/issues/1854", "state": "closed", "labels": [ "question", "module: documentation", "module: models.quantization" ], "created_at": "2020-02-07T04:50:29Z", "updated_at": "2020-03-10T10:39:08Z", "user": "kentaroy47" }, { "repo": "pytorch/pytorch", "number": 33022, "title": "How do you convert Torch output iOS NSNumber to UIImage", "body": "I recently trained a model in PyTorch and created the .pt model file. 
I was able to use the model file in iOS with https://pytorch.org/mobile/ios/ to get an output.\r\n\r\nBut the output is an array of NSNumber.\r\n\r\nHow can I convert that to UIImage?\r\n\r\nHere's how i'm loading the model:\r\n\r\n```\r\n private lazy var module: TorchModule = {\r\n if let filePath = Bundle.main.path(forResource: \"face\", ofType: \"pt\"),\r\n let module = TorchModule(fileAtPath: filePath) {\r\n print(\"Loaded Model\")\r\n return module\r\n } else {\r\n print(Bundle.main.path(forResource: \"face\", ofType: \"pt\"))\r\n fatalError(\"Can't find the model file!\")\r\n }\r\n }()\r\n```\r\n\r\nHere's how i'm passing and image and getting the NSNumber output:\r\n\r\n```\r\n let image = imageView.image!\r\n let resizedImage = image.resized(to: CGSize(width: 256, height: 256))\r\n guard var pixelBuffer = resizedImage.normalized() else {\r\n return\r\n }\r\n\r\n guard let outputs = module.predict(image: UnsafeMutableRawPointer(&pixelBuffer)) else {\r\n return\r\n }\r\n```\r\n\r\nAnd here are the numbers I'm getting back:\r\n\r\n```\r\n1000 elements\r\n - 0 : 0.9556794\r\n - 1 : 0.959437\r\n - 2 : 0.9545235\r\n - 3 : 0.9602792\r\n - 4 : 0.9626616\r\n - 5 : 0.9451413\r\n - 6 : 0.9630886\r\n - 7 : 0.9649493\r\n - 8 : 0.96794\r\n - 9 : 0.9451433\r\n - 10 : 0.9606364\r\n - 11 : 0.9666034\r\n - 12 : 0.9719177\r\n - 13 : 0.9503573\r\n - 14 : 0.9689084\r\n - 15 : 0.9644295\r\n - 16 : 0.9715278\r\n - 17 : 0.9545213\r\n - 18 : 0.9695826\r\n - 19 : 0.9616866\r\n - 20 : 0.9709251\r\n - 21 : 0.9504414\r\n - 22 : 0.9684582\r\n - 23 : 0.9636042\r\n - 24 : 0.9707479\r\n - 25 : 0.9474098\r\n - 26 : 0.9687761\r\n - 27 : 0.962492\r\n - 28 : 0.9722843\r\n - 29 : 0.9512891\r\n - 30 : 0.9713559\r\n - 31 : 0.9646252\r\n - 32 : 0.9709271\r\n - 33 : 0.9450958\r\n - 34 : 0.9687521\r\n - 35 : 0.9592332\r\n - 36 : 0.9614322\r\n - 37 : 0.9501442\r\n - 38 : 0.9671555\r\n - 39 : 0.9576904\r\n - 40 : 0.966316\r\n - 41 : 0.9518282\r\n - 42 : 0.9691417\r\n - 43 : 0.9573505\r\n - 44 : 0.9599486\r\n - 45 : 0.9461015\r\n - 46 : 0.9679283\r\n - 47 : 0.9560247\r\n - 48 : 0.9592899\r\n - 49 : 0.9511722\r\n - 50 : 0.9696479\r\n - 51 : 0.9560531\r\n - 52 : 0.9652212\r\n - 53 : 0.9524947\r\n - 54 : 0.9737433\r\n - 55 : 0.960919\r\n - 56 : 0.968053\r\n - 57 : 0.9475061\r\n - 58 : 0.9700636\r\n - 59 : 0.9567729\r\n - 60 : 0.9692516\r\n - 61 : 0.9438604\r\n - 62 : 0.9666854\r\n - 63 : 0.9534383\r\n - 64 : 0.9692665\r\n - 65 : 0.940613\r\n - 66 : 0.9655256\r\n - 67 : 0.9560776\r\n - 68 : 0.9666242\r\n - 69 : 0.9394323\r\n - 70 : 0.968111\r\n - 71 : 0.95995\r\n - 72 : 0.965363\r\n - 73 : 0.9503852\r\n - 74 : 0.9690766\r\n - 75 : 0.9677175\r\n - 76 : 0.9689373\r\n - 77 : 0.958289\r\n - 78 : 0.9717255\r\n - 79 : 0.9717532\r\n - 80 : 0.9726413\r\n - 81 : 0.9699872\r\n - 82 : 0.9718522\r\n - 83 : 0.970526\r\n - 84 : 0.9766954\r\n - 85 : 0.969599\r\n - 86 : 0.9727935\r\n - 87 : 0.9729283\r\n - 88 : 0.976265\r\n - 89 : 0.9681603\r\n - 90 : 0.9752769\r\n - 91 : 0.9746329\r\n - 92 : 0.9779454\r\n - 93 : 0.9716548\r\n - 94 : 0.9771305\r\n - 95 : 0.9763421\r\n - 96 : 0.9785836\r\n - 97 : 0.972732\r\n - 98 : 0.9775047\r\n - 99 : 0.972182\r\n - 100 : 0.9754875\r\n - 101 : 0.9716605\r\n - 102 : 0.9703948\r\n - 103 : 0.9705175\r\n - 104 : 0.9728737\r\n - 105 : 0.9674641\r\n - 106 : 0.9717978\r\n - 107 : 0.9679852\r\n - 108 : 0.9708558\r\n - 109 : 0.9624084\r\n - 110 : 0.971324\r\n - 111 : 0.9681918\r\n - 112 : 0.9727319\r\n - 113 : 0.9670874\r\n - 114 : 0.974831\r\n - 115 : 0.9708152\r\n - 116 : 0.9764423\r\n - 117 : 0.9653759\r\n - 118 : 
0.9755697\r\n - 119 : 0.9701872\r\n - 120 : 0.9722598\r\n - 121 : 0.9629219\r\n - 122 : 0.9759187\r\n - 123 : 0.9682656\r\n - 124 : 0.9722873\r\n - 125 : 0.9610798\r\n - 126 : 0.9722118\r\n - 127 : 0.9668668\r\n - 128 : 0.9654322\r\n - 129 : 0.9550279\r\n - 130 : 0.9650962\r\n - 131 : 0.9669107\r\n - 132 : 0.9664246\r\n - 133 : 0.9492099\r\n - 134 : 0.968359\r\n - 135 : 0.961526\r\n - 136 : 0.9675772\r\n - 137 : 0.9473796\r\n - 138 : 0.9685749\r\n - 139 : 0.9654633\r\n - 140 : 0.9687688\r\n - 141 : 0.9504932\r\n - 142 : 0.9691511\r\n - 143 : 0.9665062\r\n - 144 : 0.9718524\r\n - 145 : 0.9436379\r\n - 146 : 0.9687477\r\n - 147 : 0.9655094\r\n - 148 : 0.9710371\r\n - 149 : 0.9442329\r\n - 150 : 0.9679898\r\n - 151 : 0.9687661\r\n - 152 : 0.9667206\r\n - 153 : 0.9499748\r\n - 154 : 0.9711047\r\n - 155 : 0.9650826\r\n - 156 : 0.9675245\r\n - 157 : 0.9424814\r\n - 158 : 0.9717015\r\n - 159 : 0.961861\r\n - 160 : 0.9632423\r\n - 161 : 0.95027\r\n - 162 : 0.9681548\r\n - 163 : 0.95991\r\n - 164 : 0.9622825\r\n - 165 : 0.9419831\r\n - 166 : 0.9676843\r\n - 167 : 0.9502627\r\n - 168 : 0.9604739\r\n - 169 : 0.9390262\r\n - 170 : 0.9632315\r\n - 171 : 0.9489474\r\n - 172 : 0.9538567\r\n - 173 : 0.9387113\r\n - 174 : 0.9685857\r\n - 175 : 0.9537058\r\n - 176 : 0.9516653\r\n - 177 : 0.9406225\r\n - 178 : 0.9654861\r\n - 179 : 0.9563531\r\n - 180 : 0.9503596\r\n - 181 : 0.9421797\r\n - 182 : 0.9610486\r\n - 183 : 0.9516525\r\n - 184 : 0.9575865\r\n - 185 : 0.9422593\r\n - 186 : 0.9571754\r\n - 187", "url": "https://github.com/pytorch/pytorch/issues/33022", "state": "closed", "labels": [ "oncall: mobile", "module: ios" ], "created_at": "2020-02-05T21:41:29Z", "updated_at": "2020-02-07T19:12:04Z", "user": "rooseveltrp" }, { "repo": "pytorch/vision", "number": 1848, "title": "training FCN and DeepLab for segmentation", "body": "does PyTorch provide steps on how to use the deeplab or fcn for training a segmentation task?\r\nif it already exists, where I can find it?", "url": "https://github.com/pytorch/vision/issues/1848", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: semantic segmentation" ], "created_at": "2020-02-04T19:34:28Z", "updated_at": "2020-02-13T17:50:09Z", "user": "isalirezag" }, { "repo": "pytorch/vision", "number": 1847, "title": "Required range is confusing in torchvision.utils.save_image", "body": "https://discuss.pytorch.org/t/float-vs-int-in-torchvision-utils-save-image/68596", "url": "https://github.com/pytorch/vision/issues/1847", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2020-02-04T07:47:28Z", "updated_at": "2025-01-23T10:55:55Z", "user": "chinglamchoi" }, { "repo": "pytorch/pytorch", "number": 32690, "title": "How to customize build torchscript model to be used in end devices codebase", "body": "## \ud83d\ude80 Feature\r\nI want to compile my model to be executed in the Python/C script running on our customers computers/end devices, without the need to load the entire torch/libtorch package, but only what is needed based on the model operations.\r\n\r\n## Motivation\r\nCurrently, the size of my ResNet model (for example) is ~100MB but it needs torch/libtorch, which requires ~1.5GB of space.\r\nEnd devices (smart cameras, robots, etc.) are low in resources. R&D efforts for deployment on end devices includes a large efforts to optimize the model and reduce its size to minimum. Having my model accompanied by torch/libtorch is a difficult restriction. 
I am aware that the mobile community is leading the attention for similar features. However, considering modern smartphones resources, there is even a greater need for such a solution for other end devices.\r\n\r\n## Current status\r\nCurrently i am doing this series of commands:\r\n`model = torchvision.models.resnet50(pretrained=True)`\r\n`model.eval()`\r\n`example = torch.ones(1, 3, 224, 224)`\r\n`traced_model = torch.jit.trace(model, example)`\r\n`ops = torch.jit.export_opnames(model)`\r\n`traced_model.save('traced_model.pt')`\r\n`with open('model_ops.yaml', 'w') as output:`\r\n` yaml.dump(ops, output)`\r\nThe request is to enable building a model i can use in another python/c script without the need to load the entire torch or libtorch packages, but only what is needed based on the model operations.\r\n\r\n## Alternatives\r\nI am not aware of such alternatives. Will be happy to hear about them, if there are any.\r\n\r\n\r\n\r\ncc @suo", "url": "https://github.com/pytorch/pytorch/issues/32690", "state": "open", "labels": [ "oncall: jit", "triaged", "oncall: mobile" ], "created_at": "2020-01-28T10:11:07Z", "updated_at": "2020-02-28T18:54:55Z", "user": "danmalowany-allegro" }, { "repo": "pytorch/tutorials", "number": 833, "title": "Using encoder output in attention model", "body": "I study this [NLP from scratch](https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html) tutorial. Encoder's output shape is `(seq_len, batch, hidden_size)`\r\n\r\nWhy does the author only save `[0, 0]` part (later is needed for attention weights) but not `[0]`:\r\nhttps://github.com/pytorch/tutorials/blob/8244bffa52641fab0c37d35c6843faa1beaba06b/intermediate_source/seq2seq_translation_tutorial.py#L563\r\n\r\nIs there a mistake?", "url": "https://github.com/pytorch/tutorials/issues/833", "state": "closed", "labels": [], "created_at": "2020-01-25T18:06:14Z", "updated_at": "2020-01-29T19:03:51Z", "comments": 0, "user": "kenenbek" }, { "repo": "pytorch/pytorch", "number": 32485, "title": "How to specify pytroch as a package requirement on windows ?", "body": "## \u2753 Questions and Help\r\n\r\nI have a python package which depends on pytorch and which I\u2019d like windows users to be able to install via pip (the specific package is: https://github.com/mindsdb/lightwood, but I don\u2019t think this is very relevant to my question).\r\n\r\nWhat are the best practices for going about this ?\r\n\r\nAre there some project I could use as examples ?\r\n\r\nIt seems like the pypi hosted version of torch & torchvision aren\u2019t windows compatible and the \u201cgetting started\u201d section suggests installing from the custom pytorch repository, but beyond that I\u2019m not sure what the ideal solution would be to incorporate this as part of a setup script.", "url": "https://github.com/pytorch/pytorch/issues/32485", "state": "closed", "labels": [], "created_at": "2020-01-22T09:31:44Z", "updated_at": "2020-01-22T10:27:03Z", "user": "George3d6" }, { "repo": "pytorch/tutorials", "number": 828, "title": "Multiple input tutorial", "body": "I am currently trying to build a model that takes two different inputs into account, trying to generalize the interaction between both from their properties. \r\nHowever, I cannot find any resource on how to build a dataset that allows multiple inputs, while it seems to be quite simple to build the neural net itself. Yet, I haven't found a solution. It would be great to address this issue in the PyTorch documentation, or give a tutorial for this. 
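\r\n\r\nFor what it's worth, the pattern I have pieced together so far looks roughly like the sketch below (hypothetical names, minimal example); having something like this in an official tutorial would already help a lot:\r\n\r\n```python\r\nimport torch\r\nimport torch.nn as nn\r\nfrom torch.utils.data import Dataset, DataLoader\r\n\r\nclass TwoInputDataset(Dataset):\r\n    def __init__(self, feats_a, feats_b, targets):\r\n        self.feats_a, self.feats_b, self.targets = feats_a, feats_b, targets\r\n\r\n    def __len__(self):\r\n        return len(self.targets)\r\n\r\n    def __getitem__(self, idx):\r\n        # Return a tuple; the default collate_fn batches each element separately.\r\n        return self.feats_a[idx], self.feats_b[idx], self.targets[idx]\r\n\r\nclass TwoInputNet(nn.Module):\r\n    def __init__(self):\r\n        super().__init__()\r\n        self.branch_a = nn.Linear(8, 16)\r\n        self.branch_b = nn.Linear(4, 16)\r\n        self.head = nn.Linear(32, 1)\r\n\r\n    def forward(self, a, b):\r\n        # Encode each input separately, then fuse by concatenation.\r\n        h = torch.cat([torch.relu(self.branch_a(a)), torch.relu(self.branch_b(b))], dim=1)\r\n        return self.head(h)\r\n\r\nds = TwoInputDataset(torch.randn(100, 8), torch.randn(100, 4), torch.randn(100, 1))\r\nloader = DataLoader(ds, batch_size=16, shuffle=True)\r\nmodel = TwoInputNet()\r\nfor a, b, y in loader:\r\n    loss = nn.functional.mse_loss(model(a, b), y)\r\n```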
", "url": "https://github.com/pytorch/tutorials/issues/828", "state": "closed", "labels": [], "created_at": "2020-01-20T08:21:57Z", "updated_at": "2021-06-09T21:14:17Z", "comments": 6, "user": "THinnerichs" }, { "repo": "pytorch/pytorch", "number": 32418, "title": "how to install pytorch on AMD GPU", "body": "I find that the pytorch offer one version of downloading which not requires CUDA. And I follow the instruction.\r\nI choose the pytorch 1.4.\r\nMy OS is Windows.\r\nPip is used to install.\r\nMy version of python is python 3.6\r\nCUDA None\r\nand I run the command pip3 install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html\r\nHowever, here comes two errors\r\nERROR: Could not find a version that satisfies the requirement torch==1.4.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)\r\nERROR: No matching distribution found for torch==1.4.0+cpu\r\nWhy? Thanks a lot for help", "url": "https://github.com/pytorch/pytorch/issues/32418", "state": "closed", "labels": [], "created_at": "2020-01-20T06:19:18Z", "updated_at": "2023-04-10T18:58:46Z", "user": "PIPIKAI-Sung" }, { "repo": "pytorch/pytorch", "number": 32403, "title": "How to accelerate the compiling of pytorch ", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\nI modify some file of Aten for some reason, when I compile the pytorch project, it takes a lot of time, almost 5 minutes in my computer...\r\npython setup install costs a lot of time, can anybody help me accelerate the compiling of pytorch, thanks a lot ", "url": "https://github.com/pytorch/pytorch/issues/32403", "state": "open", "labels": [ "module: build", "triaged" ], "created_at": "2020-01-19T13:42:14Z", "updated_at": "2020-01-21T23:25:36Z", "user": "daydayfun" }, { "repo": "pytorch/java-demo", "number": 3, "title": "how and where is it better to install the LIBTORCH library localy for the project?", "body": "how and where is it better to install the LIBTORCH library localy for the project in linux(Ubuntu)?\r\nWhile make proj Intellij idea write Error: \"A problem occurred evaluating root project 'java-demo'. > LIBTORCH_HOME not present in environment.\"\r\n", "url": "https://github.com/pytorch/java-demo/issues/3", "state": "closed", "labels": [], "created_at": "2020-01-18T18:03:04Z", "updated_at": "2020-04-29T02:53:34Z", "user": "vit1967" }, { "repo": "pytorch/pytorch", "number": 32282, "title": "How to convert layer_norm layer to ONNX?", "body": "I\u2019m trying to convert my model to ONNX format for further deployment in TensorRT. 
Here is a sample code to illustrate my problem in layer_norm here.\r\n\r\n``` python\r\nimport torch\r\nfrom torch import nn\r\n\r\nclass ExportModel(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n\r\n def forward(self, x):\r\n # n, c, h, w = x.shape\r\n # y = nn.functional.layer_norm(x, [c, h, w]) # not working\r\n # y = nn.functional.layer_norm(x, x.size()[1:]) # not working\r\n y = nn.functional.layer_norm(x, [16, 32, 128])\r\n\r\n return y\r\n\r\ndef main():\r\n model = ExportModel()\r\n\r\n dummy_input = torch.randn(64, 16, 32, 128)\r\n input_names = [ \"input\" ]\r\n output_names = [ \"output\" ]\r\n\r\n with torch.no_grad():\r\n torch.onnx.export(\r\n model, dummy_input, \"sample.onnx\", verbose=True,\r\n input_names=input_names, output_names=output_names\r\n )\r\n return\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\nIt could only work when the parameter of layer_norm is constant number. If not, the following error will occur.\r\n\r\n``` shell\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 31, in <module>\r\n main()\r\n File \"sample.py\", line 26, in main\r\n verbose=True, input_names=input_names, output_names=output_names\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/onnx/__init__.py\", line 148, in export\r\n strip_doc_string, dynamic_axes, keep_initializers_as_inputs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py\", line 66, in export\r\n dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py\", line 409, in _export\r\n fixed_batch_size=fixed_batch_size)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py\", line 289, in _model_to_graph\r\n fixed_batch_size=fixed_batch_size)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py\", line 132, in _optimize_graph\r\n graph = torch._C._jit_pass_onnx(graph, operator_export_type)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/onnx/__init__.py\", line 179, in _run_symbolic_function\r\n return utils._run_symbolic_function(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py\", line 647, in _run_symbolic_function\r\n return op_fn(g, *inputs, **attrs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/onnx/symbolic_helper.py\", line 128, in wrapper\r\n args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/onnx/symbolic_helper.py\", line 128, in <listcomp>\r\n args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/onnx/symbolic_helper.py\", line 81, in _parse_arg\r\n \"', since it's not constant, please try to make \"\r\nRuntimeError: Failed to export an ONNX attribute 'onnx::Gather', since it's not constant, please try to make things (e.g., kernel size) static if possible\r\n```\r\n\r\nI have few code blocks in my model have layer_norm op. It would turn into some ugly code if I explicitly mark all parameters constant number. Is there any \u201cbest practice\u201d of how to use dynamic shape for this kind of use case?\r\n\r\nAlso, I have posted the same issue on [forum](https://discuss.pytorch.org/t/how-to-convert-layer-norm-layer-to-onnx/66841). 
I'm not sure where is the better place for this kind of quesion, so I duplicate the issue here.\r\n\r\nThanks in advance.\n\ncc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/32282", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2020-01-16T10:53:52Z", "updated_at": "2020-03-23T08:24:02Z", "user": "rtrobin" }, { "repo": "pytorch/vision", "number": 1757, "title": "Torchvision Resnet 50 accuracy", "body": "Hey, Pytorch\u2019s (torchvision) Resnet 50 accuracy is declared to be 76.15.\r\nBut when I\u2019m using the training script from PyTorch\u2019s repo, which is mentioned in the official torchvision website(https://pytorch.org/docs/stable/torchvision/models.html#classification):\r\n[https://github.com/pytorch/examples/blob/master/imagenet/main.py]\r\nand the Resnet50 from torchvision:\r\n[https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py]\r\nWhen training it, after one epoch I\u2019m getting an accuracy of 76.6, how can it be? isn\u2019t the models fully trained?\r\n\r\nThanks!", "url": "https://github.com/pytorch/vision/issues/1757", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2020-01-16T09:43:54Z", "updated_at": "2021-06-30T15:08:29Z", "user": "Esaada" }, { "repo": "pytorch/vision", "number": 1751, "title": " module 'torchvision' has no attribute 'ops'", "body": "torchvision. ops implements operators that are specific for Computer Vision. Those operators currently do not support TorchScript. Performs non-maximum suppression (NMS) on the boxes according to their intersection-over-union (IoU)\r\n\r\noutput[image_i] = pred[torchvision.ops.boxes.batched_nms(pred[:, :4], pred[:, 4], c, iou_thres)]\r\n\r\nAttributeError: module 'torchvision' has no attribute 'ops'\r\n\r\n[https://github.com/ultralytics/yolov3/blob/master/utils/utils.py](url)\r\n\r\ncan anyone please help me to bypass this problem?", "url": "https://github.com/pytorch/vision/issues/1751", "state": "closed", "labels": [ "question", "module: ops" ], "created_at": "2020-01-15T15:01:54Z", "updated_at": "2020-01-15T18:45:32Z", "user": "omizonly" }, { "repo": "pytorch/vision", "number": 1737, "title": "Pyramid layer", "body": "I want to extract the third layer of feature pyramid from \r\n\r\nfeatures = self.backbone(images.tensors) in generalized_rcnn.py\r\n\r\nany help please?", "url": "https://github.com/pytorch/vision/issues/1737", "state": "open", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2020-01-10T15:44:20Z", "updated_at": "2020-01-10T16:29:44Z", "user": "MitraTj" }, { "repo": "pytorch/pytorch", "number": 32041, "title": "How to export L2-normalization to onnx", "body": "## \ud83d\ude80 Feature\r\nSupport export for LpNormalization from PyTorch to ONNX, thus it could be used in TensorRT model.\r\n\r\n\n\ncc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/32041", "state": "closed", "labels": [ "module: onnx", "triaged", "enhancement", "onnx-needs-info" ], "created_at": "2020-01-10T14:37:38Z", "updated_at": "2022-10-24T18:08:40Z", "user": "stoneyang" }, { "repo": "pytorch/vision", "number": 1732, "title": "How to use Resnet to deal with one channel input through pytorch.hub ?", "body": "I did this to load the Resnet model, and since my input contains only one channel, the model does not work.\r\n\r\n`model = torch.hub.load('pytorch/vision:v0.4.2', 'resnet18', 
pretrained=True)`\r\n\r\nI know how to modify the 'resnet.py' file to satisfy my demands, but that means I must include the modified 'resnet.py' file in my project, which may be unnecessary. It will be a lot better if the model can be loaded simply from pytorch.\r\n\r\nAnyone has solutions? Thanks a lot.\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/1732", "state": "closed", "labels": [ "question", "module: models", "topic: classification" ], "created_at": "2020-01-09T09:22:50Z", "updated_at": "2020-01-09T20:22:18Z", "user": "PhilWallace" }, { "repo": "pytorch/pytorch", "number": 31984, "title": "Question about how to predict the derivation of the output?", "body": "I expect a neural network predict a value and the derivation of value.Is the following code the correct way?\r\n```python\r\nimport torch\r\nfrom torch import nn\r\nfrom torch.autograd import grad\r\n\r\nclass net(nn.Module):\r\n def __init__(self):\r\n super(net, self).__init__()\r\n self.lin1 = nn.Linear(3, 30)\r\n self.lin2 = nn.Linear(30, 1)\r\n\r\n def forward(self, p):\r\n x = self.lin1(p)\r\n x = nn.ReLU()(x)\r\n return self.lin2(x)\r\n\r\nx = torch.randn(1000, 3)\r\ny = (5 * torch.sin(x) + 3 * torch.cos(x)).sum(dim=-1).unsqueeze(-1)\r\nz = (5 * torch.cos(x) - 3 * torch.sin(x)).sum(dim=-1).unsqueeze(-1)\r\nmodel = net()\r\noptimizer = torch.optim.Adam(model.parameters(), lr=3e-3)\r\n\r\nfor epoch in range(10000):\r\n model.train()\r\n x.requires_grad = True\r\n optimizer.zero_grad()\r\n output = model(x)\r\n grad_x = grad(output.sum(), x, retain_graph=True)[0]\r\n loss_y = nn.MSELoss()(output, y)\r\n loss_z = nn.MSELoss()(grad_x.sum(dim=-1).unsqueeze(-1), z)\r\n loss = loss_y + loss_z\r\n loss.backward(retain_graph=True)\r\n optimizer.step()\r\n print('Loss_y = {:.4f} | Loss_z = {:.4f}.'.format(loss_y.item(), loss_z.item())\r\n```\r\nI check the grad_fn of variable ```loss_z```,find ```loss_y.grad_fn = <MseLossBackward object at 0x0000024F2AB8DF98>```,but ```loss_z.grad_fn = None```.So although ```loss_z``` decreases,this means the loss of the derivation of output doesn\u2019t participate in the gradient decent.Maybe just the model predicts ```y``` very well,so it can predict ```z``` well.If the dataset is not as easy as this form,loss_z even doesn\u2019t decrease.\r\nThen I try to only predict z without predict y,like the following code:\r\n```python\r\nimport torch\r\nfrom torch import nn\r\nfrom torch.autograd import grad\r\n\r\nclass net(nn.Module):\r\n def __init__(self):\r\n super(net, self).__init__()\r\n self.lin1 = nn.Linear(3, 30)\r\n self.lin2 = nn.Linear(30, 1)\r\n\r\n def forward(self, p):\r\n x = self.lin1(p)\r\n x = nn.ReLU()(x)\r\n return self.lin2(x)\r\n\r\nx = torch.randn(100, 3)\r\ny = (5 * torch.sin(x) + 3 * torch.cos(x)).sum(dim=-1).unsqueeze(-1)\r\nz = (5 * torch.cos(x) - 3 * torch.sin(x)).sum(dim=-1).unsqueeze(-1)\r\nmodel = net()\r\noptimizer = torch.optim.Adam(model.parameters(), lr=3e-3)\r\n\r\nfor epoch in range(1000):\r\n model.train()\r\n x.requires_grad = True\r\n optimizer.zero_grad()\r\n output = model(x)\r\n grad_x = grad(output.sum(), x, retain_graph=True)[0]\r\n loss_z = nn.MSELoss()(grad_x.sum(dim=-1).unsqueeze(-1), z)\r\n print(loss_z.grad_fn) # None\r\n loss_z.backward()\r\n optimizer.step()\r\n print('Loss_z = {:.4f}.'.format(loss_z.item()))\r\n```\r\nThis code can't run,with the error:\r\n```python\r\nTraceback (most recent call last):\r\n File \"c:/Users/wz/Desktop/test.py\", line 33, in <module>\r\n loss_z.backward()\r\n File 
\"C:\\Users\\wz\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\torch\\tensor.py\", line 118, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n File \"C:\\Users\\wz\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\torch\\autograd\\__init__.py\", line 93, in backward\r\n allow_unreachable=True) # allow_unreachable flag\r\nRuntimeError: element 0 of tensors does not require grad and does not have a grad_fn\r\n```\r\nI print ```loss_z.grad_fn``` and find it's None,but I don't know how to fix it.So how to predict the derivation of the output correctly?", "url": "https://github.com/pytorch/pytorch/issues/31984", "state": "closed", "labels": [], "created_at": "2020-01-09T07:31:25Z", "updated_at": "2020-01-09T18:57:24Z", "user": "thu-wangz17" }, { "repo": "pytorch/vision", "number": 1723, "title": "torchvision fail to use GPU.", "body": "While I am using [detectron2](https://github.com/facebookresearch/detectron2), I meet the problem that some function in torchvision can't use GPU.\r\n\r\nThe details are here: https://github.com/facebookresearch/detectron2/issues/469\r\n\r\nIt seems an install problem. Directly using conda to install torchvision should be ok for most situations, but I am not sure whether this will lead to cuda usage error. \r\n\r\nCould you give some suggestions to fix this problem? : )", "url": "https://github.com/pytorch/vision/issues/1723", "state": "closed", "labels": [ "question", "topic: build" ], "created_at": "2020-01-07T09:23:49Z", "updated_at": "2020-05-11T12:18:51Z", "user": "dihuangdh" }, { "repo": "pytorch/vision", "number": 1720, "title": "Enquiry on Implementation of RandomHorizontalFlip (in transforms.py from references folder)", "body": "I am a bit confused by the implementation RandomHorizontalFlip defined [here](https://github.com/pytorch/vision/blob/master/references/detection/transforms.py). Note the following snippet extracted:\r\n```\r\nclass RandomHorizontalFlip(object):\r\n def __init__(self, prob):\r\n self.prob = prob\r\n\r\n def __call__(self, image, target):\r\n if random.random() < self.prob:\r\n height, width = image.shape[-2:]\r\n image = image.flip(-1)\r\n bbox = target[\"boxes\"]\r\n bbox[:, [0, 2]] = width - bbox[:, [2, 0]]\r\n target[\"boxes\"] = bbox\r\n```\r\n\r\nshould ```bbox[:, [0, 2]] = width - bbox[:, [2, 0]]``` be ```bbox[:, [1, 3]] = width - bbox[:, [3, 1]]``` instead? \r\nLet original bounding box be ```[xmin, ymin, xmax, ymax]``` and image have size ```(height, width)```. After horizontal flip, the bounding box location should be ```[xmin, width - ymax, xmax, width - ymin]```. \r\n(Please correct me if I have something wrong)", "url": "https://github.com/pytorch/vision/issues/1720", "state": "closed", "labels": [ "question", "module: transforms", "module: reference scripts" ], "created_at": "2020-01-05T11:04:12Z", "updated_at": "2020-01-08T10:28:44Z", "user": "riven314" }, { "repo": "pytorch/pytorch", "number": 31869, "title": "How to save int value in ctx.save_for_backward", "body": "I want to define a new memory op, and first impl a new memory function(torch.autograd.Function), but forward and backward are static method, \r\nand inputs have some int value for some config(like stride in conv function), ctx.save_for_backward can't save int value, How to fix this problem? \r\n\r\nFirst, i want to follow torch.nn.conv1d example, but i can't find any source for F.conv1d function? 
\r\n", "url": "https://github.com/pytorch/pytorch/issues/31869", "state": "closed", "labels": [], "created_at": "2020-01-05T07:13:11Z", "updated_at": "2020-01-06T05:22:12Z", "user": "kuramawzw1" }, { "repo": "pytorch/pytorch", "number": 31865, "title": "how to install pytorch 0.4.1", "body": "For some reason I have to install 0.4.1, I tired many times including install from source, I tried to install 0.4.1 under cuda9.0 and cuda 9.2, but it failed. my card is 2080ti. please help and tell me if there is a way to solve the problem, thanks!", "url": "https://github.com/pytorch/pytorch/issues/31865", "state": "closed", "labels": [], "created_at": "2020-01-05T03:25:46Z", "updated_at": "2020-01-06T05:24:02Z", "user": "lapetite123" }, { "repo": "pytorch/pytorch", "number": 31853, "title": "How to modify the internal calculation process of LSTM in pytorch-v1.1.0?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\nI want to modify the calculation process inside the LSTM. However, when I queried the _VF.lstm() method, no corresponding python implementation was found. Then I found the C++ implementation at this address (i.e., https://github.com/pytorch/pytorch/tree/master/aten/src/ATen/native/RNN.cpp) on GitHub. My question is which files need to be modified under the local PyTorch directory.\r\n", "url": "https://github.com/pytorch/pytorch/issues/31853", "state": "closed", "labels": [], "created_at": "2020-01-04T03:14:06Z", "updated_at": "2020-01-06T05:24:17Z", "user": "zwd2016" }, { "repo": "pytorch/pytorch", "number": 31823, "title": "How to set quantization aware training scaling factors?", "body": "## \u2753 Questions and Help\r\n\r\nwhen i use quantization aware training , The weight tensor scaling factors is a standard floating point number.\r\nI want to convert my model as 8bit at FPGA, so the weight tensor scaling factor must be an integer power-of-two value exponent. Is there such an option? what should I do?\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/31823", "state": "closed", "labels": [], "created_at": "2020-01-03T10:53:36Z", "updated_at": "2020-01-06T05:24:37Z", "user": "sunkr1995" }, { "repo": "pytorch/pytorch", "number": 31821, "title": "How to convert model with a new QConv to onnx?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\nI wrapped a new conv class to support quantization. When I convert this model to onnx, I want each conv in the onnx model to have quantized parameters such as quantization bits. 
Could you tell me how to convert this model to onnx.\n\ncc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a", "url": "https://github.com/pytorch/pytorch/issues/31821", "state": "closed", "labels": [ "module: onnx", "oncall: quantization", "triaged" ], "created_at": "2020-01-03T07:56:58Z", "updated_at": "2021-12-16T00:16:35Z", "user": "Wuqiman" }, { "repo": "pytorch/pytorch", "number": 31818, "title": "How to distinguish different layers in hook\uff1f", "body": "## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\nA way to distinguish different layers in each module itself\r\n\r\n## Motivation\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\nI'd like to store some intermedia data such as output data of all conv layers, and I want to use hook. It is easy to judge which class the module is in hook function like \" if isinstance(module, nn.Conv2d):\", but if I want to store the data, I need a name which can be got in hook function to be the file name so that data from different layers will be saved in different files. e.g. \"save(filename, output)\" How can I get this name?\r\n\r\nEven if I collect all output data in a list and save it outside the hook function, I still don't know to which layer each data belongs. \r\n\r\n## Pitch\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\nThere is no way to identify each layer now, a unique name or id.\r\n```\r\ndef hook(moudle, input, output):\r\n name = get_unique_name(module)\r\n save(name+'.h5', output)\r\n\r\nfor n,m in model.named_module():\r\n m.register_forward_hook(hook)\r\n```\r\n\r\n## Alternatives\r\n\r\n<!-- A clear and concise description of any alternative solutions or features you've considered if any. -->\r\nBecause we can only get names from parent modules using\"named_module\", it will also work if I can pass arguments to hook function.\r\n```\r\ndef hook(moudle, input, output, n): \r\n save(n+'.h5', output) \r\n\r\nfor n,m in model.named_module(): \r\n m.register_forward_hook(hook, n)\r\n```\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context or screenshots about the feature request here. 
-->\r\n", "url": "https://github.com/pytorch/pytorch/issues/31818", "state": "open", "labels": [ "module: nn", "triaged" ], "created_at": "2020-01-03T03:48:13Z", "updated_at": "2022-09-22T22:55:48Z", "user": "I-Doctor" }, { "repo": "pytorch/examples", "number": 689, "title": "DDP training multi nodes nccl error", "body": "pytroch:1.3.1\r\npython:3.6\r\nsystem:ubuntu 16\r\ncuda:10.0\r\n\r\nwhen i run imagenet main.py in multi-nodes ,there is a error likes,(single node can run ):\r\nUse GPU: 1 for training\r\nUse GPU: 0 for training\r\n=> creating model 'resnet50'\r\n=> creating model 'resnet50'\r\n\r\nid-d3:714:714 [0] misc/ibvwrap.cu:63 NCCL WARN Failed to open libibverbs.so[.1]\r\nNCCL version 2.4.2+cuda9.0\r\n\r\nid-d3:715:715 [1] misc/ibvwrap.cu:63 NCCL WARN Failed to open libibverbs.so[.1]\r\n\r\nid-d3:715:790 [1] include/socket.h:382 NCCL WARN Connect to 172.18.0.1<49273> failed : Connection refused\r\nTraceback (most recent call last):\r\n File \"dis_train.py\", line 455, in <module>\r\n main()\r\n File \"dis_train.py\", line 120, in main\r\n mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))\r\n File \"/usr/local/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\", line 167, in spawn\r\n while not spawn_context.join():\r\n File \"/usr/local/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\", line 114, in join\r\n raise Exception(msg)\r\nException:\r\n\r\n-- Process 1 terminated with the following error:\r\nTraceback (most recent call last):\r\n File \"/usr/local/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\", line 19, in _wrap\r\n fn(i, *args)\r\n File \"/mnt/sdc/zhangwg/cv/image_review/src/dis_train.py\", line 197, in main_worker\r\n model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])\r\n File \"/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/distributed.py\", line 286, in __init__\r\n self.broadcast_bucket_size)\r\n File \"/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/distributed.py\", line 410, in _dist_broadcast_coalesced\r\n dist._dist_broadcast_coalesced(self.process_group, tensors, buffer_size, False)\r\nRuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:272, unhandled system error\r\n\r\ndoes somebody konw how to fix it ? \r\nthanks a lot ", "url": "https://github.com/pytorch/examples/issues/689", "state": "open", "labels": [ "distributed" ], "created_at": "2020-01-02T03:56:27Z", "updated_at": "2024-09-27T05:43:31Z", "comments": 1, "user": "ciel-zhang" }, { "repo": "pytorch/vision", "number": 1710, "title": "finetuning inception_v3", "body": "finetuning resnet18 as\r\ntrain: `models.resnet18(pretrained=True)`\r\nval: `models.resnet18()`\r\nBut while finetuning inception_v3 as above, I got poor result. 
The validation model must be\r\nval: `models.inception_v3(pretrained=True)`\r\nI spent a lot of time stuck here.", "url": "https://github.com/pytorch/vision/issues/1710", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2020-01-01T14:45:26Z", "updated_at": "2020-01-08T10:54:17Z", "user": "stormchasingg" }, { "repo": "pytorch/vision", "number": 1707, "title": "'loss_dict' error from 'train_one_epoch'", "body": "Navigating through the code in 'train_one_epoch', running this line:\r\n`loss_dict = model(image,targets)`\r\ngives the error:\r\n\r\n> 397 # RPN uses all feature maps that are available\r\n--> 398 features = list(features.values())\r\n 399 objectness, pred_bbox_deltas = self.head(features)\r\n 400 anchors = self.anchor_generator(images, features)\r\nAttributeError: 'tuple' object has no attribute 'values'\r\n\r\nCan anyone help?", "url": "https://github.com/pytorch/vision/issues/1707", "state": "closed", "labels": [ "question", "module: reference scripts" ], "created_at": "2019-12-30T10:32:15Z", "updated_at": "2020-10-10T09:43:24Z", "user": "madiltalay" }, { "repo": "pytorch/pytorch", "number": 31699, "title": "How to implement multiple different kernel shapes in 2D convolution?", "body": "Hello. I\u2019m currently working on a spherical convolutional network topic. Right now I\u2019m trying to develop a new kind of kernel for the convolutional layer.\r\nThe usual kernel is a 3x3 matrix. But for spherical images, after being projected onto a plane using equirectangular projection, there will be distortion. So I want to define the kernel as a spherical cap and project it onto the plane according to its position.\r\nFor example, the kernel at different positions of the sphere, relative to the panorama picture, will look like this:\r\n![image](https://user-images.githubusercontent.com/51077545/71574908-c6f5c000-2b25-11ea-80e1-1387ccdcfc82.png)\r\nIs there any way to define kernel shapes like these? I already have the full coordinates of the points in every case. I would very much appreciate any help and information.\r\nThank you guys very much!\r\n\n\ncc @csarofeen @ptrblck", "url": "https://github.com/pytorch/pytorch/issues/31699", "state": "closed", "labels": [ "feature", "module: nn", "triaged", "needs research" ], "created_at": "2019-12-30T08:59:59Z", "updated_at": "2020-01-07T15:14:06Z", "user": "vhchuong" }, { "repo": "pytorch/pytorch", "number": 31696, "title": "how to set cuda stream when calling ATen functions", "body": "at::Tensor a = at::ones({16, 32}, opts);\r\nat::Tensor b = at::randn({32, 64}, opts);\r\nat::Tensor b1 = at::randn({32, 64}, opts);\r\nauto c = at::matmul(a,b);\r\nauto c1 = at::matmul(a,b1);\r\nI want to call matmul on different CUDA streams:\r\ncall at::matmul(a,b) using stream1, and call at::matmul(a,b1) using stream2.\r\nHow do I do that? Thanks\n\ncc @ngimel", "url": "https://github.com/pytorch/pytorch/issues/31696", "state": "closed", "labels": [ "module: cuda", "triaged" ], "created_at": "2019-12-30T05:44:55Z", "updated_at": "2019-12-31T06:48:42Z", "user": "kuramawzw1" }, { "repo": "pytorch/pytorch", "number": 31685, "title": "What is the significance of torchvision._is_tracing()? ", "body": "## What is the significance of torchvision._is_tracing()? \u2753 \r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\n\ncc @fmassa", "url": "https://github.com/pytorch/pytorch/issues/31685", "state": "open", "labels": [ "triaged", "module: vision" ], "created_at": "2019-12-29T04:07:08Z", "updated_at": "2019-12-30T21:50:08Z", "user": "AyanKumarBhunia" }, { "repo": "pytorch/tutorials", "number": 799, "title": "Should I rewrote the \"dcgan_faces_tutorial notebook\" for the student to able to run it on colab for that 1GB dataset?", "body": "OK, I see it sets \" data root = \"/home/ubuntu/facebook/datasets/celeba...\"\". This is definitely not for Colab, and there are some students' computer does not have a GPU. I have a solution. I have rewritten it, so we can just download the zip file from google drive and unzip it. However, this requires to upload the 1GB data set to the student's own google drive, or someone can tell me that I can upload that 1 GB dataset to somewhere and be able to download with a link ending to .zip.\r\n\r\nThus, should I rewrite it so the student can run it on colab with GPU instead of their local computer?\r\n", "url": "https://github.com/pytorch/tutorials/issues/799", "state": "closed", "labels": [], "created_at": "2019-12-27T14:44:39Z", "updated_at": "2019-12-29T12:07:31Z", "comments": 0, "user": "AliceSum" }, { "repo": "pytorch/vision", "number": 1701, "title": "Errors with COCO targets", "body": "I am using the COCO dataset for training with annotations available at the COCO website.\r\nI use this dataloader:\r\n`train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, shuffle=True, num_workers=4, collate_fn=collate_fn)\r\n`\r\n\r\nRunning one iteration:\r\n`image, target = next(iter(train_dataloader))`\r\ngives 'image' and 'target' of type 'tuple'\r\n\r\nTo convert the 'target' into the desired type (list of dicts), I use:\r\n`target = [[{k: v for k, v in obj.items()} for obj in t] for t in target]`\r\n\r\nNow when I run:\r\n`loss_dict = model(image,target)`\r\nIt gives:\r\n\r\n> /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/transform.py in resize(self, image, target)\r\n 73 return image, target\r\n 74 \r\n---> 75 bbox = target[\"boxes\"]\r\n 76 bbox = resize_boxes(bbox, (h, w), image.shape[-2:])\r\n 77 target[\"boxes\"] = bbox\r\nTypeError: list indices must be integers or slices, not str\r\n\r\nI try to play around:\r\n```\r\nnew_target = {} \r\nnew_target['boxes'] = [t['bbox'] for t in target[0]] \r\nnew_target['labels'] = [t['category_id'] for t in target[0]] \r\nnew_target = [new_target]\r\n```\r\nAnd it gives another error:\r\n\r\n> /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/transform.py in resize_boxes(boxes, original_size, new_size)\r\n 135 ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(new_size, original_size))\r\n 136 ratio_height, ratio_width = ratios\r\n--> 137 xmin, ymin, xmax, ymax = boxes.unbind(1)\r\n 138 xmin = xmin * ratio_width\r\n 139 xmax = xmax * ratio_width\r\nAttributeError: 'list' object has no attribute 'unbind'\r\n\r\nCan anyone please help?", "url": "https://github.com/pytorch/vision/issues/1701", "state": "closed", "labels": [ "question", "module: reference scripts" ], "created_at": "2019-12-27T07:17:14Z", "updated_at": "2020-01-08T10:44:53Z", "user": "madiltalay" }, { "repo": "pytorch/pytorch", "number": 31643, "title": "how to know the input_shape of a pretrained model ?", "body": "\r\nhi,dear,\r\nJust wanna know the model's input_shape,\r\nbut got 
nothing,\r\nSo could you help me ?\r\nthx\r\n", "url": "https://github.com/pytorch/pytorch/issues/31643", "state": "closed", "labels": [], "created_at": "2019-12-27T01:12:54Z", "updated_at": "2019-12-27T01:49:43Z", "user": "ucasiggcas" }, { "repo": "pytorch/vision", "number": 1699, "title": "'train_one_epoch' gives error while using COCO annotations", "body": "I am using the COCO dataset for training with annotations available at the COCO website.\r\nWhile using the code from: [https://github.com/pytorch/vision/blob/master/references/detection/engine.py](url), I get an error:\r\n\r\n> AttributeError: 'list' object has no attribute 'items'\r\n\r\nfor the code snippet:\r\n`targets = [{k: v.to(device) for k, v in t.items()} for t in targets]`\r\n\r\nFurther digging into the issue, I find that the 'targets' I receive from the 'for loop':\r\n`for images, targets in metric_logger.log_every(data_loader, print_freq, header):`\r\n\r\nare in tuple format, with length equal to the batch_size.\r\nMoreover, each item in this tuple is a list, and each list consists of seven dictionaries containing the annotation information.\r\nWhen I apply this code to an individual object, it works fine:\r\n```\r\ntarget = targets[0]\r\nobj_1 = target[0]\r\ndict_1 = [{k: v for k, v in obj_1.items()}]\r\n```\r\nSo I suppose the code might be written as follows:\r\n`targets = [[{k: v for k, v in obj.items()} for obj in target] for target in targets]`\r\n\r\nCan you guys please confirm this and provide help in this regard?\r\n\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/1699", "state": "closed", "labels": [ "question", "module: reference scripts" ], "created_at": "2019-12-25T10:15:51Z", "updated_at": "2022-10-07T16:13:55Z", "user": "madiltalay" }, { "repo": "pytorch/text", "number": 669, "title": "How to use datasets for distributed training?", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n<!-- Please send questions or ask for help here. -->\r\n\r\nI built a dataset from my corpus, and use each line as an Example.\r\nIt works fine at first until I try to use it for distributed training.\r\n\r\nIt seems that torch.nn.parallel.DistributedParallel has to use DistributedSampler, but it's not compatible with torchtext datasets.\r\n\r\nIs there any idea to use torchtext datasets for distributed training?\r\nThx!", "url": "https://github.com/pytorch/text/issues/669", "state": "open", "labels": [], "created_at": "2019-12-22T03:20:56Z", "updated_at": "2020-01-02T17:56:48Z", "user": "styxjedi" }, { "repo": "pytorch/pytorch", "number": 31543, "title": "how to install torch by python3.8? ", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/31543", "state": "closed", "labels": [], "created_at": "2019-12-21T03:15:45Z", "updated_at": "2019-12-21T05:43:47Z", "user": "Fenghuixueha" }, { "repo": "pytorch/android-demo-app", "number": 46, "title": "How to create custom model for the PyTorchDemoApplication?Thanks", "body": "Hi, I want to learn about how to apply pytorch model on andorid platform. And this android-demo-app is very useful to me. 
\r\nThe PyTorchDemoApp has already been deployed on my android mobile ,and it can be runned successfully.\r\nBut I want to know how to create a custom model with my own Image data.\r\nWhen I copy the model.pt from HelloWorldApp, the PyTorchDemoApp crashes and tells me \" Sorry There is an error\"\r\nCan anyone tell me how to create a custom model? \r\nThanks very much.\r\n", "url": "https://github.com/pytorch/android-demo-app/issues/46", "state": "open", "labels": [], "created_at": "2019-12-20T08:55:31Z", "updated_at": "2021-06-27T18:52:02Z", "user": "btdan" }, { "repo": "pytorch/xla", "number": 1490, "title": "pytorch/xla vs TF", "body": "## \u2753 Questions and Help\r\n\r\nHi, is training a model with pytorch xla slower than training a model with tf? Are there any other limitations to using pytorch/xla compared to TF?", "url": "https://github.com/pytorch/xla/issues/1490", "state": "closed", "labels": [ "question" ], "created_at": "2019-12-19T21:03:11Z", "updated_at": "2019-12-19T22:01:41Z", "user": "bilal2vec" }, { "repo": "pytorch/pytorch", "number": 31466, "title": "how to pass trained weight to neural network module", "body": "Suppose i used own data and trained a `conv1d`, how could we pass the weight to `conv1d` in c++ like what the `PyTorch` acts ?\r\n\r\nNoticed that the implementation of `conv1d` in `PyTorch`, we could update the parameters like `in_channels`, `out_channels`, etc in the `__init__` function. If we want to update the `weights` and `bias`, which are from pretrained model, we could rewrite the `Conv1d`, which may not so difficult.\r\n\r\n```\r\nclass Conv1d(_ConvNd):\r\n\r\n def __init__(self, in_channels, out_channels, kernel_size, stride=1,\r\n padding=0, dilation=1, groups=1,\r\n bias=True, padding_mode='zeros'):\r\n kernel_size = _single(kernel_size)\r\n stride = _single(stride)\r\n padding = _single(padding)\r\n dilation = _single(dilation)\r\n super(Conv1d, self).__init__(\r\n in_channels, out_channels, kernel_size, stride, padding, dilation,\r\n False, _single(0), groups, bias, padding_mode)\r\n\r\n def forward(self, input):\r\n if self.padding_mode == 'circular':\r\n expanded_padding = ((self.padding[0] + 1) // 2, self.padding[0] // 2)\r\n return F.conv1d(F.pad(input, expanded_padding, mode='circular'),\r\n self.weight, self.bias, self.stride,\r\n _single(0), self.dilation, self.groups)\r\n return F.conv1d(input, self.weight, self.bias, self.stride,\r\n self.padding, self.dilation, self.groups)\r\n```\r\nWhile notice the `conv1d` implementation in libtorch, noticed that \r\n\r\n```\r\nnamespace nn {\r\nConv1dImpl::Conv1dImpl(\r\n Conv1dOptions options_)\r\n : ConvNdImpl(\r\n detail::ConvNdOptions<1>(\r\n /*in_channels=*/options_.in_channels(),\r\n /*out_channels=*/options_.out_channels(),\r\n /*kernel_size=*/options_.kernel_size())\r\n .stride(options_.stride())\r\n .padding(options_.padding())\r\n .dilation(options_.dilation())\r\n .transposed(false)\r\n .output_padding(0)\r\n .groups(options_.groups())\r\n .bias(options_.bias())\r\n .padding_mode(options_.padding_mode())) {}\r\n\r\nTensor Conv1dImpl::forward(const Tensor& input) {\r\n if (c10::get_if<enumtype::kCircular>(&options.padding_mode())) {\r\n std::vector<int64_t> expanded_padding = {((*options.padding())[0] + 1) / 2, (*options.padding())[0] / 2};\r\n return F::detail::conv1d(\r\n F::detail::pad(input, expanded_padding, torch::kCircular, 0),\r\n weight, bias,\r\n options.stride(),\r\n /*padding=*/0,\r\n options.dilation(),\r\n options.groups());\r\n }\r\n return F::detail::conv1d(\r\n input,\r\n 
weight,\r\n bias,\r\n options.stride(),\r\n options.padding(),\r\n options.dilation(),\r\n options.groups());\r\n}\r\n``` \r\nSo how could we pass the weight in the c++ version ?", "url": "https://github.com/pytorch/pytorch/issues/31466", "state": "closed", "labels": [], "created_at": "2019-12-19T10:18:14Z", "updated_at": "2019-12-19T14:53:57Z", "user": "OswaldoBornemann" }, { "repo": "pytorch/examples", "number": 682, "title": "\"EOFError: Ran out of input\u201c occurred in example mnist_hogwild", "body": "Hi, when I ran example **mnist_hogwild** on cuda, errors occurred as below:\r\n```\r\nFile \"main.py\", line 66, in <module>\r\n p.start()\r\n File \"D:\\Python3.7.3\\lib\\multiprocessing\\process.py\", line 112, in start\r\n self._popen = self._Popen(self)\r\n File \"D:\\Python3.7.3\\lib\\multiprocessing\\context.py\", line 223, in _Popen\r\n return _default_context.get_context().Process._Popen(process_obj)\r\n File \"D:\\Python3.7.3\\lib\\multiprocessing\\context.py\", line 322, in _Popen\r\n return Popen(process_obj)\r\n File \"D:\\Python3.7.3\\lib\\multiprocessing\\popen_spawn_win32.py\", line 89, in __init__\r\n reduction.dump(process_obj, to_child)\r\n File \"D:\\Python3.7.3\\lib\\multiprocessing\\reduction.py\", line 60, in dump\r\n ForkingPickler(file, protocol).dump(obj)\r\n File \"D:\\Python3.7.3\\lib\\site-packages\\torch\\multiprocessing\\reductions.py\", line 232, in reduce_tensor\r\n event_sync_required) = storage._share_cuda_()\r\nRuntimeError: cuda runtime error (71) : operation not supported at C:\\w\\1\\s\\windows\\pytorch\\torch/csrc/generic/StorageSharing.cpp:245\r\n\r\nC:\\Users\\audrey\\Desktop\\test>Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"D:\\Python3.7.3\\lib\\multiprocessing\\spawn.py\", line 105, in spawn_main\r\n exitcode = _main(fd)\r\n File \"D:\\Python3.7.3\\lib\\multiprocessing\\spawn.py\", line 115, in _main\r\n self = reduction.pickle.load(from_parent)\r\n```\r\nMy system: **Windows10**\r\ndevice: GeForce RTX 2080 Ti\r\nPyTorch version: 1.2.0\r\n\r\nHow to fix this? Thanks!", "url": "https://github.com/pytorch/examples/issues/682", "state": "open", "labels": [ "distributed", "pickle" ], "created_at": "2019-12-19T05:06:30Z", "updated_at": "2023-10-11T06:19:14Z", "comments": 2, "user": "audreycs" }, { "repo": "pytorch/examples", "number": 681, "title": "SNLI: The examples doesn't work", "body": "\r\nhelp, I try to run the snli task in examples\uff0cand I got the errors as follow:\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Users/syk/Desktop/git/examples/snli/train.py\", line 35, in <module>\r\n inputs.vocab.load_vectors(wv_dir=args.data_cache, wv_type=args.word_vectors, wv_dim=args.d_embed)\r\nTypeError: load_vectors() missing 1 required positional argument: 'vectors'\r\n\r\nit seems that the vocab.load_vectors need an argument vectors according to the definition of this function.\r\n\r\nDoes anyone know how to solve this? \r\nI'm not sure if it's my problem. 
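From the signature it looks like the vocab now wants an explicit `vectors` argument instead of `wv_dir`/`wv_type`/`wv_dim`, so maybe the call in train.py should be something like this (just my guess after reading torchtext's vocab code, not verified; `inputs` and `args` are the objects already used in snli/train.py):

```python
from torchtext.vocab import GloVe

# pass a Vectors object (or a name string such as "glove.6B.100d") instead of the old keyword args
inputs.vocab.load_vectors(vectors=GloVe(name="6B", dim=args.d_embed, cache=args.data_cache))
```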
 Thank you very much!", "url": "https://github.com/pytorch/examples/issues/681", "state": "closed", "labels": [], "created_at": "2019-12-18T12:50:50Z", "updated_at": "2020-09-13T13:50:53Z", "comments": 0, "user": "Youarerare" }, { "repo": "pytorch/tutorials", "number": 793, "title": "Explain how we can use same dataset for training and non-training", "body": "In the [Training a Classifier tutorial](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py), explain how we can use the same dataset for training and non-training. Is it because we shuffle to randomize and use a subset?", "url": "https://github.com/pytorch/tutorials/issues/793", "state": "closed", "labels": [ "60_min_blitz" ], "created_at": "2019-12-16T23:24:55Z", "updated_at": "2020-05-18T17:58:46Z", "comments": 1, "user": "jlin27" }, { "repo": "pytorch/tutorials", "number": 790, "title": "Clarify why there are 6 output channels", "body": "In the [Define the network section of the Neural Network tutorial](https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py), clarify why there are 6 outputs. Is it due to bias? \r\n\r\n![image](https://user-images.githubusercontent.com/8042156/70950107-730eb580-2014-11ea-8cc2-21b28ed3e15b.png)\r\n", "url": "https://github.com/pytorch/tutorials/issues/790", "state": "closed", "labels": [ "60_min_blitz" ], "created_at": "2019-12-16T22:58:37Z", "updated_at": "2020-05-18T17:59:34Z", "comments": 4, "user": "jlin27" }, { "repo": "pytorch/vision", "number": 1669, "title": "Question regarding only bbox", "body": "https://github.com/pytorch/vision/blob/bce17fddd4da744e23512b8e224d085818e6d921/references/detection/coco_utils.py#L231\r\n``\r\nWhat if there are only bbox annotations and no segmentation available at all?! ", "url": "https://github.com/pytorch/vision/issues/1669", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: object detection" ], "created_at": "2019-12-16T14:24:54Z", "updated_at": "2019-12-16T14:54:44Z", "user": "gaussiangit" }, { "repo": "pytorch/tutorials", "number": 772, "title": "Text classification dataset", "body": "Where can I find the dataset for the text classification tutorial? I mean \r\nhttps://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html", "url": "https://github.com/pytorch/tutorials/issues/772", "state": "closed", "labels": [], "created_at": "2019-12-15T17:21:34Z", "updated_at": "2021-06-10T21:18:29Z", "comments": 1, "user": "mahmoodn" }, { "repo": "pytorch/tutorials", "number": 771, "title": "Using CUDA for deep learning", "body": "For the deep learning [tutorial](https://pytorch.org/tutorials/beginner/nlp/deep_learning_tutorial.html), I have added the device command at the top to offload the work onto the GPU.\r\n\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nimport torch.optim as optim\r\ntorch.device(\"cuda:0\")\r\n```\r\nHowever, no process goes to the GPU. I see only CPU usage. 
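My understanding so far is that creating the device object alone does not move anything, and that every module and tensor has to be sent to it explicitly, roughly like this (my own attempt, not something the tutorial shows):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)    # move the parameters onto the GPU
data = torch.randn(64, 10).to(device)  # move the input batch as well
out = model(data)
print(out.device)                      # reports cuda:0 when a GPU is actually used
```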
\r\nHow can I fix that?", "url": "https://github.com/pytorch/tutorials/issues/771", "state": "closed", "labels": [], "created_at": "2019-12-15T17:06:10Z", "updated_at": "2021-07-30T21:55:36Z", "comments": 1, "user": "mahmoodn" }, { "repo": "pytorch/vision", "number": 1665, "title": "Automatic Background Removal technology", "body": "I am looking for a deep learning library/sdk which can be used to remove the background from any image automatically (with quality as good as www.remove.bg).\r\n\r\nI tried some image segmentation SDKs with pre-trained models such as Tensorflow Lite & Fritz AI, but the accuracy of the cutout mask was very low, amongst other issues.\r\n\r\nCriteria :-\r\n\r\n1) Background Removal rather than just Human/Portrait Segmentation\r\n\r\nIf the foreground consists of person holding a balloon, sittting on a chair, with a pet on his side, then I want all of this to get extracted. Not just the human cutout. The segmentation SDKs I tried are only extracting humans (the chair gets vanished), that too with a very low quality mask (hair gets cut, parts of ear gets cut, etc).\r\n\r\n2) Mask quality should be Super-Accurate\r\n\r\nI want even the finer details like the hair, delicate clothes, etc to be extracted perfectly.\r\n\r\n3) Fast & Lightweight (for mobile phone)\r\n\r\nI want to use this technology on mobile phones (in an Android app) which should ideally work even in an offline environment. If this option is difficult to achieve, then plan B would be install the technoloy on our server.\r\n\r\n4) Technology\r\nWhat technology should I be exploring to achieve this? Is it called image segmentation or the better term would be image matting? (e.g. http://alphamatting.com/eval_25.php)\r\n\r\nI have been reading a lot and I am currently lost in the sea of various technologies out there (OpenCV, Deep Matting, Mask RCNN, Instance Segmentation, Detectron2, Tensorflow, Pytorch, etc). I wonder what magic is happening behind the curtains of www.remove.bg\r\n\r\nWould your library help me me to achieve what I am looking for? Any help you could provide would be awesome.\r\n\r\nThanks a ton!", "url": "https://github.com/pytorch/vision/issues/1665", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2019-12-15T06:53:21Z", "updated_at": "2020-03-24T15:44:36Z", "user": "InternetMaster1" }, { "repo": "pytorch/pytorch", "number": 31246, "title": "How to do independent random number generatation in multiprocessing dataloader.", "body": "When I use num_woker > 0 in DataLoader, and I generate a random number in __getitem__ function.\r\n\r\nI found all threads will generate the same random number... \r\n\r\nFor example, I set num_worker=8, and I want to got a random number to define my scale augmentation. \r\n\r\nI will get \r\n0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9\r\neight same 0.9!\r\n\r\nSo I want to know how to inplement independent random number generation in multiprocessing dataloader.\r\n\r\nTHanks...\r\n\n\ncc @SsnL", "url": "https://github.com/pytorch/pytorch/issues/31246", "state": "closed", "labels": [ "module: dataloader", "triaged" ], "created_at": "2019-12-13T08:34:29Z", "updated_at": "2019-12-16T17:29:43Z", "user": "EricKani" }, { "repo": "pytorch/text", "number": 666, "title": "How to use torchtext for tasks involving image/tabular data like image captioning?", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n<!-- Please send questions or ask for help here. -->\r\nHi, thanks for the great library. 
 I am wondering whether there is a way to use a torchtext Dataset for multi-modal data. An example task would be image captioning, where we need to generate some text based on the input image, or generating text from tabular data, for example table summarization. \r\n", "url": "https://github.com/pytorch/text/issues/666", "state": "open", "labels": [], "created_at": "2019-12-13T05:24:33Z", "updated_at": "2020-04-11T07:55:54Z", "user": "Hans0124SG" }, { "repo": "pytorch/pytorch", "number": 31098, "title": "How to install pytorch for CUDA 10.2?", "body": "Hello everyone. I have installed CUDA 10.2 and I tried to install PyTorch on Windows,\r\nbut I caught an error like this:\r\nFAILED: build.ninja\r\nC:\\Users\\TensorFlow\\.conda\\envs\\torch\\Library\\bin\\cmake.exe -SF:\\Git\\pytorch -BF:\\Git\\pytorch\\build\r\nninja: error: rebuilding 'build.ninja': subcommand failed\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 755, in <module>\r\n build_deps()\r\n File \"setup.py\", line 316, in build_deps\r\n cmake=cmake)\r\n File \"F:\\Git\\pytorch\\tools\\build_pytorch_libs.py\", line 62, in build_caffe2\r\n cmake.build(my_env)\r\n File \"F:\\Git\\pytorch\\tools\\setup_helpers\\cmake.py\", line 337, in build\r\n self.run(build_args, my_env)\r\n File \"F:\\Git\\pytorch\\tools\\setup_helpers\\cmake.py\", line 141, in run\r\n check_call(command, cwd=self.build_dir, env=env)\r\n File \"C:\\Users\\TensorFlow\\.conda\\envs\\torch\\lib\\subprocess.py\", line 311, in check_call\r\n raise CalledProcessError(retcode, cmd)\r\nsubprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 1.\r\n\r\nPlease help me. How can I fix this bug?", "url": "https://github.com/pytorch/pytorch/issues/31098", "state": "closed", "labels": [], "created_at": "2019-12-11T06:56:22Z", "updated_at": "2019-12-11T17:01:39Z", "user": "tensor2flow" }, { "repo": "pytorch/text", "number": 665, "title": "How to load downloaded dataset?", "body": "I downloaded SogouNews and tried to use it like this:\r\n`train_dataset, test_dataset = datasets.SogouNews(root='data',ngrams=3)`\r\nbut it didn't work; it still auto-downloads the dataset.", "url": "https://github.com/pytorch/text/issues/665", "state": "closed", "labels": [], "created_at": "2019-12-11T01:03:17Z", "updated_at": "2022-06-24T00:20:48Z", "user": "LotusQing" }, { "repo": "pytorch/pytorch", "number": 31041, "title": "How to load PyTorch model using C++ api", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/31041", "state": "closed", "labels": [], "created_at": "2019-12-10T09:57:21Z", "updated_at": "2019-12-10T10:30:15Z", "user": "henbucuoshanghai" }, { "repo": "pytorch/pytorch", "number": 30962, "title": "How can I add masks to parameters", "body": "Hi,\r\n\r\nCan I use a hook to add a parameter masking function to Conv2d? 
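For example, something along these lines is what I am picturing (only a rough sketch of the idea, using a buffer plus a forward pre-hook; `add_weight_mask` is my own helper, and I am not sure this is the recommended way):

```python
import torch
import torch.nn as nn

def add_weight_mask(conv: nn.Conv2d):
    # attach a binary mask buffer with the same shape as the weight
    conv.register_buffer("weight_mask", torch.ones_like(conv.weight))

    def apply_mask(module, inputs):
        # zero out masked weights before every forward pass
        # (this modifies the weight in place, which is one of the things I am unsure about)
        module.weight.data.mul_(module.weight_mask)

    conv.register_forward_pre_hook(apply_mask)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        add_weight_mask(m)
```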
Specifically, I\u2019d like to add a binary mask buffer to each Conv2d module; during each training step, I need to update the mask buffer and then use it to mask the weight.\r\n\r\nOr is there any other method to add masks and apply them to the Conv2d layers of a given model?\r\n\r\nThanks!", "url": "https://github.com/pytorch/pytorch/issues/30962", "state": "open", "labels": [ "module: nn", "triaged" ], "created_at": "2019-12-09T12:50:11Z", "updated_at": "2019-12-11T07:37:43Z", "user": "tzm1003306213" }, { "repo": "pytorch/tutorials", "number": 761, "title": "RuntimeError: CUDA error: out of memory", "body": "I'm trying to run the code below:\r\n\r\n_if torch.cuda.is_available():\r\n device = torch.device(\"cuda\") # a CUDA device object\r\n y = torch.ones_like(x, device=device) # directly create a tensor on GPU\r\n x = x.to(device) # or just use strings ``.to(\"cuda\")``\r\n z = x + y\r\n print(z)\r\n print(z.to(\"cpu\", torch.double)) # ``.to`` can also change dtype together!_\r\n\r\nbut I always get the error:\r\n**y = torch.ones_like(x, device=device) # directly create a tensor on GPU\r\n RuntimeError: CUDA error: out of memory**\r\n\r\nI'm running this on CUDA version 10.1.243 and torch version 1.3.1.\r\nDoes anyone know what the problem is?\r\n\r\n\r\nThe source of the code: https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#cuda-tensors \r\n\r\n", "url": "https://github.com/pytorch/tutorials/issues/761", "state": "closed", "labels": [], "created_at": "2019-12-09T10:03:49Z", "updated_at": "2021-07-30T22:15:11Z", "comments": 3, "user": "Ala770" }, { "repo": "pytorch/examples", "number": 676, "title": "Reading my own dataset ", "body": "Hi, I want to read/load my own dataset and build my models using it. But I did not understand how I can read/load my own dataset. All the examples use PyTorch's built-in datasets, which does not help me. Can you help me with this problem? ", "url": "https://github.com/pytorch/examples/issues/676", "state": "closed", "labels": [], "created_at": "2019-12-08T08:49:16Z", "updated_at": "2019-12-09T14:48:38Z", "comments": 2, "user": "gozeloglu" }, { "repo": "pytorch/vision", "number": 1646, "title": "What is the meta.bin file used by the ImageNet dataset?", "body": "[Comment from @kanonjz in #1457](https://github.com/pytorch/vision/pull/1457#issuecomment-562807954)\r\n\r\n> I downloaded imagenet myself and used `parse_val_archive` to prepare the folders, but got an error below. What is the `meta.bin`? I didn't find it in the imagenet.\r\n> \r\n> `The meta file meta.bin is not present in the root directory or is corrupted. 
\" \"This file is automatically created by the ImageNet dataset.`", "url": "https://github.com/pytorch/vision/issues/1646", "state": "closed", "labels": [ "module: datasets" ], "created_at": "2019-12-07T12:30:20Z", "updated_at": "2019-12-10T13:07:42Z", "user": "pmeier" }, { "repo": "pytorch/pytorch", "number": 30929, "title": "How to set not to build libtorch_cpu.so and libmkl_*.so dependencies?", "body": "``` linux-vdso.so.1 (0x00007fffa4bfc000)\r\n libtorch_cpu.so => /home/xxxxx/workfiles/work/pytorch/torch/lib/./libtorch_cpu.so (0x00007f63d4f6c000)\r\n librt.so.1 => /lib64/librt.so.1 (0x00007f63d4d52000)\r\n libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f63d4b3c000)\r\n libdl.so.2 => /lib64/libdl.so.2 (0x00007f63d4938000)\r\n libmkl_intel_lp64.so => /lib/libmkl_intel_lp64.so (0x00007f63d3e06000)\r\n libmkl_gnu_thread.so => /lib/libmkl_gnu_thread.so (0x00007f63d25cd000)\r\n libmkl_core.so => /lib/libmkl_core.so (0x00007f63ce494000)\r\n libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f63ce275000)\r\n libm.so.6 => /lib64/libm.so.6 (0x00007f63cdf73000)\r\n libc10.so => /home/xxxxx/workfiles/work/pytorch/torch/lib/./libc10.so (0x00007f63cdd31000)\r\n libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f63cd9ae000)\r\n libgomp.so.1 => /lib64/libgomp.so.1 (0x00007f63cd788000)\r\n libc.so.6 => /lib64/libc.so.6 (0x00007f63cd3dc000)\r\n /lib64/ld-linux-x86-64.so.2 (0x000055795895f000)\r\n```\r\n", "url": "https://github.com/pytorch/pytorch/issues/30929", "state": "open", "labels": [ "module: build", "triaged", "module: mkl" ], "created_at": "2019-12-07T04:08:13Z", "updated_at": "2020-05-01T18:47:25Z", "user": "LinGeLin" }, { "repo": "pytorch/examples", "number": 675, "title": "what do parameters 'ndf' and 'ngf' mean?", "body": "Thanks for your code. However, I was wondering if you could tell me what 'ndf' and 'ngf' mean? I do know how these two parameters are used, but I do not know why they are called 'ndf' and 'ngf' , respectively. Looking forward to your reply.", "url": "https://github.com/pytorch/examples/issues/675", "state": "closed", "labels": [], "created_at": "2019-12-06T21:29:40Z", "updated_at": "2022-03-09T21:52:39Z", "comments": 1, "user": "jianzhuwang" }, { "repo": "pytorch/pytorch", "number": 30869, "title": "How to specify install path when build libtorch\uff1fno use cmake-gui", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/30869", "state": "closed", "labels": [], "created_at": "2019-12-06T12:28:59Z", "updated_at": "2019-12-06T13:39:39Z", "user": "LinGeLin" }, { "repo": "pytorch/pytorch", "number": 30796, "title": "How to Build pytorch with local protobuf rather than third_party/protobuf?", "body": "## \u2753 Questions and Help\r\nI want to build pytorch with my own os built protobuf lib rather than third_part/protobuf, Which prefix to change, Can anyone help me?\r\n\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/30796", "state": "closed", "labels": [], "created_at": "2019-12-05T06:11:52Z", "updated_at": "2019-12-06T17:31:11Z", "user": "Raneee" }, { "repo": "pytorch/text", "number": 660, "title": "How to prefetch data?", "body": "Currently, the bottleneck of my model training is on the data loading part, is there any example about how to prefetch data? Like the `pin_memory` and `num_workers` arguments of `torch.utils.data.DataLoader`", "url": "https://github.com/pytorch/text/issues/660", "state": "closed", "labels": [], "created_at": "2019-12-04T14:04:20Z", "updated_at": "2022-06-24T00:39:44Z", "user": "speedcell4" }, { "repo": "pytorch/vision", "number": 1633, "title": "how can I use ROI align in torch version 1.0", "body": "", "url": "https://github.com/pytorch/vision/issues/1633", "state": "closed", "labels": [ "question", "module: ops" ], "created_at": "2019-12-04T13:25:24Z", "updated_at": "2019-12-04T14:51:40Z", "user": "scut-salmon" }, { "repo": "pytorch/pytorch", "number": 30720, "title": "what is tensor's storage C++ pointer?", "body": "Recently I look into PyTorch source codes. tensor's impl object is created after a tensor is created. But I can't know where the tensor's storage is and its pointer.\r\nCould anyone give me some help? \ud83d\ude0a \r\n", "url": "https://github.com/pytorch/pytorch/issues/30720", "state": "closed", "labels": [], "created_at": "2019-12-04T08:38:09Z", "updated_at": "2019-12-04T16:22:54Z", "user": "alanzhai219" }, { "repo": "pytorch/xla", "number": 1448, "title": "python on XLA for CPU/GPU?", "body": "IIUC, with the same HLO, XLA is able to run on GPU and TPU. 
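By "run on" I just mean the usual device-selection flow, roughly like this (my mental model only; I have not verified it on a CPU/GPU XLA backend):

```python
import torch
import torch_xla.core.xla_model as xm

# would this resolve to an XLA CPU/GPU device when no TPU is present?
device = xm.xla_device()
x = torch.randn(2, 2).to(device)
print(x.device)
```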
\r\n\r\nI wonder if this project allows running PyTorch on top of XLA for CPU/GPU and future AI chips (as soon as they support XLA)?\r\n\r\nThanks,\r\nTiezhen", "url": "https://github.com/pytorch/xla/issues/1448", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2019-12-04T06:51:32Z", "updated_at": "2020-01-26T17:08:48Z", "user": "wangtz" }, { "repo": "pytorch/examples", "number": 672, "title": "I faced on the build error of libtorch:mnist.cpp in Ubuntu18.04", "body": "(1)Issue\r\n I faced the build error of one of libtorch examples :mnist.cpp in Ubuntu18.04.\r\n Please tell me the way to solve the build error.\r\n![builderror](https://user-images.githubusercontent.com/18341725/70106407-e7e20700-1686-11ea-8856-da44dc548b9f.png)\r\n\r\n(2)Enviroment\r\n OS:Ubbuntu18.04LTS\r\n libtorch: I downloaded https://download.pytorch.org/libtorch/cu101/libtorch-shared-with-deps-1.3.1.zip\r\n cmake version 3.16.0\r\n CUDA:10.1\r\n\r\n(3)the way to reproduce of error\r\n1.$mkdir OnPre and cd OnPre\r\n\r\n2.I downloaded libtorch-shared-with-deps-1.3.1.zip and $unzip libtorch-shared-with-deps-1.3.1.zip.\r\n\r\n3.the folder \"libtorch\" was made and $ cd libtorch.\r\n\r\n4.$mkdir mnist and $cd mnist\r\n\r\n5.I copied CMakeLists.txt and mnist.cpp from https://github.com/pytorch/examples/tree/master/cpp/mnist\r\n\r\n6.$mkdir build and cd build\r\n\r\n7.$ cmake -DCMAKE_PREFIX_PATH=/home/yoshiki/OnPre/libtorch ..\r\n -- The C compiler identification is GNU 7.4.0\r\n-- The CXX compiler identification is GNU 7.4.0\r\n-- Check for working C compiler: /usr/bin/cc\r\n-- Check for working C compiler: /usr/bin/cc -- works\r\n-- Detecting C compiler ABI info\r\n-- Detecting C compiler ABI info - done\r\n-- Detecting C compile features\r\n-- Detecting C compile features - done\r\n-- Check for working CXX compiler: /usr/bin/c++\r\n-- Check for working CXX compiler: /usr/bin/c++ -- works\r\n-- Detecting CXX compiler ABI info\r\n-- Detecting CXX compiler ABI info - done\r\n-- Detecting CXX compile features\r\n-- Detecting CXX compile features - done\r\n-- Looking for pthread.h\r\n-- Looking for pthread.h - found\r\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD\r\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed\r\n-- Looking for pthread_create in pthreads\r\n-- Looking for pthread_create in pthreads - not found\r\n-- Looking for pthread_create in pthread\r\n-- Looking for pthread_create in pthread - found\r\n-- Found Threads: TRUE \r\n-- Found CUDA: /usr/local/cuda (found version \"10.1\") \r\n-- Caffe2: CUDA detected: 10.1\r\n-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc\r\n-- Caffe2: CUDA toolkit directory: /usr/local/cuda\r\n-- Caffe2: Header version is: 10.1\r\n-- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so \r\n-- Found cuDNN: v7.6.5 (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)\r\n-- Autodetected CUDA architecture(s): 5.2\r\n-- Added CUDA NVCC flags for: -gencode;arch=compute_52,code=sm_52\r\n-- Found torch: /home/yoshiki/OnPre/libtorch/lib/libtorch.so \r\n-- Downloading MNIST dataset\r\n-- Configuring done\r\n-- Generating done\r\n-- Build files have been written to: /home/yoshiki/OnPre/libtorch/mnist/build\r\n\r\n8.$ make\r\nScanning dependencies of target mnist\r\n[ 50%] Building CXX object CMakeFiles/mnist.dir/mnist.cpp.o\r\n/home/yoshiki/OnPre/libtorch/mnist/mnist.cpp: In function \u2018void test(Net&, c10::Device, DataLoader&, size_t)\u2019:\r\n/home/yoshiki/OnPre/libtorch/mnist/mnist.cpp:102:26: error: \u2018at::Reduction\u2019 has not been 
declared\r\n at::Reduction::Sum)\r\n ^~~~~~~~~\r\nCMakeFiles/mnist.dir/build.make:62: recipe for target 'CMakeFiles/mnist.dir/mnist.cpp.o' failed\r\nmake[2]: *** [CMakeFiles/mnist.dir/mnist.cpp.o] Error 1\r\nCMakeFiles/Makefile2:75: recipe for target 'CMakeFiles/mnist.dir/all' failed\r\nmake[1]: *** [CMakeFiles/mnist.dir/all] Error 2\r\nMakefile:83: recipe for target 'all' failed\r\nmake: *** [all] Error 2\r\n\r\n9.The build error appeared . \r\n", "url": "https://github.com/pytorch/examples/issues/672", "state": "closed", "labels": [], "created_at": "2019-12-04T02:13:45Z", "updated_at": "2019-12-04T07:35:11Z", "comments": 1, "user": "yoshihingis" }, { "repo": "pytorch/vision", "number": 1630, "title": "GeneralizedRCNNTransform doesn't work with four-channel inputs", "body": "When I modify the input channel of FasterRCNN from 3 to 4, GeneralizedRCNNTransform doesn't work.\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py\", line 47, in forward\r\n images, targets = self.transform(images, targets)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/transform.py\", line 40, in forward\r\n image = self.normalize(image)\r\n File \"/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/transform.py\", line 55, in normalize\r\n return (image - mean[:, None, None]) / std[:, None, None]\r\nRuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0\r\n```", "url": "https://github.com/pytorch/vision/issues/1630", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2019-12-04T00:53:20Z", "updated_at": "2019-12-04T12:58:30Z", "user": "ZhiangChen" }, { "repo": "pytorch/xla", "number": 1447, "title": "How to use a specific commit of pytorch-xla in Colab?", "body": "## \u2753 Questions and Help\r\n\r\nHi,\r\n\r\nI'm eager to use a specific commit (or the latest) in Colab. 
My current setup is this cell:\r\n\r\n```bash\r\nXRT_VERSION = \"nightly\"\r\nDIST_BUCKET = \"gs://tpu-pytorch/wheels\"\r\nTORCH_WHEEL = \"torch-{}-cp36-cp36m-linux_x86_64.whl\".format(XRT_VERSION)\r\nTORCH_XLA_WHEEL = \"torch_xla-{}-cp36-cp36m-linux_x86_64.whl\".format(XRT_VERSION)\r\nTORCHVISION_WHEEL = \"torchvision-0.3.0-cp36-cp36m-linux_x86_64.whl\"\r\n\r\n# Update TPU XRT version\r\nimport os\r\nimport requests\r\nimport threading\r\ndef update_server_xrt():\r\n print(\"Updating server-side XRT...\")\r\n url = 'http://{TPU_ADDRESS}:8475/requestversion/{XRT_VERSION}'.format(\r\n TPU_ADDRESS=os.environ['COLAB_TPU_ADDR'].split(':')[0],\r\n XRT_VERSION=XRT_VERSION,\r\n )\r\n print(\"Done updating server-side XRT: {}\".format(requests.post(url)))\r\n\r\nupdate = threading.Thread(target=update_server_xrt)\r\nupdate.start()\r\n\r\n# Install Colab TPU compat PyTorch/TPU wheels and dependencies\r\n!pip uninstall -y torch torchvision\r\n!gsutil cp \"$DIST_BUCKET/$TORCH_WHEEL\" .\r\n!gsutil cp \"$DIST_BUCKET/$TORCH_XLA_WHEEL\" .\r\n!gsutil cp \"$DIST_BUCKET/$TORCHVISION_WHEEL\" .\r\n!pip install \"$TORCH_WHEEL\"\r\n!pip install \"$TORCH_XLA_WHEEL\"\r\n!pip install \"$TORCHVISION_WHEEL\"\r\n!sudo apt-get install libomp5\r\nupdate.join()\r\n```\r\n\r\nBut that only gets the nightly version. Is there some way to name a specific commit?\r\n", "url": "https://github.com/pytorch/xla/issues/1447", "state": "closed", "labels": [ "question" ], "created_at": "2019-12-03T20:04:55Z", "updated_at": "2020-02-12T17:36:30Z", "user": "hrbigelow" }, { "repo": "pytorch/vision", "number": 1629, "title": "Reference detection script image sizes help", "body": "Hi @fmassa , \r\nSomehow the reference detection script does not handle big images of size > 3000. \r\nAlways throw me cuda out of memory error. \r\nAny suggestions on that ? ", "url": "https://github.com/pytorch/vision/issues/1629", "state": "closed", "labels": [ "question", "module: models", "module: reference scripts", "topic: object detection" ], "created_at": "2019-12-03T12:11:21Z", "updated_at": "2019-12-03T12:30:55Z", "user": "gaussiangit" }, { "repo": "pytorch/pytorch", "number": 30655, "title": "How to convert Tensor back to BitMap or any image format in Android?", "body": "I have converted a PyTorch model for Android mobile. The purpose of the model is to achieve Super Resolution. The problem I am facing is that the model gives output in the form of Tensor. Whereas I want to convert that tensor into some imaging format but I haven't been able to find a method to achieve this task. \r\n\r\nI cannot find something suitable in Pytorch Java documentation for this certain task. Please advise regarding this issue.\r\n", "url": "https://github.com/pytorch/pytorch/issues/30655", "state": "closed", "labels": [ "module: android", "oncall: mobile" ], "created_at": "2019-12-03T09:32:11Z", "updated_at": "2023-09-29T16:39:11Z", "user": "nauyan" }, { "repo": "pytorch/pytorch", "number": 30654, "title": "What is the different between nn.Functional.conv2d and nn.Conv2d?It seems a bit redundant?", "body": "## \u2753 Questions and Help\r\nHi,I have just started learning pytorch recently. In the official website tutorials, I often see nn.Conv2d and nn.Functional.conv2d. I don't understand the difference between the two writing methods. 
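For concreteness, these are the two spellings I mean; as far as I can tell they compute the same thing, the module form just owns its weight while the functional form takes it as an argument (a small check I wrote myself):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

conv = nn.Conv2d(3, 6, kernel_size=3, padding=1)      # module form: weight/bias are learnable state
y1 = conv(x)

y2 = F.conv2d(x, conv.weight, conv.bias, padding=1)   # functional form: weight/bias passed explicitly
print(torch.allclose(y1, y2))  # True
```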
 It seems that one of these two is enough.\r\n", "url": "https://github.com/pytorch/pytorch/issues/30654", "state": "closed", "labels": [], "created_at": "2019-12-03T08:27:21Z", "updated_at": "2019-12-04T01:09:44Z", "user": "wulongjian" }, { "repo": "pytorch/xla", "number": 1442, "title": "Out of memory error?", "body": "Is the following an out-of-memory error from the TPU?: \r\n\r\n![image](https://user-images.githubusercontent.com/37097934/70020496-abf15980-1541-11ea-8dd5-406c295ecd8c.png)\r\n\r\nThe text just keeps scrolling with similar messages.\r\n\r\nIt's surprising I get this error, because all I wanted to do is have a batch of 512 for 224x224 images, which I thought the TPU could handle.", "url": "https://github.com/pytorch/xla/issues/1442", "state": "closed", "labels": [ "question" ], "created_at": "2019-12-03T04:26:23Z", "updated_at": "2019-12-10T18:57:22Z", "user": "tmabraham" }, { "repo": "pytorch/examples", "number": 671, "title": "nn.Transformer tutorial uses nn.TransformerEncoder only", "body": "Hello,\r\nwhen I search for an nn.Transformer usage example, I only find an example which uses nn.TransformerEncoder. Is there an example that uses nn.Transformer?", "url": "https://github.com/pytorch/examples/issues/671", "state": "closed", "labels": [ "question" ], "created_at": "2019-12-02T12:49:33Z", "updated_at": "2022-03-10T04:46:18Z", "user": "vainaixr" }, { "repo": "pytorch/vision", "number": 1625, "title": "Why does the rpn use the L1_Loss?", "body": "https://github.com/pytorch/vision/blob/master/torchvision/models/detection/rpn.py#L426\r\n\r\nThe code in rpn.py, line 426, is as follows:\r\n\r\n**box_loss = F.l1_loss(\r\n pred_bbox_deltas[sampled_pos_inds],\r\n regression_targets[sampled_pos_inds],\r\n reduction=\"sum\",\r\n ) / (sampled_inds.numel())**\r\n\r\nHowever, as stated in the Faster R-CNN paper, the loss function used in the RPN training stage is the smooth L1 loss.\r\n\r\nAnd I found that when computing the **rcnn_box_loss**, the loss function used in torchvision is **Smooth_L1_Loss**:\r\nhttps://github.com/pytorch/vision/blob/master/torchvision/models/detection/roi_heads.py#L47\r\n\r\nWhy not use the **Smooth_L1_Loss** in both places?", "url": "https://github.com/pytorch/vision/issues/1625", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2019-12-01T12:54:15Z", "updated_at": "2019-12-02T12:14:14Z", "user": "TeeyoHuang" }, { "repo": "pytorch/vision", "number": 1618, "title": "is faster rcnn scriptable? I tried, but failed~", "body": "", "url": "https://github.com/pytorch/vision/issues/1618", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2019-11-27T06:32:41Z", "updated_at": "2019-11-30T15:24:03Z", "user": "dao-kun" }, { "repo": "pytorch/vision", "number": 1617, "title": "Question about converting custom dataset to coco api", "body": "https://github.com/pytorch/vision/blob/a44d55d87ba3628ac79292fdcaead7fb98fc130b/references/detection/coco_utils.py#L163\r\n\r\nIf the box is [3,10,6,20] (xyxy format), the converted box should be [3,10,4,11]. I think this code should add 1, because there are 4 pixels between [3,6] and 11 pixels between [10,20]; it actually counts the pixels in the grid.\r\nMaybe the original computation of the area needs to do this as well. 
Such as this tutorial, https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html\r\n`area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])`\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/1617", "state": "closed", "labels": [ "question", "module: reference scripts" ], "created_at": "2019-11-27T03:22:53Z", "updated_at": "2019-12-02T12:26:12Z", "user": "kangkang59812" }, { "repo": "pytorch/tutorials", "number": 735, "title": "Dataloader with SAMPLER tutorial missing. ", "body": "Original discussion thread: https://discuss.pytorch.org/t/feedback-on-pytorch-for-kaggle-competitions/2252\r\n\r\nPreviously closed issue: https://github.com/pytorch/tutorials/issues/78\r\nRelated PR Merged: https://github.com/pytorch/tutorials/pull/96\r\nAgain posting a new issue because the previous issue has been closed and pr merged without providing a complete and thorough tutorial as was felt required in the initial discussion. \r\n\r\ntldr; how to properly implement \r\n\r\n> torch.utils.data.Sampler \r\n\r\n\r\n\r\nSpecifically for my current use-case, I have a deep metric loss model that implements an online hard mining strategy (probability of the selection of some samples per epoch is higher than rest based on certain metrics ). \r\n\r\nIt didn't feel correct putting the logic in the transforms, and I currently do the mining in the \"run\" function:\r\n- Pull the current minibatch1 from the dataloader \r\n- Apply hard mining logic to find samples to train on from current batch : \r\n - dry forward run without back-prop\r\n - get all misclassified samples as 'hard samples' for current batch\r\n - calculate probability ranking of this subset based on certain heuristics ( Wrongly classified sample of higher similarity will have higher probability)\r\n- based on sample rankings again create a dataset on the fly for these samples, wherein `__getitem__` : chooses a minibatch2 as subset of these hard samples (might have repeated samples which have a higher probability ranking)\r\n- run forward and backward pass for samples in minibatch2 \r\n\r\nFor reference size of minibatch1 ~ 10X minibatch2\r\n\r\nThe strategy works pretty well in training; though one can imagine the code sanity and running time :disappointed: \r\n\r\n\r\nI understand, if the dataloader class was not intended for online sampling which requires a forward pass; \r\nbut can we atleast have the *complete* tutorial on the data.sampler et al methods showing different offline sampling techniques - choosing samples from the current batch based on some set heuristics. \r\n\r\nOr did I completely misunderstand the use of the Samplers ?? \r\n\r\n\r\n@soumith @chsasank @apaszke ", "url": "https://github.com/pytorch/tutorials/issues/735", "state": "closed", "labels": [], "created_at": "2019-11-27T00:28:36Z", "updated_at": "2021-07-30T22:19:49Z", "comments": 3, "user": "crazysal" }, { "repo": "pytorch/text", "number": 652, "title": "How to add special token in torch text.Data.Field( )?", "body": "Hello,\r\n\r\nI defined my text Field as below:\r\n```js\r\nTEXT_openbookQA = Field(tokenize = \"spacy\",\r\n init_token = '<sos>',\r\n eos_token = '<eos>',\r\n unk_token = '<unk>',\r\n pad_token = '<pad>',\r\n tokenizer_language = 'en',\r\n lower = True)\r\n```\r\nHowever, in the text `openbookQA`, there is a special token named `<mcoption>`. 
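One idea I had is to wrap the tokenizer so that spaCy never splits the token, something like this (my own workaround sketch, not an existing torchtext option; `tokenize_keep_mcoption` is a name I made up):

```python
import re
import spacy
from torchtext.data import Field

nlp = spacy.load("en_core_web_sm")

def tokenize_keep_mcoption(text):
    # split out <mcoption> first so spaCy never sees (and never splits) it
    tokens = []
    for part in re.split(r"(<mcoption>)", text):
        if part == "<mcoption>":
            tokens.append(part)
        elif part.strip():
            tokens.extend(tok.text for tok in nlp(part))
    return tokens

TEXT_openbookQA = Field(tokenize=tokenize_keep_mcoption,
                        init_token='<sos>', eos_token='<eos>',
                        unk_token='<unk>', pad_token='<pad>',
                        lower=True)
```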
How can I make the text Field to recognize this special token?\r\n\r\nThank you,", "url": "https://github.com/pytorch/text/issues/652", "state": "closed", "labels": [], "created_at": "2019-11-26T12:50:00Z", "updated_at": "2019-11-26T13:40:24Z", "user": "h56cho" }, { "repo": "pytorch/pytorch", "number": 30408, "title": "Where is the script of the synchronization of gradients during the backwards for DDP", "body": "## \u2753 Questions and Help\r\n\r\nHi, I know the synchronization of gradients happens during the backwards for DDP. But I didn\u2019t find the corresponding script in backwards. Where can I find it?\r\n", "url": "https://github.com/pytorch/pytorch/issues/30408", "state": "closed", "labels": [], "created_at": "2019-11-25T17:15:45Z", "updated_at": "2019-11-26T00:49:21Z", "user": "meiluzhu" }, { "repo": "pytorch/vision", "number": 1610, "title": "code for visualization in the object detection tutorial", "body": "At the end of the [object detection tutorial ](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#torchvision-object-detection-finetuning-tutorial) it visualizes the masks.\r\ncan you please provide the code for that task? or guide how to do it?", "url": "https://github.com/pytorch/vision/issues/1610", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2019-11-25T15:41:21Z", "updated_at": "2020-07-07T21:21:26Z", "user": "isalirezag" }, { "repo": "pytorch/vision", "number": 1608, "title": "What's the input format of the fasterrcnn_resnet50_fpn? I mean RGB or BGR.", "body": "### pytorch>=1.1\r\n\r\nI notice that both the RGB and BGR input of `[n,c,h,w]` can get a good result(BGR is slightly higher). \r\n```\r\nmodel = fasterrcnn_resnet50_fpn(pretrained=True)\r\nmodel.eval()\r\n\r\n## RGB\r\nimg1 = Image.open('image1.jpg')\r\n## BGR\r\nimg2 = np.array(img1)[:, :, [2, 1, 0]].copy()\r\n\r\nx1= [transforms.ToTensor()(img1)]\r\nx2= [transforms.ToTensor()(img2)]\r\n\r\npredictions1 = model(x1)\r\npredictions2 = model(x2)\r\n```\r\nIt seems that `predictions2` is better. So, should I use the BGR format to fine-tuning and eval ? I can't find this information in the code and I only know the size is `[n,c,h,w]`. In the config of the detectron2 of facebook, it says \r\n```\r\n# Values to be used for image normalization (BGR order).\r\n# To train on images of different number of channels, just set different mean & std.\r\n# Default values are the mean pixel value from ImageNet: [103.53, 116.28, 123.675]\r\n_C.MODEL.PIXEL_MEAN = [103.530, 116.280, 123.675]\r\n```\r\nSo BGR is the one we should choose?", "url": "https://github.com/pytorch/vision/issues/1608", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2019-11-25T12:20:25Z", "updated_at": "2019-11-25T12:52:15Z", "user": "kangkang59812" }, { "repo": "pytorch/text", "number": 649, "title": "How to perform common sense reasoning task with GPT-2?", "body": "Hello,\r\n\r\nI am new to NLP so I have lots of questions.\r\nI am interested in carrying out common sense reasoning task with GPT-2, for example, with Winograd Schema Challenge dataset.\r\n\r\nQ1. How should I tokenize the Winograd Schema Challenge dataset to process it with GPT-2 (with the double heads model, for instance)? Can someone please give me an example?\r\n\r\nQ2. 
Can GPT2DoubleHeadsModel be used to conduct common sense reasoning task with Winograd Schema Challenge dataset?\r\n\r\nThank you,", "url": "https://github.com/pytorch/text/issues/649", "state": "closed", "labels": [], "created_at": "2019-11-22T12:52:44Z", "updated_at": "2019-11-23T14:38:47Z", "user": "h56cho" }, { "repo": "pytorch/xla", "number": 1399, "title": "Why does printing progress every step slow things down?", "body": "## \u2753 Questions and Help\r\n\r\n@dlibenzi You mentioned the ParallelLoader background sender and its ability somehow to overlap communication between TPU and CPU without interrupting the flow of TPU computations. But, you also mentioned that printing the values of summary statistics (which ultimately requires calling `loss.item()` and so forth) triggers \"an exit from the tensor world to CPU world\". I'm wondering why this would be the case? Couldn't there be some sort of asynchronous process in which the tensor world does the quick `item()` calculation, sends the value to the CPU in the \"background\" and resumes its cycle, while the CPU goes to work printing the result?\r\n\r\nThanks very much,\r\n\r\nHenry\r\n\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/1399", "state": "closed", "labels": [ "question" ], "created_at": "2019-11-21T22:29:43Z", "updated_at": "2019-11-22T17:29:28Z", "user": "hrbigelow" }, { "repo": "pytorch/xla", "number": 1398, "title": "Should CPU constants be ported to tensors to prevent IR recompilation?", "body": "## \u2753 Questions and Help\r\n\r\nI have various constructs in my code like:\r\n\r\n```python\r\nrec_loss = - log_pred_target.mean()\r\nze_norm = (self.bottleneck.ze ** 2).sum(dim=1).sqrt()\r\nnorm_loss = self.norm_gamma * torch.abs(ze_norm - 1.0).mean()\r\ntotal_loss = rec_loss + norm_loss\r\n```\r\n\r\nWould moving the `2` and `1.0` constants from CPU to scalar TPU tensors improve anything or will this be cached efficiently in the IR graph?", "url": "https://github.com/pytorch/xla/issues/1398", "state": "closed", "labels": [ "good first issue", "question", "stale" ], "created_at": "2019-11-21T22:18:27Z", "updated_at": "2019-12-28T23:23:21Z", "user": "hrbigelow" }, { "repo": "pytorch/vision", "number": 1599, "title": "ResNet identity (line 55) mustn't be mutable", "body": "The identity variable in line 55 is mutable\r\n def forward(self, x):\r\n identity = x\r\n\r\nIt must be immutable as follows:\r\n\r\n def forward(self, x):\r\n identity = 1*x", "url": "https://github.com/pytorch/vision/issues/1599", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2019-11-20T12:39:36Z", "updated_at": "2019-11-21T13:53:04Z", "user": "Abolfazl-Mehranian" }, { "repo": "pytorch/vision", "number": 1598, "title": "How to feed negative samples during Faster R-CNN training", "body": "Hi all,\r\nI have lots of non-annotated images in my training set, where there is no object of interest but there are couple other objects that should be interpreted as part of background. Is there any way I can provide background (negative) samples explicitly in my dataloder? 
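For instance, I imagined simply returning an empty target for such images, roughly like this (my own sketch; `make_target` is a helper I made up, and I am not sure whether the detection models accept empty boxes):

```python
import torch

def make_target(boxes, labels):
    # boxes: list of [x1, y1, x2, y2]; pass an empty list for a pure-background image
    if len(boxes) == 0:
        return {"boxes": torch.zeros((0, 4), dtype=torch.float32),
                "labels": torch.zeros((0,), dtype=torch.int64)}
    return {"boxes": torch.as_tensor(boxes, dtype=torch.float32),
            "labels": torch.as_tensor(labels, dtype=torch.int64)}
```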
\r\nI tried to set a single fake bounding box with label zero for those non-annotated images, and set my num_classes as 3, i.e., I have 2 objects and background, and then performed transfer learning,\r\n\r\n`model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True,\r\n pretrained_backbone=False)`\r\n`in_features = model.roi_heads.box_predictor.cls_score.in_features`\r\n`model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)`\r\n\r\n But I received a crash at `/torchvision/models/detection/roi_heads.py\", line 34, in fastrcnn_loss`\r\n`sampled_pos_inds_subset = torch.nonzero(labels > 0).squeeze(1)`\r\n\r\nI think this is happening because I have fed some images with only label zero, i.e., with no positive bbox.\r\nIs there any workaround for that purpose?", "url": "https://github.com/pytorch/vision/issues/1598", "state": "closed", "labels": [ "enhancement", "help wanted", "module: models", "topic: object detection" ], "created_at": "2019-11-20T12:15:54Z", "updated_at": "2023-03-29T16:37:30Z", "user": "kkirtac" }, { "repo": "pytorch/xla", "number": 1385, "title": "How original pytorch calls xla's ops?", "body": "## \u2753 Questions and Help\r\nRecently, I am looking into pytorch/xla code but I am confused with some things.\r\n\r\n- How original pytorch calls xla's ops?\r\n\r\nIs there pytorch-xla internal mechanism\uff1f\r\n\r\nAny reply will be much appreciated. THX", "url": "https://github.com/pytorch/xla/issues/1385", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2019-11-19T07:45:18Z", "updated_at": "2019-12-28T16:29:15Z", "user": "alanzhai219" }, { "repo": "pytorch/examples", "number": 666, "title": "Distributed training resnet50 using 4 nodes 32 TeslaV100", "body": "I checked a lot of literature, but I didn't find the results. The questions are as follows:\r\nHow many hours can it converge\uff1f\uff08Distributed training resnet50 using 4 nodes 32 TeslaV100 cards\uff09\r\n\r\nDo you have internal test results that can be displayed to better understand the performance of your distributed training.", "url": "https://github.com/pytorch/examples/issues/666", "state": "open", "labels": [ "distributed" ], "created_at": "2019-11-19T06:01:31Z", "updated_at": "2022-03-09T20:52:45Z", "comments": 0, "user": "gentelyang" }, { "repo": "pytorch/FBGEMM", "number": 199, "title": "[Question] 8bit integers and negative numbers", "body": "Hey,\r\n\r\nI have been reading the code for sparse 8bit gemm: https://github.com/pytorch/FBGEMM/blob/master/test/SpMMI8Test.cc and I have a few questions.\r\n\r\nI noticed that `getRandomSparseVector` will only generate positive numbers. Is this because you rely on the `maddubs` instruction? Does it mean that the A matrix can only contain positive numbers?\r\n\r\nI noticed this bit in the code:\r\n```c++\r\nfor (int i = 0; i < m * k; ++i) {\r\n aptr[i] &= 0x7F;\r\n }\r\n```\r\n\r\nYou avoid large numbers to avoid saturation. Does this mean there is no handling of saturation when it happens?\r\n\r\nThanks,\r\n\r\nNick", "url": "https://github.com/pytorch/FBGEMM/issues/199", "state": "closed", "labels": [ "question" ], "created_at": "2019-11-18T17:09:26Z", "updated_at": "2019-11-20T18:08:28Z", "user": "XapaJIaMnu" }, { "repo": "pytorch/vision", "number": 1592, "title": "unable to load inception model. 
Or any other architect other than alexnet", "body": "import torchvision.models.inception\r\n\r\n# works fine\r\narch = torchMd.alexnet(pretrained=True)\r\n\r\n# gives error, also tried vgg, densenet\r\narch = torchMd.inception(pretrained=True)\r\n\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-43-3882461a2f37> in <module>\r\n----> 1 print(torchvision.__version__)\r\n\r\nAttributeError: module 'torchvision' has no attribute '__version__'\r\n", "url": "https://github.com/pytorch/vision/issues/1592", "state": "closed", "labels": [ "question", "module: models" ], "created_at": "2019-11-18T07:30:01Z", "updated_at": "2019-11-19T10:44:02Z", "user": "richesh09" }, { "repo": "pytorch/xla", "number": 1379, "title": "Successive frames growing, but why?", "body": "## \u2753 Questions and Help\r\n\r\nIn the attached report below, I see successive frames growing by ~30 lines at each. The relevant code is below. The approach I used was to load all of the training data (about 300 mb) into memory into two tensors (`data_source.snd_data` and `data_source.mel_data`) and then at each training step, fill the batch with a different slice of those tensors. I thought the varying slices at each iteration were causing graph recompilation. But, in the code below, I replace that step with the same hard-coded slice, and the problem remains.\r\n\r\nWould anyone have any insights into this problem?\r\n\r\nAny help would be greatly appreciated!\r\n\r\n```python\r\n def set(self, b, sample_slice, data_source):\r\n ss = sample_slice\r\n # self.voice_index[b] = ss.voice_index\r\n wo = ss.wav_offset\r\n mo = ss.mel_offset\r\n dws = ss.dec_wav_slice\r\n mis = ss.mel_in_slice\r\n\r\n self.lcond_slice[b] = ss.lcond_slice \r\n self.loss_wav_slice[b] = ss.loss_wav_slice \r\n # self.wav_input[b,...] = data_source.snd_data[wo + dws[0]:wo + dws[1]] \r\n # self.mel_input[b,...] = data_source.mel_data[mo + mis[0]:mo +\r\n # mis[1],:].transpose(1, 0)\r\n\r\n self.wav_input[b,...] = data_source.snd_data[3184397:3186543]\r\n self.mel_input[b,...] = \\\r\n174 => data_source.mel_data[19855:19899,:].transpose(1, 0)\r\n```\r\n\r\n[xla.report.618294e.txt](https://github.com/pytorch/xla/files/3855904/xla.report.618294e.txt)\r\n[xla_metrics.618294e.txt](https://github.com/pytorch/xla/files/3855905/xla_metrics.618294e.txt)\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/xla/issues/1379", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2019-11-17T18:04:30Z", "updated_at": "2019-12-29T18:57:28Z", "user": "hrbigelow" }, { "repo": "pytorch/vision", "number": 1591, "title": "Training data set for pretrained resnet18", "body": "Anybody knows what the training data set of pretrained resnet18 is .\r\nI cannot find the official information of training data set used for pretrained models in torchvision.models. 
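\r\n\r\nMy working assumption is that the classification weights were trained on ImageNet-1k (ILSVRC-2012), since the preprocessing documented for them uses the ImageNet channel statistics. A minimal sketch of that preprocessing as I understand it (the normalisation constants are the ones quoted in the torchvision docs):\r\n\r\n```python\r\nfrom torchvision import models, transforms\r\n\r\nmodel = models.resnet18(pretrained=True)\r\nmodel.eval()\r\n\r\n# Preprocessing the pretrained weights expect (ImageNet mean/std).\r\npreprocess = transforms.Compose([\r\n    transforms.Resize(256),\r\n    transforms.CenterCrop(224),\r\n    transforms.ToTensor(),\r\n    transforms.Normalize(mean=[0.485, 0.456, 0.406],\r\n                         std=[0.229, 0.224, 0.225]),\r\n])\r\n```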
", "url": "https://github.com/pytorch/vision/issues/1591", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: classification" ], "created_at": "2019-11-17T09:01:12Z", "updated_at": "2019-11-18T14:37:55Z", "user": "pantheon5100" }, { "repo": "pytorch/vision", "number": 1588, "title": "pretrained model", "body": "Anybody know how to train a pretrain model(etc mobile net v2 in pysot ) ?", "url": "https://github.com/pytorch/vision/issues/1588", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: classification" ], "created_at": "2019-11-16T08:30:07Z", "updated_at": "2019-11-26T01:58:11Z", "user": "zhu2014yi" }, { "repo": "pytorch/text", "number": 643, "title": "How to skip last batch that has a different batch size?", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n<!-- Please send questions or ask for help here. -->\r\n\r\nSorry if this is a newbie question.\r\nIn `torch.nn.utils.data.dataloader` we can drop the last batch by specifying `drop_last=True`.\r\nDo we have something equivalent for our `Iterator`? Currently I continue the training loop if I see the current `batch_size` is different from my preset `batch_size`. Is there something built-in?\r\n\r\nThank you very much!\r\n", "url": "https://github.com/pytorch/text/issues/643", "state": "closed", "labels": [], "created_at": "2019-11-16T04:08:41Z", "updated_at": "2019-11-18T15:54:07Z", "user": "Hans0124SG" }, { "repo": "pytorch/tutorials", "number": 725, "title": "transfer_learning_tutorial get a warning under pytorch1.3", "body": ">`/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:100: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule.See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\r\n \"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\", UserWarning)`\r\nHello, I'm new to pytorch. In tutorial [transfer_learning_tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html), I run it in google colab, and got this warning, how to fix this?\r\n", "url": "https://github.com/pytorch/tutorials/issues/725", "state": "closed", "labels": [], "created_at": "2019-11-15T08:21:45Z", "updated_at": "2019-11-15T08:32:13Z", "comments": 1, "user": "neo0801" }, { "repo": "pytorch/xla", "number": 1368, "title": "How to tell if a graph recompilation is happening?", "body": "## \ud83d\udcda Documentation\r\n\r\nThanks so much for the great library! I'm running my Pytorch model on Google Colab with TPU. 
Following the tips in TROUBLESHOOTING.md, I see the following in my XLA_METRICS_FILE:\r\n```\r\nMetric: CompileTime\r\n TotalSamples: 12\r\n Accumulator: 44s280ms699.409us\r\n ValueRate: 952ms609.347us / second\r\n Rate: 0.25789 / second\r\n Percentiles: 1%=230ms554.170us; 5%=230ms554.170us; 10%=253ms547.395us; 20%=256ms764.061us; 50%=304ms288.564us; 80%=12s512ms567.169us; 90%=12s778ms277.508us; 95%=18s450ms18.269us; 99%=18s450ms18.269us\r\n...\r\nMetric: CompileTime\r\n TotalSamples: 14\r\n Accumulator: 01m03s282ms217.172us\r\n ValueRate: 924ms148.456us / second\r\n Rate: 0.20445 / second\r\n Percentiles: 1%=026ms136.061us; 5%=026ms136.061us; 10%=230ms554.170us; 20%=253ms547.395us; 50%=304ms288.564us; 80%=12s778ms277.508us; 90%=18s450ms18.269us; 95%=19s976ms381.702us; 99%=19s976ms381.702us\r\n[more to follow]\r\n```\r\n\r\nThere is one of these sections produced per SGD iteration. Does the fact that the ValueRate value is about the same in each one, mean that the graph is being compiled each time? If so, how do I tell what is causing it? I have studied the output of XLA_SAVE_TENSORS_FILE, and I can't find any place where the tensor dimensions are different.\r\n\r\nHowever, I do also see lots of occurrences of `aten::permute`, `aten::view`, `aten::squeeze`, `aten::relu`, etc.\r\n\r\nI also find that the code runs quite slow compared to GPU.\r\n\r\nThanks again,\r\n\r\nHenry\r\n\r\n<!-- A clear and concise description of what content is an issue. -->\r\n", "url": "https://github.com/pytorch/xla/issues/1368", "state": "closed", "labels": [ "question" ], "created_at": "2019-11-15T03:13:04Z", "updated_at": "2019-12-03T02:37:56Z", "user": "hrbigelow" }, { "repo": "pytorch/vision", "number": 1578, "title": "pilImage convert to tensor, than convert back to pilimage is not the same to the original", "body": "I convert a PILImage to tensor and than convert it back to PILImage. Saving the result, and compare to the original PILImage I loaded, they are not the same. \r\n\r\nWhy it is so?", "url": "https://github.com/pytorch/vision/issues/1578", "state": "closed", "labels": [ "question", "module: transforms" ], "created_at": "2019-11-14T21:37:00Z", "updated_at": "2019-11-26T12:43:44Z", "user": "Yumin-Sun-00" }, { "repo": "pytorch/pytorch", "number": 29802, "title": "How to release gpu memory of intermediate result tensor", "body": "In the example below, after calling torch.matmul, the gpu memory usage increases by 181796864 bytes, which is almost the sum of the sizes of c and b.transpose(2,3). So I guess the unreferenced intermediate result b.transpose(2,3) is stored in gpu memory. 
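(The only workaround I can think of, a rough and unverified sketch using the same `a` and `b` as in the snippet below, is to materialise the transposed copy myself so that I can drop the reference and hand the cached blocks back afterwards:)\r\n\r\n```python\r\nbt = b.transpose(2, 3).contiguous()   # make the copy explicit\r\nc = torch.matmul(a, bt)\r\ndel bt                                # drop the only reference to the copy\r\ntorch.cuda.empty_cache()              # return cached blocks to the driver\r\nprint(torch.cuda.memory_allocated(0))\r\n```\r\n\r\n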
How could I release the gpu memory allocated to this intermediate result to save gpu memory?\r\n\r\n\r\nimport torch\r\nfrom torch.autograd import Variable\r\na = Variable(torch.rand(32, 8, 151, 1024), requires_grad=True).cuda()\r\nb = Variable(torch.rand(32, 8, 151, 1024), requires_grad=True).cuda()\r\ntorch.cuda.memory_allocated(0) # 316669952\r\nc=torch.matmul(a, b.transpose(2,3))\r\ntorch.cuda.memory_allocated(0) # 498466816, increased by 181796864\r\nc.element_size() * c.nelement() # 23348224\r\nb.transpose(2,3).element_size() * b.transpose(2,3).nelement() #158334976\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): 1.0.1\r\n - OS (e.g., Linux): centos\r\n - How you installed PyTorch (`conda`, `pip`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Python version: 3.6.9\r\n - CUDA/cuDNN version: cuda9.2/cudnn7.4.2\r\n - GPU models and configuration:NVIDIA 1080TI\r\n - Any other relevant information:\r\n\r\n\n\ncc @ngimel", "url": "https://github.com/pytorch/pytorch/issues/29802", "state": "closed", "labels": [ "module: cuda", "module: memory usage", "triaged" ], "created_at": "2019-11-14T11:36:21Z", "updated_at": "2019-11-15T15:49:09Z", "user": "akikaaa" }, { "repo": "pytorch/android-demo-app", "number": 31, "title": "How to add built AAR libraries to a project", "body": "Hi, \r\n\r\nI've faced an issue. On PyTorch website there's an intro how to build and deploy pytorch-mobile from source (https://pytorch.org/mobile/android/#building-pytorch-android-from-source) but the part with Gradle won't work for me.\r\n\r\nI've succesfully build AAR files, then edited `HelloWorldApp/app/gradle.build` as it said in intro, and added this AAR files to `HelloWorldApp/app/libs/`\r\n\r\nAnd run it `./gradlew installDebug --stacktrace`\r\n\r\n```\r\n> Task :app:javaPreCompileDebug FAILED\r\n\r\nFAILURE: Build failed with an exception.\r\n\r\n* What went wrong:\r\nExecution failed for task ':app:javaPreCompileDebug'.\r\n> Could not resolve all files for configuration ':app:debugCompileClasspath'.\r\n > Failed to transform artifact 'pytorch_android-release.aar (:pytorch_android-release:)' to match attributes {artifactType=android-classes, org.gradle.usage=java-api}.\r\n > Execution failed for JetifyTransform: /root/android-demo-app/HelloWorldApp/app/libs/pytorch_android-release.aar.\r\n > Java heap space\r\n\r\n* Try:\r\nRun with --info or --debug option to get more log output. 
Run with --scan to get full insights.\r\n\r\n* Exception is:\r\norg.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:javaPreCompileDebug'.\r\n at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:38)\r\n at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:73)\r\n at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)\r\n at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49)\r\n at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416)\r\n at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406)\r\n at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)\r\n at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)\r\n at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)\r\n at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:102)\r\n at org.gradle.internal.operations.DelegatingBuildOperationExecutor.call(DelegatingBuildOperationExecutor.java:36)\r\n at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:49)\r\n at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:43)\r\n at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355)\r\n at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343)\r\n at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:336)\r\n at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:322)\r\n at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:134)\r\n at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:129)\r\n at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:202)\r\n at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:193)\r\n at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:129)\r\n at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)\r\n at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)\r\n at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)\r\nCaused by: org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration$ArtifactResolveException: Could not resolve all files for configuration ':app:debugCompileClasspath'.\r\n at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.rethrowFailure(DefaultConfiguration.java:1195)\r\n at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.access$2100(DefaultConfiguration.java:138)\r\n at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$ConfigurationFileCollection.getFiles(DefaultConfiguration.java:1170)\r\n at org.gradle.api.internal.file.AbstractFileCollection.iterator(AbstractFileCollection.java:72)\r", "url": "https://github.com/pytorch/android-demo-app/issues/31", "state": "closed", "labels": [], "created_at": "2019-11-14T10:41:10Z", "updated_at": "2022-08-13T17:06:38Z", "user": "zetyquickly" }, { "repo": "pytorch/examples", "number": 663, "title": "how do we pass multiple indices as input to generate multiple outputs in word_language model", "body": "The current codebase of [`word_language_model/generate.py`](https://github.com/pytorch/examples/blob/master/word_language_model/generate.py) uses a single (randomly sampled) index as `input` and generates a text based on this.\r\n\r\nNow, I'd like to extend this a bit and would like to pass a set of indices (i.e. > 1) as `input` and be able to generate a set of texts as output. I tried it with a simple loop based approach of iteratively querying the model but it's taking hours to do this task, since it has to be done sequentially.\r\n\r\nAny ideas about how to pass in a list of indices as input, particularly in the line: [`word_language_model/generate.py#L56`](https://github.com/pytorch/examples/blob/master/word_language_model/generate.py#L56) ? This can be called as *batchified generate* function!", "url": "https://github.com/pytorch/examples/issues/663", "state": "open", "labels": [ "nlp" ], "created_at": "2019-11-14T03:55:33Z", "updated_at": "2022-03-09T23:42:32Z", "user": "kmario23" }, { "repo": "pytorch/pytorch", "number": 29745, "title": "How to add PyTorch to requirements.txt", "body": "I'm trying to include PyTorch in a requirements.txt file to be installed in a Docker container, but can't seem to get it to work. I've tried adding the following with no luck:\r\n\r\n```\r\ntorch==1.3.1\r\n> ERROR: Could not find a version that satisfies the requirement torch==1.3.1 (from -r /requirements/./base.txt (line 28))\r\n```\r\n\r\n```\r\ntorch==1.2.0+cpu\r\n> Could not find a version that satisfies the requirement torch==1.2.0+cpu (from -r /requirements/./base.txt (line 28)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)\r\n```\r\n\r\nHow do you add PyTorch to requirements.txt?", "url": "https://github.com/pytorch/pytorch/issues/29745", "state": "closed", "labels": [], "created_at": "2019-11-13T20:12:58Z", "updated_at": "2021-01-19T13:35:28Z", "user": "econti" }, { "repo": "pytorch/xla", "number": 1348, "title": "How to downgrade torch version?", "body": "Hey guys, I'm trying to train my image classification model on multi-cores. I'm using Pytorch-nightly version but the problem is that torch version is 1.4.0a0+be75795, which isn't compatible with my Torchvision version(0.3.0). 
It gives the following error-\r\n\r\n`AttributeError: module 'torch' has no attribute 'gels'`\r\n\r\nThis gels attribute is defined in previous torch versions, so how can I downgrade only the torch version to 1.2.0 once I'm inside the container?\r\n\r\nThanks", "url": "https://github.com/pytorch/xla/issues/1348", "state": "closed", "labels": [ "bug" ], "created_at": "2019-11-13T06:17:52Z", "updated_at": "2019-11-14T00:21:42Z", "user": "ajay960singh" }, { "repo": "pytorch/examples", "number": 660, "title": "how to run resnet on Single node, multiple GPUs", "body": "can i use \"CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python main.py -a resnet50 .......\"", "url": "https://github.com/pytorch/examples/issues/660", "state": "closed", "labels": [], "created_at": "2019-11-12T03:42:46Z", "updated_at": "2019-11-12T03:43:53Z", "user": "gentelyang" }, { "repo": "pytorch/examples", "number": 659, "title": "Do we need average_gradient when we do mutiprocess distributed training?", "body": "In the tutorial, it is said that we need to write `avereage_gradients` to get the average gradient for different process, then we can do `optimizer.step()`, however, in the imagenet example, `avereage_gradients` is not there. Does it means we do not need this function in new version of pytorch for mutiprocess distributed training?(I am using torch 1.3.0)", "url": "https://github.com/pytorch/examples/issues/659", "state": "closed", "labels": [], "created_at": "2019-11-12T00:11:38Z", "updated_at": "2019-11-12T03:43:36Z", "comments": 1, "user": "dzk9528" }, { "repo": "pytorch/pytorch", "number": 29521, "title": "How to perform multi-task regression with pytorch?", "body": "```\r\nimport torch\r\nfrom torch import nn\r\nimport torch.nn.functional as F\r\n\r\nclass mynet(nn.Module):\r\n def __init__(self):\r\n super(mynet, self).__init__()\r\n self.lin1 = nn.Linear(5, 10)\r\n self.lin2 = nn.Linear(10, 3)\r\n self.lin3 = nn.Linear(10, 4)\r\n\r\n def forward(self, x):\r\n x = self.lin1(x)\r\n x1 = self.lin2(x)\r\n x2 = self.lin3(x)\r\n return x1, x2\r\n\r\nif __name__ == '__main__':\r\n x = torch.randn(1000, 5)\r\n y1 = torch.randn(1000, 3)\r\n y2 = torch.randn(1000, 4)\r\n model = mynet()\r\n optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-4)\r\n for epoch in range(100):\r\n model.train()\r\n optimizer.zero_grad()\r\n out1, out2 = model(x)\r\n loss = 0.2 * F.mse_loss(out1, y1) + 0.8 * F.mse_loss(out2, y2)\r\n loss.backward()\r\n optimizer.step()\r\n\r\n```\r\n\r\nAlthough the code above can run,I have a question that if I expect loss=0.2*loss1+0.8*loss2,how can loss be divides into two parts in proportion when backward propagating?", "url": "https://github.com/pytorch/pytorch/issues/29521", "state": "closed", "labels": [], "created_at": "2019-11-10T11:35:47Z", "updated_at": "2019-11-11T03:50:40Z", "user": "thu-wangz17" }, { "repo": "pytorch/pytorch", "number": 29517, "title": "Where is the source code for mathematical operations like specifically torch.mean()?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/29517", "state": "closed", "labels": [], "created_at": "2019-11-10T07:08:09Z", "updated_at": "2019-11-10T08:56:45Z", "user": "C-Weed28" }, { "repo": "pytorch/pytorch", "number": 29441, "title": "error when export to onnx:Auto nesting doesn't know how to process an input object of type maskrcnn_benchmark.structures.image_list.ImageList. Accepted types: Tensors, or lists/tuples of them", "body": "## \u2753 Questions and Help\r\npytorch:1.0.0\r\ncuda:10.0\r\ntorchvision:0.2.1\r\nubuntu:16.04\r\n\r\ni clone the [facebook/maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark), and want to export the model to onnx:\r\n```\r\nx = torch.ones(1, 3, 224, 224, requires_grad=True)\r\ntorch.onnx.export(model, x, \"faster.onnx\", export_params=True)\r\n```\r\nbut it get the error:\r\n```\r\n......\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 487, in __call__\r\n result = self._slow_forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 477, in _slow_forward\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py\", line 357, in forward\r\n return self.module(*inputs[0], **kwargs[0])\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 487, in __call__\r\n result = self._slow_forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 477, in _slow_forward\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py\", line 50, in forward\r\n proposals, proposal_losses = self.rpn(images, features, targets)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 487, in __call__\r\n result = self._slow_forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 464, in _slow_forward\r\n input_vars = tuple(torch.autograd.function._iter_tensors(input))\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/autograd/function.py\", line 284, in _iter\r\n for var in _iter(o):\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/autograd/function.py\", line 293, in _iter\r\n if condition_msg else \"\"))\r\nValueError: Auto nesting doesn't know how to process an input object of type maskrcnn_benchmark.structures.image_list.ImageList. 
Accepted types: Tensors, or lists/tuples of the\r\n```\r\n\r\n\r\n\n\ncc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof", "url": "https://github.com/pytorch/pytorch/issues/29441", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2019-11-08T06:14:52Z", "updated_at": "2021-12-23T01:43:59Z", "user": "zsk423200" }, { "repo": "pytorch/pytorch", "number": 29434, "title": "How to know which whl version can be selected?", "body": "@svenstaro @eklitzke @jfsantos I wan't use pip install torch with cuda10, i know use bash like this: \r\npip3 install https://download.pytorch.org/whl/cu100/torch-1.0.1.post2-cp36-cp36m-linux_x86_64.whl\r\nwhen i choose python vision is cp37, It will be reported wrong: \r\nERROR: torch-1.1.0-cp37-cp37m-linux_x86_64.whl is not a supported wheel on this platform.\r\nso i want know which whl version can be selected?\r\n\n\ncc @ezyang", "url": "https://github.com/pytorch/pytorch/issues/29434", "state": "closed", "labels": [ "module: binaries", "triaged" ], "created_at": "2019-11-08T02:52:04Z", "updated_at": "2019-11-09T05:54:40Z", "user": "moyans" }, { "repo": "pytorch/pytorch", "number": 29422, "title": "How to inference with nn.TransformerDecoder layer", "body": "I am using customized Transformer with nn.TransformerDecoder layer . It seem like nn.TransformerDecoder layer doesn't support inference process(generation/testing), like sending token id one by one with fixed memory generated from nn.TransformerEncoder layer. I am wondering is there a tutorial that I can refer to as I didn't find a tutorial in the official documents. Thank you in advance for your help! \r\n", "url": "https://github.com/pytorch/pytorch/issues/29422", "state": "closed", "labels": [], "created_at": "2019-11-07T23:41:50Z", "updated_at": "2019-11-08T21:47:20Z", "user": "xdwang0726" }, { "repo": "pytorch/vision", "number": 1557, "title": "Can KeypointRCNN also detect objects that do not need to be predicted with keypoints?", "body": "As far as I understand keypoints would be computed for all the box classes (apart from background) in Keypoint-RCNN. I need to do object detection and keypoint prediction at the same time, however keypoints should only be predicted for one class. Does current version support this? \r\n\r\nIf not, I would need to modify some of the code in FasterRCNN and KeypointRCNN as far as I understand but in principle it is possible, right?", "url": "https://github.com/pytorch/vision/issues/1557", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2019-11-06T00:26:08Z", "updated_at": "2019-11-06T09:56:00Z", "user": "anuar12" }, { "repo": "pytorch/examples", "number": 656, "title": "About DCGAN datasets", "body": "May I know what is the dataset URL for fake input in DCGAN example?", "url": "https://github.com/pytorch/examples/issues/656", "state": "closed", "labels": [], "created_at": "2019-11-05T18:51:11Z", "updated_at": "2022-03-09T23:28:56Z", "comments": 1, "user": "mahmoodn" }, { "repo": "pytorch/pytorch", "number": 29190, "title": "How to run two different jit models in two GPUs respectively in one scrip?", "body": "I have an encoder-decoder model. After converted encoder and decoder model into jit models, I want to load encoder on GPU:0 and the encoder outputs **Keys** and **Value**. Then I move the **Keys** and **Values** to GPU:1 since the decoder is loaded on GPU:1. 
\r\n\r\n encoder = torch.jit.load(feat_model).cuda(0)\r\n gru_decoder = torch.jit.load(gru_model, map_location=torch.device(\"cpu\")).cuda(1)\r\n\r\n loader = getLoader(data_path, batch_size)\r\n for data in loader:\r\n audio, label = data\r\n batch_size = audio.size(0)\r\n k, v = encoder(audio.type(\"torch.FloatTensor\").cuda(0))\r\n k = k.cuda(1)\r\n v = v.cuda(1)\r\n hidden = torch.zeros(1, batch_size, 512).type(\"torch.FloatTensor\").cuda(1)\r\n target = torch.tensor(sos_id).repeat(batch_size).cuda(1)\r\n\r\n for step in range(k.size(1)):\r\n probs, hidden = gru_decoder(target, hidden, k, v)\r\n target = torch.argmax(probs, dim=-1)\r\nI have checked that target, hidden, k, v are on GPU:1. However, an error occurs:\r\n\r\n arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorIndex.cu:519:\r\n operation failed in interpreter:\r\n op_version_set = 0\r\n def forward(self,\r\n target: Tensor,\r\n hx: Tensor,\r\n keys: Tensor,\r\n values: Tensor) -> Tuple[Tensor, Tensor]:\r\n input_1 = torch.to(target, dtype=4, layout=0, device=torch.device(\"cuda\"), non_blocking=False, copy=False)\r\n _0 = torch.embedding(self.classifier.embedding.weight, input_1, -1, False, False)\r\n ~~~~~~~~~~~~~~~ <--- HERE\r\n input_2 = torch.unsqueeze(_0, 1)\r\n _1 = [self.classifier.rnn.weight_ih_l0, self.classifier.rnn.weight_hh_l0, self.classifier.rnn.bias_ih_l0, self.classifier.rnn.bias_hh_l0]\r\n querys, _2 = torch.gru(input_2, hx, _1, True, 1, 0., False, False, True)\r\n _3 = torch.matmul(keys, torch.permute(querys, [0, 2, 1]))\r\n input_3 = torch.div(_3, CONSTANTS.c0)\r\n attn_w = torch.softmax(input_3, 1)\r\n sums = torch.matmul(torch.permute(attn_w, [0, 2, 1]), values)\r\n input_4 = torch.view(torch.add(querys, sums, alpha=1), [-1, 512])\r\n input = torch.addmm(self.classifier.h.bias, input_4, torch.t(self.classifier.h.weight), beta=1, alpha=1)\r\n\r\nIt seems that the embedding layer in decoder is on GPU:0. But I have already set decoder on CUDA:1. \r\nDoes anyone have any solutions or ideas? Thanks a lot.\r\n\r\ncc @suo", "url": "https://github.com/pytorch/pytorch/issues/29190", "state": "open", "labels": [ "oncall: jit", "triaged" ], "created_at": "2019-11-05T10:09:34Z", "updated_at": "2020-03-19T06:14:43Z", "user": "lzj9072" }, { "repo": "pytorch/vision", "number": 1553, "title": "Trained Mask RCNN without ground truth bounding boxes ", "body": "Hi all,\r\n\r\nIs acceptable to train mask rcnn without bounding boxes? 
I want to generate only negative samples after RPN model in order to lower false positive cases.", "url": "https://github.com/pytorch/vision/issues/1553", "state": "closed", "labels": [ "question", "module: models", "topic: object detection" ], "created_at": "2019-11-05T03:08:39Z", "updated_at": "2019-11-05T10:48:14Z", "user": "ghost" }, { "repo": "pytorch/vision", "number": 1552, "title": "Best practice to run Mask R-CNN in parallel", "body": "What ist the best practice to run Mask R-CNN in parallel?\r\n\r\n@fmassa wrote in #1255 \r\n\r\n> The current code assumes that you are using 1 GPU per process, with DistributedDataParallel.\r\n\r\nIs this information up-to-date?", "url": "https://github.com/pytorch/vision/issues/1552", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: object detection" ], "created_at": "2019-11-04T15:56:36Z", "updated_at": "2019-11-05T10:42:50Z", "user": "maxfrei750" }, { "repo": "pytorch/QNNPACK", "number": 68, "title": "How to build dependencies separately", "body": "I'm trying to add a package for QNNPACK to the [Spack package manager](https://spack.io). I see that QNNPACK downloads its own dependencies, and that this can be avoided by setting `*_SOURCE_DIR` via cmake. Is there a way to point to an existing external installation instead of a source directory so that Spack doesn't need to rebuild all of these dependencies? Spack is designed to work on air-gapped supercomputers that don't have internet access, so I can't have it download anything at build time.", "url": "https://github.com/pytorch/QNNPACK/issues/68", "state": "open", "labels": [], "created_at": "2019-11-01T22:08:50Z", "updated_at": "2019-11-01T22:08:50Z", "user": "adamjstewart" }, { "repo": "pytorch/examples", "number": 653, "title": "What is the meaning of transforms.Normalize((0.1307,), (0.3081,)) in mnist", "body": "In mnist/main.py, when reading the dataset using DataLoader, there is a line:\r\n\r\n`transforms.Normalize((0.1307,), (0.3081,))`\r\n\r\ncan any one explain its meaning? I know that it tries to normalize the data, but why there are two parameters and where do those 0.1307 and 0.3081 come from?", "url": "https://github.com/pytorch/examples/issues/653", "state": "closed", "labels": [], "created_at": "2019-11-01T15:41:13Z", "updated_at": "2024-07-30T12:09:26Z", "user": "copyrightly" }, { "repo": "pytorch/examples", "number": 652, "title": "why not divide by batch size ?", "body": "https://github.com/pytorch/examples/blob/4e00723456160d910092aae567a0b8daf66c49ec/vae/main.py#L82\r\n\r\nI think finally loss should be **(BCE+KLD) / batch_size** , is right?", "url": "https://github.com/pytorch/examples/issues/652", "state": "closed", "labels": [], "created_at": "2019-11-01T08:56:53Z", "updated_at": "2022-03-09T23:26:30Z", "comments": 2, "user": "Johnson-yue" }, { "repo": "pytorch/xla", "number": 1280, "title": "machine translation validation fails with multi-process", "body": "## \u2753 Questions and Help\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. create an instance using the latest torch-xla\r\n```bash\r\nexport PROJECT_NAME=xxx\r\ngcloud config set project ${PROJECT_NAME}\r\ngcloud compute --project=${PROJECT_NAME} instances create instance-1 \\\r\n--zone=europe-west4-a \\\r\n--machine-type=n1-standard-8 \\\r\n--image=debian-9-torch-xla-v20191026 \\\r\n--image-project=ml-images \\\r\n--boot-disk-size=200GB\r\n```\r\n2. conda activate `torch-xla-nightly`\r\n3. 
run machine translation scirpt following https://cloud.google.com/tpu/docs/tutorials/transformer-pytorch in tpu branch of fairseq-tpu (https://github.com/pytorch-tpu/fairseq/tree/tpu) as\r\n```bash\r\ngcloud compute tpus create transformer-pytorch-tutorial \\\r\n--zone=europe-west4-a \\\r\n--network=default \\\r\n--range=10.2.3.0 \\\r\n--version=pytorch-nightly \\\r\n--accelerator-type=v3-8\r\n\r\nexport TPU_IP_ADDRESS=ip-address; \\\r\nexport XRT_TPU_CONFIG=\"tpu_worker;0;$TPU_IP_ADDRESS:8470\";\r\n\r\npython train.py \\\r\n $HOME/pytorch-tutorial-data/wmt18_en_de_bpej32k \\\r\n --save-interval=1 \\\r\n --arch=transformer_vaswani_wmt_en_de_big \\\r\n --max-target-positions=64 \\\r\n --attention-dropout=0.1 \\\r\n --no-progress-bar \\\r\n --criterion=label_smoothed_cross_entropy \\\r\n --source-lang=en \\\r\n --lr-scheduler=inverse_sqrt \\\r\n --min-lr 1e-09 \\\r\n --skip-invalid-size-inputs-valid-test \\\r\n --target-lang=de \\\r\n --label-smoothing=0.1 \\\r\n --update-freq=1 \\\r\n --optimizer adam \\\r\n --adam-betas '(0.9, 0.98)' \\\r\n --warmup-init-lr 1e-07 \\\r\n --lr 0.0005 \\\r\n --warmup-updates 4000 \\\r\n --share-all-embeddings \\\r\n --dropout 0.3 \\\r\n --weight-decay 0.0 \\\r\n --valid-subset=valid \\\r\n --max-epoch=25 \\\r\n --input_shapes 128x64 \\\r\n --num_cores=8 \\\r\n --metrics_debug \\\r\n --log_steps=100\r\n```\r\n\r\nAfter the first epoch during validation, it reports \r\n`/anaconda3/envs/torch-xla-nightly/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\r\n len(cache))` and then crushes. There is no checkpoint saved, too.\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\nIt crushes with the SIGKILL from multiprocessing:\r\n```base\r\nTraceback (most recent call last):\r\n File \"train.py\", line 632, in <module>\r\n cli_main()\r\n File \"train.py\", line 623, in cli_main\r\n xmp.spawn(_mp_fn, args=(args,), nprocs=args.num_cores)\r\n File \"/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 154, in spawn\r\n _start_fn, args=(fn, args), nprocs=nprocs, join=join, daemon=daemon)\r\n File \"/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\", line 171, in spawn\r\n while not spawn_context.join():\r\n File \"/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\", line 107, in join\r\n (error_index, name)\r\nException: process 0 terminated with signal SIGKILL\r\n```\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\n - reproducible on XLA backend [CPU/TPU]: TPU\r\n - torch_xla version: torch-xla-nightly (v1026)\r\n - Any other relevant information:", "url": "https://github.com/pytorch/xla/issues/1280", "state": "closed", "labels": [ "question" ], "created_at": "2019-10-31T20:24:20Z", "updated_at": "2021-05-22T04:59:05Z", "user": "sIncerass" }, { "repo": "pytorch/vision", "number": 1538, "title": "Problem train/finetuning segmentation (fcn_resnet101) on voc data", "body": "Hi\r\nThanks for a great api. \r\n\r\nI am trying to train/finetuning the trained fcn_resnet101 trained on coco dataset, but it seems like after 1. epoch it is way worse on voc data, than it is before. \r\n\r\nIf i test the already trained fcn_resnet101 on the voc data i get mean IoU: 73.3. 
\r\n\r\nThen I train the fcn_resnet101 on the VOC data, and after the first epoch I get mean IoU: 3.8.\r\nWhy is it so much worse after training for one epoch?\r\nIt seems like I am not using the pretrained network for training.\r\n\r\nThe command line I am using for fine-tuning the network is:\r\npython3 -m torch.distributed.launch --use_env train.py --lr 0.02 --dataset voc -b 2 --model fcn_resnet101 --aux-loss --pretrained\r\n\r\nAnd for testing I use:\r\npython3 -m torch.distributed.launch --use_env train.py --lr 0.02 --dataset voc -b 2 --model fcn_resnet101 --aux-loss --pretrained --test-only\r\n\r\nHas anybody experienced the same?\r\n\r\nI hope somebody can help.\r\n\r\nAfter this I want to train on my own dataset.\r\n", "url": "https://github.com/pytorch/vision/issues/1538", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: semantic segmentation" ], "created_at": "2019-10-30T15:10:32Z", "updated_at": "2019-11-01T08:37:40Z", "user": "Denlar2" }, { "repo": "pytorch/examples", "number": 650, "title": "This project fast-neural-style takes too long, how to solve?", "body": "1. 32346.619 ms\r\n2. 12375.127 ms", "url": "https://github.com/pytorch/examples/issues/650", "state": "closed", "labels": [], "created_at": "2019-10-30T10:56:34Z", "updated_at": "2022-03-09T23:24:07Z", "user": "tcxia" }, { "repo": "pytorch/xla", "number": 1262, "title": "How to share weights memory while running big models", "body": "## ❓ Questions and Help\r\n\r\nHello, I use the pytorch-xla multiprocessing approach to train my GPT-2 model from `huggingface-transformers`. When training from pretrained weights, the model is, however, loaded multiple times, which increases the need for host memory. While for GPT2-small this is not a problem, GPT2-large can fill up to 80 GB of RAM when loaded by all processes. What is the suggested way to share host memory for model weights while running multiprocessing? 
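(What I imagine, purely as a sketch and without knowing whether it interacts correctly with `xmp.spawn`, is loading the model once in the parent process and moving its parameters into shared memory before the workers start:)\r\n\r\n```python\r\nfrom transformers import GPT2LMHeadModel\r\n\r\n# Load once in the parent and place the weights in shared memory, so that\r\n# forked workers map the same pages instead of each holding a private copy.\r\nshared_model = GPT2LMHeadModel.from_pretrained('gpt2-large')\r\nshared_model.share_memory()  # in-place: moves parameter storages to shared memory\r\n```\r\n\r\n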
Is this possible at all?\r\n\r\nThe part of the code, run by each worker process, that causes the problem:\r\n```\r\n model_class = GPT2LMHeadModel\r\n model = model_class.from_pretrained(\"gpt2\")\r\n model.to(device).train()\r\n```", "url": "https://github.com/pytorch/xla/issues/1262", "state": "closed", "labels": [ "question" ], "created_at": "2019-10-30T09:59:28Z", "updated_at": "2020-02-27T18:44:54Z", "user": "Glorf" }, { "repo": "pytorch/pytorch", "number": 28868, "title": "How to build caffe2 with ONNX opset version greater than 9?", "body": "## ❓ Questions and Help\r\n\r\nHello,\r\nI'm currently working with the freshly merged feature pytorch/vision#1401 and wasn't able to find a way to make Caffe2 work with ONNX operation set 10.\r\n\r\nIs there a way to build Caffe2 from source with this opset?\r\n", "url": "https://github.com/pytorch/pytorch/issues/28868", "state": "closed", "labels": [], "created_at": "2019-10-30T09:51:52Z", "updated_at": "2019-10-31T00:50:02Z", "user": "zetyquickly" }, { "repo": "pytorch/vision", "number": 1534, "title": "The output of features is 512*7*7, so why do we still need AdaptiveAvgPool2d here to make the output dimension 7*7?", "body": "https://github.com/pytorch/vision/blob/13b35ffaa5167f3713ea7a53c43395d90b3a7cbc/torchvision/models/vgg.py#L44", "url": "https://github.com/pytorch/vision/issues/1534", "state": "closed", "labels": [ "question", "module: models", "topic: classification" ], "created_at": "2019-10-30T02:44:27Z", "updated_at": "2019-10-30T10:04:25Z", "user": "shenlinyao" }, { "repo": "pytorch/xla", "number": 1260, "title": "Is tensorboard visualization of computation graphs supported?", "body": "Hi. I would like to know whether it is possible to dump a tensorboard visualization of the structure of the computation graph and the TPU compatibility graph for debugging purposes.\r\n[reference](https://cloud.google.com/tpu/docs/cloud-tpu-tools#profile_tab) This can be done in TF by setting the \"model_dir\" attribute of the tf.estimator API.\r\n", "url": "https://github.com/pytorch/xla/issues/1260", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2019-10-30T02:18:09Z", "updated_at": "2019-12-13T07:44:20Z", "user": "20171130" }, { "repo": "pytorch/android-demo-app", "number": 24, "title": "Loading my trained model from Java: the program gets stuck at the \u201cmodule.forward()\u201d step and never returns. How can I fix this?", "body": "", "url": "https://github.com/pytorch/android-demo-app/issues/24", "state": "closed", "labels": [], "created_at": "2019-10-28T10:23:04Z", "updated_at": "2019-11-20T23:35:59Z", "user": "niushaoda" }, { "repo": "pytorch/pytorch", "number": 28778, "title": "Android quantized model (MobileNetV2): the first forward pass is very slow but the second is much faster; why, and how can it be fixed?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\n\ncc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a", "url": "https://github.com/pytorch/pytorch/issues/28778", "state": "closed", "labels": [ "oncall: quantization", "triaged" ], "created_at": "2019-10-28T04:42:36Z", "updated_at": "2019-10-29T05:53:17Z", "user": "hexiangquan" }, { "repo": "pytorch/pytorch", "number": 28776, "title": "How to use torch.quantization.get_observer_dict(mod, target_dict, prefix='')", "body": "## \u2753 How to use torch.quantization.get_observer_dict(mod, target_dict, prefix='') to get the observer dict\r\n\r\nCan you provide an example for this usage? Thanks a lot!\r\n\n\ncc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a", "url": "https://github.com/pytorch/pytorch/issues/28776", "state": "closed", "labels": [ "oncall: quantization", "triaged" ], "created_at": "2019-10-28T04:13:45Z", "updated_at": "2019-10-29T01:40:28Z", "user": "vippeterhou" }, { "repo": "pytorch/pytorch", "number": 28771, "title": "why the data-type of output is quint8 in static quantize? what static quantize does under the hood?", "body": "Here is a example of static quantize,My python is version 3.7 and torch is 1.3.:\r\n`\r\nimport torch\r\nimport torch.nn as nn\r\nm = nn.quantized.Linear(20,30)\r\ninput = torch.randn(128.20)\r\ninput = torch.quantize_per_tensor(input,1.0,0,torch.quint8)\r\noutput = m(input)\r\nprint (output.dtype)\r\n`\r\nI feel confused why the data-type of output is quint8 rather than float or int-32 in static quantize\uff0cit may will cause accuracy loss when model 'm' is joint with a soft-max layer at last ?what static quantize does underlying?\r\n\n\ncc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a @pytorch/quantization", "url": "https://github.com/pytorch/pytorch/issues/28771", "state": "closed", "labels": [ "oncall: quantization", "triaged" ], "created_at": "2019-10-28T02:11:47Z", "updated_at": "2020-04-15T01:02:31Z", "user": "litaozijin" }, { "repo": "pytorch/examples", "number": 648, "title": "C++ MNIST without CUDA", "body": "Hi\r\n\r\nFollowing instructions for MNIST in C++ I get this after make:\r\n\r\n```\r\n-- The C compiler identification is GNU 4.8.5\r\n-- The CXX compiler identification is GNU 4.8.5\r\n-- Check for working C compiler: /usr/bin/cc\r\n-- Check for working C compiler: /usr/bin/cc -- works\r\n-- Detecting C compiler ABI info\r\n-- Detecting C compiler ABI info - done\r\n-- Detecting C compile features\r\n-- Detecting C compile features - done\r\n-- Check for working CXX compiler: /usr/bin/c++\r\n-- Check for working CXX compiler: /usr/bin/c++ -- works\r\n-- Detecting CXX compiler ABI info\r\n-- Detecting CXX compiler ABI info - done\r\n-- Detecting CXX compile features\r\n-- Detecting CXX compile features - done\r\n-- Looking for pthread.h\r\n-- Looking for pthread.h - found\r\n-- Looking for pthread_create\r\n-- Looking for pthread_create - not found\r\n-- Looking for pthread_create in pthreads\r\n-- Looking for pthread_create in pthreads - not found\r\n-- Looking for pthread_create in pthread\r\n-- Looking for pthread_create in pthread - found\r\n-- Found Threads: TRUE \r\nCUDA_TOOLKIT_ROOT_DIR not found or specified\r\n-- Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY) \r\n\r\nCMake Error at (my path to)/libtorch/share/cmake/Caffe2/Caffe2Config.cmake:90 (message):\r\n Your installed Caffe2 version uses CUDA but I cannot 
find the CUDA\r\n libraries. Please set the proper CUDA prefixes and / or install CUDA.\r\n```\r\n\r\nI was wondering if there is a way to run this without CUDA. \r\nThanks", "url": "https://github.com/pytorch/examples/issues/648", "state": "open", "labels": [ "c++" ], "created_at": "2019-10-27T22:44:15Z", "updated_at": "2022-03-09T20:49:35Z", "comments": 0, "user": "maziar840" }, { "repo": "pytorch/text", "number": 629, "title": "How to use custom-built Torchtext vocabulary with the HuggingFace TransfoXLLMHeadModel?", "body": "Hello,\r\n\r\nI am trying to use my custom built vocabulary which I defined using Torchtext functions with the HuggingFace TransfoXLLMHeadModel, and I am having some troubles with it.\r\nI defined my text field as below:\r\n```js\r\n\r\n# Import packages \r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nfrom transformers import TransfoXLConfig, TransfoXLTokenizer, TransfoXLLMHeadModel\r\nfrom transformers import AdamW, WarmupLinearSchedule\r\nimport spacy\r\nimport torchtext\r\nfrom torchtext.data.utils import get_tokenizer\r\nfrom torchtext.data import Field, BPTTIterator, TabularDataset \r\nimport tensorflow as tf\r\n#import lineflow as lf\r\n#import lineflow.datasets as lfds\r\nimport math\r\nimport random\r\nimport numpy as np\r\nimport pandas as pd \r\nimport time\r\n\r\n# define tokenizer\r\nen = spacy.load('en')\r\n\r\ndef Sp_Tokenizer(text): \r\n return [tok.text for tok in en.tokenizer(text)]\r\n\r\n# define the English text field\r\nTEXT = Field(tokenize = Sp_Tokenizer,\r\n init_token='< sos >',\r\n eos_token='< eos >',\r\n unk_token='< unk >',\r\n tokenizer_language='en',\r\n lower=True)\r\n\r\n# load WikiText-2 dataset and split it into train and test set\r\ntrain_Wiki2, val_Wiki2, test_Wiki2 = torchtext.datasets.WikiText2.splits(TEXT)\r\ntrain_Wiki103, val_Wiki103, test_Wiki103 = torchtext.datasets.WikiText103.splits(TEXT)\r\ntrain_Penn, val_Penn, test_Penn = torchtext.datasets.PennTreebank.splits(TEXT)\r\n\r\n# build custom vocabulary based on the field that we just defined.\r\nTEXT.build_vocab(train_Wiki2, val_Wiki2, test_Wiki2, \r\n train_Wiki103, val_Wiki103, test_Wiki103,\r\n train_Penn, val_Penn, test_Penn)\r\n```\r\nand then I defined the HuggingFace transformer's configuration as below:\r\n```js\r\n\r\n# set hyperparameter ntokens\r\nntokens = len(TEXT.vocab.stoi)\r\n\r\n# define transformer-XL configuration.\r\ntransfoXLconfig = TransfoXLConfig(vocab_size_or_config_json_file = ntokens,\r\n cutoffs = [20000, 40000, 200000], \r\n d_model = 64, \r\n d_embed = 64, \r\n n_head = 16, \r\n d_head = 64,\r\n n_layer = 5,\r\n attn_type = 0,\r\n dropout = 0.1, \r\n output_hidden_states = True,\r\n output_attentions = True)\r\n\r\n# define the transformer-XL model based on the specified configuration.\r\nmodel = TransfoXLLMHeadModel(transfoXLconfig)\r\n\r\n# add new tokens to the embeddings of our model\r\nmodel.resize_token_embeddings(ntokens)\r\n```\r\nand then I want to somehow specify that I want to use my `TEXT.vocab` that I defined earlier via Torchtext for my vocabulary along with the TransfoXLLMHeadModel, but I am not sure how to do this. Can someone help me on this? Thank you!\r\n", "url": "https://github.com/pytorch/text/issues/629", "state": "closed", "labels": [], "created_at": "2019-10-27T09:02:13Z", "updated_at": "2019-11-01T15:21:23Z", "user": "h56cho" }, { "repo": "pytorch/android-demo-app", "number": 23, "title": "Does \"pth\" model need to convert \"pt\"? 
and how to convert", "body": "", "url": "https://github.com/pytorch/android-demo-app/issues/23", "state": "open", "labels": [], "created_at": "2019-10-25T09:52:27Z", "updated_at": "2020-08-25T03:50:08Z", "user": "niushaoda" }, { "repo": "pytorch/vision", "number": 1523, "title": "Unable to pass `extensions` when creating custom `Kinetics400` Video Dataset", "body": "Thank you for the video support!\r\n\r\nWhen imported using `from torchvision.datasets.kinetics import *`, the `Kinetics400` class doesn't accept an `extensions` argument:\r\n```python\r\ndata = Kinetics400(root=data_path, frames_per_clip=32, extensions=('.mp4',))\r\n\r\n\r\n\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-4-904c8992e847> in <module>\r\n----> 1 data = Kinetics400(root=data_path, frames_per_clip=32, extensions=('.mp4',))\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'extensions'\r\n```\r\n\r\nHowever, if I copy-paste the code from `kinetics.py` in my script/notebook, I have the option to pass in an `extensions` argument and it works fine for a dataset with, say, `.mp4` videos. \r\n\r\nWhy does this happen?\r\n\r\nI'm using `python 3.7+`, torch `1.3.0` and `torchvision 0.4.1`\r\n\r\n\r\n", "url": "https://github.com/pytorch/vision/issues/1523", "state": "closed", "labels": [ "question", "module: datasets", "module: video" ], "created_at": "2019-10-25T07:28:37Z", "updated_at": "2019-10-25T09:48:03Z", "user": "rsomani95" }, { "repo": "pytorch/examples", "number": 645, "title": "Add Siamese Network example", "body": "Hi, I want to add an example for Siamese network, since it is one of the popular use cases in ML. I am thinking of implementing it in a way similar to other examples viz. command line arguments to choose which dataset to train, hyperparameters etc.\r\nIs there something I need to keep in mind specifically apart from these:\r\n- Use torchvision's Dataset class and PyTorch's DataLoader class to handle data.\r\n- Implement a simple CNN as a nn.Module subclass\r\n- Implement triplet loss\r\n- Create train and test functions and a main function that calls those 2 methods at each epoch.\r\n- Report final loss and accuracy\r\n\r\nIs this something that is worth adding to the repository. ", "url": "https://github.com/pytorch/examples/issues/645", "state": "open", "labels": [ "good first issue" ], "created_at": "2019-10-24T11:08:50Z", "updated_at": "2022-05-13T18:17:30Z", "comments": 4, "user": "piyush01123" }, { "repo": "pytorch/vision", "number": 1521, "title": "per class mAP in coco_eval script?", "body": "Hi,\r\nI was looking around the eval code and did not find function to calculate **per class mAP**? Is there an easy work around to include that. Thanks. 
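(The direction I had in mind, assuming pycocotools keeps the accumulated precisions in `coco_eval.eval['precision']` with axes [iou, recall, class, area, max_dets], is roughly:)\r\n\r\n```python\r\nimport numpy as np\r\n\r\n# coco_eval is the pycocotools COCOeval object after accumulate() has run.\r\nprecisions = coco_eval.eval['precision']\r\nfor k, cat_id in enumerate(coco_eval.params.catIds):\r\n    # area range 'all' (index 0), largest maxDets (index -1); -1 marks missing GT\r\n    p = precisions[:, :, k, 0, -1]\r\n    ap = np.mean(p[p > -1]) if (p > -1).any() else float('nan')\r\n    print(cat_id, ap)\r\n```\r\n\r\n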
@fmassa ", "url": "https://github.com/pytorch/vision/issues/1521", "state": "closed", "labels": [ "question", "module: reference scripts", "topic: object detection" ], "created_at": "2019-10-23T15:58:28Z", "updated_at": "2019-10-25T14:43:13Z", "user": "manoja328" }, { "repo": "pytorch/vision", "number": 1520, "title": "DeepLabV3: segment only person", "body": "How can I segment person only and skip the other classes by using DeepLabV3?", "url": "https://github.com/pytorch/vision/issues/1520", "state": "closed", "labels": [ "question", "topic: semantic segmentation" ], "created_at": "2019-10-23T10:22:46Z", "updated_at": "2020-01-13T17:37:27Z", "user": "muna-cs" }, { "repo": "pytorch/pytorch", "number": 28478, "title": "How to train a torch::jit::script::Module?", "body": "Existing documentation / tutorials show only how to train a `torch::nn::Module` https://pytorch.org/cppdocs/frontend.html#end-to-end-example\r\n\r\nI have attempted to make a training loop in the following manner\r\n```\r\n#include <torch/script.h>\r\n#include <torch/torch.h>\r\n#include <iostream>\r\n#include <vector>\r\n// custom loader code\r\n#include \"nets/nets.h\"\r\n#include \"util/runfiles.h\"\r\n\r\nint main(int argc, char** argv) {\r\n std::cout << \"Nets example\" << std::endl;\r\n\r\n // Custom code that loads the module on CUDA\r\n auto runfiles = MakeRunfiles(argv[0]);\r\n torch::jit::script::Module script_module = LoadSegnetBackbone(*runfiles);\r\n script_module.train();\r\n std::cout << \"Loaded script module\" << std::endl;\r\n\r\n // Pull parameters out of the script module so we can push them into the\r\n // optimizer.\r\n std::vector<at::Tensor> parameters;\r\n for (const auto& parameter : script_module.get_parameters()) {\r\n parameters.push_back(parameter.value().toTensor());\r\n }\r\n torch::optim::SGD optimizer(std::move(parameters), /*lr=*/0.01);\r\n\r\n constexpr int kBatchSize = 1;\r\n for (int epoch = 1; epoch <= 1000; ++epoch) {\r\n optimizer.zero_grad();\r\n\r\n // The input is a (kBatchSize,3,300,300) tensor filled with ones\r\n at::Tensor input = torch::ones({kBatchSize, /*channels (rgb) =*/3,\r\n /*height=*/300, /*width=*/300})\r\n .to(at::kFloat)\r\n .to(at::kCUDA);\r\n\r\n // Push the input through the script module\r\n std::vector<torch::jit::IValue> inputs;\r\n inputs.push_back(input);\r\n at::Tensor script_module_forward = script_module.forward(inputs).toTensor();\r\n // The result is an output tensor of size (kBatchSize, 32, 300, 300)\r\n\r\n // ground truth is a (kBatchSize, 300, 300) tensor filled with ones\r\n at::Tensor ground_truth =\r\n torch::ones({kBatchSize, /*height=*/300, /*width=*/300})\r\n .to(at::kLong)\r\n .to(at::kCUDA);\r\n\r\n at::Tensor loss = torch::nll_loss2d(\r\n torch::log_softmax(script_module_forward, /*dim=*/1), ground_truth);\r\n loss.backward();\r\n optimizer.step();\r\n\r\n if (epoch % 50 == 0) {\r\n std::cout << \"Loss was \" << loss.item<float>() << std::endl;\r\n }\r\n }\r\n}\r\n```\r\n\r\nbut the loss never changes. I have also posted about this on the pytorch forums. 
https://discuss.pytorch.org/t/jit-module-parameters-are-not-updating-when-training/58945 \r\n\r\ncc @suo @yf225", "url": "https://github.com/pytorch/pytorch/issues/28478", "state": "closed", "labels": [ "oncall: jit", "module: cpp", "triaged" ], "created_at": "2019-10-22T23:36:22Z", "updated_at": "2022-01-20T22:41:16Z", "user": "markisus" }, { "repo": "pytorch/text", "number": 622, "title": "How to integrate HuggingFace transformers with Torchtext BPTTIterator?", "body": "## \u2753 Questions and Help\r\n\r\nHello,\r\n\r\nI am trying to use the pretrained tokenizer from the HuggingFace Transformer-XL when training my custom transformer-XL model on WikiText2, and I am having a trouble making the BPTTIterator from the Torchtext to work.\r\n\r\nBelow are my code:\r\n\r\n```js\r\n# Import packages \r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nfrom transformers import AdamW, WarmupLinearSchedule\r\nfrom transformers import TransfoXLConfig, TransfoXLTokenizer, TransfoXLModel, TransfoXLLMHeadModel \r\nimport torchtext\r\nimport torchtext.data.utils \r\nfrom torchtext.data import Field, BPTTIterator\r\nimport lineflow as lf\r\nimport lineflow.datasets as lfds\r\nimport math\r\nimport random\r\nimport numpy as np\r\nimport pandas as pd \r\nimport time\r\n\r\n# set hyperparameters for this experiment\r\nbptt = 30\r\nbatch_size = 64\r\nlr = 0.01 # learning rate\r\n\r\n# load the pretrained tokenizer\r\ntokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103', do_lower_case=True)\r\n\r\n# for huggingface - torchtext integration\r\ntokenizer.mask_token = 'maskTok'\r\ntokenizer.pad_token = '<pad>'\r\ntokenizer.eos_token = '<eos>'\r\ntokenizer.unk_token = '<unk>'\r\ntokenizer.bos_token = '<sos>'\r\n\r\npad_index = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)\r\neos_index = tokenizer.convert_tokens_to_ids(tokenizer.eos_token)\r\nunk_index = tokenizer.convert_tokens_to_ids(tokenizer.unk_token)\r\nmask_index = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)\r\nbos_index = tokenizer.convert_tokens_to_ids(tokenizer.bos_token)\r\n\r\n# for huggingface - torchtext integration\r\ntokenizer.mask_token = 'maskTok'\r\ntokenizer.pad_token = '<pad>'\r\ntokenizer.eos_token = '<eos>'\r\ntokenizer.unk_token = '<unk>'\r\ntokenizer.bos_token = '<sos>'\r\n\r\npad_index = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)\r\neos_index = tokenizer.convert_tokens_to_ids(tokenizer.eos_token)\r\nunk_index = tokenizer.convert_tokens_to_ids(tokenizer.unk_token)\r\nmask_index = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)\r\nbos_index = tokenizer.convert_tokens_to_ids(tokenizer.bos_token)\r\n\r\n# load WikiText-2 dataset and split it into train and test set\r\ntrain_Wiki2, val_Wiki2, test_Wiki2 = torchtext.datasets.WikiText2.splits(TEXT)\r\n \r\n\r\n# extract total number of tokens in the vocabulary\r\nntokens = tokenizer.vocab_size\r\n\r\n# define transformer-XL configuration.\r\ntransfoXLconfig = TransfoXLConfig(vocab_size_or_config_json_file = ntokens,\r\n cutoffs = [20000, 40000, 200000], \r\n d_model = 1024, \r\n d_embed = 1024, \r\n n_head = 16, \r\n d_head = 64,\r\n n_layer = 5,\r\n dropout = 0.1,\r\n attn_type = 0,\r\n output_hidden_states = True,\r\n output_attentions = True)\r\n\r\nmodel = TransfoXLLMHeadModel(config = transfoXLconfig)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n\r\ntrain_iter, test_iter = BPTTIterator.splits(\r\n (train_Wiki2, test_Wiki2),\r\n batch_size = batch_size,\r\n bptt_len= bptt,\r\n shuffle = False,\r\n 
repeat=False)\r\n\r\n# error occurs here; the error message is:\r\n# File \"/Users/jin-dominique/anaconda3/lib/python3.7/site-packages/torchtext/data/field.py\", \r\n# line 359, in numericalize\r\n# var = torch.tensor(arr, dtype=self.dtype, device=device)\r\n# \"TypeError: an integer is required (got type str)\"\r\ntrain = next(iter(train_iter))\r\ntest = next(iter(test_iter))\r\n```\r\n\r\nHow can I fix this error?\r\n\r\nThank you,", "url": "https://github.com/pytorch/text/issues/622", "state": "open", "labels": [], "created_at": "2019-10-21T17:27:46Z", "updated_at": "2020-07-18T19:13:42Z", "user": "h56cho" }, { "repo": "pytorch/ios-demo-app", "number": 3, "title": "Add example of how to optimize model for mobile inference", "body": "This demo is great and works fine although it would be great to have an example of how to prepare model for mobile inference cause it's non trivial. For example you can add the receipt of how you've prepare the `mobilenet_quantized.pt`.\r\n(Personally i've tried to convert my model to `float16` (it didn't work: model didn't load on mobile), also i've tried `torch.quantization.quantize` and it also didn't work.\r\nTnx!", "url": "https://github.com/pytorch/ios-demo-app/issues/3", "state": "closed", "labels": [], "created_at": "2019-10-19T14:53:57Z", "updated_at": "2020-03-11T17:59:13Z", "user": "mirth" }, { "repo": "pytorch/pytorch", "number": 28331, "title": "How to save quantized model in PyTorch1.3 with quantization information", "body": "## \u2753 How to save the quantized model in PyTorch1.3 with quantization information\r\nIs there any way to save the quantized model in PyTorch1.3, which keeps the original information remaining? \r\n\r\nI have known that I can save it after tracing it by:\r\n```python\r\n# Save\r\ntorch.jit.save(torch.jit.script(self.model_q), \"quant_model.pth\")\r\n# Load\r\nmq = torch.jit.load(\"quant_model.pth\")\r\n```\r\nAlthough `mq` has the right result, it, **however**, losts the quantized information, such as module(layer) name, zero point, scale, etc.\r\n\n\ncc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100", "url": "https://github.com/pytorch/pytorch/issues/28331", "state": "closed", "labels": [ "oncall: quantization", "triaged" ], "created_at": "2019-10-19T07:55:01Z", "updated_at": "2019-10-23T17:08:14Z", "user": "vippeterhou" }, { "repo": "pytorch/examples", "number": 643, "title": "How to run dcgan example?", "body": "I want to run `dcgan` example, however, the readme is not very clear.\r\nI have downloaded classroom model from lsun as below\r\n\r\n```\r\n$ ls classroom_train_lmdb -lh\r\ntotal 3.5G\r\n-rw-r--r-- 1 mahmood mahmood 3.5G May 1 2015 data.mdb\r\n-rw-r--r-- 1 mahmood mahmood 63K May 1 2015 lock.mdb\r\n$ ls classroom_val_lmdb -lh\r\ntotal 6.5M\r\n-rw-r--r-- 1 mahmood mahmood 6.4M May 1 2015 data.mdb\r\n-rw-r--r-- 1 mahmood mahmood 63K May 1 2015 lock.mdb\r\n```\r\n\r\nNow, the command is `python main.py --dataset lsun --dataroot XXX`. \r\nWhat is XXX exactly? 
Is it the root folder that contains `classroom_val_lmdb/` and `classroom_train_lmdb/`?", "url": "https://github.com/pytorch/examples/issues/643", "state": "closed", "labels": [], "created_at": "2019-10-18T07:24:45Z", "updated_at": "2022-03-09T23:35:07Z", "user": "mahmoodn" }, { "repo": "pytorch/tutorials", "number": 705, "title": "Where is the demo dataset and model files in (EXPERIMENTAL) STATIC QUANTIZATION WITH EAGER MODE IN PYTORCH ", "body": "I'm trying to run the codes in [(EXPERIMENTAL) STATIC QUANTIZATION WITH EAGER MODE IN PYTORCH](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#experimental-static-quantization-with-eager-mode-in-pytorch), but there are no dataset and model files available, such as **imagenet_1k, mobilenet_quantization.pth** and so on. \r\nSo anyone can provide the address of the necessary files and dataset in this tutorial?", "url": "https://github.com/pytorch/tutorials/issues/705", "state": "closed", "labels": [], "created_at": "2019-10-18T01:38:06Z", "updated_at": "2019-10-27T08:14:15Z", "user": "Aspirinkb" }, { "repo": "pytorch/pytorch", "number": 28202, "title": "How to quantize resnet in pytorch 1.3?", "body": "I tried to quantize resnet18 refer to https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html\r\n\r\nbut I got this error\r\n```\r\n>>> from torchvision.models import resnet18\r\n>>> net= resnet18()\r\n>>> from torch.quantization import quantize_dynamic\r\n>>> qnet = quantize_dynamic(net,{nn.Conv2d,nn.Linear},dtype=torch.qint8)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"E:\\Program Files\\Anaconda3\\envs\\torch\\lib\\site-packages\\torch\\quantization\\quantize.py\", line 241, in quantize_dynamic\r\n convert(model, mapping, inplace=True)\r\n File \"E:\\Program Files\\Anaconda3\\envs\\torch\\lib\\site-packages\\torch\\quantization\\quantize.py\", line 294, in convert\r\n reassign[name] = swap_module(mod, mapping)\r\n File \"E:\\Program Files\\Anaconda3\\envs\\torch\\lib\\site-packages\\torch\\quantization\\quantize.py\", line 316, in swap_module\r\n new_mod = mapping[type(mod)].from_float(mod)\r\n File \"E:\\Program Files\\Anaconda3\\envs\\torch\\lib\\site-packages\\torch\\nn\\quantized\\dynamic\\modules\\linear.py\", line 70, in from_float\r\n qlinear = Linear(mod.in_features, mod.out_features)\r\n File \"E:\\Program Files\\Anaconda3\\envs\\torch\\lib\\site-packages\\torch\\nn\\quantized\\dynamic\\modules\\linear.py\", line 33, in __init__\r\n super(Linear, self).__init__(in_features, out_features, bias_)\r\n File \"E:\\Program Files\\Anaconda3\\envs\\torch\\lib\\site-packages\\torch\\nn\\quantized\\modules\\linear.py\", line 119, in __init__\r\n self.set_weight_bias(qweight, bias)\r\n File \"E:\\Program Files\\Anaconda3\\envs\\torch\\lib\\site-packages\\torch\\nn\\quantized\\modules\\linear.py\", line 208, in set_weight_bias\r\n self._packed_params = torch.ops.quantized.linear_prepack(w, b)\r\nRuntimeError: Didn't find engine for operation quantized::linear_prepack NoQEngine (operator () at ..\\aten\\src\\ATen\\native\\quantized\\cpu\\qlinear_prepack.cpp:202)\r\n(no backtrace available)\r\n```\r\n\r\nHow can I solve it?\r\n\r\nmy environment:\r\n\r\ntorch1.3.0+cpu \r\nwindows 7\r\npython3.6.6", "url": "https://github.com/pytorch/pytorch/issues/28202", "state": "closed", "labels": [], "created_at": "2019-10-17T04:03:02Z", "updated_at": "2020-06-23T14:10:10Z", "user": "Arctanxy" }, { "repo": "pytorch/pytorch", "number": 28066, "title": "How to speed up installing 
pytorch1.3?", "body": "I am installing pytorch1.3 using pip. The command from the official site is `pip3 install torch===1.3.0 torchvision===0.4.1 -f https://download.pytorch.org/whl/torch_stable.html`.\r\n![pytorch](https://user-images.githubusercontent.com/19465753/66896730-02551800-f028-11e9-8301-42856ef44589.png)\r\nMy pip are using a mirror source which is fast for me. But the `-f https://download.pytorch.org/whl/torch_stable.html` part in the commad force pip to download things from the official site, which is slow for me. \r\n**So my question is how to replace the `-f site` part to speed it up using mirror sites?**\r\nThanks for help!\r\n", "url": "https://github.com/pytorch/pytorch/issues/28066", "state": "closed", "labels": [ "triaged" ], "created_at": "2019-10-16T07:19:30Z", "updated_at": "2019-10-17T23:13:46Z", "user": "gaopinghai" }, { "repo": "pytorch/pytorch.github.io", "number": 287, "title": "How to replace the website in the install command after -f ?", "body": "I am intalling pytorch on windows7 using pip. I get the command throgh the official website as the picture shows.\r\n![pytorch](https://user-images.githubusercontent.com/19465753/66892580-a6d25c80-f01e-11e9-9e9c-c335c79b995c.png)\r\nThe command is `pip3 install torch===1.3.0 torchvision===0.4.1 -f https://download.pytorch.org/whl/torch_stable.html`. \r\nBut it is too slow because of the `-f https://download.pytorch.org/whl/torch_stable.html`. \r\n**How can I replace this?** My pip is already using a different source, but is no use. Thanks for help.", "url": "https://github.com/pytorch/pytorch.github.io/issues/287", "state": "open", "labels": [], "created_at": "2019-10-16T06:12:43Z", "updated_at": "2019-10-16T06:13:18Z", "user": "gaopinghai" }, { "repo": "pytorch/text", "number": 619, "title": "How to use torchtext for sequence labelling with wordpiece tokeniers", "body": "## \u2753 Questions and Help\r\n\r\n**Description**\r\n<!-- Please send questions or ask for help here. -->\r\n\r\nHi,\r\n\r\nIn a previous issue (#609), I asked how to use the tokenizer from the [Transformers](https://github.com/huggingface/transformers) library with torch text.\r\n\r\nI now would like to be able to use this tokenizer and torchtext to load sequence labelling datasets. 
The issue I am facing is that the tokenizer introduces wordpiece tokens, which ends up breaking the alignment between tokens and labels.\r\n\r\nIgnoring labels, I am able to load a sequence labelling dataset with a Transformer tokenizer like so,\r\n\r\n```python\r\nfrom torchtext import data\r\nfrom torchtext import datasets\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)\r\n\r\ndef preprocessor(batch):\r\n return tokenizer.encode(batch, add_special_tokens=True)\r\n\r\nTEXT = data.Field(\r\n use_vocab=False,\r\n batch_first=True,\r\n pad_token=tokenizer.pad_token_id,\r\n preprocessing=preprocessor\r\n)\r\n# LABEL = data.LabelField()\r\n\r\nfields = [('text', TEXT), ('unused_col_1', None), ('unused_col_2', None), ('label', None)]\r\n\r\ntrain, valid, test = datasets.SequenceTaggingDataset.splits(\r\n path='/Users/johngiorgi/Downloads/bert_data/BC5CDR/chem',\r\n train='train.tsv',\r\n validation='devel.tsv',\r\n test='test.tsv',\r\n fields=fields\r\n)\r\n\r\ntrain_iter, valid_iter, test_iter = data.BucketIterator.splits(\r\n (train, valid, test), batch_sizes=(16, 256, 256)\r\n)\r\n\r\n# LABEL.build_vocab(train)\r\n``` \r\n\r\nThe data comes from [here](https://github.com/ncbi-nlp/BLUE_Benchmark/releases/download/0.1/bert_data.zip), and is a tab-seperated file with four columns. The first column contains words, the last labels and each sentence is sperated by a newline, e.g.\r\n\r\n```\r\nNaloxone\t227508\t0\tB\r\nreverses\t-\t9\tO\r\nthe\t-\t18\tO\r\nantihypertensive\t-\t22\tO\r\neffect\t-\t39\tO\r\nof\t-\t46\tO\r\nclonidine\t-\t49\tB\r\n.\t-\t58\tO\r\n\r\nIn\t227508\t60\tO\r\n.\r\n.\r\n.\r\n```\r\n\r\nBut when I try to load the labels, e.g.\r\n\r\n```python\r\nfrom torchtext import data\r\nfrom torchtext import datasets\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)\r\n\r\ndef preprocessor(batch):\r\n return tokenizer.encode(batch, add_special_tokens=True)\r\n\r\nTEXT = data.Field(\r\n use_vocab=False,\r\n batch_first=True,\r\n pad_token=tokenizer.pad_token_id,\r\n preprocessing=preprocessor\r\n)\r\nLABEL = data.LabelField()\r\n\r\nfields = [('text', TEXT), ('unused_col_1', None), ('unused_col_2', None), ('label', LABEL)]\r\n\r\ntrain, valid, test = datasets.SequenceTaggingDataset.splits(\r\n path='/Users/johngiorgi/Downloads/bert_data/BC5CDR/chem',\r\n train='train.tsv',\r\n validation='devel.tsv',\r\n test='test.tsv',\r\n fields=fields\r\n)\r\n\r\ntrain_iter, valid_iter, test_iter = data.BucketIterator.splits(\r\n (train, valid, test), batch_sizes=(16, 256, 256)\r\n)\r\n\r\nLABEL.build_vocab(train)\r\n``` \r\n\r\nI get issues when trying to access the batch\r\n\r\n```python\r\nbatch = next(iter(train_iter))\r\n\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-39-9919119fad82> in <module>\r\n----> 1 batch = next(iter(train_iter))\r\n\r\n~/miniconda3/envs/ml4h/lib/python3.7/site-packages/torchtext/data/iterator.py in __iter__(self)\r\n 154 else:\r\n 155 minibatch.sort(key=self.sort_key, reverse=True)\r\n--> 156 yield Batch(minibatch, self.dataset, self.device)\r\n 157 if not self.repeat:\r\n 158 return\r\n\r\n~/miniconda3/envs/ml4h/lib/python3.7/site-packages/torchtext/data/batch.py in __init__(self, data, dataset, device)\r\n 32 if field is not None:\r\n 33 batch = [getattr(x, name) for x in data]\r\n---> 34 setattr(self, name, 
field.process(batch, device=device))\r\n 35 \r\n 36 @classmethod\r\n\r\n~/miniconda3/envs/ml4h/lib/python3.7/site-packages/torchtext/data/field.py in process(self, batch, device)\r\n 235 \"\"\"\r\n 236 padded = self.pad(batch)\r\n--> 237 tensor = self.numericalize(padded, device=device)\r\n 238 return tensor\r\n 239 \r\n\r\n~/miniconda3/envs/ml4h/lib/python3.7/site-packages/torchtext/data/field.py in numericalize(self, arr, device)\r\n 336 arr = [[self.vocab.stoi[x] for x in ex] for ex in arr]\r\n 337 else:\r\n--> 338 arr = [self.vocab.stoi[x] for x in arr]\r\n 339 \r\n 340 if self.postprocessing is not None:\r\n\r\n~/miniconda3/envs/ml4h/lib/python3.7/site-packages/torchtext/data/field.py in <listcomp>(.0)\r\n 336 arr = [[self.vocab.stoi[x] for x in ex] for ex in arr]\r\n 337 else:\r\n--> 338 arr = [self.vocab.stoi[x] for x in arr]\r\n 339 \r\n 340 if self.postprocessing is not None:\r\n\r\nTypeError: unhashable type: 'list", "url": "https://github.com/pytorch/text/issues/619", "state": "closed", "labels": [], "created_at": "2019-10-15T14:42:09Z", "updated_at": "2020-02-22T03:22:23Z", "user": "JohnGiorgi" }, { "repo": "pytorch/pytorch", "number": 27958, "title": "how to use libtorch library in cuda file with nvcc compiler(c++)?", "body": "## \u2753 Questions and Help\r\n\r\n# Motivation\r\ni want to implement nms in parallel processing with libtorch library.\r\ni use this cuda code(https://github.com/gdlg/pytorch_nms)\r\n\r\n# Environment\r\nPyTorch version : 1.2.0\r\nCUDA (nvcc compiler ) : 10.0\r\nlibtorch version : 1.2.0\r\nsystem : win10\r\n\r\n# Operation\r\nthe command :`i use nvcc -c nms_kernel.cu -L -lcudart -I D:\\Code-software\\NNF\\libtorch\\libtorch\\include -I D:\\Code-software\\NNF\\libtorch\\libtorch\\include\\torch\\csrc\\api\\include` to compiled it \r\n\r\n# ERROR\r\n`D:/Code-software/NNF/libtorch/libtorch/include\\torch/csrc/jit/argument_spec.h(181): error: member \"torch::jit::ArgumentSpecCreator::DEPTH_LIMIT\" may not be initialized 1 error detected in the compilation of \"C:/Users/Cason/AppData/Local/Temp/tmpxft_00001b28_00000000-10_nms_kernel.cpp1.ii\"`\r\n\r\nas long as i add `#include <torch/extension.h>` or `#include <torch/script.h>` in cuda files,It makes this kind of mistake.\r\n\r\n\r\n \r\n\r\n\r\n\n\ncc @yf225", "url": "https://github.com/pytorch/pytorch/issues/27958", "state": "open", "labels": [ "module: cpp", "triaged" ], "created_at": "2019-10-15T03:35:07Z", "updated_at": "2020-05-08T08:30:40Z", "user": "CasonTsai" }, { "repo": "pytorch/pytorch", "number": 27827, "title": "How to hide latency on libtorch by multithreads? A problem about double stream pipelines execution.", "body": "Hello, I want to hide latency between data_loader and inference. I simply apply it by OpenMP with a simple double stream pipelines execution. However, the code \"auto t=model->forward({Tensor.to(kCUDA)}.toTensor()\" don't support multithreads(OpenMP). \r\n\r\nIs there any solution?\r\n\r\nMy idea is just like Fig. 
6 on this website: https://software.intel.com/en-us/articles/heterogeneous-computing-pipelining ", "url": "https://github.com/pytorch/pytorch/issues/27827", "state": "closed", "labels": [], "created_at": "2019-10-14T02:50:44Z", "updated_at": "2019-10-14T08:20:37Z", "user": "xiaoLiuxiaoLiuxiaoLiu" }, { "repo": "pytorch/examples", "number": 640, "title": "Do we still need to divide sample by ourselves when using a single GPU per process?", "body": "In https://github.com/pytorch/examples/blob/ee964a2eeb41e1712fe719b83645c79bcbd0ba1a/imagenet/main.py#L149, args.batch_size is manually divided by the number of processes.\r\n\r\nHowever, when I checked https://pytorch.org/docs/stable/_modules/torch/utils/data/distributed.html#DistributedSampler, I found that DistributedSampler already subsampled the batch.\r\n\r\nIs it a bug in the imagenet example, or have I missed anything?\r\n", "url": "https://github.com/pytorch/examples/issues/640", "state": "closed", "labels": [], "created_at": "2019-10-14T01:58:19Z", "updated_at": "2020-02-14T10:24:15Z", "comments": 2, "user": "taroxd" }, { "repo": "pytorch/examples", "number": 638, "title": "missing indent in def train(...) in `imagenet`", "body": "https://github.com/pytorch/examples/blob/ee964a2eeb41e1712fe719b83645c79bcbd0ba1a/imagenet/main.py#L284\r\n\r\nIt seems a missing indent in imagenet train(...) function.\r\n\r\n`/example/imagenet/main.py`, line 282 to 284.\r\n\r\n```python\r\n if args.gpu is not None:\r\n images = images.cuda(args.gpu, non_blocking=True)\r\n target = target.cuda(args.gpu, non_blocking=True)\r\n```\r\nThe default value of `args.gpu` is None.\r\nWhen `args.gpu` is not specified (default as None), `images` tensor is not moved to cuda, which is reasonable. But, why `target` tensor is still moved to cuda? Is there a missing tab indent?\r\n\r\nIn this example, the `model` is always moved to cuda, so the `outputs` is in cuda. Always moving `target` tensor to cuda can avoid causing error for the following `loss = criterion(outputs, targets)`. If this is the consideration, then why `images` tensor is kept in cpu?\r\n", "url": "https://github.com/pytorch/examples/issues/638", "state": "closed", "labels": [], "created_at": "2019-10-12T05:07:56Z", "updated_at": "2019-10-22T21:53:05Z", "comments": 1, "user": "HearyShen" }, { "repo": "pytorch/tutorials", "number": 694, "title": " net visualization image (https://pytorch.org/tutorials/_images/mnist.png) has the wrong dimensions", "body": "In the tutorial: beginner_source/blitz/neural_networks_tutorial.py, \r\nThe explanation for the first linear layer dimensions is unclear: \r\nself.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension \r\nThe input image dimension expected is 32 x 32. \r\nThe visualization of the net shows a dimension of 5x5 after the last max pool layer.\r\nWhere is the extra 1 x 1 coming from ? \r\n\r\nThe layer dimensions calculation can be a hurdle for beginners, is for me anyway.\r\nIts confusing because the dimension sizes is complicatedly dependent on the input image size, which is nowhere in the initialization parameters. \r\nThe paper linked in the docs is helpful: https://arxiv.org/pdf/1603.07285.pdf\r\n\r\nSo, after printing the dimensions before and after each step in the net, i see that the net visualization image (https://pytorch.org/tutorials/_images/mnist.png) has the wrong dimensions listed. 
The actual sizes after each step in the net are: \r\ntorch.Size([1, 1, 32, 32]) # input size \r\ntorch.Size([1, 6, 30, 30]) # after conv1\r\ntorch.Size([1, 6, 30, 30]) # after relu1\r\ntorch.Size([1, 6, 15, 15]) # after maxpool1\r\ntorch.Size([1, 16, 13, 13]) # after conv2 \r\ntorch.Size([1, 16, 13, 13]) # after relu2\r\ntorch.Size([1, 16, 6, 6]) # after maxpool2\r\ntorch.Size([1, 576]) # after flattening\r\ntorch.Size([1, 120]) # after fully connected layer 1\r\ntorch.Size([1, 84]) # after fully connected layer 2 \r\ntorch.Size([1, 10]) # after fully connected layer 3", "url": "https://github.com/pytorch/tutorials/issues/694", "state": "closed", "labels": [], "created_at": "2019-10-11T15:30:46Z", "updated_at": "2021-04-26T20:14:34Z", "comments": 1, "user": "tinku99" }, { "repo": "pytorch/pytorch", "number": 27479, "title": "[JIT] Figure out how to easily investigate memory usage issues issues", "body": "e.g. https://github.com/pytorch/pytorch/issues/25267\r\n And other internal reports\n\ncc @suo", "url": "https://github.com/pytorch/pytorch/issues/27479", "state": "open", "labels": [ "oncall: jit", "triaged" ], "created_at": "2019-10-07T18:24:08Z", "updated_at": "2020-02-28T18:54:51Z", "user": "jamesr66a" }, { "repo": "pytorch/vision", "number": 1395, "title": "How to Crop single image before calling torchvision.utils.save_image, If I am using PIL lib Image.crop(....) method then image quality degrade.", "body": "\r\n vutils.save_image(fixed_fake.data,outputpath , normalize=True)\r\n print(\"output path\",outputpath)\r\n img = Image.open(outputpath)\r\n noOfRow = 5\r\n noOfColumn = 8\r\n x1 = 2\r\n y1 = 2\r\n x2 = 130\r\n y2 = 130\r\n folder = file_batch\r\n\r\n for i in range(0, noOfColumn):\r\n dest_dir = file_batch[i].split(\"/\")[7]\r\n if not os.path.exists(outf+\"/\"+dest_dir):\r\n os.mkdir(outf+\"/\"+dest_dir)\r\n for j in range(1, noOfRow + 1):\r\n area = (x1, y1, x2, y2)\r\n cropped_img = img.crop(area)\r\n imgName = \"{}{}\".format(i, j)\r\n cropped_img.save(os.path.join(outf+dest_dir,filename))\r\n y1 = y1 + 130\r\n y2 = y2 + 130\r\n x1 = x1 + 130\r\n x2 = x2 + 130\r\n y1 = 2\r\n y2 = 130", "url": "https://github.com/pytorch/vision/issues/1395", "state": "open", "labels": [ "module: utils" ], "created_at": "2019-09-30T20:35:12Z", "updated_at": "2021-02-21T15:56:52Z", "user": "praveenkumarchandaliya" }, { "repo": "pytorch/pytorch", "number": 27070, "title": "How to share a submodule but not copying its parameters in the computing graph?", "body": "Hi,\r\n\r\nI am trying to feed a list of input images to a model that incorporates a number of the same submodule. 
The model is like following:\r\n\r\n```\r\nclass SubModule(nn.Module):\r\n\tdef __init__(self):\r\n\t\tsuper(SubModule, self).__init__()\r\n\t\tself.embedding = nn.Linear(1000,20)\r\n\r\n\tdef forward(self, input):\r\n\t\treturn self.embedding(input)\r\n\r\nclass Model(nn.Module):\r\n\tdef __init__(self, subnet, n):\r\n\t\tsuper(Model, self).__init__()\r\n\t\tself.subnet = subnet\r\n\t\tself.fc = nn.Linear(n*20, 2)\r\n\t\tself.n = n\r\n\r\n\tdef forward(self, x_list):\r\n\t\t# x_list is a list of n input images\r\n\t\tout = []\r\n\t\tfor i in range(self.n):\r\n\t\t\th = self.subnet(x_list[i]) # h: shape[batch_size, feature_length(20)]\r\n\t\t\tout.append(h.unsqueeze_(1))\r\n\r\n\t\tout = torch.cat(out, dim=1) #out: shape[batch_size, n, feature_length(20)]\r\n\t\tout = out.view(out.shape[0], -1)\r\n\t\tout = self.fc(out)\r\n\t\treturn out\r\n\r\nsubnet = SubModule()\r\nm = Model(subnet, 12)\r\n```\r\n\r\nBoth \"subnet\" and \"m\" will be trained by back propagation at some point. I found that \"m\" actually creates n copies of \"subnet\". I want the parameters of \"subnet\" to be shared during training; i.e. every input image is fed through the same submodule. However, I don't want to create a computing graph forwarding multiple submodules at the same time, especially when n is large. Is there anyway to do so? Is there something similar to how RNN's are handled in pytorch for my case?", "url": "https://github.com/pytorch/pytorch/issues/27070", "state": "closed", "labels": [], "created_at": "2019-09-30T16:02:24Z", "updated_at": "2020-03-19T06:06:45Z", "user": "ukaneverin" }, { "repo": "pytorch/pytorch", "number": 27033, "title": "How to increase numerical accuracy of Pytorch model?", "body": "I write this sentence in my script\r\n\r\n`print(self.netG(self.real_A)-self.netG(self.real_A))\r\n`\r\nI think I can get a all zero tensor but no.\r\n\r\n```\r\ntensor([[ [[-0.0032, 0.0089, -0.0085, ..., -0.0027, 0.0004, -0.0022],\r\n [-0.0019, -0.0022, 0.0775, ..., 0.0236, -0.0277, -0.0125],\r\n [ 0.0049, 0.0159, 0.0203, ..., -0.0212, 0.0010, -0.0069],\r\n ...,\r\n [ 0.0042, 0.0081, -0.0127, ..., -0.0097, 0.0136, -0.0002],\r\n [-0.0010, 0.0020, -0.0066, ..., 0.0260, 0.0433, 0.0088],\r\n [-0.0023, 0.0095, 0.0125, ..., 0.0005, 0.0090, 0.0029]]]],\r\n device='cuda:0', grad_fn=<SubBackward0>)\r\n```", "url": "https://github.com/pytorch/pytorch/issues/27033", "state": "closed", "labels": [], "created_at": "2019-09-29T13:11:06Z", "updated_at": "2019-10-02T12:56:36Z", "user": "gentlezr" }, { "repo": "pytorch/vision", "number": 1384, "title": "How to test my trained model on my data set", "body": "", "url": "https://github.com/pytorch/vision/issues/1384", "state": "closed", "labels": [ "question" ], "created_at": "2019-09-29T09:52:21Z", "updated_at": "2019-09-30T12:35:10Z", "user": "PL-96" }, { "repo": "pytorch/pytorch", "number": 26880, "title": "in TracedModel how to get model parameter like convolution stride info.", "body": "## \u2753 Questions and Help\r\n\r\nI use traced_model._modules[\u2018conv1\u2019] to access conv module.\r\nBut how can I find \u2018stride\u2019 info in tracedModel object?\r\nIs there any document to describe tracedModel API and structure?\r\n\r\nThanks,\r\n8086", "url": "https://github.com/pytorch/pytorch/issues/26880", "state": "closed", "labels": [], "created_at": "2019-09-26T08:31:45Z", "updated_at": "2019-09-26T20:34:53Z", "user": "joe8086" }, { "repo": "pytorch/pytorch", "number": 26803, "title": "install pytorch1.2 where the environment is cuda9.0?", "body": "Can you tell me 
how to install pytorch1.2 in the environment is cuda9.0?\r\nI don't have the root power, so can't upgrade cuda.", "url": "https://github.com/pytorch/pytorch/issues/26803", "state": "closed", "labels": [ "module: build", "triaged" ], "created_at": "2019-09-25T14:29:19Z", "updated_at": "2019-09-25T22:17:47Z", "user": "zyxdSTU" }, { "repo": "pytorch/pytorch", "number": 26717, "title": "How to use RandomSampler?", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nclass RandomSampler in torch/utils/data/sampler.py\r\n\r\ndef __iter__(self):\r\n n = len(self.data_source)\r\n if self.replacement:\r\n return iter(torch.randint(high=n, size=(self.num_samples,), dtype=torch.int64).tolist())\r\n return iter(torch.randperm(n).tolist())\r\n\r\n\r\nproblem:\r\n`return iter(torch.randperm(n).tolist())`\r\n\r\n\r\nIf you want to get a random int number in [0,n) when `__iter__(self)` is called, you only need to use `random.randint(0, n-1)`. This code `return iter(torch.randperm(n).tolist())` uses much resource but only generates a random number when called.\r\n\r\n\r\nI think in this function you want to generate a list like `torch.randperm(n)` and return the next number when called. Furthermore, we should shuffle the list again when we reach the end of the list. To implement this idea, we can modify the code like this:\r\n- add\r\n`self.iter = iter(torch.randperm(len(self.data_source)).tolist())` in `__init__(self, ...)`\r\n- add\r\n`def __next__(self):`\r\n`try:`\r\n`return next(self.iter)`\r\n`except StopIteration:`\r\n`self.iter = iter(torch.randperm(len(self.data_source)).tolist())`\r\n`return next(self.iter)`\r\n- add\r\n`return self` in `__iter__(self)`", "url": "https://github.com/pytorch/pytorch/issues/26717", "state": "closed", "labels": [], "created_at": "2019-09-24T15:13:42Z", "updated_at": "2019-09-24T15:31:55Z", "user": "sp2823" }, { "repo": "pytorch/pytorch", "number": 26707, "title": "How to build pytorch for android", "body": "## \u2753 How to build pytorch for android\r\n\r\nwhen i run this command,\r\n```\r\nexport ANDROID_NDK=~/android-ndk-r20\r\nset USE_NCCL=OFF\r\nset USE_CUDA=OFF\r\nbash scripts/build_android.sh \r\n```\r\ni got follow errors\r\n``` \r\n@ error/constitute.c/WriteImage/1028.\r\n' @ error/constitute.c/WriteImage/1028.\r\n: not foundL/ly/software/pytorch/pytorch/cmake/../aten/src/ATen/gen.py: 3: /media/zw/DL/ly/software/pytorch/pytorch/cmake/../aten/src/ATen/gen.py: \r\n' @ error/constitute.c/WriteImage/1028.\r\nfrom: can't read /var/mail/collections\r\n```\r\nMy env:\r\n- pytorch-1.1.0\r\n- cmake-3.15.1\r\n- android-ndk-r20\r\n \r\n", "url": "https://github.com/pytorch/pytorch/issues/26707", "state": "closed", "labels": [ "module: build", "triaged", "oncall: mobile" ], "created_at": "2019-09-24T03:02:44Z", "updated_at": "2019-09-24T15:09:23Z", "user": "blackxer" }, { "repo": "pytorch/extension-cpp", "number": 44, "title": "How to write cuda code of the multilayer units", "body": "This tutorials helped me to write a single layer unit with CUDA code.\r\nBut how to write CUDA code of the multilayer units, like torch/nn/_functions/rnn.py 281?\r\n ```\r\noutput, hy, cy, reserve, new_weight_buf = torch._cudnn_rnn(\r\n input, weight_arr, weight_stride0,\r\n flat_weight,\r\n hx, cx,\r\n mode, hidden_size, num_layers,\r\n batch_first, dropout, train, bool(bidirectional),\r\n list(batch_sizes.data) if variable_length else (),\r\n dropout_ts)\r\n```\r\nI have achieved the same results by using the template of AutogradRNN, 
i.e., torch/nn/_functions/rnn.py 212.\r\n```\r\ndef AutogradRNN(mode, input_size, hidden_size, num_layers=1, batch_first=False,\r\n dropout=0, train=True, bidirectional=False, variable_length=False,\r\n dropout_state=None, flat_weight=None):\r\n```\r\nBut gpu utilization was too low and speed was too slow. Perhaps because each single layer unit is called individually, which involve launch of a CUDA kernel. So I want to rewrite multilayer units in CUDA and fuse particular groups of single layer. Can you provide a boilerplate?", "url": "https://github.com/pytorch/extension-cpp/issues/44", "state": "open", "labels": [], "created_at": "2019-09-23T03:37:04Z", "updated_at": "2019-09-24T14:54:37Z", "user": "haoyz" }, { "repo": "pytorch/pytorch", "number": 26630, "title": "How to script a model using c++ extension? I met this error", "body": "## \u2753 How to script a model using c++ extension? I met this error\r\n```\r\nRuntimeError: \r\nCould not export Python function call '_DCNv2'. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList\r\n```\r\n", "url": "https://github.com/pytorch/pytorch/issues/26630", "state": "closed", "labels": [], "created_at": "2019-09-22T11:30:34Z", "updated_at": "2019-09-24T14:42:22Z", "user": "yinnhao" }, { "repo": "pytorch/pytorch", "number": 26392, "title": "How to print out the name and value of parameters in module?", "body": "torch::jit::script::Module module = torch::jit::load(model_path);\r\nHow to print out the name and value of parameters in module?", "url": "https://github.com/pytorch/pytorch/issues/26392", "state": "closed", "labels": [], "created_at": "2019-09-18T03:13:03Z", "updated_at": "2019-09-18T16:15:05Z", "user": "boyob" }, { "repo": "pytorch/pytorch", "number": 26344, "title": "How to get rid of zombie processes using torch.multiprocessing.Pool?", "body": "I am using torch.multiprocessing.Pool to speed up my NN in inference, like this:\r\n\r\n import torch.multiprocessing as mp\r\n mp = mp.get_context('forkserver')\r\n\r\n def parallel_predict(predict_func, sequences, args):\r\n predicted_cluster_ids = []\r\n pool = mp.Pool(args.num_workers, maxtasksperchild=1)\r\n out = pool.imap(\r\n func=functools.partial(predict_func, args=args),\r\n iterable=sequences,\r\n chunksize=1)\r\n for item in tqdm(out, total=len(sequences), ncols=85):\r\n predicted_cluster_ids.append(item)\r\n pool.close()\r\n pool.terminate()\r\n pool.join()\r\n return predicted_cluster_ids\r\n\r\nNote 1) I am using `imap` because I want to be able to show a progress bar with tqdm.\r\nNote 2) I tried with both `forkserver` and spawn but no luck. I cannot use other methods because of how they interact (poorly) with CUDA.\r\nNote 3) I am using `maxtasksperchild=1` and `chunksize=1` so for each sequence in sequences it spawns a new process.\r\nNote 4) Adding or removing `pool.terminate()` and `pool.join()` makes no difference.\r\nNote 5) `predict_func` is a method of a class I created. I could also pass the whole model to `parallel_predict` but it does not change anything.\r\n\r\nEverything works fine except the fact that after a while I run out of memory on the CPU (while on the GPU everything works as expected). Using `htop` to monitor memory usage I notice that, for every process I spawn with pool I get a zombie that uses 0.4% of the memory. They don't get cleared, so they keep using space. Still, `parallel_predict` does return the correct result and the computation goes on. 
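For reference, the cleanest variant of the call site I have tried is roughly the sketch below (it assumes the standard `multiprocessing.Pool` context-manager semantics, where leaving the `with` block terminates the pool); the zombies still appear:

```python
import functools
import torch.multiprocessing as mp

def parallel_predict(predict_func, sequences, args):
    ctx = mp.get_context('forkserver')
    predicted_cluster_ids = []
    # close() then join() lets workers exit once the iterable is drained;
    # leaving the with-block terminates whatever is left
    with ctx.Pool(args.num_workers, maxtasksperchild=1) as pool:
        out = pool.imap(functools.partial(predict_func, args=args), sequences, chunksize=1)
        for item in out:
            predicted_cluster_ids.append(item)
        pool.close()
        pool.join()
    return predicted_cluster_ids
```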
My script is structured in a way that id does validation multiple times so next time `parallel_predict` is called the zombies add up.\r\n\r\nThis is what I get in `htop`: \r\n![Screenshot from 2019-09-17 11-46-21](https://user-images.githubusercontent.com/19649581/65039556-cfe5cb80-d952-11e9-80e7-102e455f3874.png)\r\n\r\nUsually, these zombies get cleared after ctrl-c but in some rare cases I need to killall.\r\n\r\nIs there some way I can force the`Pool` to close them?\r\n\r\nUPDATE:\r\nI tried to kill the zombies using this:\r\n\r\n def kill(pool):\r\n import multiprocessing\r\n import signal\r\n # stop repopulating new child\r\n pool._state = multiprocessing.pool.TERMINATE\r\n pool._worker_handler._state = multiprocessing.pool.TERMINATE\r\n for p in pool._pool:\r\n os.kill(p.pid, signal.SIGKILL)\r\n # .is_alive() will reap dead process\r\n while any(p.is_alive() for p in pool._pool):\r\n pass\r\n pool.terminate()\r\n\r\nBut it does not work. It hangs in `pool.terminate()`\r\n\r\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen", "url": "https://github.com/pytorch/pytorch/issues/26344", "state": "open", "labels": [ "module: dependency bug", "oncall: distributed", "module: multiprocessing", "triaged" ], "created_at": "2019-09-17T11:55:41Z", "updated_at": "2019-11-14T00:08:11Z", "user": "DonkeyShot21" }, { "repo": "pytorch/pytorch", "number": 25990, "title": "How to reproduce a Cross Entropy Loss without losing numerical accuracy?", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\nI get a discrepancy between the values of the losses obtained by the torch. CrossEntropyLoss and CustomeCrossEntropyLoss.\r\n\r\n## To Reproduce\r\n\r\n\r\n\r\n import torch\r\n import torch.nn.modules.loss as L \r\n\r\n class CustomCrossEntropyLoss(L._Loss):\r\n def __init__(self, reduction=True):\r\n super(CustomCrossEntropyLoss, self).__init__()\r\n self.reduction = reduction\r\n \r\n def forward(self, inp, target):\r\n input_target = inp.gather(1, target.view(-1, 1))\r\n input_max, _ = inp.max(dim=1, keepdim=True)\r\n output_exp = torch.exp(inp - input_max)\r\n output_softmax_sum = output_exp.sum(dim=1)\r\n output = -input_target + torch.log(output_softmax_sum).view(-1, 1) + input_max\r\n if self.reduction:\r\n output = output.mean()\r\n return output\r\n\r\n torch_ce = torch.nn.CrossEntropyLoss(reduction='none')\r\n custom_ce = CustomCrossEntropyLoss(reduction=False)\r\n batch_size = 128\r\n N_class = 90000\r\n logits = torch.randn((batch_size,N_class))\r\n targets = torch.randint(N_class, (batch_size,))\r\n print((torch_ce(logits, targets).view(-1) - custom_ce(logits,targets).view(-1)).mean())\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\nI get a minimal non-zero discrepancy of the order of 10e-7, which then occurs in gradients and can affect the learning outcome.\r\nHow to fix this problem?\r\n\r\n## Environment\r\n\r\n Collecting environment information...\r\n\tPyTorch version: 1.2.0\r\n\tIs debug build: No\r\n\tCUDA used to build PyTorch: 10.0.130\r\n\r\n\tOS: CentOS Linux 7 (Core)\r\n\tGCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)\r\n\tCMake version: version 2.8.12.2\r\n\r\n\tPython version: 3.7\r\n\tIs CUDA available: Yes\r\n\tCUDA runtime version: Could not collect\r\n\r\n\tNvidia driver version: 430.14\r\n\tcuDNN version: Could not collect\r\n\r\n\tVersions of relevant libraries:\r\n\t[pip3] numpy==1.15.4\r\n\t[pip3] 
tensorboard-pytorch==0.7.1\r\n\t[pip3] torch==1.0.0\r\n\t[pip3] torchvision==0.2.1\r\n\t[conda] blas 1.0 mkl.conda\r\n\t[conda] mkl 2019.4 243.conda\r\n\t[conda] mkl-service 2.0.2 py37h7b6447c_0.conda\r\n\t[conda] mkl_fft 1.0.12 py37ha843d7b_0.conda\r\n\t[conda] mkl_random 1.0.2 py37hd81dba3_0.conda\r\n\t[conda] pytorch 1.2.0 py3.7_cuda10.0.130_cudnn7.6.2_0 pytorch\r\n\t[conda] torchvision 0.4.0 py37_cu100 pytorch\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/pytorch/issues/25990", "state": "closed", "labels": [], "created_at": "2019-09-11T10:45:29Z", "updated_at": "2019-09-11T19:51:51Z", "user": "tamerlansb" }, { "repo": "pytorch/pytorch", "number": 25910, "title": "How to use batch norm when there is padding?", "body": "I am doing an NLP task and input variable-length sentences into the model, so I add some padding to keep every sentence at the longest length in one batch. However, the padding length obviously affects batch norm, and I cannot find a module with a mask parameter. Can you help me?", "url": "https://github.com/pytorch/pytorch/issues/25910", "state": "closed", "labels": [], "created_at": "2019-09-10T12:22:13Z", "updated_at": "2019-09-10T16:19:48Z", "user": "RichardHWD" }, { "repo": "pytorch/pytorch", "number": 25766, "title": "how to get cube-root of a negative?", "body": "Hello, I want to know how to get the cube root of a negative number.\r\nFor example, (-8)^(1/3) = -2;\r\nhowever, torch.pow(-8, 1/3) returns NaN.", "url": "https://github.com/pytorch/pytorch/issues/25766", "state": "closed", "labels": [], "created_at": "2019-09-06T13:33:15Z", "updated_at": "2021-06-18T15:26:46Z", "user": "qianlinjun" }, { "repo": "pytorch/examples", "number": 628, "title": "No data on lo interface for a single node training", "body": "Hi, I ran the single node training and it ran normally. I want to monitor the bandwidth of the process on a single machine. I used `iptraf-ng` on CentOS 7. I selected the `lo` interface. But it showed nothing. The IP I used is `127.0.0.1`. 
Is there something wrong?\r\n\r\nThanks", "url": "https://github.com/pytorch/examples/issues/628", "state": "open", "labels": [ "triaged" ], "created_at": "2019-09-06T12:28:10Z", "updated_at": "2022-03-09T23:48:11Z", "comments": 0, "user": "ElegantLin" }, { "repo": "pytorch/pytorch", "number": 25699, "title": "PyTorch C++ API as a static lib: how to compile ?", "body": "## \u2753 Questions and Help\r\n\r\nthis questions is linked to the bug described in the issue https://github.com/pytorch/pytorch/issues/25698\r\n\r\nI'd like to have instructions on how to compile PyTorch C++ API (libtorch project) as a statical library to link with my C++ projects :\r\n\r\n- for Linux, Windows and MacOS\r\n- with the last Intel compilers if possible\r\n- mode with and witout GPU support (CPU vs GPU), with the last drivers and controling GPU cart 'compute capabilities' support, to be able to execute the code on old Kepler cards\r\n- debug and release (advanced optimization and vectorization) modes\r\n\r\nthanks for your help and assistance !", "url": "https://github.com/pytorch/pytorch/issues/25699", "state": "closed", "labels": [], "created_at": "2019-09-05T10:30:33Z", "updated_at": "2025-07-02T12:59:59Z", "user": "VitaMusic" }, { "repo": "pytorch/pytorch", "number": 25572, "title": "How to change dynamic model to onnx?", "body": "## \ud83d\udcda Documentation\r\nI want to change [Extremenet](https://github.com/xingyizhou/ExtremeNet) to onnx, which programmed a dynamic model, I searched \r\n1 https://pytorch.org/docs/stable/onnx.html#tracing-vs-scripting\r\n2 https://github.com/onnx/tutorials\r\nIt seems not told how to change a dynamic pytorch model to onnx\uff0c where is the example of change dynamic model to onnx?\r\nbelow is core dynamic code patch:\r\n\r\n class exkp(nn.Module):\r\n def __init__(\r\n self, n, nstack, dims, modules, out_dim, pre=None, cnv_dim=256, \r\n make_tl_layer=None, make_br_layer=None,\r\n make_cnv_layer=make_cnv_layer, make_heat_layer=make_kp_layer,\r\n make_tag_layer=make_kp_layer, make_regr_layer=make_kp_layer,\r\n make_up_layer=make_layer, make_low_layer=make_layer, \r\n make_hg_layer=make_layer, make_hg_layer_revr=make_layer_revr,\r\n make_pool_layer=make_pool_layer, make_unpool_layer=make_unpool_layer,\r\n make_merge_layer=make_merge_layer, make_inter_layer=make_inter_layer, \r\n kp_layer=residual\r\n ):\r\n super(exkp, self).__init__()\r\n self.nstack = nstack\r\n self._decode = _exct_decode\r\n\r\n curr_dim = dims[0]\r\n\r\n self.pre = nn.Sequential(\r\n convolution(7, 3, 128, stride=2),\r\n residual(3, 128, 256, stride=2)\r\n ) if pre is None else pre\r\n\r\n self.kps = nn.ModuleList([\r\n kp_module(\r\n n, dims, modules, layer=kp_layer,\r\n make_up_layer=make_up_layer,\r\n make_low_layer=make_low_layer,\r\n make_hg_layer=make_hg_layer,\r\n make_hg_layer_revr=make_hg_layer_revr,\r\n make_pool_layer=make_pool_layer,\r\n make_unpool_layer=make_unpool_layer,\r\n make_merge_layer=make_merge_layer\r\n ) for _ in range(nstack)\r\n ])\r\n self.cnvs = nn.ModuleList([\r\n make_cnv_layer(curr_dim, cnv_dim) for _ in range(nstack)\r\n ])\r\n\r\n ## keypoint heatmaps\r\n self.t_heats = nn.ModuleList([\r\n make_heat_layer(cnv_dim, curr_dim, out_dim) for _ in range(nstack)\r\n ])\r\n\r\n self.l_heats = nn.ModuleList([\r\n make_heat_layer(cnv_dim, curr_dim, out_dim) for _ in range(nstack)\r\n ])\r\n\r\n self.b_heats = nn.ModuleList([\r\n make_heat_layer(cnv_dim, curr_dim, out_dim) for _ in range(nstack)\r\n ])\r\n\r\n self.r_heats = nn.ModuleList([\r\n make_heat_layer(cnv_dim, 
curr_dim, out_dim) for _ in range(nstack)\r\n ])\r\n\r\n self.ct_heats = nn.ModuleList([\r\n make_heat_layer(cnv_dim, curr_dim, out_dim) for _ in range(nstack)\r\n ])\r\n\r\n for t_heat, l_heat, b_heat, r_heat, ct_heat in \\\r\n zip(self.t_heats, self.l_heats, self.b_heats, \\\r\n self.r_heats, self.ct_heats):\r\n t_heat[-1].bias.data.fill_(-2.19)\r\n l_heat[-1].bias.data.fill_(-2.19)\r\n b_heat[-1].bias.data.fill_(-2.19)\r\n r_heat[-1].bias.data.fill_(-2.19)\r\n ct_heat[-1].bias.data.fill_(-2.19)\r\n\r\n self.inters = nn.ModuleList([\r\n make_inter_layer(curr_dim) for _ in range(nstack - 1)\r\n ])\r\n\r\n self.inters_ = nn.ModuleList([\r\n nn.Sequential(\r\n nn.Conv2d(curr_dim, curr_dim, (1, 1), bias=False),\r\n nn.BatchNorm2d(curr_dim)\r\n ) for _ in range(nstack - 1)\r\n ])\r\n self.cnvs_ = nn.ModuleList([\r\n nn.Sequential(\r\n nn.Conv2d(cnv_dim, curr_dim, (1, 1), bias=False),\r\n nn.BatchNorm2d(curr_dim)\r\n ) for _ in range(nstack - 1)\r\n ])\r\n\r\n self.t_regrs = nn.ModuleList([\r\n make_regr_layer(cnv_dim, curr_dim, 2) for _ in range(nstack)\r\n ])\r\n self.l_regrs = nn.ModuleList([\r\n make_regr_layer(cnv_dim, curr_dim, 2) for _ in range(nstack)\r\n ])\r\n self.b_regrs = nn.ModuleList([\r\n make_regr_layer(cnv_dim, curr_dim, 2) for _ in range(nstack)\r\n ])\r\n self.r_regrs = nn.ModuleList([\r\n make_regr_layer(cnv_dim, curr_dim, 2) for _ in range(nstack)\r\n ])\r\n\r\n self.relu = nn.ReLU(inplace=True)\r\n\r\n def _train(self, *xs):\r\n image = xs[0]\r\n t_inds = xs[1]\r\n l_inds = xs[2]\r\n b_inds = xs[3]\r\n r_inds = xs[4]\r\n\r\n inter = self.pre(image)\r\n outs = []\r\n\r\n layers = zip(\r\n self.kps, self.cnvs,\r\n self.t_heats, self.l_heats, self.b_heats, self.r_heats,\r\n self.ct_heats,\r\n self.t_regrs, self.l_regrs, self.b_regrs, self.r_regrs,\r\n )\r\n for ind, layer in enumerate(layers):\r\n kp_, cnv_ = layer[0:2]\r\n t_heat_, l_heat_, b_heat_, r_heat_ = layer[2:6]\r\n ct_heat_ = layer[6]\r\n t_regr_, l_regr_, b_regr_, r_regr_ = layer[7:11]\r\n\r\n kp = kp_(inter)\r\n cnv = cnv_(kp)\r\n\r\n ", "url": "https://github.com/pytorch/pytorch/issues/25572", "state": "closed", "labels": [ "module: onnx" ], "created_at": "2019-09-03T05:27:41Z", "updated_at": "2019-09-03T14:17:53Z", "user": "qingzhouzhen" }, { "repo": "pytorch/pytorch", "number": 25383, "title": "In torch::jit::script::Module module = torch::jit::load(\"xxx.pt\"), How to release module?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\n\ncc @suo", "url": "https://github.com/pytorch/pytorch/issues/25383", "state": "closed", "labels": [ "oncall: jit", "triaged" ], "created_at": "2019-08-29T09:16:14Z", "updated_at": "2019-12-12T19:40:20Z", "user": "pxEkin" }, { "repo": "pytorch/pytorch", "number": 25294, "title": "What is the purpose of fp16 training? faster training? or better accuracy?", "body": "I think for fp16 inference, just train a model on FP32 and then just model.half() will work.\r\nBut this forceful type casting would lead worse accuracy.\r\n\r\nI'm not sure which is better between \r\n1. FP32 training then model.half() -> fp16 inference VS \r\n2. 
FP16 training using apex then inference FP16 with the same setting.\r\n\r\nif 1 and 2 has not that big accuracy gap, case 1 would be used just for faster/memory efficent training?\r\n\r\nPlease give me any hint. \r\n\r\nThank you.\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/25294", "state": "closed", "labels": [], "created_at": "2019-08-28T06:12:01Z", "updated_at": "2019-09-18T02:06:39Z", "user": "dedoogong" }, { "repo": "pytorch/pytorch", "number": 25284, "title": "How to preserve backward grad_fn after distributed 'all_gather' operations", "body": "I am trying to implement model parallelism in a distributed data parallel setting.\r\n\r\nLet\u2019s say I have a tensor output in each process and a number of operations have been performed on it (in each process independently). The tensor has a .grad_fn attached to it. Now I want to perform an all_gather to create a list [tensor_1, tensor_2...tensor_n]. But all the tensors in the list will lose the grad_fn property. \r\n\r\nMy expectation is to be able to backward() using concated tensor list in each process i through tensor_i.\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera", "url": "https://github.com/pytorch/pytorch/issues/25284", "state": "closed", "labels": [ "oncall: distributed" ], "created_at": "2019-08-28T02:00:49Z", "updated_at": "2019-08-28T10:13:50Z", "user": "JoyHuYY1412" }, { "repo": "pytorch/examples", "number": 624, "title": "How to get Class Activation Map (CAM) in c++ front end ?", "body": "Hi everyone , \r\n\r\nI saw many example of visualizing CAM with python using \"register_backward_hook\" but can't find any way to do the same in C++ frontend. \r\n\r\nIs there away to visualize Conv layers in c++ frontend ?\r\n\r\nThank you in advance. ", "url": "https://github.com/pytorch/examples/issues/624", "state": "open", "labels": [ "c++" ], "created_at": "2019-08-28T00:52:19Z", "updated_at": "2022-03-09T23:48:32Z", "user": "zirid" }, { "repo": "pytorch/xla", "number": 961, "title": "[Question] how to track TPU memory usage", "body": "How can I access TPU (internal) memory utilization? \r\n\r\nThanks!", "url": "https://github.com/pytorch/xla/issues/961", "state": "closed", "labels": [ "question" ], "created_at": "2019-08-27T20:10:59Z", "updated_at": "2019-10-30T16:50:52Z", "user": "nosound2" }, { "repo": "pytorch/examples", "number": 622, "title": "how to train MnasNet? I use the main.py to train MnasNet, but get the worse result. It is just 60.7%, while it should be 73% as you say in Mnasnet.py", "body": "", "url": "https://github.com/pytorch/examples/issues/622", "state": "closed", "labels": [], "created_at": "2019-08-27T14:25:29Z", "updated_at": "2019-09-06T18:12:16Z", "user": "xufana7" }, { "repo": "pytorch/pytorch", "number": 25247, "title": "where is the link to this NOTE?", "body": "## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. 
If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->\r\n\r\n```\r\n# No `def __len__(self)` default?\r\n# See NOTE [ Lack of Default `__len__` in Python Abstract Base Classes ]\r\n```", "url": "https://github.com/pytorch/pytorch/issues/25247", "state": "closed", "labels": [], "created_at": "2019-08-27T13:14:09Z", "updated_at": "2024-02-27T05:02:26Z", "user": "vainaixr" }, { "repo": "pytorch/examples", "number": 621, "title": "RuntimeError: CUDA out of memory. Tried to allocate 26.00 MiB .................", "body": "RuntimeError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 1024.00 MiB total capacity; 435.61 MiB already allocated; 24.17 MiB free; 26.39 MiB cached)\r\nI have tried to set the batch-size as 16 or 32, but it didn't work. Would you please tell me how to solve this problem?", "url": "https://github.com/pytorch/examples/issues/621", "state": "closed", "labels": [], "created_at": "2019-08-27T12:38:17Z", "updated_at": "2022-03-09T23:46:25Z", "comments": 4, "user": "qiahui" }, { "repo": "pytorch/examples", "number": 620, "title": "Can I use async in ImageNet", "body": "Does the example of ImageNet support async? If not, how can I add this? Are there some suggestions?\r\n\r\nThanks!", "url": "https://github.com/pytorch/examples/issues/620", "state": "closed", "labels": [], "created_at": "2019-08-27T02:38:13Z", "updated_at": "2019-09-23T00:13:17Z", "comments": 2, "user": "ElegantLin" }, { "repo": "pytorch/pytorch", "number": 25180, "title": "How to filter with a transfer function in Pytorch??", "body": "Hi,\r\n\r\nI am trying to implement a 1D convolution operation with `F.conv1d`. The current usage of this function is to provide the weights of the filter directly in the time domain. However, for some DSP purposes, it is more effective to apply the filtering process in terms of the transfer function of the filter. In other words, to provide the coefficients of numerator and denominator of the transfer function and the function applies the filtering process accordingly. \r\n\r\nThis is already provided in [scipy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.lfilter.html) as well as [Matlab](https://www.mathworks.com/help/matlab/ref/filter.html). \r\n\r\nI think it is possible to do the same with Pytorch, but I am still struggling to grasp all the details to achieve this .. could you please give any tips?\r\n\r\nMany thanks in advance\r\nBest", "url": "https://github.com/pytorch/pytorch/issues/25180", "state": "closed", "labels": [], "created_at": "2019-08-26T15:22:31Z", "updated_at": "2019-08-26T15:42:02Z", "user": "ahmed-fau" }, { "repo": "pytorch/pytorch", "number": 25003, "title": "How to use manually installed third_party libraries, instead of recursive third_party?", "body": "Some of the packages **have been manually installed** on the system. 
How to build **everything** based on existing packages, instead of the **recursively checked out** third_party libraries?\r\n\r\nFor instance: **pybind11** ???", "url": "https://github.com/pytorch/pytorch/issues/25003", "state": "closed", "labels": [], "created_at": "2019-08-22T00:39:11Z", "updated_at": "2019-08-22T15:28:57Z", "user": "jiapei100" }, { "repo": "pytorch/xla", "number": 943, "title": "[Looking for suggestions] How to start looking at xla code?", "body": "not really an issue, it's just I met so many random errors from stack-trace-back or other source which I barely understand, and the model training is quite slow and I cannot spot which tensors got various shape at different training and are re-allocated to cpu, so I'm thinking if looking into the `csrc` would help. Yet not quite sure if I wanna find out the reason of the slow training where shall I get started with? thanks so much!", "url": "https://github.com/pytorch/xla/issues/943", "state": "closed", "labels": [], "created_at": "2019-08-19T14:58:08Z", "updated_at": "2019-08-25T14:24:47Z", "user": "crystina-z" }, { "repo": "pytorch/examples", "number": 611, "title": "Problem on multiple nodes", "body": "Hi, when I am running ImageNet example on multiple nodes, I met the problem showing \r\n`RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:272, unhandled system error` on my node 0.\r\n\r\nThe command I used is \r\n`python main.py -a resnet50 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 0 [imagenet-folder with train and val folders]`\r\n\r\nThe platform I used is `Python 3.6` using Anaconda on CentOS and the PyTorch Version is `1.1.0`.\r\n\r\nActually, I met this problem on single node and multi GPUs but I solved it by setting\r\n`export NCCL_P2P_DISABLE=1`.\r\n\r\nAlso before I ran the code, I would set `OMP_NUM_THREADS=1` to make the distributed training faster. \r\n\r\nI made sure that my IP and port were correct.\r\n\r\nDo you know how to solve it? \r\n\r\nThanks\r\n", "url": "https://github.com/pytorch/examples/issues/611", "state": "open", "labels": [ "distributed" ], "created_at": "2019-08-15T09:21:21Z", "updated_at": "2022-03-09T20:52:45Z", "comments": 2, "user": "ElegantLin" }, { "repo": "pytorch/pytorch", "number": 24399, "title": "How to get NLLLoss grad?", "body": "import torch\r\nimport torch.nn as nn\r\n\r\nm = nn.LogSoftmax(dim=1)\r\nloss = nn.NLLLoss()\r\na=[[2., 0.],\r\n[1., 1.]]\r\ninput = torch.tensor(a, requires_grad=True)\r\ntarget = torch.tensor([1, 1])\r\noutput = loss(m(input), target)\r\noutput.backward()\r\nprint(input.grad)\r\n-------------------------------------------------------\r\ntensor([[ 0.4404, -0.4404],\r\n [ 0.2500, -0.2500]])\r\n----------------------------------\r\nHow to get input.grad?What's the formula\uff1f\r\n", "url": "https://github.com/pytorch/pytorch/issues/24399", "state": "closed", "labels": [], "created_at": "2019-08-15T09:07:53Z", "updated_at": "2019-08-15T16:26:06Z", "user": "williamlzw" }, { "repo": "pytorch/audio", "number": 235, "title": "How to make the data precision loaded by torchaudio.load be consistent with the data loaded by the librosa.load", "body": "I found that data precision loaded by torchaudio.load is much lower than librosa. 
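Roughly, the comparison I am running looks like the sketch below (the file path is a placeholder, I am assuming a mono file, and the int16 rescaling is only my guess at where the precision gap comes from):

```python
import librosa
import numpy as np
import torch
import torchaudio

path = "sample.wav"  # placeholder: any mono wav file

wav_ta, sr_ta = torchaudio.load(path)
wav_lr, sr_lr = librosa.load(path, sr=None)  # sr=None keeps the native rate; librosa gives float32 in [-1, 1]

wav_ta = wav_ta.squeeze()  # mono file: collapse the channel dimension regardless of layout
if not wav_ta.is_floating_point():
    # integer PCM from torchaudio: rescale to the same [-1, 1] range librosa uses
    wav_ta = wav_ta.float() / 32768.0  # 2**15, full scale of int16 PCM

print("max abs difference:", np.abs(wav_ta.numpy() - wav_lr).max())
```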
Is there a way to improve data precision?\r\n", "url": "https://github.com/pytorch/audio/issues/235", "state": "closed", "labels": [], "created_at": "2019-08-14T15:13:41Z", "updated_at": "2019-08-27T20:12:50Z", "user": "YapengTian" }, { "repo": "pytorch/pytorch", "number": 24310, "title": "[Question] Who can tell me where is the windows version torch in PYPI? It just like missing.", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/24310", "state": "closed", "labels": [ "module: windows", "triaged" ], "created_at": "2019-08-14T05:48:57Z", "updated_at": "2020-02-04T03:10:59Z", "user": "xiaohuihuichao" }, { "repo": "pytorch/examples", "number": 607, "title": "Does this variable 'tokens' make sense?", "body": "https://github.com/pytorch/examples/blob/4581968193699de14b56527296262dd76ab43557/word_language_model/data.py#L32\r\n\r\nThanks!", "url": "https://github.com/pytorch/examples/issues/607", "state": "closed", "labels": [], "created_at": "2019-08-13T03:33:11Z", "updated_at": "2019-08-16T12:01:08Z", "comments": 4, "user": "standbyme" }, { "repo": "pytorch/pytorch", "number": 24220, "title": "Where is nn.Transformer for pytorch 1.2.0 on win10? ", "body": "## \u2753 Questions and Help\r\nI ran the installing code \"pip3 install torch==1.2.0 torchvision==0.4.0 -f https://download.pytorch.org/whl/torch_stable.html\" and installed torch 1.2.0.\r\n\r\nYet, I can't find the Transformers in nn module, where are they? platform: WIN10 \r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/24220", "state": "closed", "labels": [], "created_at": "2019-08-13T01:08:31Z", "updated_at": "2019-10-07T02:07:59Z", "user": "shuaishuaij" }, { "repo": "pytorch/examples", "number": 605, "title": "DistributedDataParralle training speed", "body": "Hi, I am using image net. But there is no big difference between the time consuming when I used 2 GPUs, 4 GPUs or 8 GPUs. I just changed the `gpu id` and batch size to guarantee the memory of GPU was fully used. The speed did not increase although I used more GPUs. Is there something wrong about what I did?\r\n\r\nThanks a lot.", "url": "https://github.com/pytorch/examples/issues/605", "state": "open", "labels": [ "distributed" ], "created_at": "2019-08-10T16:37:09Z", "updated_at": "2022-10-11T11:29:08Z", "comments": 5, "user": "ElegantLin" }, { "repo": "pytorch/tutorials", "number": 606, "title": "Is there any reason for using tensor.data?", "body": "https://github.com/pytorch/tutorials/blob/60d6ef365e36f3ba82c2b61bf32cc40ac4e86c7b/beginner_source/blitz/autograd_tutorial.py#L160\r\n\r\nThis is an official tutorial codes.\r\n\r\nIs there any reason for using tensor.data? 
\r\n~~~\r\nx = torch.randn(3, requires_grad=True)\r\n\r\ny = x * 2\r\nwhile y.data.norm() < 1000:\r\n y = y * 2\r\n~~~\r\n\r\nIf not, I think this code should be replaced by \r\n~~~\r\ny = x * 2\r\nwhile y.detach().norm() < 1000:\r\n y = y * 2\r\n~~~", "url": "https://github.com/pytorch/tutorials/issues/606", "state": "closed", "labels": [], "created_at": "2019-08-09T06:27:43Z", "updated_at": "2019-09-04T13:35:22Z", "comments": 0, "user": "minlee077" }, { "repo": "pytorch/pytorch", "number": 24009, "title": "How to convert pytorch model(faster-rcnn) to onnx?", "body": "## \u2753 Questions and Help\r\n\r\nI trained a faster-rcnn model use the project: jwyang/faster-rcnn.pytorch\r\n[https://github.com/jwyang/faster-rcnn.pytorch](url)\r\nI want convert the model to onnx, this is my code:\r\n` torch_out=torch.onnx.export(fasterRCNN,\\\r\n (im_data,im_info,gt_boxes,num_boxes),\\ \r\n \"onnx_model_name.onnx\",\\\r\n export_params=True,\\\r\n opset_version=10,\\\r\n do_constant_folding=True,\\\r\n input_names=['input'],\\\r\n output_names=['output'])`\r\nI get this error:\r\nFile \"/home/user/anaconda3/envs/mypytorch-env/lib/python3.7/site-packages/torch/jit/__init__.py\", line 297, in forward\r\n out_vars, _ = _flatten(out)\r\nRuntimeError: Only tuples, lists and Variables supported as JIT inputs, but got int\r\nTerminated\r\n\r\nWho ever encountered this problem? \r\nPlease help me. Thank you very much.", "url": "https://github.com/pytorch/pytorch/issues/24009", "state": "closed", "labels": [ "module: onnx", "triaged" ], "created_at": "2019-08-08T08:57:05Z", "updated_at": "2021-12-22T21:51:39Z", "user": "waynebianxx" }, { "repo": "pytorch/examples", "number": 604, "title": "Do you have any instructions for the use of functions related to C + + ports?", "body": "Do you have any instructions for the use of functions related to C + + ports?\r\nfor example:\r\nauto aa = torch::tensor({ 1, 2 , 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 });\r\nauto bb = torch::reshape(aa, { 2, 2, 2, 2 });\r\nconst void *src = bb.data<int>();\r\nstd::vector<int>value;\r\nvoid *dst = &value;\r\ntorch::CopyBytes(2, src, torch::DeviceType::CPU, dst, torch::DeviceType::CPU, 0);\r\nBut I found that I could not get the desired result.\r\nI hope to get your help. 
Thank you.", "url": "https://github.com/pytorch/examples/issues/604", "state": "closed", "labels": [], "created_at": "2019-08-07T06:56:32Z", "updated_at": "2019-08-07T17:26:02Z", "comments": 1, "user": "yeyuxmf" }, { "repo": "pytorch/tutorials", "number": 593, "title": "does the attention need do mask on the encoder_outputs?", "body": "attn_weights = self.attn(rnn_output, encoder_outputs)\r\nthis is the attn_weights, the number of output vector in a batch is different because the input length is different, and the number of output vector is padded align max_seq_len in the batch.\r\nbut when compute attn_weights, there is no mask operation on the padding output vec\uff0cis this right ?", "url": "https://github.com/pytorch/tutorials/issues/593", "state": "closed", "labels": [], "created_at": "2019-08-06T12:45:14Z", "updated_at": "2019-08-07T19:11:51Z", "comments": 1, "user": "littttttlebird" }, { "repo": "pytorch/examples", "number": 600, "title": "What is the different between train_loss and test_loss?", "body": "Hello, I am a student who is just beginning to learn pytorch, I have runnd the examples of MNIST code, I am curious why the train_loss and test_loss are calculated differently.Here is the calculation code.Why are they not using the same code?\r\n`loss = F.nll_loss(output, target) # train_loss`\r\n\r\n` test_loss = F.nll_loss(output, target, reduction='sum')`\r\n`test_loss = test_loss/len(test_loader.dataset)`\r\n", "url": "https://github.com/pytorch/examples/issues/600", "state": "closed", "labels": [], "created_at": "2019-08-05T02:35:51Z", "updated_at": "2019-08-18T11:15:44Z", "user": "wulongjian" }, { "repo": "pytorch/pytorch", "number": 23773, "title": " how to modify activation in GRU", "body": "We know in keras, `Bidirectional(GRU(128, activation='linear', return_sequences=True))(a1) # (240,256)`\uff0cthat is to say, we can choose activation.But in torch,there\u2019s no para to choose.`nn.GRU(n_in, n_hidden, bidirectional=True, dropout=droupout, batch_first=True, num_layers=num_layers)\r\n`I want to know how to modify activation in GRU in torch\r\n", "url": "https://github.com/pytorch/pytorch/issues/23773", "state": "closed", "labels": [], "created_at": "2019-08-05T01:45:08Z", "updated_at": "2019-08-05T02:05:10Z", "user": "MichelleYang2017" }, { "repo": "pytorch/examples", "number": 598, "title": "Problem finding the model en", "body": "Hi,\r\nHow can I resolve this error?\r\n```\r\n$ python train.py \r\nTraceback (most recent call last):\r\n File \"train.py\", line 20, in <module>\r\n inputs = data.Field(lower=args.lower, tokenize='spacy')\r\n File \"/home/mahmood/.local/lib/python2.7/site-packages/torchtext/data/field.py\", line 152, in __init__\r\n self.tokenize = get_tokenizer(tokenize)\r\n File \"/home/mahmood/.local/lib/python2.7/site-packages/torchtext/data/utils.py\", line 12, in get_tokenizer\r\n spacy_en = spacy.load('en')\r\n File \"/home/mahmood/.local/lib/python2.7/site-packages/spacy/__init__.py\", line 27, in load\r\n return util.load_model(name, **overrides)\r\n File \"/home/mahmood/.local/lib/python2.7/site-packages/spacy/util.py\", line 139, in load_model\r\n raise IOError(Errors.E050.format(name=name))\r\nIOError: [E050] Can't find model 'en'. 
It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.\r\n\r\n```", "url": "https://github.com/pytorch/examples/issues/598", "state": "closed", "labels": [], "created_at": "2019-08-01T11:09:46Z", "updated_at": "2022-03-10T00:25:00Z", "comments": 1, "user": "mahmoodn" }, { "repo": "pytorch/text", "number": 572, "title": "How to set the max length for batches?", "body": "Is there a way to use something like `max_len` argument: all text entries in a dataset longer than a certain length can be thrown out.", "url": "https://github.com/pytorch/text/issues/572", "state": "closed", "labels": [], "created_at": "2019-07-29T18:25:25Z", "updated_at": "2019-09-16T15:04:40Z", "user": "XinDongol" }, { "repo": "pytorch/pytorch", "number": 23490, "title": "upcoming PEP 554: how much effort we need to support sub-interpreter", "body": "## \ud83d\ude80 Feature\r\nsupport python sub-interpreters and maintains all status of the torch library.\r\n\r\n## Motivation\r\n\r\nas #10950 demonstrates, the current ``torch`` library cannot lives on multiple sub-interpreter simultaneously within the same process. But we do need to run python codes on multiple \"threads\" at the same time for the very reasons why ``torch`` introduces ``torch.multiprocessing`` and ``DistributedDataParallel`` (the single node scenario). As [PEP 554](https://www.python.org/dev/peps/pep-0554/) is proposed back in 2017 and maybe available by 2019 or 2020, I think it is necessary to make use of it because:\r\n- It is easier to sharing data between interpreters than between processes\r\n- It will reduce gpu memory overhead (Every subprocess consume at least 400~500MB gpu memory)\r\n- It can help avoid relatively complex process management problems\r\n\r\nAnd between multi-interpreter and multi-process, there is almost no difference on user coding experience and front-end design, and the changes will be made behind the scene. \r\n\r\n## Pitch\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\nI think works need to be done on following aspects:\r\n- Changes any global status that should bind to a interpreter to a per-interpreter status set. (the ``detach`` method mentioned in #10950, for example)\r\n ``Tensor`` lifecycle management maybe not a good example, because it is also a choice that ``Tensor`` can be shared across interpreters.\r\n- Prevent re-initializing and double-finalize for those status that are indeed global. (CUDA initialization, for example)\r\n- Create interface and infrastructure for controlling communication and sharing ``Tensor`` between interpreters. 
\r\n- Deprecate ``torch.multiprocessing`` module\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/23490", "state": "open", "labels": [ "feature", "triaged" ], "created_at": "2019-07-28T16:19:53Z", "updated_at": "2019-08-02T16:16:00Z", "user": "winggan" }, { "repo": "pytorch/pytorch", "number": 23423, "title": "how to process c++ forward multiple return values", "body": "my model return value like this , it is detection model\r\noutput = (\r\n tensor1, # loc preds\r\n tensor2 # conf preds\r\n )\r\n\r\nhow to get return values in c++\r\n", "url": "https://github.com/pytorch/pytorch/issues/23423", "state": "closed", "labels": [], "created_at": "2019-07-26T07:37:15Z", "updated_at": "2019-07-26T11:14:56Z", "user": "kakaluote" }, { "repo": "pytorch/vision", "number": 1166, "title": "How to train resnet18 to the best accuracy?", "body": "I recently did a simple experiment, training cifar10 with resnet18( torchvision.models), but I can't achieve the desired accuracy(93%).\r\n\r\n\r\nI found a GitHub repository where the example can be trained to 93% accuracy, [pytorch-cifar](https://github.com/kuangliu/pytorch-cifar). But his implementation is different from torchvision.models.resnet18, The difference may be [here](https://github.com/kuangliu/pytorch-cifar/issues/91#issuecomment-514933998). \r\n\r\n\r\n\r\nIs there any example to teach us how to train cifar10 with resnet18 to the optimal precision?", "url": "https://github.com/pytorch/vision/issues/1166", "state": "closed", "labels": [ "question", "module: models", "topic: classification" ], "created_at": "2019-07-25T09:57:20Z", "updated_at": "2019-07-26T09:05:35Z", "user": "zerolxf" }, { "repo": "pytorch/pytorch", "number": 23218, "title": "What is the torchvision version for pytorch-nightly? Use 0.3.0 to report errors", "body": "The official website does not provide the torchvison version installation method.\r\nHere is the error message when torchvison 0.3.0 is used.\r\ntorchvision/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at7getTypeERKNS_6TensorE\r\n\r\nThat should be caused by the mismatch version between pytorch-nightly and torchvision 0.3.0.\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/23218", "state": "closed", "labels": [ "module: docs", "triaged", "module: vision" ], "created_at": "2019-07-23T06:37:44Z", "updated_at": "2021-03-12T13:00:16Z", "user": "zymale" }, { "repo": "pytorch/pytorch", "number": 23215, "title": "How the ops are registerd to ATenDispatch's op tables_", "body": "I want to know how ops are registered to ATenDispatch's op_tables_;\r\nI see ATenDispatch has registerOp/registerVariableOp interface, but not found code that these interfaces are called to register ops;\r\nIs ATenDispatch use the global static varable to trigger the registration like C10_DECLARE_REGISTRY mechanism\uff1f\r\n\r\nI see the code below, but cannot find where \"aten::linear(Tensor input, Tensor weight, Tensor? bias=None) -> Tensor\" is registered to ATenDispatch.\r\n\r\n`static inline Tensor linear(const Tensor & input, const Tensor & weight, const Tensor & bias) {\r\n static auto table = globalATenDispatch().getOpTable(\"aten::linear(Tensor input, Tensor weight, Tensor? 
bias=None) -> Tensor\");\r\n return table->getOp<Tensor (const Tensor &, const Tensor &, const Tensor &)>(at::detail::infer_backend(input), at::detail::infer_is_variable(input))(input, weight, bias);\r\n}`\r\n", "url": "https://github.com/pytorch/pytorch/issues/23215", "state": "closed", "labels": [], "created_at": "2019-07-23T06:17:08Z", "updated_at": "2019-07-23T10:44:28Z", "user": "dongfangduoshou123" }, { "repo": "pytorch/tutorials", "number": 566, "title": "Is this RNN implementation different from the vanilla RNN?", "body": "https://github.com/pytorch/tutorials/blob/master/intermediate_source/char_rnn_classification_tutorial.py\r\n\r\nStandard Interpretation\r\n-------------------------\r\n\r\nIn the original RNN, the hidden state and output are calculated as\r\n\r\n[![enter image description here][1]][1]\r\n\r\nin other words, we obtain the the output from the hidden state.\r\n\r\nAccording to [Wiki][2], the RNN architecture can be unfolded like this\r\n[![vani][3]][3]\r\n\r\nAnd the code I have been using is like:\r\n\r\n class Model(nn.Module):\r\n def __init__(self, input_size, output_size, hidden_dim, n_layers):\r\n super(Model, self).__init__()\r\n self.hidden_dim = hidden_dim\r\n self.rnn = nn.RNN(input_size, hidden_dim, 1) \r\n self.fc = nn.Linear(hidden_dim, output_size)\r\n \r\n def forward(self, x):\r\n batch_size = x.size(0)\r\n \r\n out, hidden = self.rnn(x)\r\n \r\n # getting output from the hidden state\r\n out = out..view(-1, self.hidden_dim)\r\n out = self.fc(out)\r\n \r\n return out, hidden\r\n\r\nRNN as \"pure\" feed-forward layers\r\n-------------------------\r\nBut in this tutorial the hidden layer calculation is same as the standard interpretation, but the output is is calculated independently from the current hidden state `h`.\r\n\r\nTo me, the math behind this implementation is:\r\n\r\n[![enter image description here][6]][6]\r\n\r\nSo, this implementation is different from the original RNN implementation?\r\n\r\n [1]: https://i.stack.imgur.com/1IdH7.png\r\n [2]: https://en.wikipedia.org/wiki/Recurrent_neural_network\r\n [3]: https://i.stack.imgur.com/aJL7l.png\r\n [4]: https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html#creating-the-network\r\n [5]: https://i.stack.imgur.com/mfjcp.png\r\n [6]: https://i.stack.imgur.com/M3jgf.gif\r\n\r\n@chsasank @zou3519 ", "url": "https://github.com/pytorch/tutorials/issues/566", "state": "closed", "labels": [], "created_at": "2019-07-20T07:49:30Z", "updated_at": "2021-06-16T00:15:41Z", "comments": 1, "user": "KinWaiCheuk" }, { "repo": "pytorch/xla", "number": 837, "title": "How to save/load a model trained with `torch_xla_py.data_parallel`", "body": "A `torch_xla_py.data_parallel` model doesn't have an implementation for the function `state_dict()` which is required to save/load the model. Is there a way around this?\r\n\r\nThanks", "url": "https://github.com/pytorch/xla/issues/837", "state": "closed", "labels": [], "created_at": "2019-07-18T17:23:42Z", "updated_at": "2019-07-29T23:23:51Z", "user": "ibeltagy" }, { "repo": "pytorch/xla", "number": 836, "title": "how to read the output of `_xla_metrics_report`", "body": "This is not an issue, but I am curious how to read the metrics report printed by:\r\n`print(torch_xla._XLAC._xla_metrics_report())`\r\n\r\nI want to see if all the operations I am using are supported or not, and if there's something to do to speed it up. 
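One rough way to read the report for that purpose: counters whose name starts with `aten::` are ops that were not lowered to XLA and fell back to a CPU implementation, and each fallback forces extra transfers (the `aten::_local_scalar_dense` counter at the bottom of the report below, for instance, corresponds to `.item()`-style device-to-host reads). A small sketch that filters the report, using the same call as above:

```python
import torch_xla

report = torch_xla._XLAC._xla_metrics_report()

# Counters named "aten::<op>" mark ops that fell back to a non-XLA (CPU)
# implementation; they are usually the first thing to chase when training is slow.
for line in report.splitlines():
    if "aten::" in line:
        print(line.strip())
```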
\r\n\r\nHere's a sample output: \r\n```\r\n2019-07-18 17:15:45,048: ** ** * Saving fine-tuned model ** ** * \r\nMetric: CompileTime\r\n TotalSamples: 24\r\n Counter: 07s501ms263.438us\r\n ValueRate: 415ms845.224us / second\r\n Rate: 1.53144 / second\r\n Percentiles: 1%=092ms776.335us; 5%=116ms990.572us; 10%=184ms266.354us; 20%=185ms543.885us; 50%=270ms754.761us; 80%=415ms236.583us; 90%=416ms362.582us; 95%=417ms693.385us; 99%=417ms171.393us\r\nMetric: ExecuteTime\r\n TotalSamples: 1752\r\n Counter: 07m04s113ms694.219us\r\n ValueRate: 03s312ms257.946us / second\r\n Rate: 25.5639 / second\r\n Percentiles: 1%=068ms224.061us; 5%=069ms564.310us; 10%=069ms785.400us; 20%=069ms102.875us; 50%=181ms379.773us; 80%=189ms745.170us; 90%=191ms81.607us; 95%=194ms931.718us; 99%=225ms80.436us\r\nMetric: InboundData\r\n TotalSamples: 880\r\n Counter: 3.44KB\r\n ValueRate: 37.22B / second\r\n Rate: 9.30398 / second\r\n Percentiles: 1%=4.00B; 5%=4.00B; 10%=4.00B; 20%=4.00B; 50%=4.00B; 80%=4.00B; 90%=4.00B; 95%=4.00B; 99%=4.00B\r\nMetric: OutboundData\r\n TotalSamples: 1976\r\n Counter: 4.09GB\r\n ValueRate: 20.98MB / second\r\n Rate: 10.5791 / second\r\n Percentiles: 1%=4.00B; 5%=3.00KB; 10%=3.00KB; 20%=3.00KB; 50%=12.00KB; 80%=2.25MB; 90%=2.25MB; 95%=9.00MB; 99%=9.00MB\r\nMetric: ReleaseCompileHandlesTime\r\n TotalSamples: 14\r\n Counter: 41s345ms872.979us\r\n ValueRate: 02s970ms383.483us / second\r\n Rate: 0.667202 / second\r\n Percentiles: 1%=001ms416.541us; 5%=001ms416.541us; 10%=049ms618.232us; 20%=059ms4.227us; 50%=130ms959.829us; 80%=10s703ms744.276us; 90%=11s570ms353.038us; 95%=11s585ms908.001us; 99%=11s585ms908.001us\r\nMetric: ReleaseDataHandlesTime\r\n TotalSamples: 3409\r\n Counter: 18s036ms534.459us\r\n ValueRate: 101ms925.451us / second\r\n Rate: 49.556 / second\r\n Percentiles: 1%=643.545us; 5%=776.956us; 10%=879.853us; 20%=001ms27.498us; 50%=001ms436.214us; 80%=003ms823.050us; 90%=004ms105.973us; 95%=005ms355.654us; 99%=008ms39.847us\r\nMetric: TransferFromServerTime\r\n TotalSamples: 880\r\n Counter: 09s893ms77.809us\r\n ValueRate: 094ms23.901us / second\r\n Rate: 9.30398 / second\r\n Percentiles: 1%=001ms136.428us; 5%=001ms262.894us; 10%=001ms369.658us; 20%=002ms514.730us; 50%=002ms768.530us; 80%=002ms232.447us; 90%=055ms522.896us; 95%=063ms333.605us; 99%=071ms470.850us\r\nMetric: TransferToServerTime\r\n TotalSamples: 1976\r\n Counter: 02m42s730ms811.808us\r\n ValueRate: 670ms609.184us / second\r\n Rate: 10.5653 / second\r\n Percentiles: 1%=001ms358.042us; 5%=002ms672.558us; 10%=003ms745.003us; 20%=005ms642.737us; 50%=016ms554.496us; 80%=077ms944.343us; 90%=187ms107.324us; 95%=231ms913.229us; 99%=263ms297.246us\r\nCounter: CachedSyncTensors\r\n Value: 1736\r\nCounter: CreateCompileHandles\r\n Value: 17\r\nCounter: CreateDataHandles\r\n Value: 190064\r\nCounter: CreateXlaTensor\r\n Value: 2813256\r\nCounter: DestroyCompileHandles\r\n Value: 14\r\nCounter: DestroyDataHandles\r\n Value: 186616\r\nCounter: DestroyXlaTensor\r\n Value: 2809944\r\nCounter: ReleaseCompileHandles\r\n Value: 14\r\nCounter: ReleaseDataHandles\r\n Value: 186616\r\nCounter: UncachedSyncTensors\r\n Value: 24\r\nCounter: XRTAllocateFromTensor_Empty\r\n Value: 1629\r\nCounter: XrtCompile_Empty\r\n Value: 2176\r\nCounter: XrtExecuteChained_Empty\r\n Value: 2176\r\nCounter: XrtExecute_Empty\r\n Value: 2176\r\nCounter: XrtRead_Empty\r\n Value: 2176\r\nCounter: XrtReleaseAllocationHandle_Empty\r\n Value: 2176\r\nCounter: XrtReleaseCompileHandle_Empty\r\n Value: 2176\r\nCounter: XrtSessionCount\r\n Value: 25\r\nCounter: 
XrtSubTuple_Empty\r\n Value: 2176\r\nCounter: aten::_local_scalar_dense\r\n Value: 880\r\n```", "url": "https://github.com/pytorch/xla/issues/836", "state": "closed", "labels": [], "created_at": "2019-07-18T17:21:05Z", "updated_at": "2019-07-25T23:21:40Z", "user": "ibeltagy" }, { "repo": "pytorch/pytorch", "number": 23015, "title": "I used libtorch to write resnet18 for training in C++, so how to load resnet18.pth in pytorch to help pre-training", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\nI used libtorch to write resnet18 for training in C++, so how to load resnet18.pth in pytorch to help pre-training\uff1b\r\nCannot import using torch::load(resnet18,\"./resnet18.pth\")\r\nResnet18 written in c++ is correct", "url": "https://github.com/pytorch/pytorch/issues/23015", "state": "closed", "labels": [], "created_at": "2019-07-18T08:53:31Z", "updated_at": "2019-07-18T08:57:16Z", "user": "CF-chen-feng-CF" }, { "repo": "pytorch/pytorch", "number": 22858, "title": "What is the abbreviation of CI?", "body": "## \u2753 Questions and Help\r\nI ma sorry, I don't understand the sentence: ` On CI, we test with BUILD_SHARED_LIBS=OFF.`\r\nWhat is the CI ?\r\nWhat is the BUILD_SHARED_LIBS?\r\nCould someone explain it for me?\r\nThank you very much!\r\n![image](https://user-images.githubusercontent.com/30762967/61199981-868abd00-a712-11e9-9fc3-fc3d3e25446d.png)\r\n", "url": "https://github.com/pytorch/pytorch/issues/22858", "state": "closed", "labels": [], "created_at": "2019-07-15T07:11:37Z", "updated_at": "2019-07-15T07:17:31Z", "user": "137996047" }, { "repo": "pytorch/pytorch", "number": 22791, "title": "How to use mpi backend without CUDA_aware", "body": "We noticed that the MPI backend doesn't support the GPU from the official website,(https://pytorch.org/docs/master/distributed.html), then we complied the Pytorch with USE_MPI=1. The command line is `python3 main.py -a resnet50 --dist-url 'tcp://12.0.50.1:12348' --dist-backend 'mpi' --multiprocessing-distributed --world-size 2 --rank 0 /data/tiny-imagenet-200/`. However, we got the error message \"CUDA tensor detected and the MPI used doesn't have CUDA-aware MPI support\". I think the GPU-Direct is not enabled if used MPI backend, I don't know why it uses the CUDA tensor and CUDA-aware , seems GPU-Direct route. And how to set the args for avoiding the CUDA-aware. Thank you :)", "url": "https://github.com/pytorch/pytorch/issues/22791", "state": "open", "labels": [ "triaged", "module: mpi" ], "created_at": "2019-07-12T08:09:15Z", "updated_at": "2020-11-25T06:05:04Z", "user": "401qingkong" }, { "repo": "pytorch/pytorch", "number": 22731, "title": "How to convert at::Tensor (one element) type into a float type in c++ libtorch?", "body": "When I take an element A (eg: x[1][2][3][4], also type at::Tensor) from a four-dimensional variable x(at::Tensor), I want to compare (or multiply) A and B (float type) , how do I convert A (at::Tensor) to float? 
Thanks a lot!", "url": "https://github.com/pytorch/pytorch/issues/22731", "state": "closed", "labels": [], "created_at": "2019-07-11T05:33:49Z", "updated_at": "2023-06-16T06:44:22Z", "user": "FightStone" }, { "repo": "pytorch/pytorch", "number": 22709, "title": "[docs] Unclear how to use pixel_shuffle", "body": "## \ud83d\udcda Documentation\r\n\r\nThe function is documented as taking no inputs, but uses inputs in the example. \r\n\r\n![image](https://user-images.githubusercontent.com/5652049/61009358-6d63c400-a340-11e9-8633-7d798588089b.png)\r\n", "url": "https://github.com/pytorch/pytorch/issues/22709", "state": "closed", "labels": [ "module: docs", "triaged" ], "created_at": "2019-07-10T22:28:19Z", "updated_at": "2020-10-06T10:53:53Z", "user": "zou3519" }, { "repo": "pytorch/examples", "number": 590, "title": "Why normalize rewards?", "body": "In [line 75](https://github.com/pytorch/examples/blob/master/reinforcement_learning/actor_critic.py#L75) of actor-critic.py, there is a code that normalizes the rewards. However, we don't normalize the values returned from the critic. Why do we do this?", "url": "https://github.com/pytorch/examples/issues/590", "state": "closed", "labels": [], "created_at": "2019-07-10T12:06:56Z", "updated_at": "2022-03-09T23:58:34Z", "comments": 1, "user": "ThisIsIsaac" }, { "repo": "pytorch/tutorials", "number": 553, "title": "improve pytorch tutorial for Data Parallelism", "body": "in this tutorial for data parallel ([link](https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html))\r\nit can be useful if you can add how to handle loss function for the case that we are using multiple gpus.\r\nusually naive way will cause unbalance gpu memory usage \r\n", "url": "https://github.com/pytorch/tutorials/issues/553", "state": "open", "labels": [], "created_at": "2019-07-08T21:04:57Z", "updated_at": "2021-10-28T13:15:08Z", "comments": 3, "user": "isalirezag" }, { "repo": "pytorch/examples", "number": 586, "title": "syntax for evaluation mode on custom data?", "body": "I fine-tuned a model on a custom dataset that was pretrained with Imagenet. Now that I have the model and resulting best epoch in a pth file. What is the correct syntax for evaluation on the validation dataset using my new model. I know I need to set the '-e' flag, butI don't see how to point the script to my newly-trained model. Apologies if this is a novice issue. ", "url": "https://github.com/pytorch/examples/issues/586", "state": "closed", "labels": [], "created_at": "2019-07-06T16:41:18Z", "updated_at": "2022-03-10T05:54:16Z", "comments": 1, "user": "gbrow004" }, { "repo": "pytorch/pytorch", "number": 22549, "title": "How to programmatically check PyTorch version", "body": "## \ud83d\udcda Documentation\r\n\r\n[Minor minor detail]\r\nIn the reproducibility section of the docs or in the FAQ, I would add a simple subsection/snippet of code to show how to programmatically check the running version of PyTorch. \r\nThis can also encourage users to take into account heterogeneity of PyTorch versions in their code.\r\n\r\nBy the way, a simple regex on `torch.__version__` is enough (this assuming version numbering will not change).\r\n```python\r\nimport torch\r\nimport re\r\nif int(re.search(r'([\\d.]+)', torch.__version__).group(1).replace('.', '')) < 100:\r\n raise ImportError('Your PyTorch version is not supported. 
'\r\n 'Please download and install PyTorch 1.x')\r\n```\r\n", "url": "https://github.com/pytorch/pytorch/issues/22549", "state": "closed", "labels": [ "module: docs", "triaged", "enhancement" ], "created_at": "2019-07-05T15:02:31Z", "updated_at": "2019-11-16T11:55:47Z", "user": "srossi93" }, { "repo": "pytorch/xla", "number": 803, "title": "How to monitor the TPU utilization and memory usage when training?", "body": "", "url": "https://github.com/pytorch/xla/issues/803", "state": "closed", "labels": [ "question" ], "created_at": "2019-07-05T11:19:51Z", "updated_at": "2021-11-15T18:12:41Z", "user": "anhle-uet" }, { "repo": "pytorch/xla", "number": 802, "title": "Is there any detailed document on how to build and train Pytorch network on TPU?", "body": "Hi, I couldn't find any detailed guide or tutorial on how to use this project. Could you point me out some? Thanks a lot!", "url": "https://github.com/pytorch/xla/issues/802", "state": "closed", "labels": [], "created_at": "2019-07-05T05:27:11Z", "updated_at": "2019-07-05T05:46:57Z", "user": "anhle-uet" }, { "repo": "pytorch/examples", "number": 584, "title": "in DCGAN example, Why do we need to make netD inference twice?", "body": "To my understanding, the two lines nearly have the same functionality (same input, same output) except in the first inference `fake` is detached.\r\n\r\nhttps://github.com/pytorch/examples/blob/1de2ff9338bacaaffa123d03ce53d7522d5dcc2e/dcgan/main.py#L229\r\n\r\nhttps://github.com/pytorch/examples/blob/1de2ff9338bacaaffa123d03ce53d7522d5dcc2e/dcgan/main.py#L241\r\n\r\nIs it necessary to make netD inference twice with the same input?\r\nWhat if we re-use the output of the first inference (with fake input) to calculate `errG`?\r\n\r\ninstead of the original codes:\r\n```\r\n ############################\r\n # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))\r\n ###########################\r\n # train with real\r\n netD.zero_grad()\r\n real_cpu = data[0].to(device)\r\n batch_size = real_cpu.size(0)\r\n label = torch.full((batch_size,), real_label, device=device)\r\n\r\n output = netD(real_cpu)\r\n errD_real = criterion(output, label)\r\n errD_real.backward()\r\n D_x = output.mean().item()\r\n\r\n # train with fake\r\n noise = torch.randn(batch_size, nz, 1, 1, device=device)\r\n fake = netG(noise)\r\n label.fill_(fake_label)\r\n output = netD(fake.detach())\r\n errD_fake = criterion(output, label)\r\n errD_fake.backward()\r\n D_G_z1 = output.mean().item()\r\n errD = errD_real + errD_fake\r\n optimizerD.step()\r\n\r\n ############################\r\n # (2) Update G network: maximize log(D(G(z)))\r\n ###########################\r\n netG.zero_grad()\r\n label.fill_(real_label) # fake labels are real for generator cost\r\n output = netD(fake)\r\n errG = criterion(output, label)\r\n errG.backward()\r\n D_G_z2 = output.mean().item()\r\n optimizerG.step()\r\n```\r\n\r\ncan we re-write them as following?\r\n```\r\n ############################\r\n # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))\r\n ###########################\r\n # train with real\r\n netD.zero_grad()\r\n real_cpu = data[0].to(device)\r\n batch_size = real_cpu.size(0)\r\n label = torch.full((batch_size,), real_label, device=device)\r\n\r\n real_output = netD(real_cpu)\r\n errD_real = criterion(real_output, label)\r\n errD_real.backward()\r\n D_x = real_output.mean().item()\r\n\r\n # train with fake\r\n noise = torch.randn(batch_size, nz, 1, 1, device=device)\r\n fake = netG(noise)\r\n label.fill_(fake_label)\r\n # output = 
netD(fake.detach())\r\n fake_output = netD(fake)\r\n errD_fake = criterion(fake_output, label)\r\n #errD_fake.backward()\r\n errD_fake.backward(retain_graph=True)\r\n D_G_z1 = fake_output.mean().item()\r\n errD = errD_real + errD_fake\r\n optimizerD.step()\r\n\r\n ############################\r\n # (2) Update G network: maximize log(D(G(z)))\r\n ###########################\r\n netG.zero_grad()\r\n netD.zero_grad()\r\n label.fill_(real_label) # fake labels are real for generator cost\r\n # output = netD(fake)\r\n errG = criterion(fake_output, label)\r\n errG.backward()\r\n D_G_z2 = fake_output.mean().item()\r\n optimizerG.step()\r\n```", "url": "https://github.com/pytorch/examples/issues/584", "state": "closed", "labels": [], "created_at": "2019-07-04T08:37:17Z", "updated_at": "2022-03-10T00:37:26Z", "comments": 7, "user": "DreamChaserMXF" }, { "repo": "pytorch/xla", "number": 797, "title": "How to use LR schedulers?", "body": "Can you provide an example of how to use torch.optim LR schedulers with Pytorch/XLA? I've been basing my code on the [test examples](https://github.com/pytorch/xla/tree/master/test), but not sure where to define the LR scheduler and where to step the scheduler since it looks like the optimizer is re-initialized each training loop. Thanks!", "url": "https://github.com/pytorch/xla/issues/797", "state": "closed", "labels": [], "created_at": "2019-07-04T04:41:29Z", "updated_at": "2019-07-10T05:32:16Z", "user": "brianhhu" }, { "repo": "pytorch/examples", "number": 582, "title": "[How to write nn.ModuleList() in Pytorch C++ API]", "body": "Hi,\r\nHow can i write nn.Modulelist() using Pytorch C++?\r\n\r\n@goldsborough \r\n@soumith \r\nAny help would be great.\r\n\r\nThank you\r\n", "url": "https://github.com/pytorch/examples/issues/582", "state": "open", "labels": [ "c++" ], "created_at": "2019-07-03T05:55:19Z", "updated_at": "2022-03-09T20:49:35Z", "user": "vinayak618" }, { "repo": "pytorch/xla", "number": 793, "title": "How to use a cluster of tpus?", "body": "Hi,\r\nThe documentation described the use case with only one cloud tpu. However, in tensorflow it is possible to setup multiple TPU's via TPUClusterResolver. So, how to setup pytorch-xla for training on a multiple tpu devices?. In my case it is 25 preemptible v2-8 TPUs.", "url": "https://github.com/pytorch/xla/issues/793", "state": "closed", "labels": [], "created_at": "2019-07-02T14:43:29Z", "updated_at": "2019-07-10T05:37:19Z", "user": "Rexhaif" }, { "repo": "pytorch/tutorials", "number": 549, "title": "How to combine Rescale with other transforms.RandomHorizontalFlip?", "body": "I am following the tutorial, but meet the problem:` File \"/home/swg/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/transforms/transforms.py\", line 49, in __call__\r\n img = t(img)\r\n File \"/home/swg/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/transforms/transforms.py\", line 448, in __call__\r\n return F.hflip(img)\r\n File \"/home/swg/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/transforms/functional.py\", line 345, in hflip\r\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\r\nTypeError: img should be PIL Image. Got <class 'dict'>\r\n` How to solve? 
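The built-in `transforms.RandomHorizontalFlip` only accepts PIL images, while the tutorial's `Rescale`/`ToTensor` pass the whole `{'image', 'landmarks'}` dict through the pipeline, so the flip receives a dict and raises. One workaround is a dict-aware flip that mirrors both the image and the landmark x-coordinates. A rough sketch, assuming the tutorial's sample layout (`image` as an H x W x C numpy array, `landmarks` as an N x 2 array of (x, y) points):

```python
import numpy as np

class RandomHorizontalFlipSample(object):
    """Horizontally flip an {'image', 'landmarks'} sample with probability p."""

    def __init__(self, p=0.5):
        self.p = p

    def __call__(self, sample):
        image, landmarks = sample['image'], sample['landmarks']
        if np.random.rand() < self.p:
            image = image[:, ::-1].copy()               # flip the width axis of H x W x C
            landmarks = landmarks.copy()
            landmarks[:, 0] = image.shape[1] - 1 - landmarks[:, 0]  # mirror x coordinates
        return {'image': image, 'landmarks': landmarks}

# e.g. transforms.Compose([Rescale(256), RandomCrop(224),
#                          RandomHorizontalFlipSample(), ToTensor()])
```

This keeps every stage dict-in / dict-out, so it composes with the tutorial's other transforms.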
Thx", "url": "https://github.com/pytorch/tutorials/issues/549", "state": "closed", "labels": [], "created_at": "2019-07-01T09:47:31Z", "updated_at": "2021-06-16T16:20:20Z", "user": "swg209" }, { "repo": "pytorch/pytorch", "number": 22381, "title": "How to apply transfer learning for custom object detection ?", "body": "## \u2753 Questions and Help\r\n\r\nIs there an example to apply transfer learning for **custom** object detection.\r\n[https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html)\r\n\r\nReference:-\r\nhttps://www.learnopencv.com/faster-r-cnn-object-detection-with-pytorch/", "url": "https://github.com/pytorch/pytorch/issues/22381", "state": "closed", "labels": [ "triaged", "module: vision" ], "created_at": "2019-06-30T16:44:47Z", "updated_at": "2019-07-01T21:38:00Z", "user": "spatiallysaying" }, { "repo": "pytorch/tutorials", "number": 548, "title": "how to set two inputs for caffe2", "body": "when I try to run the model on mobile devices by ONNX, I read the code in official turorials as follow:\r\n\r\n`import onnx`\r\n`import caffe2.python.onnx.backend as onnx_caffe2_backend`\r\n`model = onnx.load(\"super_resolution.onnx\")`\r\n`prepared_backend = onnx_caffe2_backend.prepare(model)`\r\n`W = {model.graph.input[0].name: x.data.numpy()}`\r\n`c2_out = prepared_backend.run(W)[0]`\r\n\r\nthe input in this demo is x, but what should i do if i want to set two inputs?\r\nthanks for you attention!", "url": "https://github.com/pytorch/tutorials/issues/548", "state": "closed", "labels": [], "created_at": "2019-06-30T11:55:22Z", "updated_at": "2019-07-31T05:11:54Z", "user": "mmmmayi" }, { "repo": "pytorch/examples", "number": 579, "title": "Implementing some C++ examples", "body": "Hi @soumith I want to implement some examples for C++ side. I want to start with an example of loading dataset with OpenCV and another example for RNN. \r\n\r\nThere is this [PR](https://github.com/pytorch/examples/pull/506) but it is not active and the example doesn't contain training. Should I write one myself? ", "url": "https://github.com/pytorch/examples/issues/579", "state": "open", "labels": [ "c++" ], "created_at": "2019-06-27T07:14:40Z", "updated_at": "2022-03-09T20:49:35Z", "comments": 4, "user": "ShahriarRezghi" }, { "repo": "pytorch/pytorch", "number": 22290, "title": "How to define a new backward function in libtorch ?", "body": "## \u2753Is it possible to define a backward function libtorch ?\r\n\r\n### In pytorch, a new backward function can be defined\r\n`\r\n\r\nclass new_function(torch.autograd.Function):\r\n\r\n def ....\r\n\r\n def forward(self,...)\r\n\r\n def backward(self, ...)\r\n\r\n`\r\nHowever, in libtorch, how to define a new backward function? 
\r\n\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/22290", "state": "closed", "labels": [ "module: cpp", "triaged" ], "created_at": "2019-06-27T03:05:20Z", "updated_at": "2019-06-29T05:13:07Z", "user": "buduo15" }, { "repo": "pytorch/pytorch", "number": 22249, "title": "How to convert pytorch0.41 model to CAFFE", "body": "\r\n", "url": "https://github.com/pytorch/pytorch/issues/22249", "state": "closed", "labels": [], "created_at": "2019-06-26T03:25:03Z", "updated_at": "2019-06-26T03:28:12Z", "user": "BokyLiu" }, { "repo": "pytorch/tutorials", "number": 543, "title": "Can I translate this tutorial and make a book?", "body": "Hello sir,\r\nAs the title says, can I translate this whole tutorial into S.Korean and make a book?\r\nI've noticed that this project is BSD licensed but the first thing to do will be asking you for permission.\r\nIt would be a great & fun job for me and I'm in a plan to donate the book royalty to charity.", "url": "https://github.com/pytorch/tutorials/issues/543", "state": "closed", "labels": [], "created_at": "2019-06-24T12:49:48Z", "updated_at": "2019-08-20T11:03:06Z", "comments": 0, "user": "amsukdu" }, { "repo": "pytorch/examples", "number": 578, "title": "DCGAN BatchNorm initialization weight looks different", "body": "Hi there,\r\n\r\nI used the `torch.utils.tensorboard` to watch the weight/grad when training the DCGAN example on MNIST dataset.\r\n\r\nIn the DCGAN example, we use the normal distribution to initialize both the weight of Conv and BatchNorm. However, I find it is strange when I visualize the weight of them. In the following figure, it seems that the `G/main/1/weight` (BatchNorm) is not initialized with the normal distribution because it looks so different from `G/main/0/weight` (ConvTranspose2d). It has been trained for 10 iters with batch size 64.\r\n\r\nCould someone explain this?\r\n\r\n![image](https://user-images.githubusercontent.com/15101533/59962209-e677b480-9514-11e9-8387-827d58fbd791.png)\r\n\r\nThe related tensorboard code is copied from [here](https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/04-utils/tensorboard/main.py):\r\n```python\r\n# logging weight and grads\r\nfor tag, value in netD.named_parameters():\r\n tag = 'D/' + tag.replace('.', '/')\r\n writer.add_histogram(tag, value.data.cpu().numpy(), global_step)\r\n writer.add_histogram(tag+'/grad', value.grad.data.cpu().numpy(), global_step)\r\nfor tag, value in netG.named_parameters():\r\n tag = 'G/' + tag.replace('.', '/')\r\n writer.add_histogram(tag, value.data.cpu().numpy(), global_step)\r\n writer.add_histogram(tag+'/grad', value.grad.data.cpu().numpy(), global_step)\r\n```\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/578", "state": "open", "labels": [ "question" ], "created_at": "2019-06-22T09:51:44Z", "updated_at": "2022-03-10T05:48:52Z", "comments": 0, "user": "daa233" }, { "repo": "pytorch/tutorials", "number": 541, "title": "Sphinx error (builder name data not registered)", "body": "Notebooks for beginner and intermediate tutorials have been generated fine. 
When building advanced tutorial I have problems: 1) need to install torchaudio in a hard way (no easy way, need to compile from sources and hack some command to be completed, but checked - python can import torchaudio); 2) when running \"make data\" I am getting Sphinx error:\r\nmake data\r\nRunning Sphinx v2.1.2\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/site-packages/sphinx/registry.py\", line 145, in preload_builder\r\n entry_point = next(entry_points)\r\nStopIteration\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/site-packages/sphinx/cmd/build.py\", line 283, in build_main\r\n args.tags, args.verbosity, args.jobs, args.keep_going)\r\n File \"/usr/lib/python3.7/site-packages/sphinx/application.py\", line 238, in __init__\r\n self.preload_builder(buildername)\r\n File \"/usr/lib/python3.7/site-packages/sphinx/application.py\", line 315, in preload_builder\r\n self.registry.preload_builder(self, name)\r\n File \"/usr/lib/python3.7/site-packages/sphinx/registry.py\", line 148, in preload_builder\r\n ' through entry point') % name)\r\nsphinx.errors.SphinxError: Builder name data not registered or available through entry point\r\n", "url": "https://github.com/pytorch/tutorials/issues/541", "state": "closed", "labels": [], "created_at": "2019-06-22T01:29:50Z", "updated_at": "2021-06-16T16:27:17Z", "comments": 2, "user": "olegmikul" }, { "repo": "pytorch/tutorials", "number": 539, "title": "how to create custom dataloader for with labels of varying length?", "body": "My image labels are in .csv format and it has labels in varying order.\r\n![Screenshot_2019-06-20 transferl(1)](https://user-images.githubusercontent.com/17276742/59912925-66b7f000-9417-11e9-95a3-1aca37c4d88b.png)\r\n\r\ni have created custom dataset. My labels are in separate csv files and they have varying length.\r\nI have tried making customdataset and then dataloader.\r\nI want to do transfer learning with Resnet. Can anyonehelp, I am a beginner to both pytorch and deep learning.\r\n![Screenshot_2019-06-20 transferl(2)](https://user-images.githubusercontent.com/17276742/59913219-0c6b5f00-9418-11e9-8ce8-84a669796c95.png)\r\n\r\n![Screenshot_2019-06-21 transferl(4)](https://user-images.githubusercontent.com/17276742/59915757-b0a3d480-941d-11e9-94d5-91c726d8d334.png)\r\n\r\n![Screenshot_2019-06-21 transferl(2)](https://user-images.githubusercontent.com/17276742/59914010-b8fa1080-9419-11e9-91fe-2a143de2351d.png)\r\nUpto this step, Everything is fine, However, when i create dataloader,\r\nit shows problem. Can anyone help? i am a beginner.\r\n![Screenshot_2019-06-21 transferl(3)](https://user-images.githubusercontent.com/17276742/59915497-12b00a00-941d-11e9-9632-50cf0121e60e.png)\r\n![Screenshot_2019-06-21 transferl(1)](https://user-images.githubusercontent.com/17276742/59915568-38d5aa00-941d-11e9-86fc-123257ce8ef8.png)\r\n\r\n\r\n", "url": "https://github.com/pytorch/tutorials/issues/539", "state": "closed", "labels": [], "created_at": "2019-06-21T11:33:27Z", "updated_at": "2021-07-30T22:48:21Z", "user": "nisnab" }, { "repo": "pytorch/tutorials", "number": 535, "title": "Under which license are the images?", "body": "Hi,\r\n\r\nUnder which license are the style and content images? 
I want to use your tutorial in an open source repository and wonder if the images won't change the license of my repo.\r\n\r\nThanks!", "url": "https://github.com/pytorch/tutorials/issues/535", "state": "open", "labels": [], "created_at": "2019-06-19T12:21:57Z", "updated_at": "2019-06-19T12:21:57Z", "comments": 0, "user": "HyamsG" }, { "repo": "pytorch/pytorch", "number": 21962, "title": "How to deploy pytorch???", "body": "How to deploy pytorch???", "url": "https://github.com/pytorch/pytorch/issues/21962", "state": "closed", "labels": [], "created_at": "2019-06-19T08:47:10Z", "updated_at": "2019-06-19T14:22:20Z", "user": "yuanjie-ai" }, { "repo": "pytorch/pytorch", "number": 21945, "title": "how to use mkl-dnn after installing by conda?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\nhow to use mkl-dnn after installing by conda \"conda install mkl-dnn\"? And I don`t know what to do in the next step for using mkl-dnn on the pytorch?", "url": "https://github.com/pytorch/pytorch/issues/21945", "state": "closed", "labels": [], "created_at": "2019-06-19T02:53:45Z", "updated_at": "2019-06-20T21:57:03Z", "user": "HITerStudy" }, { "repo": "pytorch/pytorch", "number": 21630, "title": "How to accerlate dataloader?", "body": "how to accelerate dataloader?\r\nWhen we load data like :\r\nfor i\uff0c data in enumerate(dataset):\r\n data.to(gpu)\r\nif we transfer data to gpu, the operation takes a lot of time. \r\nhow can we get data stored in gpu directly? Or any other ways to accelerate the dataloader", "url": "https://github.com/pytorch/pytorch/issues/21630", "state": "closed", "labels": [], "created_at": "2019-06-11T14:19:14Z", "updated_at": "2019-06-11T18:57:33Z", "user": "luhc15" }, { "repo": "pytorch/pytorch", "number": 21583, "title": "where is boradcast.h after installation caffe2", "body": "I'm trying to run cpp program with caffe2 in ubuntu 16.04.\r\nI installed caffe2 according to the official guide and checked that it is working properly.\r\n\r\nIn my case, I linked caffe2 & c10 libs and set include dir as `/usr/local/lib/python*.*/dist-packages/torch/include/`\r\nThen I confronted with below error.\r\n```bash\r\nIn file included from /usr/include/caffe2/utils/filler.h:8:0,\r\n from /usr/include/caffe2/core/operator_schema.h:16,\r\n from /usr/include/caffe2/core/net.h:18\r\n/usr/include/caffe2/utils/math.h:18:41: fatal error: caffe2/utils/math/broadcast.h: No such file or directory\r\n```\r\nIn build progress, the whole directory `/caffe2/core/utils` is omitted. 
Is there any reason?\r\n\r\nIf I can get, any solution to this problem?", "url": "https://github.com/pytorch/pytorch/issues/21583", "state": "closed", "labels": [ "caffe2" ], "created_at": "2019-06-10T08:11:43Z", "updated_at": "2020-04-17T07:52:10Z", "user": "helloahn" }, { "repo": "pytorch/pytorch", "number": 21571, "title": "How to retrieve hidden states for all time steps in LSTM or BiLSTM?", "body": "How to retrieve hidden states for all time steps in LSTM or BiLSTM?", "url": "https://github.com/pytorch/pytorch/issues/21571", "state": "closed", "labels": [], "created_at": "2019-06-09T08:49:29Z", "updated_at": "2019-06-09T22:12:28Z", "user": "gongel" }, { "repo": "pytorch/pytorch", "number": 21551, "title": "How to disable MKL-DNN 64-bit compilation?", "body": "My build from current source on RPi 3B fails because the compilation is selecting the 64-bit option for the Intel MKL-DNN library. Is there an _option/flag_ to disable this selection during the **make** process?\r\n\r\nThanks.\r\n\r\n```\r\n-- MIOpen not found. Compiling without MIOpen support\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 621 0 621 0 0 1408 0 --:--:-- --:--:-- --:--:-- 1411\r\n100 66.4M 100 66.4M 0 0 8019k 0 0:00:08 0:00:08 --:--:-- 9169k\r\nDownloaded and unpacked Intel(R) MKL small libraries to /home/pi/projects/pytorch/third_party/ideep/mkl-dnn/external\r\nCMake Error at third_party/ideep/mkl-dnn/CMakeLists.txt:59 (message):\r\n Intel(R) MKL-DNN supports 64 bit platforms only\r\n```\r\n", "url": "https://github.com/pytorch/pytorch/issues/21551", "state": "closed", "labels": [ "module: build", "triaged", "module: mkldnn" ], "created_at": "2019-06-08T00:25:42Z", "updated_at": "2019-09-10T08:46:33Z", "user": "baqwas" }, { "repo": "pytorch/pytorch", "number": 21477, "title": "Not obvious how to install torchvision with PyTorch source build", "body": "Previously, it used to be possible to build PyTorch from source, and then `pip install torchvision` and get torchvision available. Now that torchvision is binary distributions, this no longer works; to make matters worse, it explodes in non-obvious ways.\r\n\r\nWhen I had an existing install of torchvision 0.3.0, I got this error:\r\n\r\n```\r\nImportError: /scratch/ezyang/pytorch-tmp-env/lib/python3.7/site-packages/torchvision/_C.cpython-37m-x86_64-linux-gnu.so: u\r\nndefined symbol: _ZN3c106Device8validateEv\r\n```\r\n\r\nI reinstalled torchvision with `pip install torchvision`. Then I got this error:\r\n\r\n```\r\n File \"/scratch/ezyang/pytorch-tmp-env/lib/python3.7/site-packages/torchvision/ops/boxes.py\", line 2, in <module>\r\n from torchvision import _C\r\nImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory\r\n```\r\n\r\n(I'm on a CUDA 10 system).\r\n\r\nIn the end, I cloned torchvision and built/installed it from source.", "url": "https://github.com/pytorch/pytorch/issues/21477", "state": "open", "labels": [ "triaged", "module: vision" ], "created_at": "2019-06-06T18:04:12Z", "updated_at": "2019-06-11T22:10:40Z", "user": "ezyang" }, { "repo": "pytorch/pytorch", "number": 21456, "title": "Where is the algorithm for conv being selected?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\r\nHi,\r\nI would like to deploy PyTorch on some specific HW. Problem is it is very slow if the algorithms for conv are not changed. I'd like to rely to some specific algorithms - optimized for the HW. My problem is I cannot figure out where in PyTorch code of conv2d for example the actual algorithm is selected.\r\nThank you.\r\n", "url": "https://github.com/pytorch/pytorch/issues/21456", "state": "closed", "labels": [ "triaged" ], "created_at": "2019-06-06T11:25:27Z", "updated_at": "2019-07-04T14:23:56Z", "user": "snippler" }, { "repo": "pytorch/examples", "number": 571, "title": "why only save the rank 0 model when use distributeddataparallel?", "body": "hi, should we save all the model in different rank when we use the distributeddataparallel? \r\ndifferent rank seems do not share the same bn parameters. do we need to save all the model in different rank? Thanks very much!", "url": "https://github.com/pytorch/examples/issues/571", "state": "open", "labels": [ "distributed" ], "created_at": "2019-06-05T03:30:59Z", "updated_at": "2022-03-10T00:12:25Z", "comments": 0, "user": "v-wewei" }, { "repo": "pytorch/examples", "number": 570, "title": "Why there is no tanh activation at the end of TransformerNet?", "body": "In Fast Style Transfer, it seems like there is no tanh layer at the end of TransformerNet model.\r\nThere is no gurantee that output values fti in certain range e.g. [0, 1] for PIL image.\r\n\r\nUsing only deconv is also valid? How does it work?", "url": "https://github.com/pytorch/examples/issues/570", "state": "open", "labels": [ "good first issue" ], "created_at": "2019-06-03T06:10:19Z", "updated_at": "2022-03-10T00:08:18Z", "comments": 0, "user": "DongHwanJang" }, { "repo": "pytorch/examples", "number": 568, "title": "Use official dataset for ImageNet?", "body": "As of version `0.3.0`, `torchvision` [officially supports](https://github.com/pytorch/vision/blob/v0.3.0/torchvision/datasets/imagenet.py) the `ImageNet` dataset. Do we want to use it in the corresponding example?\r\n\r\n### Cons\r\n\r\n- We break backward compatibility for earlier `torchvision` versions\r\n\r\n### Pros\r\n\r\n- We can add a `download` flag, which automates the download and extraction process without any further user interference\r\n- We can _spread the word_ that this rather famous dataset is now officially supported\r\n\r\nI can add a PR for this if we want to do this.", "url": "https://github.com/pytorch/examples/issues/568", "state": "closed", "labels": [], "created_at": "2019-05-31T13:51:39Z", "updated_at": "2022-03-10T00:34:59Z", "comments": 1, "user": "pmeier" }, { "repo": "pytorch/tutorials", "number": 517, "title": "The pipelined example of model_parallel_tutorial.py got worse performance", "body": "I download the example model_parallel_tutorial.py and execute it on my server. However the results don't match the results in the tutorial. I make sure the two GPUs are dedicate to the example. Is there any missed condition?\r\nNote. 
The two GPUs are Telsa P100 connected with NVLINK.\r\n\r\n![mp_vs_rn](https://user-images.githubusercontent.com/40190145/58627105-fc74c980-8308-11e9-8a8c-24175a0509ae.png)\r\n![split_size_tradeoff](https://user-images.githubusercontent.com/40190145/58627107-fc74c980-8308-11e9-9e72-cf56f612f978.png)\r\n![mp_vs_rn_vs_pp](https://user-images.githubusercontent.com/40190145/58627108-fd0d6000-8308-11e9-8abf-f72908050cb2.png)\r\n@mrshenli ", "url": "https://github.com/pytorch/tutorials/issues/517", "state": "closed", "labels": [], "created_at": "2019-05-30T10:31:21Z", "updated_at": "2019-05-30T13:52:59Z", "comments": 2, "user": "cofiiwu" }, { "repo": "pytorch/pytorch", "number": 21117, "title": "How to open \"USE_LMDB\" by install from source code", "body": "## \ud83d\udcda Documentation\r\nlinux : ubuntu 16.04\r\n\r\nI follow the [doc](https://github.com/pytorch/pytorch#from-source) to install pytorch form source code. But I don't know how to change the install config, I want to open the \"USE_LMDB\".\r\nI run this cmd: \r\n```\r\nexport CMAKE_PREFIX_PATH=${CONDA_PREFIX:-\"$(dirname $(which conda))/../\"}\r\npython setup.py install\r\n```\r\nthen I found the log \"USE_LMDB: OFF\", I try to modify the CMakeLists.txt, and also try\r\n```\r\nUSE_LMDB=ON python setup.py install\r\n```\r\nthe log is also show \"USE_LMDB: OFF\", how can I modify the config ?", "url": "https://github.com/pytorch/pytorch/issues/21117", "state": "closed", "labels": [], "created_at": "2019-05-30T03:32:20Z", "updated_at": "2019-05-31T02:19:32Z", "user": "daohu527" }, { "repo": "pytorch/pytorch", "number": 21060, "title": "What is Mac system requirement to install caffe2?", "body": "Hi, @ezyang @apaszke @neoinmtv @soumith what is the Mac machine requirements to install caffe2?\r\n\r\nI have Mac mini with 500GB storage, 4GB RAM, and Intel Core i5 processor. is it possible to install caffe2 on my Mac mini?\r\n\r\n> OS: MacOS Mojave 10.14.2\r\n> Processor: Intel Core i5 2.5 GHz\r\n> Graphics: Intel HD Graphics 4000 1536 MB\r\n> RAM: 4GB DDR3\r\n> Storage: 500GB SATA", "url": "https://github.com/pytorch/pytorch/issues/21060", "state": "closed", "labels": [], "created_at": "2019-05-29T11:24:04Z", "updated_at": "2019-05-29T23:02:59Z", "user": "notebookdata" }, { "repo": "pytorch/pytorch", "number": 21015, "title": "How to use Infiniband for cpu-cluster with backend gloo?", "body": "Now I'm trying to build pytorch from source for my cpu-cluster with backend gloo.\r\nAfter installing pytorch, I got this information from install summay:\r\n```\r\n -- USE_DISTRIBUTED : True\r\n -- USE_MPI : ON\r\n -- USE_GLOO : ON\r\n -- USE_GLOO_IBVERBS : 1\r\n```\r\nIn my cluster, the network interface \"eno1\" represents Ethernet, and \"ib0\" represents Infiniband.\r\nI set the environment variable `GLOO_SOCKET_IFNAME=eno1`, and distributed pytorch works fine. But when I set `GLOO_SOCKET_IFNAME=ib0`, it will cause some error.\r\n\r\nWhat should I do?\r\nThanks.", "url": "https://github.com/pytorch/pytorch/issues/21015", "state": "open", "labels": [ "oncall: distributed", "triaged" ], "created_at": "2019-05-28T11:41:53Z", "updated_at": "2019-07-26T02:26:22Z", "user": "sth1997" }, { "repo": "pytorch/pytorch", "number": 20993, "title": "The true value is not what it looks like", "body": "## \ud83d\udc1b Bug\r\n\r\nThe true value of the element of tensor seems not to be what it looks like and what it should be. \r\nThe debuger shows that a variable x is 1.0000, while `torch.sqrt(1.0 - torch.pow(x, 2))` is nan and `x > 1` is true. 
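This looks like ordinary floating-point rounding rather than a storage trick: after L2 normalisation, the dot product of a row with itself can land a few ULPs above 1.0, so it prints as 1.0000 while `x > 1` still evaluates to true. The usual guard is to clamp the cosine into [-1, 1] before taking the square root; a minimal sketch with an arbitrary input shape:

```python
import torch

x = torch.randn(8, 128)
x = torch.nn.functional.normalize(x, dim=1)   # rows now have unit norm, up to rounding
cosine = torch.matmul(x, x.t())

# Rounding can push diagonal entries slightly above 1.0, which makes
# sqrt(1 - cos^2) produce NaN, so clamp before the sqrt.
cosine = cosine.clamp(-1.0, 1.0)
sine = torch.sqrt(1.0 - torch.pow(cosine, 2))
print(torch.isnan(sine).any())   # tensor(False)
```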
What's worse, some other 1.0000 variable shows the opposite. They are all produced by computing cos, which means they should be no more than 1.\r\n\r\n## To Reproduce\r\n```\r\nx = _l2_norm(x, 1)\r\ncosine = torch.matmul(x, x.transpose(0, 1))\r\nsine = torch.sqrt(1.0 - torch.pow(cosine, 2))\r\n```\r\n```\r\ndef _l2_norm(input, axis=1):\r\n norm = torch.norm(input, 2, axis, True)\r\n output = torch.div(input, norm)\r\n return output\r\n```\r\n\r\n\r\n## Expected behavior\r\n\r\nThere are some `nan` in diagonal of sine and `cosine < 1` behaves strange.\r\nI am wondering whether there are some tricks about storage I don't know.\r\n\r\n## Environment\r\n\r\nPyTorch version: 0.3.1\r\nIs debug build: No\r\nCUDA used to build PyTorch: 9.0.176\r\n\r\nOS: Ubuntu 16.04.1 LTS\r\nGCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609\r\nCMake version: version 3.5.1\r\n\r\nPython version: 2.7\r\nIs CUDA available: Yes\r\nCUDA runtime version: 9.2.148\r\nGPU models and configuration: \r\nGPU 0: GeForce GTX 1080 Ti\r\nGPU 1: GeForce GTX 1080 Ti\r\nGPU 2: GeForce GTX 1080 Ti\r\nGPU 3: GeForce GTX 1080 Ti\r\nGPU 4: GeForce GTX 1080 Ti\r\nGPU 5: GeForce GTX 1080 Ti\r\nGPU 6: GeForce GTX 1080 Ti\r\nGPU 7: GeForce GTX 1080 Ti\r\n\r\nNvidia driver version: 396.37\r\ncuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.1\r\n\r\nVersions of relevant libraries:\r\n[pip] Could not collect\r\n[conda] blas 1.0 mkl defaults\r\n[conda] cuda92 1.0 0 pytorch\r\n[conda] mkl 2019.1 144 https://mirrors.ustc.edu.cn/anaconda/pkgs/main\r\n[conda] mkl-service 1.1.2 py37h90e4bf4_5 defaults\r\n[conda] mkl_fft 1.0.10 py37h14c3975_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge\r\n[conda] mkl_random 1.0.2 py37h637b7d7_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge\r\n[conda] pytorch 0.4.1 py37_cuda9.2.148_cudnn7.1.4_1 [cuda92] pytorch\r\n[conda] torchfile 0.1.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge\r\n[conda] torchsample 0.1.3 pypi_0 pypi\r\n[conda] torchvision 0.2.1 py37_1000 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge\r\n", "url": "https://github.com/pytorch/pytorch/issues/20993", "state": "closed", "labels": [], "created_at": "2019-05-27T18:33:45Z", "updated_at": "2019-05-29T23:23:43Z", "user": "Woolseyyy" }, { "repo": "pytorch/pytorch", "number": 20988, "title": "How to do matrix multiplication between two 2D sparse directly and quickly", "body": "## \ud83d\ude80 Feature\r\nI want a function like torch.sparse.mm(sparse_matrix1, sparse_matrix2)\r\n\r\n## Motivation\r\nI know torch.sparse.mm(sparse_magrix1, sparse_matrix2.to_dense()) , but this will spend a lot of memory when sparse_matrix2's shape is large.", "url": "https://github.com/pytorch/pytorch/issues/20988", "state": "closed", "labels": [ "module: sparse", "triaged" ], "created_at": "2019-05-27T15:46:32Z", "updated_at": "2021-01-04T17:58:49Z", "user": "Louis-udm" }, { "repo": "pytorch/pytorch", "number": 20902, "title": "How to speed up the TorchScipt code ?", "body": "## \u2753 Questions and Help\r\nHi! Recently I use the TorchScipt Module to produce the serialized model, then, I loaded the pt and predict the input, but I find it still at a low speed? Is that any tools or method to speed up? 
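\r\nFor reference, a minimal timing sketch of how I would check whether the loaded module is really slow in steady state (the file name and input shape below are placeholders, not from my real model). As far as I understand, the first few calls to a loaded ScriptModule also run JIT optimization passes, so I exclude them from the measurement:\r\n```\r\nimport time\r\nimport torch\r\n\r\nmodel = torch.jit.load(\"model.pt\")  # placeholder path\r\nmodel.eval()\r\nx = torch.randn(1, 3, 224, 224)  # placeholder input shape\r\nwith torch.no_grad():\r\n    for _ in range(10):  # warm-up, not timed\r\n        model(x)\r\n    start = time.time()\r\n    for _ in range(100):\r\n        model(x)\r\nprint((time.time() - start) / 100, \"s per forward pass\")\r\n```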
\r\n", "url": "https://github.com/pytorch/pytorch/issues/20902", "state": "closed", "labels": [], "created_at": "2019-05-24T09:01:56Z", "updated_at": "2019-05-24T13:59:03Z", "user": "JiYuanFeng" }, { "repo": "pytorch/pytorch", "number": 20705, "title": "How to reserve negative values of features extracting by register_forward_hook?", "body": "I am trying to extract features of a certain layer of a pretrained model. The fellowing code does work, however, the values of template_feature_map changed and I did nothing of it.\r\n\r\n vgg_feature = models.vgg13(pretrained=True).features\r\n template_feature_map=[]\r\n def save_template_feature_map(self, input, output):\r\n template_feature_map.append(output.detach())\r\n print(template_feature_map)\r\n template_handle = vgg_feature[5].register_forward_hook(save_template_feature_map)\r\n vgg_feature(template[0])\r\n print(template_feature_map)\r\n\r\nThe output of 6th layer of the model should have negative values, as first print(template_feature_map) shows. But, the negative values which should maintain in second print(template_feature_map) are changed to zeros, I don\u2019t know why. If you know the mechanism of this, please tell me how to keep the negative values.\r\n\r\nThe output of two print(template_feature_map):\r\n\r\n` [tensor([[[[-5.7389e-01, -2.7154e+00, -4.0990e+00, ..., 4.1902e+00,\r\n 3.1757e+00, 2.2461e+00],\r\n [-2.2217e+00, -4.3395e+00, -6.8158e+00, ..., -1.4454e+00,\r\n 9.8012e-01, -2.3653e+00],\r\n [-4.1940e+00, -6.3235e+00, -6.8422e+00, ..., -2.8329e+00,\r\n 2.5570e+00, -2.7704e+00],\r\n ...,\r\n [-3.3250e+00, 1.3792e-01, 5.4926e+00, ..., -4.1722e+00,\r\n -6.1008e-01, -2.6037e+00],\r\n [ 1.5377e+00, 6.0671e-01, 2.0974e+00, ..., 1.2441e+00,\r\n 1.5033e+00, -2.7246e+00],\r\n [ 6.8857e-01, -3.5160e-02, 6.7858e-01, ..., 1.2052e+00,\r\n 1.4533e+00, -1.4160e+00]],\r\n\r\n [[ 6.8798e-01, 1.6971e+00, 2.1629e+00, ..., 3.1701e-01,\r\n 8.5424e-01, 2.8768e+00],\r\n [ 1.4013e+00, 2.7217e+00, 2.1476e+00, ..., 3.1156e+00,\r\n 4.4858e+00, 3.6936e+00],\r\n [ 3.1807e+00, 2.2245e+00, 2.4665e+00, ..., 1.3838e+00,\r\n 1.0580e-02, -3.1445e-03],\r\n ...,\r\n [-4.7298e+00, -3.3037e+00, -1.2982e+00, ..., 2.3266e-01,\r\n 6.7711e+00, 3.8166e+00],\r\n [-4.7972e+00, -5.4591e+00, -2.5201e+00, ..., 3.7584e+00,\r\n 5.1524e+00, 2.3072e+00],\r\n [-2.4306e+00, -2.8033e+00, -2.0912e+00, ..., 1.9888e+00,\r\n 2.0582e+00, 1.9266e+00]],\r\n\r\n [[-4.4257e+00, -4.6331e+00, -3.3580e-03, ..., -8.2233e+00,\r\n -7.4645e+00, -1.7361e+00],\r\n [-4.5593e+00, -8.4195e+00, -8.8428e+00, ..., -6.7950e+00,\r\n -1.4665e+01, -2.5335e+00],\r\n [-2.3481e+00, -3.8543e+00, -3.5965e+00, ..., -1.5105e+00,\r\n -1.6923e+01, -5.9852e+00],\r\n ...,\r\n [-8.0165e+00, 8.0185e+00, 6.5506e+00, ..., 5.3241e+00,\r\n 3.3854e+00, -1.6342e+00],\r\n [-1.3689e+01, -2.2930e+00, 4.7097e+00, ..., 3.2021e+00,\r\n 2.9208e+00, -8.0228e-01],\r\n [-1.3055e+01, -1.1470e+01, -8.4442e+00, ..., 1.8155e-02,\r\n -6.2866e-02, -2.0333e+00]],\r\n\r\n ...,\r\n\r\n [[ 3.4622e+00, -1.2417e+00, -5.0749e+00, ..., 5.3184e+00,\r\n 1.4744e+01, 8.3968e+00],\r\n [-2.7820e+00, -9.1911e+00, -1.1069e+01, ..., 2.5380e+00,\r\n 9.8336e+00, 4.0623e+00],\r\n [-3.9794e+00, -1.0140e+01, -9.9133e+00, ..., 3.0999e+00,\r\n 5.5936e+00, 2.5775e+00],\r\n ...,\r\n [ 2.0299e+00, 2.1304e-01, -2.2307e+00, ..., 1.1388e+01,\r\n 8.8098e+00, 1.8991e+00],\r\n [ 8.0663e-01, -1.5073e+00, 3.3977e-01, ..., 8.5316e+00,\r\n 4.9923e+00, -3.6818e-01],\r\n [-3.5146e+00, -7.2647e+00, -5.4331e+00, ..., -1.9781e+00,\r\n -3.4463e+00, -4.9034e+00]],\r\n\r\n 
[[-3.2915e+00, -7.3263e+00, -6.8458e+00, ..., 2.3122e+00,\r\n 9.7774e-01, -1.3498e+00],\r\n [-4.5396e+00, -8.6832e+00, -8.8582e+00, ..., 7.1535e-02,\r\n -4.1133e+00, -4.4045e+00],\r\n [-4.8781e+00, -7.0239e+00, -4.7350e+00, ..., -3.6954e+00,\r\n -9.6687e+00, -8.8289e+00],\r\n ...,\r\n [-4.7072e+00, -4.4823e-01, 1.7099e+00, ..., 3.7923e+00,\r\n 1.6887e+00, -4.3305e+00],\r\n [-5.5120e+00, -3.2324e+00, 2.3594e+00, ..., 4.6031e+00,\r\n 1.8856e+00, -4.0147e+00],\r\n [-5.1355e+00, -5.5335e+00, -1.7738e+00, ..., 1.6159e+00,\r\n -1.3950e+00, -4.1055e+00]],\r\n\r\n [[-2.0252e+00, -2.3971e+00, -1.6477e+00, ..., -3.3740e+00,\r\n -4.9965e+00, -2.1219e+00],\r\n [-7.6059e-01, -3.3901e-01, -1.8980e-01, ..., -4.3286e+00,\r\n -7.1350e+00, -3.9186e+00],\r\n [ 8.4101e-01, 1.3403e+00, 2.5821e-01, ..., -5.1847e+00,\r\n -7.1829e+00, -3.7724e+00],\r\n ...,\r\n [-6.0619e+00, -5.6475e+00, -1.6446e+00, ..., -9.2322e+00,\r\n -9.1981e+00, -5.5239e+00],\r\n [-7.4606e+00, -7.6054e+00, -5.8401e+00, ..., -7.6998e+00,\r\n -6.4111e+00, -2.9374e+00],\r\n [-6.4147e+00, -7.2813e+00, -6.1880e+00, ..., -4.6726e+00,\r\n -3.1090e+00, -7.8383e-01]]]])]\r", "url": "https://github.com/pytorch/pytorch/issues/20705", "state": "closed", "labels": [], "created_at": "2019-05-20T17:04:31Z", "updated_at": "2019-05-20T18:40:27Z", "user": "iminfine" }, { "repo": "pytorch/pytorch", "number": 20627, "title": "How to transfer tf.layers.dense to pytorch?", "body": "How to transfer tf.layers.dense to pytorch?\r\n\r\n~~~\r\ntf.layers.dense(post_outputs, hp.num_freq)\r\n~~~", "url": "https://github.com/pytorch/pytorch/issues/20627", "state": "closed", "labels": [], "created_at": "2019-05-17T04:40:19Z", "updated_at": "2019-05-17T04:55:15Z", "user": "DonggeunYu" }, { "repo": "pytorch/pytorch", "number": 20566, "title": " What is a C ++ torch api similar to the registor_hook function in Python?", "body": "I want to know the backward grdient value of a particular layer.\r\nIn Python, there is a function called registor_hook. C ++ does not have the same function. \r\nIs there a similar method?", "url": "https://github.com/pytorch/pytorch/issues/20566", "state": "closed", "labels": [], "created_at": "2019-05-16T02:27:37Z", "updated_at": "2019-05-16T11:39:28Z", "user": "Navifra-Kerry" }, { "repo": "pytorch/examples", "number": 557, "title": "[ImageNet] Where is the checkpoint and best model?", "body": "I ran `main.py` as follows:\r\n\r\n`python main.py -a resnet50 --dist-url 'tcp://127.0.0.1:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0 [imagenet-folder with train and val folders]`\r\n\r\nI can't find any ckpt files or model files in the current directory. \r\n\r\nSomebodies have the same problem. See the latest replies of this issue:\r\nhttps://github.com/pytorch/examples/issues/292", "url": "https://github.com/pytorch/examples/issues/557", "state": "closed", "labels": [], "created_at": "2019-05-10T08:36:12Z", "updated_at": "2022-03-10T05:12:16Z", "user": "dqgdqg" }, { "repo": "pytorch/pytorch", "number": 20316, "title": "What is the meaning of such a formula in some functions", "body": "## \ud83d\udcda Documentation\r\nWhat is the meaning of such a formula in some functions\uff1f Just like the formula in the following picture\r\nmath::\r\n v = \\frac{v}{\\max(\\lVert v \\rVert_p, \\epsilon)}.\r\n![image](https://user-images.githubusercontent.com/22348625/57455073-e8efb900-729c-11e9-895f-588ec050ed4b.png)\r\nIs my chrome lack of some Plug-ins so they can not show correctly?? 
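\r\n\r\nIn case it helps anyone else who lands here: as far as I can tell the `math::` block is just LaTeX that the page failed to render (it looks like the formula from the `normalize` docs), and it describes dividing v by the larger of its p-norm and epsilon. A rough sketch of the same computation in plain PyTorch, assuming p=2 along dim=1:\r\n```\r\nimport torch\r\nimport torch.nn.functional as F\r\n\r\nv = torch.randn(4, 8)\r\neps = 1e-12\r\n# v / max(||v||_p, eps), computed row-wise for p = 2\r\nmanual = v / v.norm(p=2, dim=1, keepdim=True).clamp(min=eps)\r\nprint(torch.allclose(manual, F.normalize(v, p=2, dim=1)))  # expected: True\r\n```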
", "url": "https://github.com/pytorch/pytorch/issues/20316", "state": "closed", "labels": [], "created_at": "2019-05-09T13:01:12Z", "updated_at": "2019-05-09T14:22:28Z", "user": "heslowen" }, { "repo": "pytorch/pytorch", "number": 20271, "title": "Official instructions for how to build libtorch don't have same structure as prebuilt binaries", "body": "On Slack, Geoffrey Yu asked:\r\n\r\n> Are there instructions for building libtorch from source? I feel like I'm missing something since I've tried building with `tools/build_libtorch.py`. However the build output doesn't seem to have the same structure as the prebuilt libtorch that you can download on pytorch.org\r\n\r\n@pjh5 responded: \"If you're curious, here's exactly what builds the libtorches https://github.com/pytorch/builder/blob/master/manywheel/build_common.sh#L120 . It's mostly tools/build_libtorch.py but also copies some header files from a wheel file\"\r\n\r\nThis is not mentioned at all in the \"how to build libtorch\" documentation: https://github.com/pytorch/pytorch/blob/master/docs/libtorch.rst Normally we give build instructions in README but there are no libtorch build instructions in the README. Additionally, the C++ API docs https://pytorch.org/cppdocs/ don't explain how to build from source.\r\n\r\nSome more users being confused about the matter:\r\n* https://discuss.pytorch.org/t/building-libtorch-c-distribution-from-source/27519/2\r\n* https://github.com/pytorch/pytorch/issues/20156\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/20271", "state": "closed", "labels": [ "high priority", "module: binaries", "module: build", "module: docs", "module: cpp", "triaged" ], "created_at": "2019-05-08T13:02:51Z", "updated_at": "2019-05-30T19:52:28Z", "user": "ezyang" }, { "repo": "pytorch/pytorch", "number": 20090, "title": "How to add dynamically allocated strings to Pickler?", "body": "The following code prints `111` and `111`, instead of `222` and `111`, because `222` is skipped [here](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/pickler.cpp#L68). Is this by design as Pickler only works for statically allocated strings? Or is there a way to correctly add dynamically allocated strings? (and all other types listed [here](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/pickler.cpp#L104-L114))? \r\n\r\n```c++\r\n std::string str1 = \"111\";\r\n std::string str2 = \"222\";\r\n\r\n std::vector<at::Tensor> tensor_table;\r\n torch::jit::Pickler pickler(&tensor_table);\r\n pickler.start();\r\n pickler.addIValue(str1);\r\n pickler.addIValue(str2);\r\n pickler.finish();\r\n\r\n auto buffer = new char[pickler.stack().size()];\r\n memcpy(buffer, pickler.stack().data(), pickler.stack().size());\r\n\r\n torch::jit::Unpickler unpickler(buffer, pickler.stack().size(), &tensor_table);\r\n auto values = unpickler.parse_ivalue_list();\r\n std::cout << values.back().toStringRef() << std::endl;\r\n values.pop_back();\r\n std::cout << values.back().toStringRef() << std::endl;\r\n values.pop_back();\r\n```\r\n\r\ncc @zdevito ", "url": "https://github.com/pytorch/pytorch/issues/20090", "state": "closed", "labels": [ "oncall: jit", "triaged" ], "created_at": "2019-05-03T04:43:08Z", "updated_at": "2019-05-17T21:45:41Z", "user": "mrshenli" }, { "repo": "pytorch/examples", "number": 554, "title": "Where is the hook?", "body": "On the tutorial, I see it says this is an example of hook. 
So where is the hook?", "url": "https://github.com/pytorch/examples/issues/554", "state": "closed", "labels": [], "created_at": "2019-04-30T11:38:40Z", "updated_at": "2019-05-27T21:00:53Z", "user": "yanbixing" }, { "repo": "pytorch/pytorch", "number": 19908, "title": "c++/pytorch How to convert tensor to image array?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\nI would like to convert a tensor to image array and use tensor.data<short>() method. But it doesn't work. \r\n\r\nMy function is showed below:\r\n```\r\n#include <torch/script.h> // One-stop header.\r\n\r\n#include <iostream>\r\n#include <memory>\r\n#include <sstream>\r\n#include <string>\r\n#include <vector>\r\n\r\n#include \"itkImage.h\"\r\n#include \"itkImageFileReader.h\"\r\n#include \"itkImageFileWriter.h\"\r\n#include \"itkImageRegionIterator.h\"\r\n\r\n//////////////////////////////////////////////////////\r\n//Goal: load jit script model and segment myocardium\r\n//Step: 1. load jit script model\r\n// 2. load input image\r\n// 3. predict by model\r\n// 4. save the result to file\r\n//////////////////////////////////////////////////////\r\ntypedef short \t\t\t\tPixelType;\r\nconst unsigned int Dimension = 3;\r\ntypedef itk::Image<PixelType, Dimension> \t\t\t\tImageType;\r\ntypedef itk::ImageFileReader<ImageType> \t\t\t\tReaderType;\r\ntypedef itk::ImageRegionIterator<ImageType> \t\t\t IteratorType;\r\n\r\nbool itk2tensor(ImageType::Pointer itk_img, torch::Tensor &tensor_img) {\r\n\t\r\n\ttypename ImageType::RegionType region = itk_img->GetLargestPossibleRegion();\r\n\tconst typename ImageType::SizeType size = region.GetSize();\r\n\tstd::cout << \"Input size: \" << size[0] << \", \" << size[1]<< \", \" << size[2] << std::endl;\r\n\r\n\tint len = size[0] * size[1] * size[2];\r\n\tshort rowdata[len];\r\n\tint count = 0;\r\n\tIteratorType iter(itk_img, itk_img->GetRequestedRegion());\r\n\t\r\n\t// convert itk to array\r\n\tfor (iter.GoToBegin(); !iter.IsAtEnd(); ++iter) {\r\n\t\trowdata[count] = iter.Get();\r\n\t\tcount++;\r\n\t}\r\n\tstd::cout << \"Convert itk to array DONE!\" << std::endl;\r\n\r\n\t// convert array to tensor\r\n\ttensor_img = torch::from_blob(rowdata, {1, 1, (int)size[0], (int)size[1], (int)size[2]}, torch::kShort).clone();\r\n\ttensor_img = tensor_img.toType(torch::kFloat);\r\n\ttensor_img = tensor_img.to(torch::kCUDA);\r\n\ttensor_img.set_requires_grad(0);\r\n\r\n\treturn true;\r\n}\r\n\r\n\r\nbool tensor2itk(torch::Tensor &t, ImageType::Pointer itk_img) {\r\n\r\n\tstd::cout << \"tensor dtype = \" << t.dtype() << std::endl;\r\n\tstd::cout << \"tensor size = \" << t.sizes() << std::endl;\r\n\tt = t.toType(torch::kShort);\r\n\tshort * array = t.data<short>();\r\n\r\n\tImageType::IndexType start;\r\n\tstart[0] = 0; // first index on X\r\n\tstart[1] = 0; // first index on Y\r\n\tstart[2] = 0; // first index on Z\r\n\r\n\tImageType::SizeType size;\r\n\tsize[0] = t.size(2);\r\n\tsize[1] = t.size(3);\r\n\tsize[2] = t.size(4);\r\n\r\n\tImageType::RegionType region;\r\n\tregion.SetSize( size );\r\n\tregion.SetIndex( start );\r\n\r\n\titk_img->SetRegions( region );\r\n\titk_img->Allocate();\r\n\r\n\tint len = size[0] * size[1] * size[2];\r\n\r\n\tIteratorType iter(itk_img, itk_img->GetRequestedRegion());\r\n\tint count = 0;\r\n\t// convert array to itk\r\n\tstd::cout << \"start!\" << std::endl;\r\n\tfor (iter.GoToBegin(); !iter.IsAtEnd(); ++iter) {\r\n\t\tshort temp = *array++; // ERROR!\r\n\t\tstd::cout << temp << \" 
\";\r\n\t\titer.Set(temp);\r\n\t\tcount++;\r\n\t}\r\n\tstd::cout << \"end!\" << std::endl;\r\n\r\n\treturn true;\r\n}\r\n\r\n\r\nint main(int argc, const char* argv[]) {\r\n\tint a, b, c;\r\n\tif (argc != 4) {\r\n\t\tstd::cerr << \"usage: automyo input jitmodel output\\n\";\r\n\t\treturn -1;\r\n\t}\r\n\r\n\tstd::cout << \"========= jit start =========\\n\";\r\n\t// 1. load jit script model\r\n\tstd::cout << \"Load script module: \" << argv[2] << std::endl;\r\n\tstd::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[2]);\r\n\tmodule->to(at::kCUDA);\r\n\r\n\t// assert(module != nullptr);\r\n\tstd::cout << \"Load script module DONE\" << std::endl;\r\n\r\n\t// 2. load input image\r\n\tconst char* img_path = argv[1];\r\n\tstd::cout << \"Load image: \" << img_path << std::endl;\r\n\r\n\tReaderType::Pointer reader = ReaderType::New();\r\n\r\n\tif (!img_path) {\r\n\t\tstd::cout << \"Load input file error!\" << std::endl;\r\n\t\treturn false;\r\n\t}\r\n\r\n\treader->SetFileName(img_path);\r\n\treader->Update();\r\n\r\n\tstd::cout << \"Load image DONE!\" << std::endl;\r\n\r\n\tImageType::Pointer itk_img = reader->GetOutput();\r\n\r\n\ttorch::Tensor tensor_img;\r\n\tif (!itk2tensor(itk_img, tensor_img)) {\r\n\t\tstd::cerr << \"itk2tensor ERROR!\" << std::endl;\r\n\t}\r\n\telse {\r\n\t\tstd::cout << \"Convert array to tensor DONE!\" << std::endl;\r\n\t}\r\n\r\n\tstd::vector<torch::jit::IValue> inputs;\r\n\tinputs.push_back(tensor_img);\r\n\r\n\t// 3. predict by model\r\n\ttorch::Tensor y = module->forward(inputs).toTensor();\r\n\tstd::cout << \"Inference DONE!\" << std::endl;\r\n\r\n\t// 4. save the result to file\r\n\ttorch::Tensor seg = y.gt(0.5);\r\n\t// std::cout << seg << std::endl;\r\n\r\n\tImageType::Pointer out_itk_img = ImageType::New();\r\n\tif (!tensor2itk(seg, out_itk_img)) {\r\n\t\tstd::cerr << \"tensor2itk ERROR!\" << std::endl;\r\n\t}\r\n\telse {\r\n\t\tstd::cout << \"Convert tensor to itk DONE!\" << std::endl;\r\n\t}\r\n\r\n\tstd::cout << out_itk_img << std::endl;\r\n\r\n\treturn true;\r\n}\r\n```\r\n\r\nThe runtime log is showed below:\r\n\r\n\r\n> Load script module: model_myo_jit.pt\r\n> Load script module DONE\r\n> Load image: patch_6.nii.gz\r\n> Load image DONE!\r\n> Input size: 128,", "url": "https://github.com/pytorch/pytorch/issues/19908", "state": "closed", "labels": [], "created_at": "2019-04-29T07:49:29Z", "updated_at": "2019-04-29T09:58:24Z", "user": "JingLiRaysightmed" }, { "repo": "pytorch/pytorch", "number": 19822, "title": " How to use torch.tensor(n) in Python3 to adapt to \u2019 at::TensorImpl\u2018 ", "body": "## \u2753 Questions and Help\r\nHi ,when nms_cpp compiled by cpp_extension ,it didnt work but work in pytorch0.4.0.: \r\n\r\n \r\n## TypeError: gpu_nms(): incompatible function arguments. The following argument types are supported:\r\n 1. 
(arg0: at::TensorImpl, arg1: at::TensorImpl, arg2: at::TensorImpl, arg3:\r\nfloat) -> int\r\n## Invoked with: \r\ntensor([ 2.5353e+09, 2.5238e+09, -4.5295e+18, ..., 4.7854e+18,\r\n 4.7424e+18, 4.7895e+18]), tensor([ 566]), \r\ntensor([[ 146.1686, 111.1691, 242.2774, 288.5695, 0.8267],\r\n [ 144.7030, 108.2768, 244.0824, 282.2564, 0.8234],\r\n [ 144.5566, 110.4112, 243.3897, 283.4086, 0.8225],\r\n ...,\r\n [ 100.9274, 81.2732, 155.0707, 130.5494, 0.0500],\r\n [ 0.0000, 185.7541, 47.3124, 276.2884, 0.0500],\r\n [ 4.5178, 57.4754, 37.1159, 115.0753, 0.0500]], device='cuda:0\r\n'), // \r\n0.5\r\n## cpp file:\r\n## Code\r\n// ------------------------------------------------------------------\r\n// Faster R-CNN\r\n// Copyright (c) 2015 Microsoft\r\n// Licensed under The MIT License [see fast-rcnn/LICENSE for details]\r\n// Written by Shaoqing Ren\r\n// ------------------------------------------------------------------\r\n#include <torch/script.h>\r\n#include<torch/serialize/tensor.h>\r\n#include <THC/THC.h>\r\n#include <ATen/ATen.h>//state\r\n#include <TH/TH.h>\r\n#include <THC/THCTensorCopy.h>\r\n//#include <TH/generic/THTensorCopy.h>\r\n#include <THC/generic/THCTensorCopy.h>//generic/THCTensorCopy.h\r\n#include <THC/THCTensorCopy.hpp>\r\n#include <math.h>\r\n#include <stdio.h>\r\n\r\n#include <cstddef>\r\n\r\n#include <torch/torch.h>\r\n#include <torch/script.h>\r\n\r\n#include \"cuda/nms_kernel2.h\"\r\n#include \"nms.h\"\r\n\r\n//src/nms_cuda.cpp(27): error C2440: \u201c\u521d\u59cb\u5316\u201d: \u65e0\u6cd5\u4ece\u201c\r\n//std::unique_ptr<THCState,void (__cdecl *)(THCState *)>\r\n//THCState *state = at::globalContext().thc_state;\r\n//std::unique_ptr<THCState,void (__cdecl *)(THCState *)> state= at::globalContext().thc_state;\r\n\t\r\n THCState *state;\r\n\r\nint gpu_nms(THLongTensor * keep, THLongTensor* num_out, THCudaTensor * boxes, float nms_overlap_thresh) {\r\n // boxes has to be sorted\r\n THArgCheck(THLongTensor_isContiguous(keep), 0, \"boxes must be contiguous\");\r\n THArgCheck(THCudaTensor_isContiguous(state, boxes), 2, \"boxes must be contiguous\");\r\n \r\n // Number of ROIs\r\n int64_t boxes_num = THCudaTensor_size(state, boxes, 0);\r\n int64_t boxes_dim = THCudaTensor_size(state, boxes, 1);\r\n\r\n float* boxes_flat = THCudaTensor_data(state, boxes);\r\n\r\n const int64_t col_blocks = DIVUP(boxes_num, threadsPerBlock);\r\n printf(\"100,%d,%d ,%d ,%d \" , *state,boxes_num, boxes_dim, col_blocks);\r\n //, *state\r\n THCudaLongTensor * mask = THCudaLongTensor_newWithSize2d(state, boxes_num, col_blocks);\r\n//#unsigned\r\n unsigned long long* mask_flat = (unsigned long long* )THCudaLongTensor_data(state, mask);\r\n\r\n//_mns from \r\n _nms(boxes_num, boxes_flat, mask_flat, nms_overlap_thresh);\r\n\r\n\r\n THLongTensor * mask_cpu = THLongTensor_newWithSize2d(boxes_num, col_blocks);\r\n //THCudaTensor_copyFloat\r\n //THLongTensor_copyCuda(state, mask_cpu, mask); #no found\r\n //THCTensor_(copyAsyncCPU)\r\n //THTensor_copyCuda(state, mask_cpu, mask);\r\n //THLongTensor_copyCudaLong(state, mask_cpu, mask);\r\n //not found cu file\r\n //THCStorage_copyCudaLong(state, mask_cpu, mask);#\r\n //THCTensor_copy(state, mask_cpu, mask);\r\n //THCudaTensor_copyLong(state, mask_cpu, mask); \r\n \r\n //THLongTensor_copyCudaLong(state, mask_cpu, mask);\r\n //copy_from_cpu(state, mask_cpu, mask);\r\n //ok mask 2 mask_cpu \r\n //\r\n THCudaLongTensor_freeCopyTo(state, mask_cpu, mask); \r\n //Copy_Long(state, mask_cpu, mask); \r\n //copyAsyncCuda \r\n //THTensor_copyLong(state, mask_cpu, mask);\r\n 
\r\n THCudaLongTensor_free(state, mask);\r\n \r\n//unsigned\r\n long long * mask_cpu_flat = THLongTensor_data(mask_cpu);\r\n\r\n THLongTensor * remv_cpu = THLongTensor_newWithSize1d(col_blocks);\r\n //unsigned\r\n long long* remv_cpu_flat = THLongTensor_data(remv_cpu);\r\n THLongTensor_fill(remv_cpu, 0);\r\n\r\n int64_t * keep_flat = THLongTensor_data(keep);\r\n long num_to_keep = 0;\r\n\r\n int i, j;\r\n for (i = 0; i < boxes_num; i++) {\r\n int nblock = i / threadsPerBlock;\r\n int inblock = i % threadsPerBlock;\r\n\r\n if (!(remv_cpu_flat[nblock] & (1ULL << inblock))) {\r\n keep_flat[num_to_keep++] = i;\r\n long long *p = &mask_cpu_flat[0] + i * col_blocks;\r\n for (j = nblock; j < col_blocks; j++) {\r\n remv_cpu_flat[j] |= p[j];\r\n }\r\n }\r\n }\r\n\r\n int64_t * num_out_flat = THLongTensor_data(num_out);\r\n * num_out_flat = num_to_keep;\r\n\r\n THLongTensor_free(mask_cpu);\r\n THLongTensor_free(remv_cpu);\r\n\r\n \r\n //return 1; \r\n return num_to_keep;\r\n}\r\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\r\n m.def(\"cpu_nms\", &cpu_nms, \"nms cpu_nms \");\r\n\r\n m.def(\"gpu_nms\", &gpu_nms, \"nms gpu_nms (CUDA)\");\r\n\r\n}\r\n\r\n", "url": "https://github.com/pytorch/pytorch/issues/19822", "state": "closed", "labels": [], "created_at": "2019-04-27T08:40:06Z", "updated_at": "2019-04-28T07:42:41Z", "user": "liuchanfeng165" }, { "repo": "pytorch/pytorch", "number": 19744, "title": "How to select cl.exe for a config of cpp_extension?", "body": "## \u2753 Questions and Help\r\nHi, I got this error and dont wanna change vs15 again because the compiler keeps in OS,if I use cl.exe of VS2015 not VS2017 to compile my cpp_extension by setup.py and how to modify setup.py\r\n![image](https://user-images.githubusercontent.com/18642811/56752176-46bed400-67ba-11e9-8852-072930b90dde.png)\r\n## setup.py\r\nfrom setuptools import setup\r\nimport os\r\n#import torch\r\nfrom torch.utils.cpp_extension import BuildExtension, CUDAExtension\r\n\r\n_ext_src_root= ['E:/Program Files (x86)/Microsoft Visual Studio 14.0/VC'] #\r\ncx_path= 'E:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64'\r\ncc = os.environ.get('CC', cx_path+'/cl.exe')\r\ncxx = os.environ.get('CXX', cx_path+'/cl.exe')\r\ncl=os.environ.get('cl', cx_path+'/cl.exe')\r\n\r\nprint(cxx)\r\nvs_bin= '%VS140COMNTOOLS%/../../VC/bin/amd64'\r\n#nvcc sets --compiler-bindir for compiler ; -ccbin\r\nsetup(\r\n name='lltm_cuda',\r\n ext_modules=[\r\n CUDAExtension('lltm_cuda', [\r\n 'lltm_cuda.cpp',\r\n 'lltm_cuda_kernel.cu',\r\n ],\r\n # include_dirs=torch.utils.cpp_extension.include_paths(),\r\n extra_compile_args={'cxx': ['-g'],\r\n 'nvcc': ['-O2' , '--compiler-bindir' \" {}\".format(cx_path+'/cl.exe')]}\r\n #extra_cflags\r\n # extra_compile_args ={\r\n # \"cxx\": [\"-O2\", \"-I{}\".format(\"{}/include\".format(_ext_src_root))],\r\n # # \"nvcc\": [\"-O2\", \"-I{}\".format(\"{}/include\".format(_ext_src_root))],\r\n # \"cl\": [\"-O2\", \"-I{}\".format(\"{}/include\".format(_ext_src_root))],\r\n # },\r\n\r\n ),\r\n ],\r\n cmdclass={\r\n 'build_ext': BuildExtension\r\n })\r\n## add list\r\njust to compile ' .cu' not '.cpp ', mabe modify there or not?\r\nline 240 in cpp_extension.py :+1: \r\n # Register .cu and .cuh as valid source extensions.\r\n self.compiler.src_extensions += ['.cu', '.cuh']\r\n # Save the original _compile method for later.\r\n if self.compiler.compiler_type == 'msvc':\r\n self.compiler._cpp_extensions += ['.cu', '.cuh']\r\n original_compile = self.compiler.compile\r\n original_spawn = self.compiler.spawn\r\n else:\r\n 
original_compile = self.compiler._compile\r\n\r\n## Thanks a lot", "url": "https://github.com/pytorch/pytorch/issues/19744", "state": "closed", "labels": [], "created_at": "2019-04-25T16:29:54Z", "updated_at": "2019-04-26T08:28:24Z", "user": "liuchanfeng165" }, { "repo": "pytorch/pytorch", "number": 19611, "title": "How to understand the results of model. Eval () and how to obtain the predictive probability value?", "body": "Hello everyone\r\nI've been using tensorflow before, but I met torch when I added functionality to a tool. My ultimate goal was to get the classification probability. On a binary classification problem, I used model. Eval () (inputs). numpy () to get the prediction results.\r\n\r\nlike this \r\n2.19903\t-2.06323\r\n2.22841\t-2.09061\r\n2.20833\t-2.07209\r\n2.22888\t-2.09125\r\n2.22644\t-2.08869\r\n\r\nI don't know how to convert it to probability, or should I use other commands to get classification probability?\r\n\r\nI hope I can get help. Thank you.", "url": "https://github.com/pytorch/pytorch/issues/19611", "state": "closed", "labels": [ "triaged" ], "created_at": "2019-04-23T09:29:27Z", "updated_at": "2019-04-23T19:26:02Z", "user": "xujiameng" }, { "repo": "pytorch/pytorch", "number": 19561, "title": "How to do prediction/inference for a batch of images at a time with libtorch?", "body": "Anybody knows how to do prediction/inference for a batch of images at a time with libtorch/pytorch C++?\r\nany reply would be appreciated, thank you!\r\n", "url": "https://github.com/pytorch/pytorch/issues/19561", "state": "closed", "labels": [], "created_at": "2019-04-22T08:40:04Z", "updated_at": "2019-04-22T20:59:27Z", "user": "asa008" }, { "repo": "pytorch/examples", "number": 547, "title": "where can I get the inference code for classification?", "body": "I have trained the resnet-18 model for classification on my own dataset with examples/imagenet/main.py. And now I want to infernece the images, but there is no inference code.", "url": "https://github.com/pytorch/examples/issues/547", "state": "closed", "labels": [], "created_at": "2019-04-22T03:20:25Z", "updated_at": "2019-04-24T10:50:50Z", "comments": 2, "user": "ShaneYS" }, { "repo": "pytorch/pytorch", "number": 19453, "title": "How to load PyTorch model with LSTM using C++ api", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Establish a PyTorch model with LSTM module using python, and store the script module after using torch.jit.trace. Python code like this:\r\n\r\n```python\r\nclass MyModule(nn.Module):\r\n def __init__(self, N, M):\r\n super(MyModule, self).__init__()\r\n self.lstm = nn.LSTM(M, M, batch_first=True)\r\n self.linear = nn.Linear(M, 1)\r\n\r\n def forward(self, inputs, h0, c0):\r\n\r\n output, (_, _) = self.lstm(inputs, h0, c0)\r\n output, _ = torch.max(output, dim=1)\r\n # output, _ = torch.max(inputs, dim=1)\r\n output = self.linear(output)\r\n return output\r\n\r\nbatch_size = 8\r\nh = 33\r\nw = 45\r\nmodel = MyModule(h, w)\r\ndata = np.random.normal(1, 1, size=(batch_size, h, w))\r\ndata = torch.Tensor(data)\r\nh0, c0 = torch.zeros(1, batch_size, w), torch.zeros(1, batch_size, w)\r\n\r\ntraced_script_module = torch.jit.trace(model, (data, h0,c0))\r\ntraced_script_module.save('model.pt')\r\n```\r\n\r\n\r\n\r\n2. Load the model and move the model to GPU, then when the script exit, there is a core dump. 
However, If we don't move the model to gpu, the cpp script exits normally.My cpp script like this:\r\n\r\n```c++\r\nint main(int argc, const char* argv[]) {\r\n if (argc != 2) {\r\n std::cerr << \"usage: example-app <path-to-exported-script-module>\\n\";\r\n return -1;\r\n }\r\n\r\n // Deserialize the ScriptModule from a file using torch::jit::load().\r\n std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);\r\n\r\n assert(module != nullptr);\r\n std::cout << \"ok\\n\";\r\n this->module->to(at::Device(\"cuda:0\"))\r\n\r\n vector<torch::jit::IValue> inputs;\r\n int b = 2, h = 33, w = 45;\r\n vector<float> data(b*h*w, 1.0);\r\n torch::Tensor data_tensor = torch::from_blob(data.data(), {b, h, w}.to(at::Device(\"cuda:0\"));\r\n torch::Tensor h0 = torch::from_blob(vector<float>(1*b*w, 0.0), {b, h, w}).to(at::Device(\"cuda:0\"));\r\n torch::Tensor c0 = torch::from_blob(vector<float>(1*b*w, 0.0), {b, h, w}).to(at::Device(\"cuda:0\"));\r\n inputs.push_back(data_tensor);\r\n inputs.push_back(h0);\r\n inputs.push(c0);\r\n torch::Tensor output = module->forward(inputs).toTensor().cpu();\r\n auto accessor = output.accessor<float, 2>();\r\n vector<float> answer(b);\r\n for (int i=0; i<accessor.size(0); ++i){\r\n answer[i] = accessor[i][0];\r\n }\r\n cout << \"predict ok\" << endl;\r\n}\r\n```\r\n\r\n> Note: There is a bug to move init hidden state tensor of lstm to gpu [link](https://github.com/pytorch/pytorch/issues/15272) I use two methods to solve this problem, one is to specify the device in python model using hard code, another is to pass init hidden state as input parameter of forward in cpp script, which may cause a warning [link](https://discuss.pytorch.org/t/rnn-module-weights-are-not-part-of-single-contiguous-chunk-of-memory/6011/14)\r\n\r\nthe gdb trace info like this:\r\n\r\n```shell\r\n(gdb) where\r\n#0 0x00007ffff61ca9fe in ?? () from /usr/local/cuda/lib64/libcudart.so.10.0\r\n#1 0x00007ffff61cf96b in ?? () from /usr/local/cuda/lib64/libcudart.so.10.0\r\n#2 0x00007ffff61e4be2 in cudaDeviceSynchronize () from /usr/local/cuda/lib64/libcudart.so.10.0\r\n#3 0x00007fffb945dcf4 in cudnnDestroy () from repo/pytorch_cpp/libtorch/lib/libcaffe2_gpu.so\r\n#4 0x00007fffb4fca17d in std::unordered_map<int, std::vector<at::native::(anonymous namespace)::Handle, std::allocator<at::native::(anonymous namespace)::Handle> >, std::hash<int>, std::equal_to<int>, std::allocator<std::pair<int const, std::vector<at::native::(anonymous namespace)::Handle, std::allocator<at::native::(anonymous namespace)::Handle> > > > >::~unordered_map() () from repo/pytorch_cpp/libtorch/lib/libcaffe2_gpu.so\r\n#5 0x00007fffb31fe615 in __cxa_finalize (d=0x7fffe8519680) at cxa_finalize.c:83\r\n#6 0x00007fffb4dd3ac3 in __do_global_dtors_aux () from repo/pytorch_cpp/libtorch/lib/libcaffe2_gpu.so\r\n#7 0x00007fffffffe010 in ?? ()\r\n#8 0x00007ffff7de5b73 in _dl_fini () at dl-fini.c:138\r\nBacktrace stopped: frame did not save the PC\r\n\r\n```\r\n\r\n3. When I remove the LSTM in python model, then the cpp script exits normally.\r\n\r\n4. 
I guess the hidden state of LSTM cause the core dump, maybe relate to the release the init hidden state memory?\r\n\r\n\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\nSo, When I want to load a model with LSTM using c++, how to deal with the hidden state, and how to avoid core dump?\r\n\r\n## Environment\r\n\r\nPyTorch version: 1.0.1.post2\r\nIs debug build: No\r\nCUDA used to build PyTorch: 10.0.130\r\n\r\nOS: Ubuntu 18.04.2 LTS\r\nGCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0\r\nCMake version: version 3.10.2\r\n\r\nPython version: 3.6\r\nIs CUDA available: Yes\r\nCUDA runtime version: 10.0.130\r\nGPU models and configuration:\r\nGPU 0: GeForce RTX 2080 Ti\r\nGPU 1: GeForce RTX 2080 Ti\r\n\r\nNvidia driver version: 410.48\r\ncuDNN version: Could not collect\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1", "url": "https://github.com/pytorch/pytorch/issues/19453", "state": "open", "labels": [ "module: cpp", "triaged" ], "created_at": "2019-04-19T02:11:06Z", "updated_at": "2024-07-24T20:52:00Z", "user": "SixerWang" }, { "repo": "pytorch/tutorials", "number": 484, "title": "different results", "body": "HI, i copied to code exactly and and ran with all the appropriate downloads and im not achieving the resutls stated with 40 epochs. the loss stays at around 2.2 with test accuracy of 100/837 .. around 11%.. \r\n\r\nis there something i need to change to get over 50% accuracy ? ", "url": "https://github.com/pytorch/tutorials/issues/484", "state": "closed", "labels": [], "created_at": "2019-04-18T16:04:34Z", "updated_at": "2021-06-16T17:49:27Z", "comments": 1, "user": "taylerpauls" }, { "repo": "pytorch/examples", "number": 544, "title": "the error when I run the example for the imagenet", "body": "When I tried to run the model for the example/imagenet, I encounter such error.So could you tell me how to solve the problem?\r\n\r\npython /home/zrz/code/imagenet_dist/examples-master/imagenet/main.py -a resnet18 -/home/zrz/dataset/imagenet/imagenet2012/ILSVRC2012/raw-data/imagenet-data\r\n\r\n=> creating model 'resnet18'\r\n\r\nEpoch: [0][ 0/320292]\tTime 3.459 ( 3.459)\tData 0.295 ( 0.295)\tLoss 7.2399e+00 (7.2399e+00)\tAcc@1 0.00 ( 0.00)\tAcc@5 0.00 ( 0.00)\r\n\r\nEpoch: [0][ 10/320292]\tTime 0.043 ( 0.357)\tData 0.000 ( 0.027)\tLoss 9.4861e+00 (1.3169e+01)\tAcc@1 0.00 ( 0.00)\tAcc@5 0.00 ( 0.00)\r\n\r\nEpoch: [0][ 20/320292]\tTime 0.046 ( 0.209)\tData 0.000 ( 0.014)\tLoss 7.3722e+00 (1.0817e+01)\tAcc@1 0.00 ( 0.00)\tAcc@5 0.00 ( 0.00)\r\n\r\nEpoch: [0][ 30/320292]\tTime 0.032 ( 0.154)\tData 0.000 ( 0.010)\tLoss 6.9166e+00 (9.5394e+00)\tAcc@1 0.00 ( 0.00)\tAcc@5 0.00 ( 0.00)\r\n\r\n/opt/conda/conda-bld/pytorch_1549630534704/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"/home/zrz/code/imagenet_dist/examples-master/imagenet/main.py\", line 417, in <module>\r\n\r\n main()\r\n\r\n File \"/home/zrz/code/imagenet_dist/examples-master/imagenet/main.py\", line 113, in main\r\n\r\n main_worker(args.gpu, ngpus_per_node, args)\r\n\r\n File \"/home/zrz/code/imagenet_dist/examples-master/imagenet/main.py\", line 239, in main_worker\r\n\r\n train(train_loader, model, criterion, optimizer, epoch, args)\r\n\r\n File 
\"/home/zrz/code/imagenet_dist/examples-master/imagenet/main.py\", line 286, in train\r\n\r\n losses.update(loss.item(), input.size(0))\r\n\r\nRuntimeError: CUDA error: device-side assert triggered\r\n\r\nterminate called after throwing an instance of 'c10::Error'\r\n\r\n what(): CUDA error: device-side assert triggered (insert_events at /opt/conda/conda-bld/pytorch_1549630534704/work/aten/src/THC/THCCachingAllocator.cpp:470)\r\n\r\nframe #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7f099a50acf5 in /home/zrz/miniconda3/envs/runze_env_name/lib/python3.6/site-packages/torch/lib/libc10.so)\r\n\r\nframe #1: <unknown function> + 0x123b8c0 (0x7f099e7ee8c0 in /home/zrz/miniconda3/envs/runze_env_name/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)\r\n\r\nframe #2: at::TensorImpl::release_resources() + 0x50 (0x7f099ac76c30 in /home/zrz/miniconda3/envs/runze_env_name/lib/python3.6/site-packages/torch/lib/libcaffe2.so)\r\n\r\nframe #3: <unknown function> + 0x2a836b (0x7f099818b36b in /home/zrz/miniconda3/envs/runze_env_name/lib/python3.6/site-packages/torch/lib/libtorch.so.1)\r\n\r\nframe #4: <unknown function> + 0x30eff0 (0x7f09981f1ff0 in /home/zrz/miniconda3/envs/runze_env_name/lib/python3.6/site-packages/torch/lib/libtorch.so.1)\r\n\r\nframe #5: torch::autograd::deleteFunction(torch::autograd::Function*) + 0x2f0 (0x7f099818dd70 in /home/zrz/miniconda3/envs/runze_env_name/lib/python3.6/site-packages/torch/lib/libtorch.so.1)\r\n\r\nframe #6: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x45 (0x7f09c17f87f5 in /home/zrz/miniconda3/envs/runze_env_name/lib/python3.6/site-packages/torch/lib/libtorch_python.so)\r\n\r\nframe #7: torch::autograd::Variable::Impl::release_resources() + 0x4a (0x7f09984001ba in /home/zrz/miniconda3/envs/runze_env_name/lib/python3.6/site-packages/torch/lib/libtorch.so.1)\r\n\r\nframe #8: <unknown function> + 0x12148b (0x7f09c181048b in /home/zrz/miniconda3/envs/runze_env_name/lib/python3.6/site-packages/torch/lib/libtorch_python.so)\r\n\r\nframe #9: <unknown function> + 0x31a49f (0x7f09c1a0949f in /home/zrz/miniconda3/envs/runze_env_name/lib/python3.6/site-packages/torch/lib/libtorch_python.so)\r\n\r\nframe #10: <unknown function> + 0x31a4e1 (0x7f09c1a094e1 in /home/zrz/miniconda3/envs/runze_env_name/lib/python3.6/site-packages/torch/lib/libtorch_python.so)\r\n\r\nframe #11: <unknown function> + 0x1993cf (0x5574e4c9a3cf in /home/zrz/miniconda3/envs/runze_env_name/bin/python3.6)\r\n\r\nframe #12: <unknown function> + 0xf12b7 (0x5574e4bf22b7 in /home/zrz/miniconda3/envs/runze_env_name/bin/python3.6)\r\n\r\nframe #13: <unknown function> + 0xf1147 (0x5574e4bf2147 in /home/zrz/miniconda3/envs/runze_env_name/bin/python3.6)\r\n\r\nframe #14: <unknown function> + 0xf115d (0x5574e4bf215d in /home/zrz/miniconda3/envs/runze_env_name/bin/python3.6)\r\n\r\nframe #15: <unknown function> + 0xf115d (0x5574e4bf215d in /home/zrz/miniconda3/envs/runze_env_name/bin/python3.6)\r\n\r\nframe #16: <unknown function> + 0xf115d (0x5574e4bf215d in /home/zrz/miniconda3/envs/runze_env_name/bin/python3.6)\r\n\r\nframe #17: PyDict_SetItem + 0x3da (0x5574e4c37e7a in /home/zrz/miniconda3/envs/runze_env_name/bin/python3.6)\r\n\r\nframe #18: PyDict_SetItemString + 0x4f (0x5574e4c4078f in /home/zrz/miniconda3/envs/runze_env_name/bin/python3.6)\r\n\r\nframe #19: PyImport_Cleanup + 0x99 (0x5574e4ca4709 in /ho", "url": "https://github.com/pytorch/examples/issues/544", "state": "closed", "labels": [], "created_at": "2019-04-14T06:22:38Z", "updated_at": 
"2022-03-10T05:56:43Z", "comments": 4, "user": "runzeer" }, { "repo": "pytorch/examples", "number": 543, "title": "[Important BUG] non-consistent behavior between \"final evaluation\" and \"eval on each epoch\" for mnist example", "body": "It is a common sense that, during evaluation, the model is not trained by the dev dataset. \r\nHowever, I noticed a strange different behavior between the two results:\r\n(1) train 10 epoch, having final evaluate on test data\r\n(2) train 10 epoch, having an evaluation after each training epoch on test data\r\n\r\n## Prior knowledge:\r\nEven though you set seed for everything \r\n```\r\n# set seed\r\nrandom.seed(args.seed)\r\nnp.random.seed(args.seed)\r\ntorch.manual_seed(args.seed)\r\nif use_cuda:\r\n torch.cuda.manual_seed_all(args.seed) # if got GPU also set this seed\r\n```\r\nWhen you run `examples/mnist/main.py`, it still give different result on GPU.\r\n```\r\nrun 1\r\n-------------\r\nTest set: Average loss: 0.1018, Accuracy: 9660/10000 (97%)\r\nTest set: Average loss: 0.0611, Accuracy: 9825/10000 (98%)\r\nTest set: Average loss: 0.0555, Accuracy: 9813/10000 (98%)\r\nTest set: Average loss: 0.0409, Accuracy: 9862/10000 (99%)\r\nTest set: Average loss: 0.0381, Accuracy: 9870/10000 (99%)\r\nTest set: Average loss: 0.0339, Accuracy: 9891/10000 (99%)\r\nTest set: Average loss: 0.0340, Accuracy: 9877/10000 (99%)\r\nTest set: Average loss: 0.0399, Accuracy: 9872/10000 (99%)\r\nTest set: Average loss: 0.0291, Accuracy: 9908/10000 (99%)\r\nTest set: Average loss: 0.0315, Accuracy: 9896/10000 (99%)\r\n\r\nrun 2\r\n--------------\r\nTest set: Average loss: 0.1016, Accuracy: 9666/10000 (97%)\r\nTest set: Average loss: 0.0608, Accuracy: 9828/10000 (98%)\r\nTest set: Average loss: 0.0567, Accuracy: 9810/10000 (98%)\r\nTest set: Average loss: 0.0408, Accuracy: 9864/10000 (99%)\r\nTest set: Average loss: 0.0382, Accuracy: 9868/10000 (99%)\r\nTest set: Average loss: 0.0339, Accuracy: 9894/10000 (99%)\r\nTest set: Average loss: 0.0349, Accuracy: 9871/10000 (99%)\r\nTest set: Average loss: 0.0396, Accuracy: 9876/10000 (99%)\r\nTest set: Average loss: 0.0294, Accuracy: 9911/10000 (99%)\r\nTest set: Average loss: 0.0304, Accuracy: 9895/10000 (99%)\r\n```\r\nAs long as you set `torch.backends.cudnn.deterministic = True`\r\nYou could get consistent results:\r\n\r\n```\r\n====== parameters ========\r\n batch_size: 64\r\n do_eval: True\r\n do_eval_each_epoch: True\r\n epochs: 10\r\n log_interval: 10\r\n lr: 0.01\r\n momentum: 0.5\r\n no_cuda: False\r\n save_model: False\r\n seed: 42\r\n test_batch_size: 1000\r\n==========================\r\nTest set: Average loss: 0.1034, Accuracy: 9679/10000 (97%)\r\nTest set: Average loss: 0.0615, Accuracy: 9804/10000 (98%)\r\nTest set: Average loss: 0.0484, Accuracy: 9847/10000 (98%)\r\nTest set: Average loss: 0.0361, Accuracy: 9888/10000 (99%)\r\nTest set: Average loss: 0.0341, Accuracy: 9887/10000 (99%)\r\nTest set: Average loss: 0.0380, Accuracy: 9877/10000 (99%)\r\nTest set: Average loss: 0.0302, Accuracy: 9899/10000 (99%)\r\nTest set: Average loss: 0.0315, Accuracy: 9884/10000 (99%)\r\nTest set: Average loss: 0.0283, Accuracy: 9909/10000 (99%)\r\nTest set: Average loss: 0.0266, Accuracy: 9907/10000 (99%) -> epoch 10\r\n\r\n\r\n====== parameters ========\r\n batch_size: 64\r\n do_eval: True\r\n do_eval_each_epoch: True\r\n epochs: 20\r\n log_interval: 10\r\n lr: 0.01\r\n momentum: 0.5\r\n no_cuda: False\r\n save_model: False\r\n seed: 42\r\n test_batch_size: 1000\r\n==========================\r\nTest set: Average loss: 0.1034, 
Accuracy: 9679/10000 (97%)\r\nTest set: Average loss: 0.0615, Accuracy: 9804/10000 (98%)\r\nTest set: Average loss: 0.0484, Accuracy: 9847/10000 (98%)\r\nTest set: Average loss: 0.0361, Accuracy: 9888/10000 (99%)\r\nTest set: Average loss: 0.0341, Accuracy: 9887/10000 (99%)\r\nTest set: Average loss: 0.0380, Accuracy: 9877/10000 (99%)\r\nTest set: Average loss: 0.0302, Accuracy: 9899/10000 (99%)\r\nTest set: Average loss: 0.0315, Accuracy: 9884/10000 (99%)\r\nTest set: Average loss: 0.0283, Accuracy: 9909/10000 (99%)\r\nTest set: Average loss: 0.0266, Accuracy: 9907/10000 (99%) -> epoch 10\r\nTest set: Average loss: 0.0373, Accuracy: 9870/10000 (99%)\r\nTest set: Average loss: 0.0286, Accuracy: 9909/10000 (99%)\r\nTest set: Average loss: 0.0309, Accuracy: 9908/10000 (99%)\r\nTest set: Average loss: 0.0302, Accuracy: 9899/10000 (99%)\r\nTest set: Average loss: 0.0261, Accuracy: 9907/10000 (99%)\r\nTest set: Average loss: 0.0258, Accuracy: 9913/10000 (99%)\r\nTest set: Average loss: 0.0288, Accuracy: 9917/10000 (99%)\r\nTest set: Average loss: 0.0280, Accuracy: 9904/10000 (99%)\r\nTest set: Average loss: 0.0294, Accuracy: 9902/10000 (99%)\r\nTest set: Average loss: 0.0257, Accuracy: 9914/10000 (99%) -> epoch 20\r\n```\r\n\r\nHowever, when you change the model to have `final evaluation` after epoch 10, the result becomes:\r\n```\r\n====== parameters ========\r\n batch_size: 64\r\n do_eval: True\r\n do_eval_each_epoch: False\r\n epochs: 10\r\n log_interval: 10\r\n lr: 0.01\r\n momentum: 0.5\r\n no_cuda: False\r\n save_model: False\r\n seed: 42\r\n test_batch_size: 1000\r\n==========================\r\nTest set: Average loss: 0.0361, Accuracy: 9885/10000 (99%) -> epoch 10\r\n```\r\n\r\nI also tried to add `torch.backends.cudnn.benchmark = False`, it gives the same result.\r\n\r\nRepeatability and consistent result is crucial in machine learning, do you guys know what is the r", "url": "https://github.com/pytorch/examples/issues/543", "state": "open", "labels": [ "help wanted", "nlp" ], "created_at": "2019-04-12T06:09:13Z", "updated_at": "2022-03-10T06:03:31Z", "comments": 1, "user": "Jacob-Ma" }, { "repo": "pytorch/examples", "number": 542, "title": "non-deterministic behavior on PyTorch mnist example", "body": "I tried PyTorch `examples/mnist/main.py` example to check if it is deterministic. \r\nAlthough I modified the code to set the seed on everything, it still gives quite different results on GPU. \r\n\r\nDo you know how to make the code be deterministic? 
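\r\n\r\nThe only extra switches I am aware of beyond seeding everything are the cuDNN flags below, but I have not confirmed whether they are sufficient on their own, so please correct me if something is missing:\r\n```\r\nimport torch\r\n\r\ntorch.manual_seed(1)\r\ntorch.cuda.manual_seed_all(1)\r\n# make cuDNN choose deterministic kernels and skip the benchmark autotuner,\r\n# which can otherwise pick different (non-deterministic) algorithms per run\r\ntorch.backends.cudnn.deterministic = True\r\ntorch.backends.cudnn.benchmark = False\r\n```\r\n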
Thank you very much.\r\n\r\nBelow is the code I have run and the output.\r\n```\r\nfrom __future__ import print_function\r\nimport argparse\r\nimport random\r\nimport numpy as np\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nimport torch.optim as optim\r\nfrom torchvision import datasets, transforms\r\n\r\n\r\n\r\nclass Net(nn.Module):\r\n def __init__(self):\r\n super(Net, self).__init__()\r\n self.conv1 = nn.Conv2d(1, 20, 5, 1)\r\n self.conv2 = nn.Conv2d(20, 50, 5, 1)\r\n self.fc1 = nn.Linear(4 * 4 * 50, 500)\r\n self.fc2 = nn.Linear(500, 10)\r\n\r\n def forward(self, x):\r\n x = F.relu(self.conv1(x))\r\n x = F.max_pool2d(x, 2, 2)\r\n x = F.relu(self.conv2(x))\r\n x = F.max_pool2d(x, 2, 2)\r\n x = x.view(-1, 4 * 4 * 50)\r\n x = F.relu(self.fc1(x))\r\n x = self.fc2(x)\r\n return F.log_softmax(x, dim=1)\r\n\r\n\r\ndef train(args, model, device, train_loader, optimizer, epoch):\r\n model.train()\r\n for batch_idx, (data, target) in enumerate(train_loader):\r\n data, target = data.to(device), target.to(device)\r\n optimizer.zero_grad()\r\n output = model(data)\r\n loss = F.nll_loss(output, target)\r\n loss.backward()\r\n optimizer.step()\r\n #if batch_idx % args.log_interval == 0:\r\n #print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\r\n #epoch, batch_idx * len(data), len(train_loader.dataset),\r\n #100. * batch_idx / len(train_loader), loss.item()))\r\n\r\n\r\ndef test(args, model, device, test_loader):\r\n model.eval()\r\n test_loss = 0\r\n correct = 0\r\n with torch.no_grad():\r\n for data, target in test_loader:\r\n data, target = data.to(device), target.to(device)\r\n output = model(data)\r\n test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss\r\n pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability\r\n correct += pred.eq(target.view_as(pred)).sum().item()\r\n\r\n test_loss /= len(test_loader.dataset)\r\n print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)'.format(\r\n test_loss, correct, len(test_loader.dataset),\r\n 100. 
* correct / len(test_loader.dataset)))\r\n\r\n\r\ndef main():\r\n # Training settings\r\n parser = argparse.ArgumentParser(description='PyTorch MNIST Example')\r\n parser.add_argument('--batch-size', type=int, default=64, metavar='N',\r\n help='input batch size for training (default: 64)')\r\n parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',\r\n help='input batch size for testing (default: 1000)')\r\n parser.add_argument('--epochs', type=int, default=10, metavar='N',\r\n help='number of epochs to train (default: 10)')\r\n parser.add_argument('--lr', type=float, default=0.01, metavar='LR',\r\n help='learning rate (default: 0.01)')\r\n parser.add_argument('--momentum', type=float, default=0.5, metavar='M',\r\n help='SGD momentum (default: 0.5)')\r\n parser.add_argument('--no-cuda', action='store_true', default=False,\r\n help='disables CUDA training')\r\n parser.add_argument('--seed', type=int, default=1, metavar='S',\r\n help='random seed (default: 1)')\r\n parser.add_argument('--log-interval', type=int, default=10, metavar='N',\r\n help='how many batches to wait before logging training status')\r\n\r\n parser.add_argument('--save-model', action='store_true', default=False,\r\n help='For Saving the current Model')\r\n args = parser.parse_args()\r\n use_cuda = not args.no_cuda and torch.cuda.is_available()\r\n\r\n # set seed\r\n random.seed(args.seed)\r\n np.random.seed(args.seed)\r\n torch.manual_seed(args.seed)\r\n if use_cuda:\r\n torch.cuda.manual_seed_all(args.seed) # if got GPU also set this seed\r\n\r\n\r\n # torch.manual_seed(args.seed)\r\n\r\n device = torch.device(\"cuda\" if use_cuda else \"cpu\")\r\n\r\n kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}\r\n train_loader = torch.utils.data.DataLoader(\r\n datasets.MNIST('../data', train=True, download=True,\r\n transform=transforms.Compose([\r\n transforms.ToTensor(),\r\n transforms.Normalize((0.1307,), (0.3081,))\r\n ])),\r\n batch_size=args.batch_size, shuffle=True, **kwargs)\r\n test_loader = torch.utils.data.DataLoader(\r\n datasets.MNIST('../data', train=False, transform=transforms.Compose([\r\n transforms.ToTensor(),\r\n transforms.Norm", "url": "https://github.com/pytorch/examples/issues/542", "state": "closed", "labels": [], "created_at": "2019-04-12T04:31:25Z", "updated_at": "2019-04-12T05:53:14Z", "comments": 1, "user": "Jacob-Ma" }, { "repo": "pytorch/tutorials", "number": 476, "title": "Example with torch.empty in What is PyTorch? 
is misleading", "body": "In this [example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/tensor_tutorial.py) the output of a `torch.empty` call looks the same result I would obtain with `torch.zeros`, while it should be filled with garbage values.\r\n\r\nThis might be misleading to beginners.", "url": "https://github.com/pytorch/tutorials/issues/476", "state": "closed", "labels": [], "created_at": "2019-04-11T09:19:06Z", "updated_at": "2019-08-23T21:18:53Z", "user": "alexchapeaux" }, { "repo": "pytorch/pytorch", "number": 19098, "title": "[C++ front end] how to use clamp to clip gradients?", "body": "## \u2753 Questions and Help\r\nhi, I wonder if this could clip the gradients: \r\n\r\nfor(int i=0; i<net.parameters().size(); i++)\r\n\t\t{\r\n\t\t\tnet.parameters().at(i).grad() = torch::clamp(net.parameters().at(i).grad(), -GRADIENT_CLIP, GRADIENT_CLIP);\t\t\t\r\n\t\t}\r\noptimizer.step();\r\n\r\nI found it doesn't seem to work, and I still got large output.\r\nHow can I use the \"clamp\u201c correctly?\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/19098", "state": "closed", "labels": [], "created_at": "2019-04-10T05:37:24Z", "updated_at": "2019-04-10T05:38:01Z", "user": "ZhuXingJune" }, { "repo": "pytorch/pytorch", "number": 19012, "title": "I have a piece of code that is written in LUA and I want to know what is the pytorch equivalent of the code.?How do I implement these lines in pytorch .. Can somebody help me with it? The code is mentioned in the comment", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/19012", "state": "closed", "labels": [], "created_at": "2019-04-08T10:15:59Z", "updated_at": "2019-04-08T10:30:32Z", "user": "AshishRMenon" }, { "repo": "pytorch/pytorch", "number": 18951, "title": "Completed code with bug report for hdf5 dataset. How to fix?", "body": "Hello all, I want to report the issue of pytorch with hdf5 loader. The full source code and bug are provided \r\nThe problem is that I want to call the `test_dataloader.py` in two terminals. The file is used to load the custom hdf5 dataset (`custom_h5_loader`). To generate h5 files, you may need first run the file `convert_to_h5` to generate 100 random h5 files.\r\nTo reproduce the error. 
Please run follows steps\r\n\r\n**Step 1:** Generate the hdf5\r\n\r\n```\r\nfrom __future__ import print_function\r\nimport h5py\r\nimport numpy as np\r\nimport random\r\nimport os\r\n\r\nif not os.path.exists('./data_h5'):\r\n os.makedirs('./data_h5')\r\n\r\nfor index in range(100):\r\n data = np.random.uniform(0,1, size=(3,128,128))\r\n data = data[None, ...]\r\n print (data.shape)\r\n with h5py.File('./data_h5/' +'%s.h5' % (str(index)), 'w') as f:\r\n f['data'] = data\r\n```\r\nStep2: Create a python file custom_h5_loader.py and paste the code\r\n```\r\nimport h5py\r\nimport torch.utils.data as data\r\nimport glob\r\nimport torch\r\nimport numpy as np\r\nimport os\r\nclass custom_h5_loader(data.Dataset):\r\n\r\n def __init__(self, root_path):\r\n self.hdf5_list = [x for x in glob.glob(os.path.join(root_path, '*.h5'))]\r\n self.data_list = []\r\n for ind in range (len(self.hdf5_list)):\r\n self.h5_file = h5py.File(self.hdf5_list[ind])\r\n data_i = self.h5_file.get('data') \r\n self.data_list.append(data_i)\r\n\r\n def __getitem__(self, index):\r\n self.data = np.asarray(self.data_list[index]) \r\n return (torch.from_numpy(self.data).float())\r\n\r\n def __len__(self):\r\n return len(self.hdf5_list)\r\n```\r\n**Step 3:** Create a python file with name test_dataloader.py\r\n```\r\nfrom dataloader import custom_h5_loader\r\nimport torch\r\nimport torchvision.datasets as dsets\r\n\r\ntrain_h5_dataset = custom_h5_loader('./data_h5')\r\nh5_loader = torch.utils.data.DataLoader(dataset=train_h5_dataset, batch_size=2, shuffle=True, num_workers=4) \r\nfor epoch in range(100000):\r\n for i, data in enumerate(h5_loader): \r\n print (data.shape)\r\n```\r\nStep 4: Open first terminal and run (it worked)\r\n\r\n> python test_dataloader.py\r\n\r\nStep 5: Open the second terminal and run (Error report in below)\r\n\r\n> python test_dataloader.py\r\n\r\nThe error is \r\n```\r\nTraceback (most recent call last):\r\n File \"/home/john/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py\", line 162, in make_fid\r\n fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)\r\n File \"h5py/_objects.pyx\", line 54, in h5py._objects.with_phil.wrapper\r\n File \"h5py/_objects.pyx\", line 55, in h5py._objects.with_phil.wrapper\r\n File \"h5py/h5f.pyx\", line 78, in h5py.h5f.open\r\nOSError: Unable to open file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/john/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py\", line 165, in make_fid\r\n fid = h5f.open(name, h5f.ACC_RDONLY, fapl=fapl)\r\n File \"h5py/_objects.pyx\", line 54, in h5py._objects.with_phil.wrapper\r\n File \"h5py/_objects.pyx\", line 55, in h5py._objects.with_phil.wrapper\r\n File \"h5py/h5f.pyx\", line 78, in h5py.h5f.open\r\nOSError: Unable to open file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"test_dataloader.py\", line 5, in <module>\r\n train_h5_dataset = custom_h5_loader('./data_h5')\r\n File \"/home/john/test_hdf5/dataloader.py\", line 13, in __init__\r\n self.h5_file = h5py.File(self.hdf5_list[ind])\r\n File \"/home/john/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py\", line 312, in __init__\r\n fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)\r\n File 
\"/home/john/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py\", line 167, in make_fid\r\n fid = h5f.create(name, h5f.ACC_EXCL, fapl=fapl, fcpl=fcpl)\r\n File \"h5py/_objects.pyx\", line 54, in h5py._objects.with_phil.wrapper\r\n File \"h5py/_objects.pyx\", line 55, in h5py._objects.with_phil.wrapper\r\n File \"h5py/h5f.pyx\", line 98, in h5py.h5f.create\r\nOSError: Unable to create file (unable to open file: name = './data_h5/47.h5', errno = 17, error message = 'File exists', flags = 15, o_flags = c2)\r\n```\r\n\r\nThis is my configuration\r\n```\r\nHDF5 Version: 1.10.2\r\nConfigured on: Wed May 9 23:24:59 UTC 2018\r\nFeatures:\r\n---------\r\n Parallel HDF5: no\r\n High-level library: yes\r\n Threadsafety: yes\r\nprint (torch.__version__)\r\n1.0.0.dev20181227\r\n\r\n```", "url": "https://github.com/pytorch/pytorch/issues/18951", "state": "closed", "labels": [], "created_at": "2019-04-05T14:50:55Z", "updated_at": "2019-04-06T12:36:34Z", "user": "John1231983" }, { "repo": "pytorch/pytorch", "number": 18872, "title": "How to convert a cudnn.BLSTM model to nn.LSTM bidirectional model", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\nI have a *.t7 model that consists in few convolution layers and 1 block of cudnn.BLSTM(). To convert the model to pytorch, I create the same architecture with pytorch and try to get the weights from the t7 file. I think the convolution layers were correct but I have a doubt about the cudnn.BLSTM. When I extract the BLSTM weighs, I got one dimentional list of millions of parameters which corresponds to the same numbers of parameters in pytorch LSTM. However, in pytorch the weights and biases are with well know structure and weight_ih_l0, weight_hh_l0,... bias_ih_l_0, bias_hh_l0, ... weight_ih_l0_reverse, ... 
but in the cuddnn.BLSTM(), all parameters are set in one flattened list, so how to know the order and the shape of weights and biases ??\r\nI debug the cudnn.BLSTM structure on th terminal and I get some idea about the concatenation orders and the shape:\r\n Exemple \r\n```\r\n# torch\r\nrnn = cudnn.BLSTM(1,1, 2, false, 0.5)\r\n# get the weights\r\nweights = rnn:weights() \r\nth> rnn:weights()\r\n{\r\n 1 : \r\n {\r\n 1 : CudaTensor - size: 1\r\n 2 : CudaTensor - size: 1\r\n 3 : CudaTensor - size: 1\r\n 4 : CudaTensor - size: 1\r\n 5 : CudaTensor - size: 1\r\n 6 : CudaTensor - size: 1\r\n 7 : CudaTensor - size: 1\r\n 8 : CudaTensor - size: 1\r\n }\r\n 2 : \r\n {\r\n 1 : CudaTensor - size: 1\r\n 2 : CudaTensor - size: 1\r\n 3 : CudaTensor - size: 1\r\n 4 : CudaTensor - size: 1\r\n 5 : CudaTensor - size: 1\r\n 6 : CudaTensor - size: 1\r\n 7 : CudaTensor - size: 1\r\n 8 : CudaTensor - size: 1\r\n }\r\n 3 : \r\n {\r\n 1 : CudaTensor - size: 2\r\n 2 : CudaTensor - size: 2\r\n 3 : CudaTensor - size: 2\r\n 4 : CudaTensor - size: 2\r\n 5 : CudaTensor - size: 1\r\n 6 : CudaTensor - size: 1\r\n 7 : CudaTensor - size: 1\r\n 8 : CudaTensor - size: 1\r\n }\r\n 4 : \r\n {\r\n 1 : CudaTensor - size: 2\r\n 2 : CudaTensor - size: 2\r\n 3 : CudaTensor - size: 2\r\n 4 : CudaTensor - size: 2\r\n 5 : CudaTensor - size: 1\r\n 6 : CudaTensor - size: 1\r\n 7 : CudaTensor - size: 1\r\n 8 : CudaTensor - size: 1\r\n }\r\n}\r\n\r\nbiases = rnn:biaises()\r\n\r\nth> rnn:biases()\r\n{\r\n 1 : \r\n {\r\n 1 : CudaTensor - size: 1\r\n 2 : CudaTensor - size: 1\r\n 3 : CudaTensor - size: 1\r\n 4 : CudaTensor - size: 1\r\n 5 : CudaTensor - size: 1\r\n 6 : CudaTensor - size: 1\r\n 7 : CudaTensor - size: 1\r\n 8 : CudaTensor - size: 1\r\n }\r\n 2 : \r\n {\r\n 1 : CudaTensor - size: 1\r\n 2 : CudaTensor - size: 1\r\n 3 : CudaTensor - size: 1\r\n 4 : CudaTensor - size: 1\r\n 5 : CudaTensor - size: 1\r\n 6 : CudaTensor - size: 1\r\n 7 : CudaTensor - size: 1\r\n 8 : CudaTensor - size: 1\r\n }\r\n 3 : \r\n {\r\n 1 : CudaTensor - size: 1\r\n 2 : CudaTensor - size: 1\r\n 3 : CudaTensor - size: 1\r\n 4 : CudaTensor - size: 1\r\n 5 : CudaTensor - size: 1\r\n 6 : CudaTensor - size: 1\r\n 7 : CudaTensor - size: 1\r\n 8 : CudaTensor - size: 1\r\n }\r\n 4 : \r\n {\r\n 1 : CudaTensor - size: 1\r\n 2 : CudaTensor - size: 1\r\n 3 : CudaTensor - size: 1\r\n 4 : CudaTensor - size: 1\r\n 5 : CudaTensor - size: 1\r\n 6 : CudaTensor - size: 1\r\n 7 : CudaTensor - size: 1\r\n 8 : CudaTensor - size: 1\r\n }\r\n}\r\n\r\n```\r\nall_flattened_params = rnn:parameters()\r\n\r\nwith this small example: I see that the rnn:parameters() function put the weighs and after that the biases in the above order. So:\r\nweights =all_flattened_params[:-32]\r\nbiases = all_flattened_params[-32:]\r\nNow, How to know the order of weights and biases regarding the pytorch nn.LSTM() ?\r\nI supposed that this order:\r\n weight_ih_l0, weight_hh_l0, weight_ih_l0_reverse, weight_hh_l0_reverse, weight_ih_l1, ....\r\nbias_ih_l0, bias_hh_l0, bias_ih_l0_reverse, bias_hh_l0_reverse, .... 
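As a quick self-check for the weight-ordering question above, a small sketch that prints the parameter names and shapes of a bidirectional `nn.LSTM`; matching the flattened cudnn list against this printed order is usually easier than guessing.

```python
import torch.nn as nn

lstm = nn.LSTM(input_size=1, hidden_size=1, num_layers=2,
               batch_first=True, bidirectional=True)

# named_parameters() yields the exact order and shapes PyTorch expects
# (per layer: ih/hh weights then biases, forward direction then *_reverse).
for name, param in lstm.named_parameters():
    print(name, tuple(param.shape))
```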
but my model does not give the right output!!", "url": "https://github.com/pytorch/pytorch/issues/18872", "state": "closed", "labels": [], "created_at": "2019-04-04T18:30:20Z", "updated_at": "2019-04-04T19:00:04Z", "user": "rafikg" }, { "repo": "pytorch/pytorch", "number": 18837, "title": "How to use libtorch api torch::nn::parallel::data_parallel train on multi-gpu", "body": "## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->\r\n", "url": "https://github.com/pytorch/pytorch/issues/18837", "state": "closed", "labels": [ "module: performance", "oncall: distributed", "module: multi-gpu", "module: docs", "module: cpp", "module: nn", "triaged" ], "created_at": "2019-04-04T02:48:41Z", "updated_at": "2020-06-25T16:48:51Z", "user": "DDFlyInCode" }, { "repo": "pytorch/tutorials", "number": 468, "title": "auxilary net confusion Inception_v3 Vs. GoogLeNet in finetune script?", "body": "Hi, I followed the finetune tutorial (but using this script to train from scratch): for `inception` as there is only one `aux_logit` below snippet working fine.\r\n\r\n```\r\n elif model_name == \"inception\":\r\n \"\"\" Inception v3 \r\n Be careful, expects (299,299) sized images and has auxiliary output\r\n \"\"\"\r\n model_ft = models.inception_v3(pretrained=use_pretrained)\r\n set_parameter_requires_grad(model_ft, feature_extract)\r\n # Handle the auxilary net\r\n num_ftrs = model_ft.AuxLogits.fc.in_features\r\n model_ft.AuxLogits.fc = nn.Linear(num_ftrs, num_classes)\r\n # Handle the primary net\r\n num_ftrs = model_ft.fc.in_features\r\n model_ft.fc = nn.Linear(num_ftrs,num_classes)\r\n input_size = 299\r\n```\r\ncorrespoing `inception_v3` net file snippet:\r\n\r\n```\r\nif self.training and self.aux_logits:\r\naux = self.AuxLogits(x)\r\n```\r\nand the `fc` snippet:\r\n```\r\nself.fc = nn.Linear(768, num_classes)\r\n```\r\n\r\nWhereas for `GoogLeNet` has two auxilary outputs, the net file snippet has:\r\n\r\n```\r\nif self.training and self.aux_logits:\r\naux1 = self.aux1(x)\r\n.....\r\nif self.training and self.aux_logits:\r\naux2 = self.aux2(x)\r\n```\r\n\r\nand the `fc` snippets: \r\n\r\n```\r\nself.fc1 = nn.Linear(2048, 1024)\r\nself.fc2 = nn.Linear(1024, num_classes)\r\n```\r\n\r\nNow, my confusion is about using the `fc` in finetuning script, how to embed?\r\n\r\n```\r\nnum_ftrs = model_ft.(aux1/aux2).(fc1/fc2).in_features\r\nmodel_ft.(aux1/aux2).(fc1/fc2) = nn.Linear(num_ftrs, num_classes)\r\n```\r\nany thoughts?\r\n", "url": "https://github.com/pytorch/tutorials/issues/468", "state": "closed", "labels": [], "created_at": "2019-04-03T15:43:05Z", "updated_at": "2019-04-07T10:52:28Z", "comments": 0, "user": "rajasekharponakala" }, { "repo": "pytorch/pytorch", "number": 18781, "title": "TORCH_CUDA_ARCH_LIST=All should know what is possible", "body": "## \ud83d\udc1b Bug\r\n\r\nWhen setting TORCH_CUDA_ARCH_LIST=All, I expect Torch to compile with all CUDA architectures available to my current version of CUDA. Instead, it attempted to build for cuda 2.0.\r\n\r\nSee error:\r\nnvcc fatal : Unsupported gpu architecture 'compute_20'\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Install CUDA >= 9.0\r\n2. 
TORCH_CUDA_ARCH_LIST=All cmake -DUSE_CUDA=ON ..\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\nFilters out 2.x architectures if CUDA >= 9.0\r\n\r\n## Environment\r\nCUDA 10.1", "url": "https://github.com/pytorch/pytorch/issues/18781", "state": "closed", "labels": [ "module: build", "module: docs", "module: cuda", "module: molly-guard", "triaged" ], "created_at": "2019-04-03T00:31:35Z", "updated_at": "2024-08-04T05:06:56Z", "user": "xsacha" }, { "repo": "pytorch/tutorials", "number": 463, "title": "The input of a GRU is of shape (seq_len, batch, input_size). I wonder does seq_len means anything?", "body": "I noticed that in the documentation of pytorch GRU, the input shape should be (seq_len, batch, input_size), thus the input ought to be a sequence and the model will deal with the sequence inside itself. But in this notebook, the author passes a tensor only of length one in each iteration in function `train`. I mean if the model can deal with sequential inputs, why not just feed a sequence of sentence to it?\r\n\r\nThis is my first time to start an issue on github, please forgive me if there is anything wrong.", "url": "https://github.com/pytorch/tutorials/issues/463", "state": "closed", "labels": [], "created_at": "2019-04-02T09:32:10Z", "updated_at": "2019-08-23T21:49:49Z", "comments": 1, "user": "CSUN1997" }, { "repo": "pytorch/text", "number": 522, "title": "how to set random seed for BucketIterator to guarantee that it produce the same itrator every time you run the code ?", "body": "", "url": "https://github.com/pytorch/text/issues/522", "state": "closed", "labels": [ "obsolete" ], "created_at": "2019-04-02T01:47:03Z", "updated_at": "2022-01-25T04:04:42Z", "user": "zide05" }, { "repo": "pytorch/pytorch", "number": 18677, "title": "How to compile/install caffe2 with cuda 9.0?", "body": "I'm building caffe2 on ubuntu 18.04 with CUDA 9.0? But when I run \"python setup.py install\" command, I have met issue about version of CUDA. It needs to CUDA 9.2 instead of 9.0 but i only want to build with 9.0. \r\nHow to pass it?\r\n\r\nThank you!", "url": "https://github.com/pytorch/pytorch/issues/18677", "state": "open", "labels": [ "caffe2" ], "created_at": "2019-04-01T06:54:50Z", "updated_at": "2019-05-27T12:43:18Z", "user": "TuanHAnhVN" }, { "repo": "pytorch/tutorials", "number": 459, "title": "Inappropriate example code of vector-Jacobian product in (AUTOGRAD: AUTOMATIC DIFFERENTIATION) ", "body": "I want to check the vector-Jacobian product. But in the example code the x is randomly generated and the the function relation between x and y is not clear. So how can I check if I correctly understand the vector-Jacobian product ? 
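For the vector-Jacobian question above (tutorials#459), a small self-check using a function whose Jacobian is easy to write down by hand: if `x.grad` matches the hand-computed `J^T v`, the understanding is correct.

```python
import torch

# y_i = x_i ** 2, so the Jacobian is diag(2 * x) and J^T v = 2 * x * v.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x ** 2

v = torch.tensor([0.1, 1.0, 10.0])
y.backward(v)                              # computes the vector-Jacobian product

expected = 2 * x.detach() * v              # hand-computed J^T v
print(x.grad)                              # tensor([ 0.2000,  4.0000, 60.0000])
print(torch.allclose(x.grad, expected))    # True
```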
Can you improve it ?", "url": "https://github.com/pytorch/tutorials/issues/459", "state": "closed", "labels": [], "created_at": "2019-03-30T13:57:54Z", "updated_at": "2021-06-16T20:24:11Z", "comments": 2, "user": "guixianjin" }, { "repo": "pytorch/tutorials", "number": 458, "title": "how can i remove the layers in model.", "body": "I want to use the googlenet model to do finetuing, remove the fc layer, and then add other layers.", "url": "https://github.com/pytorch/tutorials/issues/458", "state": "closed", "labels": [], "created_at": "2019-03-30T08:28:08Z", "updated_at": "2021-06-16T20:25:08Z", "comments": 1, "user": "wangjue-wzq" }, { "repo": "pytorch/examples", "number": 534, "title": "when i run the main.py in imagenet,i encounter a problem", "body": "![image](https://user-images.githubusercontent.com/45848862/55045271-37901d80-5078-11e9-954e-4d7f4bdb4894.png)\r\n", "url": "https://github.com/pytorch/examples/issues/534", "state": "open", "labels": [ "help wanted" ], "created_at": "2019-03-27T02:09:15Z", "updated_at": "2022-03-10T06:03:41Z", "comments": 0, "user": "Xavier-cvpr" }, { "repo": "pytorch/examples", "number": 531, "title": "Running examples/world_language_model", "body": "Hello, I am trying to run 'examples/world_language_model'.\r\nHowever, when I do 'python main.py --cuda' in the above directory.\r\nIt prints an error like this.\r\n![image](https://user-images.githubusercontent.com/45330740/54750424-fa143600-4c1a-11e9-99ad-227b54d115f9.png)\r\nDoes anyone know how to solve this problem?", "url": "https://github.com/pytorch/examples/issues/531", "state": "closed", "labels": [], "created_at": "2019-03-21T11:51:02Z", "updated_at": "2020-03-25T00:54:09Z", "comments": 1, "user": "ohcurrent" }, { "repo": "pytorch/examples", "number": 524, "title": "[super_resolution]How can I get 'model_epoch_500.pth' file? ", "body": "When I run:\r\n`python super_resolve.py --input_image dataset/BSDS300/images/test/16077.jpg --model model_epoch_500.pth --output_filename out.png` .\r\n\r\nIt outpus : \r\n` [Errno 2] No such file or directory: 'model_epoch_500.pth'`\r\n\r\nHow can I get 'model_epoch_500.pth' file? ", "url": "https://github.com/pytorch/examples/issues/524", "state": "closed", "labels": [], "created_at": "2019-03-08T11:08:17Z", "updated_at": "2019-05-31T08:57:22Z", "comments": 0, "user": "dyfloveslife" }, { "repo": "pytorch/pytorch", "number": 17654, "title": "I want to know how to use the select(int64_t dim, int64_t index) in at::Tensor?What is the definition of a parameter ?", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/17654", "state": "closed", "labels": [], "created_at": "2019-03-04T12:57:04Z", "updated_at": "2019-03-04T17:05:24Z", "user": "SongyiGao" }, { "repo": "pytorch/tutorials", "number": 441, "title": "Add 'Open in Colab' Button to Tutorial Code ", "body": "Is it possible to edit the downloadable Jupyter notebooks at the bottom of each 60-minute blitz section? In a utopian scenario there would be a button at the top of each file, allowing the user to 'Open in Colab'. 
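For the `select(int64_t dim, int64_t index)` question above (pytorch#17654), a minimal sketch of the Python equivalent: `select` slices out a view along one dimension at a fixed index, removing that dimension from the result.

```python
import torch

x = torch.arange(24).reshape(2, 3, 4)

# select(dim, index) returns a view with that dimension removed;
# x.select(1, 2) is the same as x[:, 2, :] and has shape (2, 4).
row = x.select(1, 2)
print(row.shape)                   # torch.Size([2, 4])
print(torch.equal(row, x[:, 2]))   # True
```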
If this button was present the user would fix the code so that each cell was ran with the output visible.\r\n\r\nHere is an example with Keras, to explicate what I mean. There are some uploaded PyTorch Jupyter Notebook files in the same repository (along with the Keras Jupyter Notebooks) to gain a more comprehensive perspective.\r\n\r\nhttps://github.com/PhillySchoolofAI/DL-Libraries/blob/master/KerasFunctionalAPI.ipynb", "url": "https://github.com/pytorch/tutorials/issues/441", "state": "closed", "labels": [], "created_at": "2019-03-04T12:26:51Z", "updated_at": "2019-03-31T10:34:34Z", "comments": 2, "user": "pynchmeister" }, { "repo": "pytorch/examples", "number": 518, "title": "How to train from scratch on custom model", "body": "Hi,\r\n\r\nThank you very much for the code.\r\nI am new to pytorch (have worked a lot with tensorflow), and have a question which is probably basic, but I can't find the answer.\r\nIf I want to train ImageNet on a model which doesn't appear in the code list (under \"model names\"), but I do have the model as pth.tar file, how am I able to do the training?\r\n\r\nThanks!", "url": "https://github.com/pytorch/examples/issues/518", "state": "closed", "labels": [], "created_at": "2019-02-26T10:19:24Z", "updated_at": "2022-03-10T05:28:08Z", "comments": 1, "user": "jennyzu" }, { "repo": "pytorch/examples", "number": 517, "title": "How to save model in mnist.cpp?", "body": "How to save the model in cpp api mnist.cpp?\r\nmodel.save and torch::save(model,\"mnisttrain.pkl\")\r\nAll error", "url": "https://github.com/pytorch/examples/issues/517", "state": "open", "labels": [ "c++" ], "created_at": "2019-02-26T06:58:05Z", "updated_at": "2022-03-09T20:49:34Z", "comments": 5, "user": "engineer1109" }, { "repo": "pytorch/tutorials", "number": 437, "title": "A short tutorial showing the input arguments for NLL loss/ cross entropy loss would be incredibly helpful", "body": "The arguments NLL loss (and by proxy cross entropy loss) take are in a relatively weird format. The documentation for the function does all it can within reason of the original documentation, but there's an incredible number of questions posted about weird problems giving them the kind of arguments. Much more than for other comparable things, and most don't really have good reusable answers.\r\n\r\nThe obvious solution to this is for someone to create a simple example based tutorial of using NLL loss, and clearly showing exactly what format the arguments need to be in (perhaps starting with input and targets that are one hot encoded to make it as idiot proof as possible). \r\n\r\nI've spent 4 hours trying to solve a problem exactly like this without success, and am about to refer to source over it. Someone please take mercy on future programmers.", "url": "https://github.com/pytorch/tutorials/issues/437", "state": "open", "labels": [], "created_at": "2019-02-26T05:35:50Z", "updated_at": "2019-02-26T05:35:50Z", "comments": 0, "user": "jkterry1" }, { "repo": "pytorch/pytorch", "number": 17368, "title": "What is a version/git-hash of nightly build? 
", "body": "## \ud83d\ude80 Feature\r\n\r\nNightly build does not include git-hash of pytorch.\r\nSo we can not know what the build is.\r\n\r\nhttps://download.pytorch.org/libtorch/nightly/cpu/libtorch-shared-with-deps-latest.zip\r\n\r\nI know the zip includes build_version which is like \"1.0.0dev20190221\".\r\nCould you add git-hash of pytorch in build_version and include native_functions.yaml and the README of the yaml?\r\n\r\n## Motivation\r\n\r\nWe are writing ffi-bindings by using nightly build and native_functions.yaml of pytorch-github.\r\nBoth the build and the yaml-spec's format are changed frequently and do not match version.\r\n", "url": "https://github.com/pytorch/pytorch/issues/17368", "state": "closed", "labels": [ "awaiting response (this tag is deprecated)" ], "created_at": "2019-02-21T19:54:41Z", "updated_at": "2019-02-28T22:34:49Z", "user": "junjihashimoto" }, { "repo": "pytorch/ELF", "number": 142, "title": "what is meaning of outputs at verbose mode?", "body": "after I input quit at the df_console , it was still calulateing\r\n```\r\nD:\\elfv2\\play_opengo_v2\\elf_gpu_full\\elf>df_console --load d:/pretrained-go-19x19-v1.bin --num_block 20 --dim 224 --ver\r\nbose\r\n[2019-02-21 10:35:22.103] [elfgames::go::common::GoGameBase-12] [info] [0] Seed: 62127748, thread_id: 156604934899288204\r\n\r\n\r\n? Invalid input\r\n\r\n\r\n? Invalid input\r\n\r\ngenmove b\r\n[2019-02-21 10:48:10.811] [elfgames::go::GoGameSelfPlay-0-15] [info] Current board:\r\n A B C D E F G H J K L M N O P Q R S T\r\n19 . . . . . . . . . . . . . . . . . . . 19\r\n18 . . . . . . . . . . . . . . . . . . . 18\r\n17 . . . . . . . . . . . . . . . . . . . 17\r\n16 . . . + . . . . . + . . . . . + . . . 16\r\n15 . . . . . . . . . . . . . . . . . . . 15\r\n14 . . . . . . . . . . . . . . . . . . . 14\r\n13 . . . . . . . . . . . . . . . . . . . 13\r\n12 . . . . . . . . . . . . . . . . . . . 12\r\n11 . . . . . . . . . . . . . . . . . . . 11 WHITE (O) has captured 0 stones\r\n10 . . . + . . . . . + . . . . . + . . . 10 BLACK (X) has captured 0 stones\r\n 9 . . . . . . . . . . . . . . . . . . . 9\r\n 8 . . . . . . . . . . . . . . . . . . . 8\r\n 7 . . . . . . . . . . . . . . . . . . . 7\r\n 6 . . . . . . . . . . . . . . . . . . . 6\r\n 5 . . . . . . . . . . . . . . . . . . . 5\r\n 4 . . . + . . . . . + . . . . . + . . . 4\r\n 3 . . . . . . . . . . . . . . . . . . . 3\r\n 2 . . . . . . . . . . . . . . . . . . . 2\r\n 1 . . . . . . . . . . . . . . . . . . . 1\r\n A B C D E F G H J K L M N O P Q R S T\r\nLast move: C0, nextPlayer: Black\r\n\r\n[1] Propose move [Q16][pp][352]\r\n\r\n= Q16\r\n\r\n\r\n? Invalid input\r\n\r\n\r\n? Invalid input\r\n\r\n\r\n? Invalid input\r\n\r\n\r\n? Invalid input\r\n\r\nquit\r\n[2019-02-21 10:51:47.147] [elf::base::Context-3] [info] Prepare to stop ...\r\n[2019-02-21 10:51:47.521] [elfgames::go::GoGameSelfPlay-0-15] [info] Current board:\r\n A B C D E F G H J K L M N O P Q R S T\r\n19 . . . . . . . . . . . . . . . . . . . 19\r\n18 . . . . . . . . . . . . . . . . . . . 18\r\n17 . . . . . . . . . . . . . . . . . . . 17\r\n16 . . . + . . . . . + . . . . . X). . . 16\r\n15 . . . . . . . . . . . . . . . . . . . 15\r\n14 . . . . . . . . . . . . . . . . . . . 14\r\n13 . . . . . . . . . . . . . . . . . . . 13\r\n12 . . . . . . . . . . . . . . . . . . . 12\r\n11 . . . . . . . . . . . . . . . . . . . 11 WHITE (O) has captured 0 stones\r\n10 . . . + . . . . . + . . . . . + . . . 10 BLACK (X) has captured 0 stones\r\n 9 . . . . . . . . . . . . . . . . . . . 9\r\n 8 . . . . . . . . . . . . . . . . . . 
. 8\r\n 7 . . . . . . . . . . . . . . . . . . . 7\r\n 6 . . . . . . . . . . . . . . . . . . . 6\r\n 5 . . . . . . . . . . . . . . . . . . . 5\r\n 4 . . . + . . . . . + . . . . . + . . . 4\r\n 3 . . . . . . . . . . . . . . . . . . . 3\r\n 2 . . . . . . . . . . . . . . . . . . . 2\r\n 1 . . . . . . . . . . . . . . . . . . . 1\r\n A B C D E F G H J K L M N O P Q R S T\r\nLast move: Q16, nextPlayer: White\r\n\r\n[2] Propose move [D4][dd][88]\r\n\r\n[2019-02-21 10:51:48.657] [elfgames::go::GoGameSelfPlay-0-15] [info] Current board:\r\n A B C D E F G H J K L M N O P Q R S T\r\n19 . . . . . . . . . . . . . . . . . . . 19\r\n18 . . . . . . . . . . . . . . . . . . . 18\r\n17 . . . . . . . . . . . . . . . . . . . 17\r\n16 . . . + . . . . . + . . . . . X . . . 16\r\n15 . . . . . . . . . . . . . . . . . . . 15\r\n14 . . . . . . . . . . . . . . . . . . . 14\r\n13 . . . . . . . . . . . . . . . . . . . 13\r\n12 . . . . . . . . . . . . . . . . . . . 12\r\n11 . . . . . . . . . . . . . . . . . . . 11 WHITE (O) has captured 0 stones\r\n10 . . . + . . . . . + . . . . . + . . . 10 BLACK (X) has captured 0 stones\r\n 9 . . . . . . . . . . . . . . . . . . . 9\r\n 8 . . . . . . . . . . . . . . . . . . . 8\r\n 7 . . . . . . . . . . . . . . . . . . . 7\r\n 6 . . . . . . . . . . . . . . . . . . . 6\r\n 5 . . . . . . . . . . . . . . . . . . . 5\r\n 4 . . . O . . . . . + . . . . . + . . . 4\r\n 3 . . . . . . . . . . . . . . . . . . . 3\r\n 2 X). . . . . . . . . . . . . . . . . . 2\r\n 1 . . . . . . . . . . . . . . . . . . . 1\r\n A B C D E F G H J K L M N O P Q R S T\r\nLast move: A2, nextPlayer: White\r\n\r\n[4] Propose move [F17][fq][363]\r\n\r\n[2019-02-21 10:51:50.631] [elfgames::go::GoGameSelfPlay-0-15] [info] Current board:\r\n A B C D E F G H J K L M N O P Q R S T\r\n19 . . . . . . . . . . . . . . . . . . . 19\r\n18 . . . . . . . . . . . . . . . . . . . 18\r\n17 . . . . . O). . . . . . . . . . . . . 17\r\n16 . . . + . . . . . + . . . . . X . . . 16\r\n15 . . . . . . . . . . . . . . . . . . . 15\r\n14 . . . . . . . . . . . . . . . . . . . 14\r\n13 . . . . . . . . . . . . . . . . . . . 13\r\n12 . . . . . . . . . . . . . . . . . . . 12\r\n11 . . . . . . . . . . . . . . . . . . . 11 WHITE (O) has captured 0 stones\r\n10 . . . + . . . . . + . . . . . + . . . 10 BLACK (X) has captured 0 stones\r\n 9 . . . . . . . . . . . . . . . . . . . 9\r\n 8 . . . . . . . . . . . . . . . . . . . 8\r\n 7 . . . . . . . . . . . . . . . . . . . 7\r\n 6 . . . . . . . . . . . . . . . . . . . 6\r\n 5 . . . . . . . . . . . . . . . . . . . 5\r\n 4 . . . O . . . . . + . . . . . + . . . 4\r\n 3 . . . . . . . . . . . . . . . . . . . 3\r\n 2 X . . . . . . .", "url": "https://github.com/pytorch/ELF/issues/142", "state": "open", "labels": [], "created_at": "2019-02-21T02:51:58Z", "updated_at": "2019-02-21T02:51:58Z", "user": "l1t1" }, { "repo": "pytorch/examples", "number": 514, "title": "Why does discriminator's output change between batchsize:64 and batchsize:1 on inference.", "body": "I'm trying to get the discriminator output using train finished discriminator.\r\nProcedure is below.\r\n1. training dcgan.\r\n2. preparing my image data and resize it 64 * 64.\r\n3. load my image data using dataloader(same as training's one.)\r\n4. I change only batch size at inference.\r\n5. I got small Discriminator outputs(after sigmoid result).\r\n\r\nfor example)\r\nI prepared 64 images under the dataroot directory. 
And I tried 2 experience.\r\n\r\nI got a below's discriminator output at inference using batch size 64.\r\n```\r\ntensor([0.9955, 0.8801, 0.9727, 0.7377, 0.2667, 0.9432, 0.9941, 0.6896, 0.8638,\r\n 0.5006, 0.9766, 0.4148, 0.9577, 0.9065, 0.9849, 0.9027, 0.1619, 0.5418,\r\n 0.9256, 0.7502, 0.1467, 0.8197, 0.9100, 0.3416, 0.0066, 0.9521, 0.9973,\r\n 1.0000, 0.4952, 0.3026, 0.5347, 0.8695, 0.8033, 0.6709, 0.3602, 0.2145,\r\n 0.6901, 0.0129, 0.6780, 0.5321, 0.8195, 0.8662, 0.1759, 0.5599, 0.7313,\r\n 0.5138, 0.9396, 0.9256, 0.3011, 0.8163, 0.8046, 0.4802, 0.6256, 0.1656,\r\n 0.9368, 0.1080, 0.5960, 0.9493, 0.9533, 0.9609, 0.0137, 0.1603, 0.7717,\r\n 0.5684], device='cuda:0', grad_fn=<SqueezeBackward1>)\r\n```\r\n\r\nI got a below's discriminator output at inference using batch size 1.\r\n```\r\ntensor([0.6289], device='cuda:0', grad_fn=<SqueezeBackward1>)\r\ntensor([0.2455], device='cuda:0', grad_fn=<SqueezeBackward1>)\r\ntensor([0.8702], device='cuda:0', grad_fn=<SqueezeBackward1>)\r\ntensor([0.0000], device='cuda:0', grad_fn=<SqueezeBackward1>)\r\ntensor([0.0002], device='cuda:0', grad_fn=<SqueezeBackward1>)\r\n............\r\ntensor([0.0002], device='cuda:0', grad_fn=<SqueezeBackward1>)\r\ntensor([0.0022], device='cuda:0', grad_fn=<SqueezeBackward1>)\r\ntensor([0.9955], device='cuda:0', grad_fn=<SqueezeBackward1>)\r\ntensor([0.1370], device='cuda:0', grad_fn=<SqueezeBackward1>)\r\n```\r\n\r\nI wonder why I got a small output(different from batch size64) on batch size1. \r\nFor instance, I got the 0.0000 on batch size 1, but at batch size 64 0.0000 is nothing.\r\n\r\nAlso I tried to match tensor size using torch.cat at batch size 1.\r\nI changed from [1, 3, 64, 64] -> [64, 3, 64, 64], using same one image's tensor and torch.cat.\r\nBut I got different output value.\r\n\r\nIf you have any suggestions or point out then please let me know.", "url": "https://github.com/pytorch/examples/issues/514", "state": "closed", "labels": [], "created_at": "2019-02-20T09:07:47Z", "updated_at": "2019-02-20T10:55:48Z", "comments": 2, "user": "y-shirai-r" }, { "repo": "pytorch/examples", "number": 507, "title": "Is there a plan to make the imagenet example in this repository support `fp16`?", "body": "Thanks! :)", "url": "https://github.com/pytorch/examples/issues/507", "state": "closed", "labels": [], "created_at": "2019-02-14T19:38:58Z", "updated_at": "2022-03-10T03:12:22Z", "comments": 1, "user": "deepakn94" }, { "repo": "pytorch/pytorch", "number": 17111, "title": "where is the code for the implement of loss?", "body": "i want to find the implement of nn.BCEWithLogitsLoss, but it returns F functions , and i cannot find where F functions is , i want to modify the loss", "url": "https://github.com/pytorch/pytorch/issues/17111", "state": "closed", "labels": [], "created_at": "2019-02-14T12:29:06Z", "updated_at": "2019-02-14T15:29:47Z", "user": "Jasperty" }, { "repo": "pytorch/examples", "number": 503, "title": "I have some basic questions about training", "body": "Hi, I have some questions about training the model.\r\n\r\n1. \"neural_style.py train --dataset /Users/me/Downloads/examples-master/fast_neural_style/me0 --style-image /Users/met/Downloads/examples-master/fast_neural_style/images/content-images/amber.jpg --save-model-dir /Users/umit/Downloads/examples-master/fast_neural_style/me/11 --epochs 2 --cuda 0\" \r\n\r\nThis is saving a .model file to my computer. What is the model format? How can i convert to onnx and eventually to coreml? \r\n\r\n2. Training image set data is [80K/13GB] on here. 
What happens i use much less photos? Like around 100?", "url": "https://github.com/pytorch/examples/issues/503", "state": "open", "labels": [ "onnx" ], "created_at": "2019-02-03T08:40:43Z", "updated_at": "2022-03-10T03:14:41Z", "comments": 0, "user": "Umity" }, { "repo": "pytorch/examples", "number": 502, "title": "neural_style.py: error: unrecognized arguments: --export_onnx", "body": "I am getting this error when i use:\r\n\r\npython neural_style/neural_style.py train --dataset /Users/me/Downloads/examples-master/fast_neural_style/me0 --style-image /Users/me/Downloads/examples-master/fast_neural_style/images/content-images/amber.jpg --save-model-dir /Users/me/Downloads/examples-master/fast_neural_style/me2 --epochs 2 --cuda 0 --export_onnx /Users/umit/Downloads/examples-master/fast_neural_style/me2/onnx/pytorch_model.onnx\r\n\r\nhow can i fix this?\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/502", "state": "closed", "labels": [], "created_at": "2019-02-02T21:19:03Z", "updated_at": "2019-02-03T08:30:05Z", "comments": 0, "user": "Umity" }, { "repo": "pytorch/tutorials", "number": 431, "title": "cpp extension tutorial: not device agnostic?", "body": "Would the kerne calll in `lltm_cuda_forward` in the tutorial `tutorials/advanced_source/cpp_extension.rst` fail on multi gpu systems if the inputs are not on the default device, i.e., `device:0`?\r\n\r\nTo my understanding, some \"magic\" takes care of setting the right context if we add functionality do pytorch via custom kernels, [see here](https://github.com/pytorch/pytorch/tree/7d7855ea3124c16862ea7ed4758f4c7a804ca1ac/aten/src/ATen/native#device_guard).\r\nHowever, it seems like in the tutorial this machinery is not used. \r\nExplicit usage of `at::OptionalDeviceGuard` should resolve the issue (?) in the tutorial. \r\n\r\n", "url": "https://github.com/pytorch/tutorials/issues/431", "state": "open", "labels": [ "C++" ], "created_at": "2019-01-31T07:39:07Z", "updated_at": "2023-03-15T02:14:36Z", "comments": 1, "user": "c-hofer" }, { "repo": "pytorch/examples", "number": 501, "title": "why you set epoch to the sampler in the distributed example?", "body": "Hi,\r\n\r\nThanks for providing this helpful tutorial series. I am reading the part of training imagenet with distributed mode: \r\n\r\nAt [this line](https://github.com/pytorch/examples/blob/fe8abc3c810420df2856c6e668258f396b154cee/imagenet/main.py#L208), I do not understand the reason why shall I set epoch it the sampler. What is the difference between setting the epoch or not? Cannot I directly fetch data from the dataloader with this sampler as one args? ", "url": "https://github.com/pytorch/examples/issues/501", "state": "closed", "labels": [], "created_at": "2019-01-30T03:44:09Z", "updated_at": "2023-01-05T08:01:47Z", "comments": 4, "user": "CoinCheung" }, { "repo": "pytorch/pytorch", "number": 16439, "title": "What is the difference between F.cross_entropy() and F.nll_loss() ??", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). 
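For the question above about `F.cross_entropy()` versus `F.nll_loss()`, a small sketch of the relationship: `cross_entropy` applies `log_softmax` to raw logits and then calls `nll_loss`, while `nll_loss` expects log-probabilities as its input.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 5)            # raw scores for 4 samples, 5 classes
target = torch.tensor([0, 2, 4, 1])

ce = F.cross_entropy(logits, target)
nll = F.nll_loss(F.log_softmax(logits, dim=1), target)

print(torch.allclose(ce, nll))  # True: cross_entropy = log_softmax + nll_loss
```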
Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/16439", "state": "closed", "labels": [], "created_at": "2019-01-28T11:06:23Z", "updated_at": "2019-01-28T13:25:31Z", "user": "lakshmiumenon" }, { "repo": "pytorch/pytorch", "number": 16438, "title": "What is the difference between F.cross_entropy() and F.nll_loss() ??", "body": "## \u2753 Questions and Help\r\n\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n", "url": "https://github.com/pytorch/pytorch/issues/16438", "state": "closed", "labels": [], "created_at": "2019-01-28T11:06:21Z", "updated_at": "2019-01-28T13:45:23Z", "user": "lakshmiumenon" }, { "repo": "pytorch/examples", "number": 500, "title": "_pickle.UnpicklingError: invalid load key, '\\xff'.", "body": "May I know how to fix the following error\r\n```\r\n\r\nmahmood@orca:fast_neural_style$ python3.7 neural_style/neural_style.py eval --content-image images/content-images/amber.jpg --model images/style-images/mosaic.jpg --output-image a1.jpg --cuda 1\r\nTraceback (most recent call last):\r\n File \"neural_style/neural_style.py\", line 240, in <module>\r\n main()\r\n File \"neural_style/neural_style.py\", line 236, in main\r\n stylize(args)\r\n File \"neural_style/neural_style.py\", line 138, in stylize\r\n state_dict = torch.load(args.model)\r\n File \"/home/mahmood/anaconda3/lib/python3.7/site-packages/torch/serialization.py\", line 367, in load\r\n return _load(f, map_location, pickle_module)\r\n File \"/home/mahmood/anaconda3/lib/python3.7/site-packages/torch/serialization.py\", line 528, in _load\r\n magic_number = pickle_module.load(f)\r\n_pickle.UnpicklingError: invalid load key, '\\xff'.\r\n\r\n```", "url": "https://github.com/pytorch/examples/issues/500", "state": "open", "labels": [ "bug", "vision", "pickle" ], "created_at": "2019-01-26T15:37:28Z", "updated_at": "2022-03-10T05:20:57Z", "comments": 0, "user": "mahmoodn" }, { "repo": "pytorch/examples", "number": 499, "title": "Crash in mnist example with num_workers > 0", "body": "I'm getting a crash in the mnist example at the end of the 1st epoch when I run with any num_workers > 0 I'm running the python code in PyCharm debugger on a Ubuntu 16.04 system with PyTorch 1.0 with CUDA enabled. 
\r\n\r\nraceback (most recent call last):\r\n File \"/snap/pycharm-community/108/helpers/pydev/pydevd.py\", line 1741, in <module>\r\nTraceback (most recent call last):\r\n File \"/snap/pycharm-community/108/helpers/pydev/pydevd.py\", line 1741, in <module>\r\n main()\r\n File \"/snap/pycharm-community/108/helpers/pydev/pydevd.py\", line 1735, in main\r\n main()\r\n File \"/snap/pycharm-community/108/helpers/pydev/pydevd.py\", line 1735, in main\r\n globals = debugger.run(setup['file'], None, None, is_module)\r\n File \"/snap/pycharm-community/108/helpers/pydev/pydevd.py\", line 1135, in run\r\n globals = debugger.run(setup['file'], None, None, is_module)\r\n File \"/snap/pycharm-community/108/helpers/pydev/pydevd.py\", line 1135, in run\r\n pydev_imports.execfile(file, globals, locals) # execute the script\r\n File \"/snap/pycharm-community/108/helpers/pydev/_pydev_imps/_pydev_execfile.py\", line 18, in execfile\r\npydev_imports.execfile(file, globals, locals) # execute the script\r\n File \"/snap/pycharm-community/108/helpers/pydev/_pydev_imps/_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n\r\n File \"/home/ankur/dev/benchmark/mnist_main.py\", line 119, in <module>\r\n File \"/home/ankur/dev/benchmark/mnist_main.py\", line 119, in <module>\r\n main()main()\r\n\r\n File \"/home/ankur/dev/benchmark/mnist_main.py\", line 112, in main\r\n File \"/home/ankur/dev/benchmark/mnist_main.py\", line 112, in main\r\n test(args, model, device, test_loader)test(args, model, device, test_loader)\r\n\r\n File \"/home/ankur/dev/benchmark/mnist_main.py\", line 49, in test\r\n File \"/home/ankur/dev/benchmark/mnist_main.py\", line 49, in test\r\n for data, target in test_loader:for data, target in test_loader:\r\n\r\n File \"/home/ankur/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py\", line 819, in __iter__\r\n File \"/home/ankur/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py\", line 631, in __next__\r\n idx, batch = self._get_batch()\r\n File \"/home/ankur/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py\", line 601, in _get_batch\r\n return _DataLoaderIter(self)\r\n File \"/home/ankur/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py\", line 560, in __init__\r\n return self.data_queue.get(timeout=MP_STATUS_CHECK_INTERVAL)\r\n File \"/home/ankur/miniconda3/lib/python3.7/queue.py\", line 179, in get\r\n w.start()\r\n File \"/home/ankur/miniconda3/lib/python3.7/multiprocessing/process.py\", line 112, in start\r\n self.not_empty.wait(remaining)\r\n File \"/home/ankur/miniconda3/lib/python3.7/threading.py\", line 300, in wait\r\n self._popen = self._Popen(self)\r\n File \"/home/ankur/miniconda3/lib/python3.7/multiprocessing/context.py\", line 223, in _Popen\r\n gotit = waiter.acquire(True, timeout)\r\nreturn _default_context.get_context().Process._Popen(process_obj)\r\n File \"/home/ankur/miniconda3/lib/python3.7/multiprocessing/context.py\", line 277, in _Popen\r\nKeyboardInterrupt\r\n return Popen(process_obj)\r\n File \"/home/ankur/miniconda3/lib/python3.7/multiprocessing/popen_fork.py\", line 20, in __init__\r\n self._launch(process_obj)\r\n File \"/home/ankur/miniconda3/lib/python3.7/multiprocessing/popen_fork.py\", line 70, in _launch\r\n self.pid = os.fork()\r\n File \"/snap/pycharm-community/108/helpers/pydev/_pydev_bundle/pydev_monkey.py\", line 496, in new_fork\r\n", "url": 
"https://github.com/pytorch/examples/issues/499", "state": "open", "labels": [ "help wanted" ], "created_at": "2019-01-25T22:24:02Z", "updated_at": "2022-03-10T06:03:51Z", "comments": 0, "user": "ankur6ue" }, { "repo": "pytorch/tutorials", "number": 411, "title": "ValueError low>=high in RandomCrop", "body": "Hello everyone,\r\n\r\nFirst off, thanks for such detailed PyTorch tutorials! Recently, I was going through the data loading and processing tutorial [here](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html). Maybe there's some misunderstanding from my side but in `beginner_source/data_loading_tutorial.py`, for class `RandomCrop`, when the value of `output_size` is greater than original image size, it throws a `ValueError: low >= high` as `h < new_h` and `w < new_w`.\r\n\r\nSo, can I get a confirmation as to whether or not this is a bug? If yes, I would be happy to fix it. \r\n\r\n(**Note**: To reproduce the error try changing value of `crop` [here](https://github.com/pytorch/tutorials/blob/master/beginner_source/data_loading_tutorial.py#L304) to something greater such as `crop = RandomCrop(224)`)\r\n\r\nPing @chsasank \r\n\r\nThanks,\r\nGaurav", "url": "https://github.com/pytorch/tutorials/issues/411", "state": "closed", "labels": [], "created_at": "2019-01-11T23:54:39Z", "updated_at": "2019-09-12T04:24:36Z", "comments": 1, "user": "Demfier" }, { "repo": "pytorch/tutorials", "number": 410, "title": "TypeError : filename should be a str in beginner/nn_tutorial.py", "body": "I got a `TypeError` error in `beginner_source/nn_tutorial.py` while building tutorials with Python 3.5 on Ubuntu 16.04.\r\n\r\nIt seems like the type of parameter passed in `gzip.open()` in [\"beginner_source/nn_tutorial.py(L64)\"](https://github.com/pytorch/tutorials/blob/master/beginner_source/nn_tutorial.py#L64) should be converted.\r\n\r\nThe error message is like below:\r\n```\r\nWARNING: /home/ubuntu/tutorials/beginner_source/nn_tutorial.py failed to execute correctly: Traceback (most recent call last):\r\n File \"/home/ubuntu/tutorials/beginner_source/nn_tutorial.py\", line 64, in <module>\r\n with gzip.open(PATH / FILENAME, \"rb\") as f:\r\n File \"/usr/lib/python3.5/gzip.py\", line 57, in open\r\n raise TypeError(\"filename must be a str or bytes object, or a file\")\r\nTypeError: filename must be a str or bytes object, or a file\r\n```\r\n\r\nI think using `(PATH / FILENAME).as_posix()` more suitable.\r\n\r\nIf this error is visible to everyone, can I fix it?\r\n", "url": "https://github.com/pytorch/tutorials/issues/410", "state": "closed", "labels": [], "created_at": "2019-01-11T06:37:53Z", "updated_at": "2019-02-08T20:26:28Z", "comments": 1, "user": "9bow" }, { "repo": "pytorch/examples", "number": 483, "title": "different node has different parameters", "body": "I have tried it, but if I found that each model in all node has different gradients,so it results to different model among GPUs, At last I do like this:\r\n #something to do###############\r\n loss.backward()\r\n self.average_gradients()\r\n self.optimizer.step():\r\n #other thing to do###############\r\n\r\n def average_gradients(self):\r\n world_size = distributed.get_world_size()\r\n\r\n for p in self.net.parameters():\r\n distributed.all_reduce(p.grad.data, op=distributed.reduce_op.SUM)\r\n p.grad.data /= float(world_size)\r\n\r\nIt work normally,but I do not know whether it is right, cause official of pyTorch do not mention it.\r\ncould you tell me is it right? 
thank you!!!\r\n\r\n\r\nAnd another question: I found I can not run on 2 or more machines, I do not know how to configure it, should I make a configur so that all machines in my group can access each other without password by ssh?", "url": "https://github.com/pytorch/examples/issues/483", "state": "closed", "labels": [], "created_at": "2018-12-24T11:54:04Z", "updated_at": "2018-12-30T03:53:49Z", "comments": 1, "user": "YihengJiang" }, { "repo": "pytorch/examples", "number": 482, "title": "Encountered IsADirectoryError at neural style eval", "body": "Hello, I am new in python and machine learning related field.\r\n\r\nI caught the following error when trying to test a style from the examples:\r\n\r\n```\r\n /cygdrive/d/Downloaded Programs/git/examples/fast_neural_style\r\n$ python neural_style/neural_style.py eval --content-image <images/content-images/amber.jpg> --model <saved_models/candy.pth> --output-image <images/output-images/> --content-scale 1 --cuda 1\r\nFatal Python error: init_sys_streams: can't initialize sys standard streams\r\nIsADirectoryError: [Errno 21] Is a directory: 0\r\n```\r\nHow can I fix this error?\r\nThanks for any suggestions!\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/482", "state": "closed", "labels": [], "created_at": "2018-12-24T09:19:06Z", "updated_at": "2022-03-10T05:47:27Z", "comments": 1, "user": "HosinLau" }, { "repo": "pytorch/examples", "number": 481, "title": "the imagenet main when is use multi gpu(not set gpu args) then the input will not call input.cuda() why?", "body": "![image](https://user-images.githubusercontent.com/6283983/50394800-c734e000-079a-11e9-89cd-964cb751a227.png)\r\nif i do't set args.gpu, the only target.cuda() call, why do this kind, but the code run success", "url": "https://github.com/pytorch/examples/issues/481", "state": "closed", "labels": [], "created_at": "2018-12-24T08:42:29Z", "updated_at": "2018-12-30T03:45:44Z", "comments": 1, "user": "mmxuan18" }, { "repo": "pytorch/tutorials", "number": 400, "title": "C++ Frontend Tutorial with GPU Support", "body": "I am following [this tutorial](https://pytorch.org/cppdocs/installing.html) on using PyTorch with C++ frontend. However, I would like to have a CUDA support, not a CPU only. I have also the `torch` package installed using `conda` but I guess it is not enough to compile C++ sources because I am getting the following error:\r\n```\r\n$ cmake -DCMAKE_PREFIX_PATH=/usr/lib/libtorch ..\r\nCUDA_TOOLKIT_ROOT_DIR not found or specified\r\n-- Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY) (Required is at least version \"7.0\")\r\nCMake Warning at /usr/lib/libtorch/share/cmake/Caffe2/public/cuda.cmake:15 (message):\r\n Caffe2: CUDA cannot be found. Depending on whether you are building Caffe2\r\n or a Caffe2 dependent library, the next warning / error will give you more\r\n info.\r\n```\r\nAs I can see, I need a CUDA toolkit installed, and the env variable pointing to the installation. Could you please create a version of the tutorial that would explain how to better handle this? 
I would like to have _both_ Python package and to build programmes from C++ sources.\r\n\r\n---\r\n_I am not sure if I've submitted the issue into a right repository so let me know if this should be moved somewhere else._", "url": "https://github.com/pytorch/tutorials/issues/400", "state": "closed", "labels": [], "created_at": "2018-12-24T07:50:37Z", "updated_at": "2021-06-16T20:45:56Z", "comments": 0, "user": "i-zaitsev" }, { "repo": "pytorch/examples", "number": 479, "title": "Share dataloader in multi node multi gpus training with multiprocessing-distributed", "body": "In the example of [imagenet](https://github.com/pytorch/examples/blob/master/imagenet/main.py), `ngpus` process is created, so if I am training on 4 nodes with 4 gpus on each, there would be 16 processes in total. \r\nIs there any way I could share the dataloader for the processes on the same node? Since I implemented a special dataloader with cost a lot of memory. \r\nMany thanks. ", "url": "https://github.com/pytorch/examples/issues/479", "state": "closed", "labels": [], "created_at": "2018-12-19T13:41:36Z", "updated_at": "2020-09-11T12:57:16Z", "comments": 0, "user": "xvjiarui" }, { "repo": "pytorch/examples", "number": 478, "title": "in imagenet example why the val need to first resize to 256 and then crop 224, if the input is 299 how to set the resize input?", "body": "![image](https://user-images.githubusercontent.com/6283983/50218419-da286880-03c6-11e9-84a7-fc6bb57a61b1.png)\r\nwhen the model is inceptionv3 the input size is 299, while others is 224. so the resize parameter counld set to what? and why in val stage there need to first resize to a bigger size then crop, some example directly use resize(224) ", "url": "https://github.com/pytorch/examples/issues/478", "state": "closed", "labels": [], "created_at": "2018-12-19T11:48:52Z", "updated_at": "2019-03-27T18:01:58Z", "comments": 1, "user": "mmxuan18" }, { "repo": "pytorch/examples", "number": 476, "title": "--resume fails after 1 epoch with Pytorch 1.0 release", "body": "Using --resume fails after 1 epoch with Pytorch 1.0 release with error below. I tried this with resnet50 and resnet18\r\n```\r\nTraceback (most recent call last):\r\n File \"main.py\", line 398, in <module>\r\n main()\r\n File \"main.py\", line 110, in main\r\n mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))\r\n File \"/home/tools/anaconda3-5.3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 167, in spawn\r\n while not spawn_context.join():\r\n File \"/home/tools/anaconda3-5.3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 114, in join\r\n raise Exception(msg)\r\nException:\r\n\r\n-- Process 1 terminated with the following error:\r\nTraceback (most recent call last):\r\n File \"/home/tools/anaconda3-5.3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 19, in _wrap\r\n fn(i, *args)\r\n File \"/space8T/mdflickner/pytorch/examples/imagenet/main.py\", line 241, in main_worker\r\n is_best = acc1 > best_acc1\r\nRuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorMathCompareT.cu:15\r\n```", "url": "https://github.com/pytorch/examples/issues/476", "state": "open", "labels": [ "help wanted", "vision" ], "created_at": "2018-12-17T17:04:15Z", "updated_at": "2022-03-10T06:04:01Z", "comments": 1, "user": "mdflickner" }, { "repo": "pytorch/examples", "number": 473, "title": "the sum of doc_topics is not equal 1", "body": "Hi, I have a question. 
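For the validation-transform question above (examples#478), a sketch of one common convention: resize so the short side is slightly larger than the crop (256 to 224 keeps a 0.875 crop ratio), and for a 299-input model such as Inception v3 the analogous pair is roughly 342 to 299. The exact resize value is a judgment call, not something fixed by the example script.

```python
import torchvision.transforms as transforms

# 224-input models (ResNet, VGG, ...): Resize(256) + CenterCrop(224).
val_224 = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
])

# 299-input models (Inception v3): keep roughly the same 0.875 crop ratio.
val_299 = transforms.Compose([
    transforms.Resize(342),   # ~299 / 0.875
    transforms.CenterCrop(299),
])
```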
when I run lda.py, I find the sum of `doc_topics` is not equal 1. In fact, they are decreasing in the training process. Is there something wrong?", "url": "https://github.com/pytorch/examples/issues/473", "state": "closed", "labels": [], "created_at": "2018-12-14T14:50:43Z", "updated_at": "2018-12-19T22:17:36Z", "comments": 2, "user": "dongfeng951" }, { "repo": "pytorch/examples", "number": 472, "title": "How can I find a reference to understand the meaning of pro.sample?", "body": "what is the meaning of `pyro.sample( ........, infer={\"enumerate\": \"parallel\"})`? How can I find a reference to understand the meaning of `pro.sample`? I cannot find it.\r\nThanks!", "url": "https://github.com/pytorch/examples/issues/472", "state": "closed", "labels": [], "created_at": "2018-12-14T13:33:42Z", "updated_at": "2018-12-15T04:02:46Z", "comments": 1, "user": "dongfeng951" }, { "repo": "pytorch/examples", "number": 471, "title": "How to invoke GPU?", "body": "Hi, when I run the example codes such as vae.py, the GPU cannot be invoked automatically and therefore the training is very slow.\r\nBut the pytorch can invoke GPU automatically.\r\nSo how to invoke GPU? \r\nThank you!", "url": "https://github.com/pytorch/examples/issues/471", "state": "closed", "labels": [], "created_at": "2018-12-14T08:14:46Z", "updated_at": "2018-12-15T04:18:49Z", "comments": 2, "user": "dongfeng951" }, { "repo": "pytorch/examples", "number": 470, "title": "The Volatile GPU-Util is always 0, in examples/imagenet", "body": "I run the example of imagenet in https://github.com/pytorch/examples/tree/master/imagenet, althougt I can run it successfully, but it is slow, and the Volatile GPU-Util is always 0 with command 'nvidia-smi'\r\n```\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 390.87 Driver Version: 390.87 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n|===============================+======================+======================|\r\n| 0 GeForce GTX 108... 
Off | 00000000:01:00.0 On | N/A |\r\n| 31% 58C P2 70W / 250W | 9584MiB / 11170MiB | 0% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n \r\n+-----------------------------------------------------------------------------+\r\n| Processes: GPU Memory |\r\n| GPU PID Type Process name Usage |\r\n|=============================================================================|\r\n| 0 947 G /usr/lib/xorg/Xorg 285MiB |\r\n| 0 1752 G compiz 154MiB |\r\n| 0 1930 G fcitx-qimpanel 9MiB |\r\n| 0 4690 G ...quest-channel-token=4115043597718524916 72MiB |\r\n| 0 26519 C python 9057MiB |\r\n+-----------------------------------------------------------------------------+\r\n\r\n```\r\n", "url": "https://github.com/pytorch/examples/issues/470", "state": "open", "labels": [ "question" ], "created_at": "2018-12-13T06:49:15Z", "updated_at": "2022-03-10T16:46:35Z", "comments": 10, "user": "wangxianrui" }, { "repo": "pytorch/pytorch", "number": 14889, "title": "what is the algorithm theory of torch.nn.AdaptiveMaxPool2d?", "body": "what is the algorithm theory of torch.nn.AdaptiveMaxPool2d?\r\n Is there any papers about torch.nn.AdaptiveMaxPool2d?\r\n And how to find the c++ of implementing torch.nn.AdaptiveMaxPool2d in pytorch?", "url": "https://github.com/pytorch/pytorch/issues/14889", "state": "closed", "labels": [], "created_at": "2018-12-07T10:16:15Z", "updated_at": "2018-12-07T15:46:19Z", "user": "zsf23" }, { "repo": "pytorch/pytorch", "number": 14850, "title": "Document what is C10", "body": "C10 seems to have an increasingly important role throughout the PyTorch code base (e.g., see #6325 or count the number of open issues containing \"c10\") yet I was unable to find a high-level description about it. There are only \"rumors\" to be found about C10, see for example [this post](https://discuss.pytorch.org/t/pytorch-and-caffe2-convergence/21713/4) at pytorch.org:\r\n> I read on github, that there is a new backend called C10 in progress which combines features and backends from ATen and Caffe2. This backend should be a more generic one which means that adding new tensor types and similar stuff will be easier (the actual discussion was about introducing complex tensors).\r\n\r\nSomeone else on [Reddit](https://www.reddit.com/r/MachineLearning/comments/8xurkp/n_tensorflow_190_is_out/e27ewhz/):\r\n> I'd never heard of C10 until you posted this, so caveat emptor, but from the few Google hits available it seems that the major motivations for C10 include:\r\n>\r\n> * Common Tensor ops for PyTorch and Caffe2 (only PyTorch uses ATen)\r\n> * Pluggable tensor ops/backend (maybe easing future AMD, TPU, etc support?)\r\n>\r\n> There's also talk of C10 helping integration of Complex tensor support for PyTorch, which helps give an idea of the level of abstraction they are shooting for.\r\n\r\nAt the minimum, please add a README to the pytorch/c10 directory briefly describing the project.", "url": "https://github.com/pytorch/pytorch/issues/14850", "state": "closed", "labels": [ "module: docs", "triaged" ], "created_at": "2018-12-06T16:00:51Z", "updated_at": "2024-03-13T05:40:46Z", "user": "christoph-conrads" }, { "repo": "pytorch/examples", "number": 456, "title": "How can I get the name of each image in the whole imagenet training process?", "body": "I want to obtain the name and the true label of each image, how can I modify the code to do that ? 
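For the question just above about recovering the file name of each ImageNet sample (examples#456), a minimal sketch: subclass `ImageFolder` so that `__getitem__` also returns the path stored in `self.samples`. The class name here is only illustrative.

```python
import torchvision.datasets as datasets

class ImageFolderWithPaths(datasets.ImageFolder):
    """ImageFolder that additionally returns the file path of each sample."""

    def __getitem__(self, index):
        image, label = super().__getitem__(index)
        path, _ = self.samples[index]   # samples holds (path, class_index) pairs
        return image, label, path

# Drop-in replacement for datasets.ImageFolder(traindir, transform);
# the DataLoader will then yield (images, labels, paths) batches.
```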
I find the data_loader just return the input tensor and the label without the image name.", "url": "https://github.com/pytorch/examples/issues/456", "state": "closed", "labels": [], "created_at": "2018-11-30T06:40:48Z", "updated_at": "2018-11-30T06:54:39Z", "comments": 1, "user": "lith0613" }, { "repo": "pytorch/pytorch", "number": 14460, "title": "C++ API use model.pt in GPU . When I use lstm in model, there is what(): Expected object of backend CPU but got backend CUDA for argument #2 'mat2' (checked_tensor_unwrap at /pytorch/aten/src/ATen/Utils.h:70)", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nC++ code\uff1a\r\n\r\nmodule->to(at::kCUDA);\r\nauto gpu_tensor = img_var.to(at::kCUDA);\r\nvector<torch::jit::IValue> inputs;\r\ninputs.push_back(gpu_tensor);\r\nauto out_tensor = module->forward(inputs).toTensor();\r\n\r\nmodel\uff1a\r\n\r\n\t\t# LSTM\r\n\r\n\t\tself.lstm1 = nn.LSTM(input_size=64, hidden_size=64, num_layers=2, batch_first=True)\r\n\r\n\t\tself.lstm2 = nn.LSTM(input_size=64, hidden_size=64, num_layers=2, batch_first=True)\r\n\r\n\t\t#self.lstm2 = nn.Sequential(*lstm2)\r\n\r\n\t\tself.lstm3 = nn.LSTM(input_size=64, hidden_size=64, num_layers=2, batch_first=True)\r\n\r\n\r\nand\r\n\r\n\t\tim6_1, hidden1 = self.lstm1(img5_1)\r\n\t\t#self.encode5(a)\r\n\t\tim6_2, hidden2 = self.lstm2(img5_2, hidden1)\r\n\t\t#self.encode5(a)\r\n\t\tim6_3, hidden3 = self.lstm3(img5_3, hidden2)\r\n\r\n error\uff1a\r\n![image](https://user-images.githubusercontent.com/30424546/49136932-10be1680-f326-11e8-90ca-7bd061b8f8c2.png)\r\n\r\n\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\n\r\n - PyTorch Version ( 1.0):\r\n - OS ( CentOS):\r\n\r\n - Python version:2.7\r\n - CUDA/cuDNN version:9.0\r\n - GCC version:5.4.0\r\n \r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n", "url": "https://github.com/pytorch/pytorch/issues/14460", "state": "closed", "labels": [], "created_at": "2018-11-28T07:58:33Z", "updated_at": "2021-06-01T21:26:27Z", "user": "joy-yjl" }, { "repo": "pytorch/pytorch", "number": 14456, "title": "What is wrong with my model? 
It slows many times after switching from version 0.4.1 to 1.0", "body": "This is the definition of my model: \r\n```python\r\nimport torchvision\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\n\r\n\r\n\r\nclass Model(nn.Module):\r\n def __init__(self, in_dim, out_dim, *args, **kwargs):\r\n super(Model, self).__init__(*args, **kwargs)\r\n vgg16 = torchvision.models.vgg16()\r\n\r\n layers = []\r\n layers.append(nn.Conv2d(in_dim, 64, kernel_size = 3, stride = 1, padding = 1))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.Conv2d(64, 64, kernel_size = 3, stride = 1, padding = 1))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.MaxPool2d(3, stride = 2, padding = 1))\r\n\r\n layers.append(nn.Conv2d(64, 128, kernel_size = 3, stride = 1, padding = 1))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.Conv2d(128, 128, kernel_size = 3, stride = 1, padding = 1))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.MaxPool2d(3, stride = 2, padding = 1))\r\n\r\n layers.append(nn.Conv2d(128, 256, kernel_size = 3, stride = 1, padding = 1))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.Conv2d(256, 256, kernel_size = 3, stride = 1, padding = 1))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.Conv2d(256, 256, kernel_size = 3, stride = 1, padding = 1))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.MaxPool2d(3, stride = 2, padding = 1))\r\n\r\n layers.append(nn.Conv2d(256, 512, kernel_size = 3, stride = 1, padding = 1))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.Conv2d(512, 512, kernel_size = 3, stride = 1, padding = 1))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.Conv2d(512, 512, kernel_size = 3, stride = 1, padding = 1))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.MaxPool2d(3, stride = 1, padding = 1))\r\n\r\n layers.append(nn.Conv2d(512,\r\n 512,\r\n kernel_size = 3,\r\n stride = 1,\r\n padding = 2,\r\n dilation = 2))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.Conv2d(512,\r\n 512,\r\n kernel_size = 3,\r\n stride = 1,\r\n padding = 2,\r\n dilation = 2))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.Conv2d(512,\r\n 512,\r\n kernel_size = 3,\r\n stride = 1,\r\n padding = 2,\r\n dilation = 2))\r\n layers.append(nn.ReLU(inplace = True))\r\n layers.append(nn.MaxPool2d(3, stride = 1, padding = 1))\r\n self.features = nn.Sequential(*layers)\r\n\r\n classifier = []\r\n classifier.append(nn.AvgPool2d(3, stride = 1, padding = 1))\r\n classifier.append(nn.Conv2d(512,\r\n 1024,\r\n kernel_size = 3,\r\n stride = 1,\r\n padding = 12,\r\n dilation = 12))\r\n classifier.append(nn.ReLU(inplace = True))\r\n classifier.append(nn.Conv2d(1024, 1024, kernel_size = 1, stride = 1, padding = 0))\r\n classifier.append(nn.ReLU(inplace = True))\r\n classifier.append(nn.Dropout(p = 0.5))\r\n classifier.append(nn.Conv2d(1024, out_dim, kernel_size = 1))\r\n self.classifier = nn.Sequential(*classifier)\r\n\r\n self.init_weights()\r\n\r\n\r\n def forward(self, x):\r\n im = x\r\n x = self.features(x)\r\n x = self.classifier(x)\r\n return x\r\n\r\n def init_weights(self):\r\n vgg = torchvision.models.vgg16(pretrained = True)\r\n state_vgg = vgg.features.state_dict()\r\n self.features.load_state_dict(state_vgg)\r\n\r\n for ly in self.classifier.children():\r\n if isinstance(ly, nn.Conv2d):\r\n nn.init.kaiming_normal_(ly.weight, a=1)\r\n nn.init.constant_(ly.bias, 0)\r\n```\r\nAnd this is my test script: 
\r\n```python\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nimport time\r\nfrom model import Model\r\n\r\n\r\nif __name__ == \"__main__\":\r\n\r\n net = Model(3, 21)\r\n net.train()\r\n net.cuda()\r\n net = nn.DataParallel(net)\r\n Loss = nn.CrossEntropyLoss(ignore_index = 255)\r\n Loss.cuda()\r\n optim = torch.optim.SGD(net.parameters(), lr = 1e-3, momentum = 0.9, weight_decay = 5e-4)\r\n\r\n st = time.time()\r\n scale = [0.5, 0.75, 1]\r\n loss_avg = []\r\n for i in range(10000):\r\n in_ten = torch.randn(70, 3, 224, 224)\r\n label = torch.randint(0, 21, [70, 1, 224, 224])\r\n in_ten = in_ten.cuda()\r\n label = label.cuda()\r\n label = torch.tensor(label).long().cuda()\r\n optim.zero_grad()\r\n H, W = in_ten.size()[2:]\r\n for sub_i, s in enumerate(scale):\r\n print(time.time() - st)\r\n h, w = int(H * s), int(W * s)\r\n in_ten_s = F.interpolate(in_ten, (h, w), mode = 'bilinear')\r\n out = net(in_ten_s)\r\n out = F.interpolate(out, [H, W], mode = 'bilinear')\r\n ", "url": "https://github.com/pytorch/pytorch/issues/14456", "state": "closed", "labels": [ "module: performance" ], "created_at": "2018-11-28T06:44:26Z", "updated_at": "2019-06-09T02:44:25Z", "user": "CoinCheung" }, { "repo": "pytorch/examples", "number": 453, "title": "Is the loss of the first word covered during the language model evaluation?", "body": "In the language model example, it seems that during the evaluation, the code starts from computing the loss of the second word. Thus, skipping the loss of the first word. \r\nhttps://github.com/pytorch/examples/blob/537f6971872b839b36983ff40dafe688276fe6c3/word_language_model/main.py#L136\r\nhttps://github.com/pytorch/examples/blob/537f6971872b839b36983ff40dafe688276fe6c3/word_language_model/main.py#L121-L125\r\n\r\nFurthermore, the evaluation data is divided into 10 batches, hence, the losses of 10 words are skipped.\r\nAm I right or I did miss something?\r\nhttps://github.com/pytorch/examples/blob/537f6971872b839b36983ff40dafe688276fe6c3/word_language_model/main.py#L85-L88", "url": "https://github.com/pytorch/examples/issues/453", "state": "open", "labels": [ "good first issue", "nlp" ], "created_at": "2018-11-26T10:28:03Z", "updated_at": "2022-03-10T06:08:08Z", "comments": 0, "user": "khassanoff" }, { "repo": "pytorch/examples", "number": 450, "title": "How to use trained model to classifier pictures?", "body": "I have trained a best model by imagenet,but code repo has given does not have test option,so how can I use the model have trained to classifier pictures with labels?", "url": "https://github.com/pytorch/examples/issues/450", "state": "open", "labels": [ "help wanted", "vision" ], "created_at": "2018-11-25T03:27:13Z", "updated_at": "2022-03-10T06:07:49Z", "comments": 2, "user": "mohhao" }, { "repo": "pytorch/examples", "number": 448, "title": "which pytorch version can run the fast rcnn demo?", "body": "", "url": "https://github.com/pytorch/examples/issues/448", "state": "closed", "labels": [], "created_at": "2018-11-21T10:42:41Z", "updated_at": "2022-03-10T00:26:13Z", "comments": 2, "user": "Bigwode" }, { "repo": "pytorch/examples", "number": 443, "title": "DCGAN: Generate more number of images", "body": "Is there a way we can generate an arbitrary number of images? Right now the fake sample is outputting to 64 images with default settings. My goal is to get 250 fake images. 
Is this possible?", "url": "https://github.com/pytorch/examples/issues/443", "state": "closed", "labels": [], "created_at": "2018-11-16T04:49:33Z", "updated_at": "2018-11-16T04:50:14Z", "comments": 1, "user": "MonojitBanerjee" }, { "repo": "pytorch/pytorch", "number": 13460, "title": "What is the net *.pb file encoding?", "body": "Hi there,\r\n\r\nI am running the following code:\r\n\r\n```python\r\nwith open(EXPORT_PATH + \"mnist_init_net.pb\", encoding=\"utf-8\") as f:\r\n init_net = f.read()\r\n```\r\n\r\nI get the following error:\r\n```python\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xf1 in position 24: invalid continuation byte\r\n```\r\n\r\nIt seems like the simple thing to do is to change the encoding type from not being utf-8 (which open() defaults to it seems in this case). What encoding should I use?\r\n\r\nmnist_init_net.pb file is generated via:\r\n\r\n```python\r\ninit_net, predict_net = c2.onnx_graph_to_caffe2_net(model)\r\nwith open(EXPORT_PATH + \"mnist_init_net.pb\", \"wb\") as f:\r\n f.write(init_net.SerializeToString())\r\n```\r\n\r\nIs ISO-8859-1 correct?\r\n\r\n---- \r\n```\r\npython -c \"import torch; print(torch.__version__)\"\r\n1.0.0.dev20181029\r\n\r\npython -c \"import onnx; print(onnx.__version__)\"\r\n1.3.0\r\n\r\nOS: OS X 10.13\r\n\r\npython --version\r\nPython 3.6.6 :: Anaconda custom (64-bit)\r\n```", "url": "https://github.com/pytorch/pytorch/issues/13460", "state": "closed", "labels": [], "created_at": "2018-11-01T18:26:20Z", "updated_at": "2018-11-07T16:07:03Z", "user": "Suhail" }, { "repo": "pytorch/examples", "number": 431, "title": "How to run distributed training on multiple Node using ImageNet using ResNet model ", "body": "The script mentioned in https://github.com/pytorch/examples/tree/master/imagenet does provides good guideline on single node training however it doesn't have good documentation on Distributed training on multiple Node.\r\n\r\nI tried to use two machines with 8 gpus with below command\r\n\r\nMachine-1 script\r\n```\r\nHOST_PORT=\"tcp://Machine-1-ip:13333\"\r\n\r\nNODE=0\r\nRANKS_PER_NODE=8\r\n\r\n\r\nfor i in $(seq 0 7); do\r\n LOCAL_RANK=$i\r\n DISTRIBUTED_RANK=$((RANKS_PER_NODE * NODE + LOCAL_RANK))\r\n NCCL_DEBUG=INFO NCCL_MIN_NRINGS=5 python /home/ubuntu/examples/imagenet/main.py \\\r\n --a resnet18 \\\r\n /home/ubuntu/mini_imagenet \\\r\n --dist-url $HOST_PORT \\\r\n --gpu $DISTRIBUTED_RANK \\\r\n --dist-backend nccl \\\r\n --world-size 16 &\r\n PIDS[$LOCAL_RANK]=$!\r\ndone\r\n```\r\n\r\nOn machine-2\r\n\r\n```\r\nHOST_PORT=\"tcp://Machine-1-ip:13333\"\r\n\r\nNODE=1\r\nRANKS_PER_NODE=8\r\n\r\n\r\nfor i in $(seq 0 7); do\r\n LOCAL_RANK=$i\r\n DISTRIBUTED_RANK=$((RANKS_PER_NODE * NODE + LOCAL_RANK))\r\n NCCL_DEBUG=INFO NCCL_MIN_NRINGS=5 python /home/ubuntu/examples/imagenet/main.py \\\r\n --a resnet18 \\\r\n /home/ubuntu/mini_imagenet \\\r\n --dist-url $HOST_PORT \\\r\n --gpu $DISTRIBUTED_RANK \\\r\n --dist-backend nccl \\\r\n --world-size 16 &\r\n PIDS[$LOCAL_RANK]=$!\r\ndone\r\n```\r\n\r\nHowever it fails with below **error** \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/examples/imagenet/main.py\", line 347, in <module>\r\n main()\r\n File \"/home/ubuntu/examples/imagenet/main.py\", line 96, in main\r\n world_size=args.world_size)\r\n File \"/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/distributed/__init__.py\", line 94, in init_process_group\r\n group_name, rank)\r\nRuntimeError: the MPI backend is not available; try to recompile the THD package with MPI 
support at /opt/conda/conda-bld/pytorch_1532579245307/work/torch/lib/THD/process_group/General.cpp:17\r\n\r\n```\r\n", "url": "https://github.com/pytorch/examples/issues/431", "state": "open", "labels": [ "distributed" ], "created_at": "2018-10-31T06:11:37Z", "updated_at": "2022-06-15T10:40:29Z", "comments": 12, "user": "goswamig" }, { "repo": "pytorch/examples", "number": 430, "title": "error in the backward pass while using the pytorch roi pooling", "body": "I am using [link](https://github.com/pytorch/examples/blob/d8d378c31d2766009db400ac03f41dd837a56c2a/fast_rcnn/roi_pooling.py#L38-L53) but i get error while doing the backward pass \r\n```\r\n\r\n File \"/home/alireza/anaconda3/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py\", line 668, in runfile\r\n execfile(filename, namespace)\r\n\r\n File \"/home/alireza/anaconda3/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py\", line 108, in execfile\r\n exec(compile(f.read(), filename, 'exec'), namespace)\r\n\r\n File \"/home/alireza/RFCN/trainval_net.py\", line 357, in <module>\r\n loss.backward()\r\n\r\n File \"/home/alireza/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py\", line 167, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)\r\n\r\n File \"/home/alireza/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py\", line 99, in backward\r\n variables, grad_variables, retain_graph)\r\n\r\n File \"/home/alireza/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py\", line 195, in backward\r\n raise NotImplementedError\r\n\r\nNotImplementedError\r\n```\r\nany suggestion what should i do?\r\n\r\nin the example of the code [link](https://github.com/pytorch/examples/blob/d8d378c31d2766009db400ac03f41dd837a56c2a/fast_rcnn/roi_pooling.py#L38-L53) mentioned that for backward i should use \r\n\r\n`out.backward(out.data.clone().uniform_())`\r\nbut im not sure where should i use that?\r\nim using the forward pass inside another function as below:\r\n```\r\nclass PSRoIPoolingFunction(Function):\r\n def __init__(self, pooled_height, pooled_width, spatial_scale, group_size, output_dim):\r\n self.pooled_width = int(pooled_width)\r\n self.pooled_height = int(pooled_height)\r\n self.spatial_scale = float(spatial_scale)\r\n self.group_size = int(group_size)\r\n self.output_dim = int(output_dim)\r\n self.output = None\r\n self.mappingchannel = None\r\n self.rois = None\r\n self.feature_size = None\r\n\r\n def forward(self, features, rois):\r\n batch_size, num_channels, data_height, data_width = features.size()\r\n num_rois = rois.size()[0]\r\n output = torch.zeros(num_rois, self.output_dim, self.pooled_height, self.pooled_width)\r\n # mappingchannel = torch.IntTensor(num_rois, self.output_dim, self.pooled_height, self.pooled_width).zero_()\r\n\r\n # ROI Pooling\r\n out2 = roi_pooling(features, rois, size=(self.pooled_height,self.pooled_width),\r\n spatial_scale = self.spatial_scale)\r\n \r\n # AVerage pooling for Position Sensitive\r\n \r\n output = Variable(output.cuda())\r\n\r\n chan= 0\r\n for i in range(0,out2.size(1),self.pooled_height*self.pooled_width):\r\n output[:,chan,:,:] = torch.mean(out2[:,i:i+self.pooled_height*self.pooled_width,:,:],1,keepdim=True)\r\n chan += 1\r\n\r\n\r\n return output.data\r\n```\r\n\r\nshould i use the backward pass somewhere?\r\n\r\nHow I should use it? 
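The `NotImplementedError` here comes from subclassing `torch.autograd.Function` without defining `backward` (and returning `output.data` additionally detaches the result from the graph). If the forward pass is written only with built-in differentiable tensor operations, no custom `Function` is needed at all — autograd derives the backward pass automatically. A minimal sketch under that assumption, with ROIs given as `[batch_idx, x1, y1, x2, y2]` rows; the name `ps_roi_pool` and the group-averaging hint are illustrative, not the example's code:

```python
import torch
import torch.nn.functional as F

def ps_roi_pool(features, rois, out_size=7, spatial_scale=1.0):
    """Pooling built only from differentiable ops, so autograd supplies backward."""
    pooled = []
    for roi in rois:
        b = int(roi[0])
        x1, y1, x2, y2 = [int(float(v) * spatial_scale) for v in roi[1:]]
        region = features[b:b + 1, :, y1:y2 + 1, x1:x2 + 1]
        # adaptive max pooling to a fixed output grid; gradients flow through it
        pooled.append(F.adaptive_max_pool2d(region, out_size))
    out = torch.cat(pooled, dim=0)  # (num_rois, C, out_size, out_size)
    # position-sensitive averaging would group channels here, e.g.
    # out.view(len(rois), out_dim, k * k, out_size, out_size).mean(dim=2)
    return out
```

Called from an ordinary `nn.Module` (or directly), `loss.backward()` then works without writing any backward code by hand.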
:/", "url": "https://github.com/pytorch/examples/issues/430", "state": "closed", "labels": [], "created_at": "2018-10-30T20:00:14Z", "updated_at": "2018-10-30T23:31:42Z", "comments": 2, "user": "isalirezag" }, { "repo": "pytorch/examples", "number": 428, "title": "how to deal with backward pass in pytorch version of ROI Pooling", "body": "I am trying to make position sensitive roi pooling (PSROIPooling)which is proposed in RFCN work.\r\nPSROIPooling is basically ROIPooling + average pooling.\r\nI am using the `roi_pooling.py` that is written in pytorch and provided [here](https://github.com/pytorch/examples/blob/d8d378c31d2766009db400ac03f41dd837a56c2a/fast_rcnn/roi_pooling.py#L38-L53).\r\n\r\nand trying to change [this part of the code](https://github.com/princewang1994/R-FCN.pytorch/blob/master/lib/model/psroi_pooling/functions/psroi_pooling.py) to be completely in pytorch (please note that the current version is in cuda, but i need to do some modification, so that is why im trying to change it to be in pytorch)\r\n\r\nso I change that [file](https://github.com/princewang1994/R-FCN.pytorch/blob/master/lib/model/psroi_pooling/functions/psroi_pooling.py) from:\r\n```\r\nimport torch\r\nfrom torch.autograd import Function\r\nfrom .._ext import psroi_pooling \r\n\r\n\r\nclass PSRoIPoolingFunction(Function):\r\n def __init__(self, pooled_height, pooled_width, spatial_scale, group_size, output_dim):\r\n self.pooled_width = int(pooled_width)\r\n self.pooled_height = int(pooled_height)\r\n self.spatial_scale = float(spatial_scale)\r\n self.group_size = int(group_size)\r\n self.output_dim = int(output_dim)\r\n self.output = None\r\n self.mappingchannel = None\r\n self.rois = None\r\n self.feature_size = None\r\n\r\n def forward(self, features, rois):\r\n batch_size, num_channels, data_height, data_width = features.size()\r\n num_rois = rois.size()[0]\r\n\r\n output = torch.zeros(num_rois, self.output_dim, self.pooled_height, self.pooled_width)\r\n mappingchannel = torch.IntTensor(num_rois, self.output_dim, self.pooled_height, self.pooled_width).zero_()\r\n output = output.cuda()\r\n\r\n mappingchannel = mappingchannel.cuda()\r\n\r\n psroi_pooling.psroi_pooling_forward_cuda(self.pooled_height, self.pooled_width, self.spatial_scale, self.group_size, self.output_dim, \\\r\n features, rois, output, mappingchannel)\r\n\r\n\r\n \r\n \r\n self.output = output\r\n self.mappingchannel = mappingchannel\r\n self.rois = rois\r\n self.feature_size = features.size()\r\n\r\n return output\r\n\r\n def backward(self, grad_output):\r\n assert(self.feature_size is not None and grad_output.is_cuda)\r\n\r\n batch_size, num_channels, data_height, data_width = self.feature_size\r\n\r\n grad_input = torch.zeros(batch_size, num_channels, data_height, data_width).cuda()\r\n\r\n psroi_pooling.psroi_pooling_backward_cuda(self.pooled_height, self.pooled_width, self.spatial_scale, self.output_dim, \\\r\n grad_output, self.rois, grad_input, self.mappingchannel)\r\n return grad_input, None\r\n\r\n```\r\n\r\n\r\n\r\nto be like this:\r\n\r\n```\r\nimport torch\r\nfrom torch.autograd import Function\r\nfrom .._ext import psroi_pooling \r\n\r\n\r\nfrom .ROI_Pooling_PyTorch import *\r\nfrom .ROI_Pooling_PyTorch import roi_pooling\r\nfrom torch.autograd import Variable\r\n\r\nclass PSRoIPoolingFunction(Function):\r\n def __init__(self, pooled_height, pooled_width, spatial_scale, group_size, output_dim):\r\n self.pooled_width = int(pooled_width)\r\n self.pooled_height = int(pooled_height)\r\n self.spatial_scale = 
float(spatial_scale)\r\n self.group_size = int(group_size)\r\n self.output_dim = int(output_dim)\r\n self.output = None\r\n self.mappingchannel = None\r\n self.rois = None\r\n self.feature_size = None\r\n\r\n def forward(self, features, rois):\r\n batch_size, num_channels, data_height, data_width = features.size()\r\n num_rois = rois.size()[0]\r\n output = torch.zeros(num_rois, self.output_dim, self.pooled_height, self.pooled_width)\r\n # mappingchannel = torch.IntTensor(num_rois, self.output_dim, self.pooled_height, self.pooled_width).zero_()\r\n\r\n # ROI Pooling\r\n out2 = roi_pooling(features, rois, size=(self.pooled_height,self.pooled_width),\r\n spatial_scale = self.spatial_scale)\r\n \r\n # AVerage pooling for Position Sensitive\r\n \r\n output = Variable(output.cuda())\r\n\r\n chan= 0\r\n for i in range(0,out2.size(1),self.pooled_height*self.pooled_width):\r\n output[:,chan,:,:] = torch.mean(out2[:,i:i+self.pooled_height*self.pooled_width,:,:],1,keepdim=True)\r\n chan += 1\r\n \r\n # mappingchannel = mappingchannel.cuda()\r\n \r\n self.output = output\r\n # self.mappingchannel = mappingchannel\r\n self.rois = rois\r\n self.feature_size = features.size()\r\n \r\n\r\n return output.data\r\n\r\n def backward(self, grad_output):\r\n \r\n# =============================================================================\r\n# What should i put here?????\r\n# =============================================================================\r\n```\r\n\r\n\r\nthe forward pass sounds like working, but the backwar", "url": "https://github.com/pytorch/examples/issues/428", "state": "closed", "labels": [], "created_at": "2018-10-30T02:08:40Z", "updated_at": "2018-10-30T02:21:14Z", "comments": 1, "user": "isalirezag" }, { "repo": "pytorch/examples", "number": 425, "title": "DCGAN: code and paper don't have the same feature maps?", "body": "## From the code\r\n\r\nInput(100\\*1\\*1) --->((ngf\\*8) \\*4\\*4)--->((ngf\\*4) \\*8\\*8)--->((ngf\\*2) \\*16\\*16)--->(ngf \\*32\\*32)--->(3\\*64\\*64)\r\naccording to the code, **ngf=64**. Therefore we have\r\n**Input(100\\*1\\*1) --->(512\\*4\\*4)--->(256\\*8\\*8)--->(128\\*16\\*16)--->(64\\*32\\*32)--->(3\\*64\\*64)**\r\n```python\r\nclass Generator(nn.Module):\r\n def __init__(self, ngpu):\r\n super(Generator, self).__init__()\r\n self.ngpu = ngpu\r\n self.main = nn.Sequential(\r\n # input is Z, going into a convolution\r\n nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),\r\n nn.BatchNorm2d(ngf * 8),\r\n nn.ReLU(True),\r\n # state size. (ngf*8) x 4 x 4\r\n nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),\r\n nn.BatchNorm2d(ngf * 4),\r\n nn.ReLU(True),\r\n # state size. (ngf*4) x 8 x 8\r\n nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),\r\n nn.BatchNorm2d(ngf * 2),\r\n nn.ReLU(True),\r\n # state size. (ngf*2) x 16 x 16\r\n nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),\r\n nn.BatchNorm2d(ngf),\r\n nn.ReLU(True),\r\n # state size. (ngf) x 32 x 32\r\n nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),\r\n nn.Tanh()\r\n # state size. 
(nc) x 64 x 64\r\n )\r\n\r\n```\r\n\r\n## From the paper \r\n\r\n![image](https://user-images.githubusercontent.com/4425798/47482661-cbf81900-d869-11e8-9dfd-df6b5d6fb3c0.png)\r\n**Input(100\\*1\\*1) --->(1024\\*4\\*4)--->(512\\*8\\*8)--->(256\\*16\\*16)--->(128\\*32\\*32)--->(3\\*64\\*64)**\r\n\r\n## My question is \r\nwhy the two generator's feature maps sizes don't match?\r\n\r\nThank you\r\n", "url": "https://github.com/pytorch/examples/issues/425", "state": "closed", "labels": [], "created_at": "2018-10-25T07:36:28Z", "updated_at": "2018-11-03T04:24:48Z", "comments": 1, "user": "zhibo-liu" }, { "repo": "pytorch/examples", "number": 421, "title": "what is spatial_scale in roi pooling", "body": "can you please explain to me what is spatial_scale here:\r\n[Link](https://github.com/pytorch/examples/blob/d8d378c31d2766009db400ac03f41dd837a56c2a/fast_rcnn/roi_pooling.py#L38-L53)\r\n\r\nalso in ```[..., roi[2]:(roi[4]+1), roi[1]:(roi[3]+1)]```, what does `...` in the begining of the list do?\r\n\r\nThanks", "url": "https://github.com/pytorch/examples/issues/421", "state": "closed", "labels": [], "created_at": "2018-10-11T14:39:24Z", "updated_at": "2018-10-11T19:06:28Z", "user": "isalirezag" }, { "repo": "pytorch/text", "number": 424, "title": "What is \"parse_field\" for?", "body": "In torchtext.datasets.SNLI.splits, there is a parameter named \"parse_field\". I found if setting this field with a \"datasets.snli.ShiftReduceField\" object, the vocabulary becomes much smaller and SNLI accuracy always improves (compared with default value). It is amazing! But I can't find any description about it...\r\n\r\n\r\n ", "url": "https://github.com/pytorch/text/issues/424", "state": "closed", "labels": [], "created_at": "2018-09-27T04:45:10Z", "updated_at": "2018-10-02T03:39:58Z", "user": "jueliangguke" }, { "repo": "pytorch/examples", "number": 412, "title": "RuntimeError: Found 0 images in subfolders of in AWS", "body": "Have anyone used torchvision.datasets.ImageFolder in AWS?\r\nI met this error in predicting my pictures in my own folder\r\n\r\nRuntimeError: Found 0 images in subfolders of: mine/1/2\r\nSupported extensions are: .jpg,.jpeg,.png,.ppm,.bmp,.pgm,.tif\r\n\r\nBut, I have uploaded 3 jpg images in mine/1/2.\r\n\r\nIt also have found the folder\r\n\r\nIs there anything I have missed?\r\nAny suggestion will be appreciated\r\n<img width=\"371\" alt=\"2\" src=\"https://user-images.githubusercontent.com/42711020/45485510-cfb75c80-b74f-11e8-8ce5-bbfc05ea19ad.png\">\r\n<img width=\"413\" alt=\"2 2\" src=\"https://user-images.githubusercontent.com/42711020/45485518-d2b24d00-b74f-11e8-91b3-c5953d27b165.png\">\r\n<img width=\"249\" alt=\"2 1\" src=\"https://user-images.githubusercontent.com/42711020/45485520-d7770100-b74f-11e8-9f3b-5d0fa0b47f3b.png\">\r\n\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/412", "state": "closed", "labels": [], "created_at": "2018-09-13T11:23:47Z", "updated_at": "2021-09-07T03:11:15Z", "comments": 2, "user": "Aaron4Fun" }, { "repo": "pytorch/examples", "number": 411, "title": "AlexNet code", "body": "Where can I find the AlexNet code? I would like to implement it in a distributed mode using MPI. 
", "url": "https://github.com/pytorch/examples/issues/411", "state": "closed", "labels": [], "created_at": "2018-09-11T13:08:24Z", "updated_at": "2022-03-10T00:27:48Z", "comments": 1, "user": "abidmalikwaterloo" }, { "repo": "pytorch/pytorch", "number": 11130, "title": "where is the caffe2 folder?", "body": "Hi,\r\n\r\nIn the old version of caffe2, I could find the caffe2 folder ( \"/usr/local/caffe2\"). Where is the the caffe2 folder(within PYTORCH ) right now?\r\n \r\n ", "url": "https://github.com/pytorch/pytorch/issues/11130", "state": "closed", "labels": [ "caffe2" ], "created_at": "2018-08-31T02:25:04Z", "updated_at": "2018-09-07T02:59:19Z", "user": "ddeeppnneett" }, { "repo": "pytorch/examples", "number": 409, "title": "How to extract a trained model ", "body": "Hi,\r\n\r\nI have trained a model of resnet 152 using the code provided in 'examples/imagenet/main.py' I understand that it saves a checkpoint after every epoch, and at the end of the training it will save the best trained model.\r\n\r\nMy question is how can i extract this model?\r\n", "url": "https://github.com/pytorch/examples/issues/409", "state": "closed", "labels": [], "created_at": "2018-08-29T04:33:42Z", "updated_at": "2022-03-10T05:45:07Z", "comments": 3, "user": "mvk07" }, { "repo": "pytorch/examples", "number": 406, "title": "UserWarning: nn.Upsampling is deprecated. Use nn.functional.interpolate instead. warnings.warn(\"nn.Upsampling is deprecated. Use nn.functional.interpolate instead.\")", "body": "UserWarning: nn.Upsampling is deprecated. Use nn.functional.interpolate instead.\r\n warnings.warn(\"nn.Upsampling is deprecated. Use nn.functional.interpolate instead.\")\r\n\r\nHow can I solve this problem?", "url": "https://github.com/pytorch/examples/issues/406", "state": "closed", "labels": [], "created_at": "2018-08-26T09:23:08Z", "updated_at": "2022-03-10T06:01:35Z", "comments": 1, "user": "u0251077" }, { "repo": "pytorch/tutorials", "number": 281, "title": "Question: neural_style_tutorial\uff1ahow to adjust different input image size to achieve this?", "body": "I'm new to dl and pytorch. \r\n\r\nneural_style_tutorial\r\nthis tutorial is about fixed size image.\r\n\r\nbut most image cant have the same size defined,how to adjust different input image size to this model?\r\n\r\nmany thanks, if you can help!!!", "url": "https://github.com/pytorch/tutorials/issues/281", "state": "closed", "labels": [], "created_at": "2018-08-10T06:02:04Z", "updated_at": "2021-06-16T21:11:13Z", "comments": 0, "user": "aohan237" }, { "repo": "pytorch/examples", "number": 399, "title": "Why don't we use MSE as a reconstruction loss for VAE ?", "body": "Hi,\r\n\r\nI am wondering if there is a theoretical reason for using BCE as a reconstruction loss for variation auto-encoders ? 
Can't we simply use MSE or norm-based reconstruction loss instead ?\r\n\r\nBest Regards", "url": "https://github.com/pytorch/examples/issues/399", "state": "open", "labels": [ "good first issue" ], "created_at": "2018-08-07T11:23:11Z", "updated_at": "2022-03-10T06:02:04Z", "comments": 7, "user": "ahmed-fau" }, { "repo": "pytorch/examples", "number": 393, "title": "How large batch size should I set for imagenet training", "body": "I just use the default setting of batch size 256 and 8 TiTAN XP gpus on resnet34\uff0c it takes about 1.5 hours for one epoch, I want to speed up the training process, Can I increase the batch size ?", "url": "https://github.com/pytorch/examples/issues/393", "state": "closed", "labels": [], "created_at": "2018-07-26T02:51:18Z", "updated_at": "2018-07-27T04:04:14Z", "comments": 1, "user": "lith0613" }, { "repo": "pytorch/examples", "number": 384, "title": "lm example\uff1aiteration over a 0-d tensor", "body": "I run example code , give me this error. I don't know how to solve this .\r\nAll error give from function repackage_hidden.\r\nmy pytorch version is .4", "url": "https://github.com/pytorch/examples/issues/384", "state": "closed", "labels": [], "created_at": "2018-07-13T10:53:35Z", "updated_at": "2019-06-24T11:15:58Z", "comments": 2, "user": "EricAugust" }, { "repo": "pytorch/pytorch", "number": 9207, "title": "Where is the include and lib path for caffe2?", "body": "i installed pytorch with caffe2 from source by using 'python setup_caffe2.py install' command.\r\nCan anyone tell that where is the default include and lib path for caffe2?", "url": "https://github.com/pytorch/pytorch/issues/9207", "state": "open", "labels": [ "caffe2" ], "created_at": "2018-07-06T14:44:20Z", "updated_at": "2018-07-14T03:58:25Z", "user": "universewill" }, { "repo": "pytorch/examples", "number": 376, "title": "trying to understand the meaning of model.train() and model.eval()", "body": "Hi\r\n\r\nSo i see in the main.py we have model.train() and model.val(), i dont understand how to use them. can someone explain it to me please.\r\nFor example in here: \r\n`python main.py -a resnet18 [imagenet-folder with train and val folders]` we did not specify train or eval, so how do we know which one to use.\r\nI know my question is stupid, please let me know if there is any good tutorial to read and understand it.\r\n\r\nThanks", "url": "https://github.com/pytorch/examples/issues/376", "state": "closed", "labels": [], "created_at": "2018-06-23T22:14:02Z", "updated_at": "2018-06-23T22:18:09Z", "comments": 1, "user": "isalirezag" }, { "repo": "pytorch/examples", "number": 374, "title": "About distributed training of Imagenet, I am confused there is no operation to collect grads from machines and average them before update grads. ", "body": "I write a distributed training model refer to the code imagenet/main.py , and the models on different machine own their independent optimizer. But I noticed that after backward() there is no operation to collect param grads from other processes and average them to get new grads for update. Does pytorch accomplish the average task implicitly by the optimizer.step() function? I am so confused.. 
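For reference, in `torch.nn.parallel.DistributedDataParallel` the averaging happens during `backward()`, not in `optimizer.step()`: the wrapper registers hooks that all-reduce each parameter's gradient across processes and divide by the world size, so every rank already holds identical averaged gradients when its local optimizer steps. A schematic fragment mirroring the loop in imagenet/main.py (`model`, `criterion`, `optimizer` and `train_loader` are the example's own names, and the process group is assumed to be initialised as in that script):

```python
import torch

model = torch.nn.parallel.DistributedDataParallel(model)

for images, target in train_loader:
    output = model(images.cuda())
    loss = criterion(output, target.cuda())

    optimizer.zero_grad()
    loss.backward()   # DDP hooks all-reduce and average the gradients here
    optimizer.step()  # each rank applies the same, already-averaged gradients
```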
", "url": "https://github.com/pytorch/examples/issues/374", "state": "closed", "labels": [], "created_at": "2018-06-14T11:41:15Z", "updated_at": "2019-05-21T21:49:59Z", "comments": 4, "user": "TobeyYang" }, { "repo": "pytorch/tutorials", "number": 259, "title": "Confirm if batch training in seq2seq tutorial?", "body": "for the tutorial _pytorch->tutorials/intermediate_source/seq2seq_translation_tutorial.py_: [https://github.com/pytorch/tutorials/blob/master/intermediate_source/seq2seq_translation_tutorial.py](url)\r\n\r\nAccording to lines 636-646, It seems like it is training with one sentence at a time, instead of batch training. Am I understanding it right? ", "url": "https://github.com/pytorch/tutorials/issues/259", "state": "closed", "labels": [], "created_at": "2018-06-14T04:55:06Z", "updated_at": "2021-07-30T23:01:59Z", "comments": 3, "user": "ecilay" }, { "repo": "pytorch/examples", "number": 373, "title": "float16 mixed precision training on Titan V is slower than float32", "body": "Since I cannot find a place to download imagenet dataset, I modified mnist example to support float16 training, please see the code in https://github.com/qigtang/examples.git, commit ed095d384529808f930161cbf005963ad482c22a\r\n\r\nWhen running in my Titan V GPU\r\n![image](https://user-images.githubusercontent.com/7813095/41369964-34d2d178-6efb-11e8-9367-c16cca1a9b5b.png)\r\n\r\n=======float 32 mode, ========\r\ntime python main.py\r\n\r\nreal 0m31.326s\r\nuser 1m24.282s\r\nsys 0m19.782s\r\n\r\n=======float 16 mode==========\r\ntime python main.py --fp16\r\nreal 0m34.736s\r\nuser 1m23.025s\r\nsys 0m21.134s\r\n\r\nThe float16 code is actually slower. What a surprise. \r\nThe docker image I am using is \r\nnvcr.io/nvidia/pytorch 18.05-py3 \r\n\r\n@csarofeen @nvidia \r\n\r\nQustion: \r\n1. Does pytorch 0.4 compile half math into Volta tensorcore float16*float16 operation? \r\n2. Why the official nvidia mixed training document is not writing down any performance number at all?\r\n\r\n\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/373", "state": "closed", "labels": [ "question" ], "created_at": "2018-06-13T18:22:29Z", "updated_at": "2022-03-10T04:11:59Z", "comments": 1, "user": "qigtang" }, { "repo": "pytorch/examples", "number": 372, "title": "SNLI Why config.d_out is 4 in snli/train.py?", "body": "In SNLI, the unknown label in answers is removed. It becomes a 3-way classification, i.e., entailment, neutral, and contradiction. 
But why the config.d_out is assigned to 4 in line 39 in [snli/train.py](https://github.com/pytorch/examples/blob/f83508117b1ba9b752b227de992799093af3b215/snli/train.py#L39)?", "url": "https://github.com/pytorch/examples/issues/372", "state": "closed", "labels": [], "created_at": "2018-06-12T11:21:10Z", "updated_at": "2020-02-19T06:43:35Z", "comments": 0, "user": "shaoxiongji" }, { "repo": "pytorch/text", "number": 335, "title": "where is the documentation?", "body": "", "url": "https://github.com/pytorch/text/issues/335", "state": "closed", "labels": [], "created_at": "2018-06-05T08:24:53Z", "updated_at": "2018-06-06T11:05:01Z", "user": "udion" }, { "repo": "pytorch/examples", "number": 368, "title": "Is it a right implement for rnn model?", "body": "I find a implement of rnn model,but the \"forward\" is not the normal format,there are there parameters for \"forward\" function.I wonder is it a right implement of rnn model?\r\nthe link:https://github.com/zhangxu0307/time_series_forecasting_pytorch/blob/master/code/model.py", "url": "https://github.com/pytorch/examples/issues/368", "state": "closed", "labels": [], "created_at": "2018-06-04T10:15:28Z", "updated_at": "2018-06-04T16:20:42Z", "comments": 1, "user": "lxj0276" }, { "repo": "pytorch/examples", "number": 367, "title": "TransformerNet no longer works in pytorch 0.4", "body": "Is there anything that can be done to fix this?\r\nWhen I call it I receive: \r\n\r\nTraceback (most recent call last):\r\n File \"neural_style.py\", line 651, in <module>\r\n main()\r\n File \"neural_style.py\", line 645, in main\r\n stylize(args)\r\n File \"neural_style.py\", line 437, in stylize\r\n style_model.load_state_dict(torch.load(modX))\r\n File \"D:\\Vitrual.C.Drive\\Anaconda\\envs\\Pytorch\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 721, in load_state_dict\r\n self.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\nRuntimeError: Error(s) in loading state_dict for TransformerNet:\r\n Unexpected running stats buffer(s) \"in1.running_mean\" and \"in1.running_var\" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.\r\n Unexpected running stats buffer(s) \"in2.running_mean\" and \"in2.running_var\" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.\r\n Unexpected running stats buffer(s) \"in3.running_mean\" and \"in3.running_var\" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. 
See the documentation of InstanceNorm2d for details.\r\n Unexpected running stats buffer(s) \"res1.in1.running_mean\" and \"res1.in1.running_var\" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.\r\n Unexpected running stats buffer(s) \"res1.in2.running_mean\" and \"res1.in2.running_var\" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.\r\n Unexpected running stats buffer(s) \"res2.in1.running_mean\" and \"res2.in1.running_var\" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.\r\n Unexpected running stats buffer(s) \"res2.in2.running_mean\" and \"res2.in2.running_var\" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.\r\n Unexpected running stats buffer(s) \"res3.in1.running_mean\" and \"res3.in1.running_var\" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.\r\n Unexpected running stats buffer(s) \"res3.in2.running_mean\" and \"res3.in2.running_var\" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. 
If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to e", "url": "https://github.com/pytorch/examples/issues/367", "state": "closed", "labels": [], "created_at": "2018-06-03T22:24:45Z", "updated_at": "2022-11-25T21:38:19Z", "comments": 2, "user": "Zekodon" }, { "repo": "pytorch/examples", "number": 362, "title": "InceptionV3 cannot work!", "body": "`python main.py -a inception_v3 ./imagenet/cat2dog --batch-size 16 --print-freq 1 --pretrained;`\r\n=> using pre-trained model 'inception_v3'\r\nTraceback (most recent call last):\r\n File \"main.py\", line 314, in <module>\r\n main()\r\n File \"main.py\", line 157, in main\r\n train(train_loader, model, criterion, optimizer, epoch)\r\n File \"main.py\", line 189, in train\r\n target = target.cuda(non_blocking=True)\r\nTypeError: _cuda() got an unexpected keyword argument 'non_blocking'\r\n", "url": "https://github.com/pytorch/examples/issues/362", "state": "open", "labels": [ "help wanted", "vision" ], "created_at": "2018-05-27T21:15:55Z", "updated_at": "2022-03-10T06:02:49Z", "comments": 8, "user": "happsky" }, { "repo": "pytorch/examples", "number": 357, "title": "language model generator question", "body": "In this file:\r\n\r\nhttps://github.com/pytorch/examples/blob/master/word_language_model/generate.py\r\n\r\nWhat does this input mean in the generation?\r\n\r\n input = torch.randint(ntokens, (1, 1), dtype=torch.long).to(device)\r\n\r\nAs I understand it in a rnn-based language model, the last output of the rnn is fed into the current input and the sequence is unrolled. What is the meaning of this random input? Does it enforce the last output is being fed into the current input in the unrolling?\r\n\r\nThanks!\r\n\r\n(I am building a sequence generator that needs to consume its output from the last input, and I am wondering how to do it. Are you suggesting just feeding in random input would also work? Any hints would be helpful ! )", "url": "https://github.com/pytorch/examples/issues/357", "state": "open", "labels": [ "triaged" ], "created_at": "2018-05-18T22:37:47Z", "updated_at": "2022-03-10T00:29:50Z", "comments": 2, "user": "evanthebouncy" }, { "repo": "pytorch/examples", "number": 355, "title": "Imagenet training example - RandomResizedCrop", "body": "This is regarding \r\nhttps://github.com/pytorch/examples/blob/master/imagenet/main.py#L122\r\n\r\n\r\nThe default scale argument for the transform RandomResizedCrop is defined as scale=(0.08, 1.0) - defined in pytorch/vision/transform\r\n\r\nRandomResizedCrop is doing a crop first and then scale to the desired size. What could be the logic in in setting the lower limit of crop to as low as 0.08? 0.08 would corresponds to a very small portion of the image.\r\n\r\nI have seen (in my limited experimentation) that this is the reason for very slow training on ImageNet classification.\r\n\r\nIf we just change it to scale=(0.5, 1.0), then it trains fine. 0.75 would roughly correspond to what is commonly used area ratio of (224x224)/(256x256). Since this scale is a random range, and we want the middle to be around 0.75, scale=(0.5, 1.0) is a good choice.\r\n\r\nThe change can be done by passing scale argument to RandomResizedCrop transform.\r\n\r\n transforms.Compose([\r\n transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),\r\n transforms.RandomHorizontalFlip(),\r\n transforms.ToTensor(),\r\n normalize,\r\n ]))\r\n\r\nDoes this make sense? 
I have to admit that I have done only limited experimentation with this.\r\n", "url": "https://github.com/pytorch/examples/issues/355", "state": "closed", "labels": [], "created_at": "2018-05-16T21:40:38Z", "updated_at": "2018-06-05T13:24:36Z", "comments": 1, "user": "mathmanu" }, { "repo": "pytorch/examples", "number": 347, "title": "fast_neural_style using cuda", "body": "0.4.0\r\nCuda 9.0\r\ncudnn 7.1\r\npython3.5\r\n\r\nI am trying to train a new model using cuda. \r\n\r\nI am getting a RuntimeError\r\n```\r\n\r\nTraceback (most recent call last):\r\n File \"neural_style/neural_style.py\", line 239, in <module>\r\n main()\r\n File \"neural_style/neural_style.py\", line 233, in main\r\n train(args)\r\n File \"neural_style/neural_style.py\", line 78, in train\r\n features_x = vgg(x)\r\n File \"/home/dell/sbull/onnx/env/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 491, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/dell/sbull/onnx/examples/fast_neural_style/neural_style/vgg.py\", line 28, in forward\r\n h = self.slice1(X)\r\n File \"/home/dell/sbull/onnx/env/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 491, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/dell/sbull/onnx/env/lib/python3.5/site-packages/torch/nn/modules/container.py\", line 91, in forward\r\n input = module(input)\r\n File \"/home/dell/sbull/onnx/env/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 491, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/dell/sbull/onnx/env/lib/python3.5/site-packages/torch/nn/modules/conv.py\", line 301, in forward\r\n self.padding, self.dilation, self.groups)\r\nRuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'\r\n\r\n```\r\n\r\nI have spent some time trying to figure it out what to fix, but without luck.\r\n\r\nIt seems like vgg16 for x needs to be set to run using cuda, like y, but I cannot figure out how to get that to work.\r\n\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/347", "state": "closed", "labels": [], "created_at": "2018-05-03T19:17:57Z", "updated_at": "2018-05-15T20:39:02Z", "comments": 2, "user": "spencerbull" }, { "repo": "pytorch/ELF", "number": 6, "title": "What is the winrate for the Leela Zero rematch how is it coming along? ", "body": "https://github.com/gcp/leela-zero/issues/1311#issuecomment-386156687", "url": "https://github.com/pytorch/ELF/issues/6", "state": "closed", "labels": [], "created_at": "2018-05-03T03:21:42Z", "updated_at": "2018-05-03T15:09:12Z", "user": "bochen2027" }, { "repo": "pytorch/vision", "number": 484, "title": "What is the relationship between the output label of pretrained model in model zoo and wordnet synset id? ", "body": "we can easily access pytorch pre-trained model like VGG, AlexNet and SqueezeNet by\r\n\r\n import torchvision \r\n torchvision.models.vgg16(pretrained=True)\r\n\r\ncan anyone point out what's the relationship between the output label(index of maximum output value) and the actual category?\r\n\r\ni downloaded ILSVRC2012_devkit_t12 and got the imagenet id and other metainfo provided by meta.mat, however it seems pre-trained model have some different id. 
because when i evaluate the network with ILSVRC2012 validation set, it reports 100% error.", "url": "https://github.com/pytorch/vision/issues/484", "state": "open", "labels": [ "enhancement" ], "created_at": "2018-05-02T07:23:46Z", "updated_at": "2019-06-10T10:06:57Z", "user": "imkzh" }, { "repo": "pytorch/tutorials", "number": 226, "title": "how to get turtorial for pytorch-0.3.1", "body": "the site http://pytorch.org/tutorials/ is only for pytorch-0.4.0 now\r\nhow to get the earlier version of tutorials", "url": "https://github.com/pytorch/tutorials/issues/226", "state": "closed", "labels": [], "created_at": "2018-04-25T04:18:33Z", "updated_at": "2018-04-27T11:06:26Z", "comments": 1, "user": "HarryRuiTse" }, { "repo": "pytorch/pytorch", "number": 6486, "title": "Where is the Caffe2 website?", "body": "The gh-pages branch doesn't exist.", "url": "https://github.com/pytorch/pytorch/issues/6486", "state": "closed", "labels": [], "created_at": "2018-04-10T21:54:41Z", "updated_at": "2018-04-10T21:58:08Z", "user": "louisabraham" }, { "repo": "pytorch/examples", "number": 330, "title": "Use pretrained word embeddings", "body": "I want to use my pretrained word embeddings to train this model. How do I go about implementing it? \r\n\r\nThanks! ", "url": "https://github.com/pytorch/examples/issues/330", "state": "closed", "labels": [ "question" ], "created_at": "2018-04-10T18:07:59Z", "updated_at": "2022-03-10T03:43:27Z", "comments": 3, "user": "BordiaS" }, { "repo": "pytorch/pytorch", "number": 6468, "title": "BatchNorm2d when batch size 1 works, what is it doing?", "body": "`BatchNorm2d` works even when batch size is 1, which puzzles me. So what is it doing when batch size is 1? The only related thread I could find is https://github.com/pytorch/pytorch/issues/1381 without much explanation.\r\n\r\nminimal example:\r\n```\r\nx = Variable(torch.randn(1,2,3,3))\r\nm = nn.BatchNorm2d(2)\r\ny = m(x)\r\n```", "url": "https://github.com/pytorch/pytorch/issues/6468", "state": "closed", "labels": [], "created_at": "2018-04-10T15:09:39Z", "updated_at": "2018-04-10T16:04:25Z", "user": "chanshing" }, { "repo": "pytorch/examples", "number": 327, "title": "Absence of seed for result reproduction", "body": "Hello,\r\n\r\nWhen running ImageNet with different resnet architectures (18,152..) l'm not able to reproduce the results. There is a small variation in accuracy.\r\n\r\nhttps://github.com/pytorch/examples/blob/master/imagenet/main.py\r\n\r\nWhat is wrong ?\r\n\r\n\r\neven by making in \r\n```\r\n main() : \r\n seed=15\r\n torch.manual_seed(seed)\r\n np.random.seed(seed)\r\n```\r\n\r\nl don't get the same result.\r\n\r\nThank you for your consideration\r\n ", "url": "https://github.com/pytorch/examples/issues/327", "state": "closed", "labels": [], "created_at": "2018-04-09T13:40:23Z", "updated_at": "2022-03-10T03:40:23Z", "comments": 1, "user": "pinkfloyd06" }, { "repo": "pytorch/examples", "number": 326, "title": "[Super resolution] image Resizing &low psnr value result", "body": "https://github.com/pytorch/examples/blob/dcdabc22b305d2f2989c6f03570dfcd3919e8a5b/super_resolution/data.py#L41\r\nI think resizing LANCZOS interpolation is better than default BILINEAR\r\n`Resize(crop_size // upscale_factor,interpolation=Image.LANCZOS)`\r\n__How does downsampling work in a normal SR?__\r\n\r\nAnd In the Set5 dataset, I found that the psnr value is lower than the bicubic method.\r\nWhy..? 
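One likely culprit: published Set5 comparisons (including the bicubic baseline) conventionally use LR inputs produced by bicubic downsampling and report PSNR on the Y channel only, so generating the inputs with the default BILINEAR filter can land below the bicubic baseline even for a reasonable model. A hypothetical variant of the example's `input_transform` that pins the downsampling filter (the exact composition is assumed from the linked `data.py`):

```python
from PIL import Image
from torchvision.transforms import CenterCrop, Compose, Resize, ToTensor

def input_transform(crop_size, upscale_factor):
    # downsample with BICUBIC (or Image.LANCZOS) instead of the default BILINEAR
    return Compose([
        CenterCrop(crop_size),
        Resize(crop_size // upscale_factor, interpolation=Image.BICUBIC),
        ToTensor(),
    ])
```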
", "url": "https://github.com/pytorch/examples/issues/326", "state": "open", "labels": [ "vision" ], "created_at": "2018-04-08T13:17:50Z", "updated_at": "2022-03-10T03:44:41Z", "comments": 8, "user": "ryujaehun" }, { "repo": "pytorch/tutorials", "number": 221, "title": "epub format support", "body": "Is it possible to provide an epub format of the tutorials officially ?\r\nI have tried to build by `make epub`, \r\nbut it took too much time and I never finishd it. ", "url": "https://github.com/pytorch/tutorials/issues/221", "state": "closed", "labels": [], "created_at": "2018-04-05T13:11:36Z", "updated_at": "2018-04-27T11:08:18Z", "comments": 3, "user": "zmlcc" }, { "repo": "pytorch/tutorials", "number": 218, "title": "Char-RNN tutorial giving Error.", "body": "I was running the code for Char level RNN in the PyTorch docs, found here: http://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html . \r\nI got the error:\r\n```\r\nTraceback (most recent call last):\r\n File \"names.py\", line 86, in <module>\r\n rnn = RNN(n_letters, n_hidden, n_categories)\r\n File \"names.py\", line 72, in __init__\r\n self.i2o = nn.Linear(input_size + hidden_size, output_size)\r\n File \"/home/ayush99/anaconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py\", line 46, in __init__\r\n self.reset_parameters()\r\n File \"/home/ayush99/anaconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py\", line 49, in reset_parameters\r\n stdv = 1. / math.sqrt(self.weight.size(1))\r\nRuntimeError: invalid argument 2: dimension 1 out of range of 0D tensor at /opt/conda/conda-bld/pytorch-cpu_1518282373170/work/torch/lib/TH/generic/THTensor.c:24\r\n```\r\nSystem specs: I was running this on the CPU.\r\nWhy is this happening? The examples from the docs should work just fine.", "url": "https://github.com/pytorch/tutorials/issues/218", "state": "closed", "labels": [], "created_at": "2018-03-25T15:37:52Z", "updated_at": "2021-06-16T21:33:27Z", "comments": 1, "user": "ayush1999" }, { "repo": "pytorch/tutorials", "number": 216, "title": "The code snippets in How to create custom C extension has something wrong IMHO.", "body": "In the official tutorial about [how to create custom C extension](http://pytorch.org/tutorials/advanced/c_extension.html) page, I think there are still minor problems. First, in the src/my_lib.c file, here is the code snippets,\r\n```\r\nint my_lib_add_backward(THFloatTensor *grad_output, THFloatTensor *grad_input)\r\n{\r\n THFloatTensor_resizeAs(grad_input, grad_output);\r\n THFloatTensor_fill(grad_input, 1);\r\n return 1;\r\n}\r\n```\r\nthe statement `THFloatTensor_fill(grad_input, 1)` in the function `my_lib_add_backward` isn't correct enough in my opinion, because I think in the backward function, given the gradient w.r.t the output, you should return that gradient w.r.t the input, so grad_input should be the same as grad_output rather than filled with 1 only, \r\n\r\nWhat's more, in the step2, at the backward method of Function MyAddFunction, there should return 2 grad_input, because there are 2 inputs in the corresponding forward method. 
Below is the related class definition.\r\n\r\n```\r\nclass MyAddFunction(Function):\r\n def forward(self, input1, input2):\r\n output = torch.FloatTensor()\r\n my_lib.my_lib_add_forward(input1, input2, output)\r\n return output\r\n\r\n def backward(self, grad_output):\r\n grad_input = torch.FloatTensor()\r\n my_lib.my_lib_add_backward(grad_output, grad_input)\r\n return grad_input\r\n```\r\n\r\nHoping for explanation or modification in order not to confuse the newbies who reads this page.", "url": "https://github.com/pytorch/tutorials/issues/216", "state": "closed", "labels": [], "created_at": "2018-03-24T14:53:42Z", "updated_at": "2018-05-19T18:00:54Z", "comments": 1, "user": "sonack" }, { "repo": "pytorch/examples", "number": 317, "title": "How to understand this way of declaring a class?", "body": "`class Linear(Bottle, nn.Linear):\r\n pass`\r\n(in snli/model.py line 16)\r\nI'm new user of torch. I get confused about this statement. Can someone help me?\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/317", "state": "closed", "labels": [], "created_at": "2018-03-17T08:24:56Z", "updated_at": "2018-03-17T14:25:02Z", "comments": 1, "user": "jueliangguke" }, { "repo": "pytorch/pytorch", "number": 5833, "title": "[Doc Bug] where is classmethod torch.nn.Embedding.from_pretrained?", "body": "There is a method to initialize Embedding from pretrained data (torch.Tensor).\r\n\r\nhttp://pytorch.org/docs/master/nn.html\r\n\r\nHowever that method does not exist in pytorch 0.3.1 .\r\n\r\nIf it was deprecated, what should I do to load pretrained word vectors such as torchtext.vocab.GloVe?\r\n\r\n```python\r\nimport torch as th\r\nemb = th.nn.Embedding(10, 20)\r\nemb.weight[1] = 0 # ERROR\r\nemb.weight.requires_grad = False\r\nemb.weight[1] = 0 # still ERROR\r\n```\r\n\r\nerror message:\r\n```\r\nRuntimeError: in-place operations can be only used on variables that don't share storage with any other variables, but detected that there are 2 objects sharing it\r\n```", "url": "https://github.com/pytorch/pytorch/issues/5833", "state": "closed", "labels": [], "created_at": "2018-03-16T13:03:21Z", "updated_at": "2018-03-16T13:19:43Z", "user": "cdluminate" }, { "repo": "pytorch/examples", "number": 316, "title": "Imagenet datasets", "body": " How to get validation images of ImageNet dataset", "url": "https://github.com/pytorch/examples/issues/316", "state": "closed", "labels": [], "created_at": "2018-03-16T08:31:52Z", "updated_at": "2018-11-07T17:33:11Z", "comments": 2, "user": "22wei22" }, { "repo": "pytorch/examples", "number": 312, "title": "Doc comment on `accuracy` method in imagenet example, incorrect?", "body": "I'm confused with the doc comment for the `accuracy` function in the imagenet example:\r\n\r\n```python\r\ndef accuracy(output, target, topk=(1,)):\r\n \"\"\"Computes the precision@k for the specified values of k\"\"\"\r\n maxk = max(topk)\r\n batch_size = target.size(0)\r\n\r\n _, pred = output.topk(maxk, 1, True, True)\r\n pred = pred.t()\r\n correct = pred.eq(target.view(1, -1).expand_as(pred))\r\n\r\n res = []\r\n for k in topk:\r\n correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)\r\n res.append(correct_k.mul_(100.0 / batch_size))\r\nreturn res\r\n```\r\nhttps://github.com/pytorch/examples/blob/master/imagenet/main.py#L298-L299\r\n\r\nThis seems like it computes accuracy and not precision as false positives are not accounted for. 
Should the doc comment read \"Computes the accuracy@k for the specified values of k\" or is my understanding of precision for object detection incorrect?\r\n\r\nMany thanks for pytorch, it's a great library!", "url": "https://github.com/pytorch/examples/issues/312", "state": "open", "labels": [ "good first issue" ], "created_at": "2018-02-27T12:02:28Z", "updated_at": "2022-03-10T03:09:32Z", "comments": 1, "user": "willprice" }, { "repo": "pytorch/examples", "number": 308, "title": "Clarification", "body": "https://github.com/pytorch/examples/blob/4ef2d4d0c8524372d0047e050065edcac665ce1a/vae/main.py#L61\r\nIs there a particular reason why the method .exp_() is preferred to .exp() ?", "url": "https://github.com/pytorch/examples/issues/308", "state": "closed", "labels": [], "created_at": "2018-02-23T12:07:39Z", "updated_at": "2018-12-13T06:45:41Z", "comments": 1, "user": "ggbioing" }, { "repo": "pytorch/examples", "number": 304, "title": "Is it possible to run snli: train.py on CPU (without CUDA)?", "body": "```\r\n$ conda list pytorch\r\n# packages in environment at /Users/davidlaxer/anaconda:\r\n#\r\npytorch 0.2.0 py27_4cu75 soumith\r\n\r\n$ export NO_CUDA=0; python train.py \r\nTraceback (most recent call last):\r\n File \"train.py\", line 17, in <module>\r\n torch.cuda.set_device(args.gpu)\r\n File \"/Users/davidlaxer/anaconda/lib/python2.7/site-packages/torch/cuda/__init__.py\", line 162, in set_device\r\n torch._C._cuda_setDevice(device)\r\nAttributeError: 'module' object has no attribute '_cuda_setDevice'\r\n```\r\n", "url": "https://github.com/pytorch/examples/issues/304", "state": "closed", "labels": [], "created_at": "2018-02-10T19:38:49Z", "updated_at": "2022-04-07T18:19:14Z", "comments": 3, "user": "dbl001" }, { "repo": "pytorch/examples", "number": 298, "title": "Reversed Sign?", "body": "https://github.com/pytorch/examples/blob/963f7d1777cd20af3be30df40633356ba82a6b0c/vae/main.py#L105\r\n\r\nAren't we trying to maximize that and hence there needs to be a negative sign here?", "url": "https://github.com/pytorch/examples/issues/298", "state": "closed", "labels": [], "created_at": "2018-02-03T18:14:11Z", "updated_at": "2018-02-07T11:29:35Z", "comments": 2, "user": "whamza15" }, { "repo": "pytorch/examples", "number": 286, "title": "Batching in Word Level Language Model", "body": "Hi,\r\n\r\nIt is not clear how does the batching happen in the Language model?\r\n\r\nIt is not clear if it the input to the model in every iteration of the loop is [seq_length, batch_size, embed_size] or [batch_size, seq_length, embed_size]?\r\n\r\nAlso, why does rnn model return output and hidden separately, they are the same... as for a rnn layer hidden itself is the output.\r\n\r\nThanks for the awesome library.", "url": "https://github.com/pytorch/examples/issues/286", "state": "closed", "labels": [], "created_at": "2018-01-15T16:44:11Z", "updated_at": "2018-01-17T03:32:51Z", "comments": 7, "user": "mourinhoxyz" }, { "repo": "pytorch/examples", "number": 280, "title": "Needs updating for PyTorch HEAD (no_grad)", "body": "volatile is no more in PyTorch HEAD, which means that you have to use the `no_grad` context manager now. Any examples using volatile need to be ported accordingly. However, we shouldn't do this until the next release, because examples should work for the current release. 
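For anyone porting ahead of time, the change is mechanical — inference code that wrapped inputs in `Variable(x, volatile=True)` moves to the `torch.no_grad()` context manager. A minimal sketch with a toy model purely for illustration:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
x = torch.randn(4, 10)

model.eval()
with torch.no_grad():        # replaces volatile=True for inference
    out = model(x)

print(out.requires_grad)     # False: no graph was recorded
```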
(If someone wants to get the jump, maybe a dev branch is warranted.)\r\n\r\nCC @colesbury \r\n ", "url": "https://github.com/pytorch/examples/issues/280", "state": "open", "labels": [ "help wanted" ], "created_at": "2018-01-09T18:58:27Z", "updated_at": "2022-03-10T05:54:42Z", "comments": 2, "user": "ezyang" }, { "repo": "pytorch/examples", "number": 278, "title": "Is total variation loss necessary in fast_neural_style?", "body": "I notice that there is no total variation loss regularization implemented in the example of `fast_neural_style`. But the paper declared it and their torch version use it. I'm wondering if total variation loss is necessary or not in style transfer. ", "url": "https://github.com/pytorch/examples/issues/278", "state": "open", "labels": [ "question", "good first issue" ], "created_at": "2018-01-03T08:26:09Z", "updated_at": "2022-03-10T05:55:02Z", "comments": 0, "user": "ZhuFengdaaa" }, { "repo": "pytorch/examples", "number": 277, "title": "ValueError: optimizer got an empty parameter list", "body": "Hi PyTorch Friends,\r\n\r\nI'm trying to building customized layer by following the guide [Extending PyTorch Tutorial](http://pytorch.org/docs/master/notes/extending.html) and use the customized layers to replace the nn.Conv2d and nn.Linear layer in the official example of [mnist main.py](https://github.com/pytorch/examples/blob/master/mnist/main.py) line 55-59.\r\n\r\nHowever, after replacing with my own customized layers, the testing step (forward) is working without error, while training the new model, it gives an error as \"ValueError: optimizer got an empty parameter list\". Also, the new_model.parameters() does not have any items.\r\n\r\nThe following is my modified Net (nn.Module)\r\n\r\n class Decomp_Net(nn.Module):\r\n def __init__(self, path_pretrained_model=\"mymodel.pth\"):\r\n super(Decomp_Net, self).__init__()\r\n # Load the pretrained model\r\n # Load the saved weights\r\n self.path_pretrained_model = path_pretrained_model\r\n try:\r\n params = torch.load(self.path_pretrained_model)\r\n print(\"Loaded pretrained model.\")\r\n except:\r\n raise(\"No pretrained model saved.\")\r\n\r\n # Conv Layer 1\r\n self.W_conv1 = params.items()[0]\r\n self.B_conv1 = params.items()[1][1]\r\n self.W_conv1 = self.W_conv1[1].view(10, 25)\r\n self.W_conv1 = self.W_conv1.t()\r\n self.D_conv1, self.X_a_conv1 = create_dic_fuc.create_dic(A=self.W_conv1, M=25, N=10, Lmax=9, Epsilon=0.7, mode=1)\r\n\r\n # Conv Layer 2\r\n self.W_conv2 = params.items()[2]\r\n self.B_conv2 = params.items()[3][1]\r\n self.W_conv2 = self.W_conv2[1].view(200, 25)\r\n self.W_conv2 = self.W_conv2.t()\r\n self.D_conv2, self.X_a_conv2 = create_dic_fuc.create_dic(A=self.W_conv2, M=25, N=200, Lmax=199, Epsilon=0.7, mode=1)\r\n\r\n # Layer FC1\r\n self.W_fc1 = params.items()[4]\r\n self.B_fc1 = params.items()[5][1]\r\n self.D_fc1, self.X_a_fc1 = create_dic_fuc.create_dic(A=self.W_fc1[1], M=50, N=320, Lmax=319, Epsilon=0.8, mode=1)\r\n\r\n # Layer FC2\r\n self.W_fc2 = params.items()[6] # Feching the last fully connect layer of the orinal model\r\n self.B_fc2 = params.items()[7][1] \r\n self.D_fc2, self.X_a_fc2 = create_dic_fuc.create_dic(A=self.W_fc2[1], M=10, N=50, Lmax=49, Epsilon=0.5, mode=1)\r\n\r\n self.conv1 = ConvDecomp2d(coefs=self.X_a_conv1, dictionary=self.D_conv1, bias_val=self.B_conv1, input_channels=1, output_channels=10, kernel_size=5, bias=True)\r\n self.conv2 = ConvDecomp2d(coefs=self.X_a_conv2, dictionary=self.D_conv2, bias_val=self.B_conv2, input_channels=10, output_channels=20, kernel_size=5, 
bias=True)\r\n self.conv2_drop = nn.Dropout2d()\r\n self.fc1 = FCDecomp(coefs=self.X_a_fc1, dictionary=self.D_fc1, bias_val=self.B_fc1, input_features=320, output_features=50)\r\n self.fc2 = FCDecomp(coefs=self.X_a_fc2, dictionary=self.D_fc2, bias_val=self.B_fc2, input_features=50, output_features=10)\r\n\r\n def forward(self, x):\r\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\r\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\r\n x = x.view(-1, 320)\r\n x = F.relu(self.fc1(x))\r\n x = F.dropout(x, training=self.training)\r\n x = self.fc2(x)\r\n return F.log_softmax(x)\r\n\r\nI defined the customized function as follows:\r\n\r\n class LinearDecomp(Function):\r\n # Note that both forward and backward are @staticmethods\r\n @staticmethod\r\n def forward(ctx, input, coefs, dictionary, bias=None):\r\n weight = torch.mm(dictionary, coefs).cuda() # reconstruct the weight\r\n ctx.save_for_backward(input, weight, dictionary, coefs, bias)\r\n output = input.mm(weight.t())\r\n if bias is not None:\r\n output += bias.unsqueeze(0).expand_as(output)\r\n return output\r\n\r\n # This function has only a single output, so it gets only one gradient\r\n @staticmethod\r\n def backward(ctx, grad_output):\r\n input, weight, coefs, dictionary, bias = ctx.saved_variables\r\n grad_input = grad_input = grad_coefs = grad_bias = None\r\n grad_weight = grad_output.t().mm(input) # do not output\r\n\r\n if ctx.needs_input_grad[0]:\r\n grad_input = grad_output.mm(weight)\r\n\r\n # if ctx.needs_input_grad[1]:\r\n grad_weight = grad_output.t().mm(input) # do not output grad_weight\r\n\r\n if ctx.needs_input_grad[2]:\r\n grad_coefs = dictionary.t().mm(grad_weight)\r\n\r\n if ctx.needs_input_grad[3]:\r\n grad_dictionary = grad_weight.t().mm(grad_coefs.t())\r\n\r\n if bias is not None and ctx.needs_input_grad[4]:\r\n grad_bias = grad_output.sum(0).squeeze(0)\r\n\r\n return grad_input, grad_coefs, grad_d", "url": "https://github.com/pytorch/examples/issues/277", "state": "closed", "labels": [], "created_at": "2018-01-03T04:35:54Z", "updated_at": "2018-03-05T10:06:12Z", "comments": 1, "user": "OpenBanboo" }, { "repo": "pytorch/examples", "number": 271, "title": "Transfer Learning on DC-GAN", "body": "Are the models for the generator and discriminator trained on LSUN or imagenet dataset made public?. If they are made public, where can I download them from?", "url": "https://github.com/pytorch/examples/issues/271", "state": "closed", "labels": [ "question" ], "created_at": "2017-12-19T06:26:09Z", "updated_at": "2022-03-10T02:41:26Z", "comments": 1, "user": "brijml" }, { "repo": "pytorch/tutorials", "number": 189, "title": "Tutorial about torch.distributions ?", "body": "", "url": "https://github.com/pytorch/tutorials/issues/189", "state": "closed", "labels": [], "created_at": "2017-12-18T15:57:51Z", "updated_at": "2021-06-16T21:41:33Z", "comments": 3, "user": "zuoxingdong" }, { "repo": "pytorch/tutorials", "number": 176, "title": "[Request] Tutorial on testing and improving data loading", "body": "Hi, I think pytorch is a great framework and I'm using it consistently in my work. As a self-taught in machine learning I have sometimes difficulties to understand how to solve some bottlenecks in training, for example slow I/O. 
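\r\n\r\nSo far, the only concrete knob I know to turn is the `DataLoader` itself, something like this (my own minimal sketch; `my_dataset` is just a placeholder for whatever dataset object is being used):\r\n\r\n```python\r\nimport torch\r\n\r\ntrain_loader = torch.utils.data.DataLoader(\r\n    my_dataset,        # placeholder for the actual Dataset\r\n    batch_size=64,\r\n    shuffle=True,\r\n    num_workers=4,     # prepare batches in background worker processes\r\n    pin_memory=True)   # page-locked host memory for faster copies to the GPU\r\n```\r\n\r\n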
I get the idea, but I lack a general view of the topic.\r\n\r\nI think it would be nice to have something similar to [this](https://github.com/kzuiderveld/deeplearning1/blob/master/Improving%20training%20speeds%20using%20Keras%202.ipynb) and expanding it by explaining some common problems, how to catch them and some actions to solve them.", "url": "https://github.com/pytorch/tutorials/issues/176", "state": "closed", "labels": [], "created_at": "2017-11-14T12:39:26Z", "updated_at": "2018-01-22T05:34:20Z", "comments": 1, "user": "iacolippo" }, { "repo": "pytorch/examples", "number": 253, "title": "Error For imagenet/main.py training with DistributedDataParallel().", "body": "I got DistributedDataParallel() error.\r\n\r\nI just fixed calling init_process_group() to pass rank like the below\r\ndist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, rank = args.rank,\r\n world_size=args.world_size)\r\n\r\n$ CUDA_VISIBLE_DEVICES=0 python main.py /dataset/imagenet_classify/ --world-size 2 --dist-backend gloo --dist-url tcp://127.0.0.1:23456 --rank 0\r\n$ CUDA_VISIBLE_DEVICES=1 python main.py /dataset/imagenet_classify/ --world-size 2 --dist-backend gloo --dist-url tcp://127.0.0.1:23456 --rank 1\r\n\r\n\r\nError Message\r\n=> creating model 'resnet18'\r\nTraceback (most recent call last):\r\n File \"distributed_imagenet_main.py\", line 319, in <module>\r\n main()\r\n File \"distributed_imagenet_main.py\", line 92, in main\r\n model = torch.nn.parallel.DistributedDataParallel(model)\r\n File \"/home/andrew/ml/local/lib/python2.7/site-packages/torch/nn/parallel/distributed.py\", line 124, in __init__\r\n for param_tuple in zip(*map(lambda m: m.parameters(), self._module_copies)):\r\n File \"/home/andrew/ml/local/lib/python2.7/site-packages/torch/nn/modules/module.py\", line 262, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'DistributedDataParallel' object has no attribute '_module_copies'\r\nterminate called after throwing an instance of 'gloo::EnforceNotMet'\r\n what(): [enforce fail at /pytorch/torch/lib/gloo/gloo/cuda.cu:249] error == cudaSuccess. 29 vs 0. 
Error at: /pytorch/torch/lib/gloo/gloo/cuda.cu:249: driver shutting down\r\nAborted (core dumped)\r\n\r\nWhat is the problem?\r\nIs there any guild document for training ImageNet with distributed nodes?", "url": "https://github.com/pytorch/examples/issues/253", "state": "closed", "labels": [], "created_at": "2017-11-10T06:50:22Z", "updated_at": "2018-12-11T07:49:24Z", "comments": 2, "user": "andrew-yang0722" }, { "repo": "pytorch/examples", "number": 252, "title": "mnist dataset(jpg format) load slow", "body": "I put different label of Mnist datasets in different folders, as is shown in attached figure.\r\n![1510280662 1](https://user-images.githubusercontent.com/7909474/32640049-ac7d6d80-c601-11e7-9fb8-e1af8b7934f6.png)\r\n![1510280674 1](https://user-images.githubusercontent.com/7909474/32640050-ace75998-c601-11e7-976d-583b5bae2b0b.jpg)\r\n![1510280765 1](https://user-images.githubusercontent.com/7909474/32640051-ad239a84-c601-11e7-8eaa-7125168f0476.jpg)\r\n I found dataset loading is very slow compared to official example, my script is also attached!\r\n[mnist-example.txt](https://github.com/pytorch/examples/files/1459883/mnist-example.txt)\r\nmy data load time and official example load time log\r\n![1510281089 1](https://user-images.githubusercontent.com/7909474/32640276-e6b60e20-c602-11e7-9004-260d8b86cce6.jpg)\r\n![1510281229 1](https://user-images.githubusercontent.com/7909474/32640277-e7ea9b8a-c602-11e7-8e32-593bb9a72fd1.jpg)\r\n\r\n", "url": "https://github.com/pytorch/examples/issues/252", "state": "closed", "labels": [ "question" ], "created_at": "2017-11-10T02:35:47Z", "updated_at": "2022-03-10T02:20:15Z", "comments": 3, "user": "Darknesszlx" }, { "repo": "pytorch/examples", "number": 248, "title": " UserWarning: RNN module weights are not part...", "body": "Hello, on World LM model I get this user warning,\r\ndo not know what is means, so I am posting it here just to let you know.\r\n\r\n`python3 main.py --cuda --epochs 6`\r\n\r\n```\r\nUserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greately increasing memory usage. To compact weights again call flatten_parameters().\r\n output, hidden = self.rnn(emb, hidden)\r\n```\r\n", "url": "https://github.com/pytorch/examples/issues/248", "state": "closed", "labels": [], "created_at": "2017-10-29T09:32:06Z", "updated_at": "2018-12-03T13:59:27Z", "comments": 7, "user": "Robomate" }, { "repo": "pytorch/examples", "number": 241, "title": "Any example for Domain Adaptation?", "body": "Domain Adaptation is an interesting area at present. If any example of Domain adaptation in pytorch is available, it would be really helpful. For an example, Deep CORAL paper is developed using caffe. 
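\r\n\r\nAs far as I understand, the core of that method is just a penalty on the distance between the second-order statistics (covariances) of the source and target features, which I imagine would look roughly like this in PyTorch (my own untested sketch, not taken from the authors' code):\r\n\r\n```python\r\ndef coral_loss(source, target):\r\n    # source, target: (batch, d) feature matrices from the two domains\r\n    d = source.size(1)\r\n    source = source - source.mean(0)\r\n    target = target - target.mean(0)\r\n    cov_s = source.t().mm(source) / (source.size(0) - 1)\r\n    cov_t = target.t().mm(target) / (target.size(0) - 1)\r\n    # squared Frobenius norm of the covariance difference, scaled as in the paper\r\n    return ((cov_s - cov_t) ** 2).sum() / (4 * d * d)\r\n```\r\n\r\n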
The code of Deep CORAL is: \r\nhttps://github.com/VisionLearningGroup/CORAL\r\nIf this code would be available in pytorch, it would be really great.", "url": "https://github.com/pytorch/examples/issues/241", "state": "closed", "labels": [], "created_at": "2017-10-24T23:09:31Z", "updated_at": "2022-03-10T02:26:00Z", "comments": 1, "user": "redhat12345" }, { "repo": "pytorch/examples", "number": 240, "title": "error in vae?", "body": "In vae/main.py, line 61, shoudn't `std = logvar.mul(0.5).exp_()` be `std = logvar.exp_().pow(0.5)`?\r\n\r\nsorry, I just realized...\r\n", "url": "https://github.com/pytorch/examples/issues/240", "state": "closed", "labels": [], "created_at": "2017-10-24T14:14:25Z", "updated_at": "2017-10-24T16:01:59Z", "comments": 0, "user": "fedecarne" }, { "repo": "pytorch/tutorials", "number": 156, "title": "Explain optimizer.zero_grad()", "body": "I think the call to [optimizer.zero_grad()](https://github.com/pytorch/tutorials/blob/master/beginner_source/examples_nn/two_layer_net_optim.py#L52) should be explained in the beginner tutorials. In particular:\r\n\r\n* What is the point of this call?\r\n* Why is not it made automatically?\r\n\r\nThanks!", "url": "https://github.com/pytorch/tutorials/issues/156", "state": "closed", "labels": [], "created_at": "2017-10-11T12:32:06Z", "updated_at": "2018-01-22T08:22:20Z", "comments": 0, "user": "Vayel" }, { "repo": "pytorch/examples", "number": 231, "title": "As for the pretrained model in torchvision, what's the image channel RGB or BGR?", "body": "", "url": "https://github.com/pytorch/examples/issues/231", "state": "closed", "labels": [], "created_at": "2017-10-10T13:48:00Z", "updated_at": "2017-10-12T00:53:17Z", "comments": 2, "user": "AlexHex7" }, { "repo": "pytorch/tutorials", "number": 147, "title": "No module named 'torch.onnx' when following super_resolution_with_caffe2.html ", "body": "I am following tutorial http://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html (Transfering a model from PyTorch to Caffe2 and Mobile using ONNX). At the beginning I get:\r\nModuleNotFoundError Traceback (most recent call last)\r\n<ipython-input-2-cabf174890ab> in <module>()\r\n 5 from torch.autograd import Variable\r\n 6 import torch.utils.model_zoo as model_zoo\r\n----> 7 import torch.onnx\r\n\r\nModuleNotFoundError: No module named 'torch.onnx'\r\n\r\nMy environment is: Ubuntu 16.04, anaconda 4.3.25. my environment has Python 3.6. PyTorch 0.20. onnx 0.1, torchvision 0.1.9.\r\n\r\nPlease let me know what is missing, Thanks, \r\n\r\n\r\n", "url": "https://github.com/pytorch/tutorials/issues/147", "state": "closed", "labels": [], "created_at": "2017-09-28T20:17:10Z", "updated_at": "2017-11-08T12:58:48Z", "comments": 4, "user": "liqunfu" }, { "repo": "pytorch/text", "number": 125, "title": "what is the purpose of this project?", "body": "pytorch has offered utils.data.dataset, and what is the purpose of torchtext?\r\nwhat features do torchtext support?", "url": "https://github.com/pytorch/text/issues/125", "state": "closed", "labels": [], "created_at": "2017-09-19T05:44:43Z", "updated_at": "2017-12-22T07:00:38Z", "user": "rabintang" }, { "repo": "pytorch/pytorch", "number": 2557, "title": "What is the Torch7 's nn.Add layer in PyTorch?", "body": "I find the torch.legacy.nn.Add layer, but it doesn't support autograd. 
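\r\n\r\nFor reference, what I am after is roughly the following: a learnable per-element bias that gets added to the input, which is what `nn.Add` did in Torch7. This is my own untested sketch of an autograd-friendly replacement (the `size` argument is just my guess at a reasonable interface):\r\n\r\n```python\r\nimport torch\r\nimport torch.nn as nn\r\n\r\nclass Add(nn.Module):\r\n    def __init__(self, size):\r\n        super(Add, self).__init__()\r\n        # learnable bias, added element-wise like the legacy nn.Add layer\r\n        self.bias = nn.Parameter(torch.zeros(size))\r\n\r\n    def forward(self, x):\r\n        return x + self.bias\r\n```\r\n\r\n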
Any other solutions?", "url": "https://github.com/pytorch/pytorch/issues/2557", "state": "closed", "labels": [], "created_at": "2017-08-29T02:32:49Z", "updated_at": "2017-08-29T02:40:14Z", "user": "yytdfc" }, { "repo": "pytorch/examples", "number": 207, "title": "how to finetune my own trained model on new datasets?", "body": "I have trained my own model ,now i want use this trained model to initialize my new networks or finetune this trained model on new datasets, anyone know how to do it ?", "url": "https://github.com/pytorch/examples/issues/207", "state": "closed", "labels": [], "created_at": "2017-08-24T09:01:19Z", "updated_at": "2017-08-24T09:53:38Z", "comments": 0, "user": "visonpon" }, { "repo": "pytorch/tutorials", "number": 123, "title": "Neural style transfer question", "body": "Hi, not sure if this is the right place to ask questions, but I'm working through the neural style transfer tutorial and am confused about something.\r\n\r\nWhat is the purpose of the `backward` method in `ContentLoss` and `StyleLoss`?\r\n\r\nIf we remove the `backward` method, won't this work as well for the `closure` function in `run_style_transfer`?\r\n\r\n```python\r\n def closure():\r\n # correct the values of updated input image\r\n input_param.data.clamp_(0, 1)\r\n\r\n optimizer.zero_grad()\r\n model(input_param)\r\n style_score = 0\r\n content_score = 0\r\n\r\n for sl in style_losses:\r\n style_score += sl.loss\r\n for cl in content_losses:\r\n content_score += cl.loss\r\n\r\n run[0] += 1\r\n if run[0] % 50 == 0:\r\n print(\"run {}:\".format(run))\r\n print('Style Loss : {:4f} Content Loss: {:4f}'.format(\r\n style_score.data[0], content_score.data[0]))\r\n print()\r\n\r\n total_score = style_score+content_score\r\n total_score.backward()\r\n\r\n return total_score\r\n```\r\n\r\nOn a related note, won't multiple `backward` calls in the original code accumulate the gradients for the image? Why is it okay to do this? Am I wrong in assuming that you should only call `backward` once? I'm new to Pytorch so I apologize if I'm missing anything fundamental. Thanks!\r\n\r\nEDIT: Tagging the author @alexis-jacq if you don't mind :)", "url": "https://github.com/pytorch/tutorials/issues/123", "state": "closed", "labels": [], "created_at": "2017-08-10T17:48:30Z", "updated_at": "2017-08-12T14:07:01Z", "comments": 2, "user": "reiinakano" }, { "repo": "pytorch/pytorch", "number": 2247, "title": "what is exactly batch_size in pytorch?", "body": "Sorry im new to this.\r\nI am not sure if I understand right. in pytorch it says: batch_size (int, optional) \u2013 how many samples per batch to load (default: 1).\r\nI know that, batch size = the number of training examples in one forward/backward pass. \r\nWhat does it mean that it says \"how many **samples** per **batch** to load\". can you define sample and batch here for me please. 
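\r\n\r\nTo make my confusion concrete, here is the kind of minimal setup I am looking at (my own toy example, nothing taken from the docs):\r\n\r\n```python\r\nimport torch\r\nfrom torch.utils.data import TensorDataset, DataLoader\r\n\r\n# 1000 samples, each one a vector of 20 features\r\ndata = torch.randn(1000, 20)\r\nlabels = torch.LongTensor(1000).random_(0, 2)\r\n\r\nloader = DataLoader(TensorDataset(data, labels), batch_size=64)\r\n\r\nfor x, y in loader:\r\n    print(x.size())   # 64 samples go through one forward/backward pass together\r\n    break\r\n```\r\n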
\r\nAlso, what would be the maximum number for batch_size?\r\n\r\nThanks", "url": "https://github.com/pytorch/pytorch/issues/2247", "state": "closed", "labels": [], "created_at": "2017-07-30T04:38:06Z", "updated_at": "2017-07-31T07:38:59Z", "user": "isalirezag" }, { "repo": "pytorch/pytorch", "number": 2227, "title": "where is the torch.nn.NLLLoss ?", "body": "i want to find how NLLLoss calcuate the loss, but i can't find its code.\r\n\r\n\r\n# loss\r\ndef nll_loss(input, target, weight=None, size_average=True, ignore_index=-100):\r\n r\"\"\"The negative log likelihood loss.\r\n See :class:`~torch.nn.NLLLoss` for details.\r\n\r\nwhere is `~torch.nn.NLLLoss` ?", "url": "https://github.com/pytorch/pytorch/issues/2227", "state": "closed", "labels": [], "created_at": "2017-07-28T08:35:34Z", "updated_at": "2022-07-26T18:28:32Z", "user": "susht3" }, { "repo": "pytorch/examples", "number": 187, "title": "fast-neural-style uses mscoco but normalizes for imagenet mean", "body": "Documentation for `fast-neural-style` uses mscoco training dataset, but subtracts imagenet mean from image input data. \r\n\r\nThe effects are probably very minor, but anybody have the mean stats for mscoco?", "url": "https://github.com/pytorch/examples/issues/187", "state": "closed", "labels": [], "created_at": "2017-07-21T09:18:40Z", "updated_at": "2017-07-24T01:20:02Z", "comments": 1, "user": "twairball" }, { "repo": "pytorch/tutorials", "number": 116, "title": "How to save the model in Classifying name tutorial?", "body": "I am 100% successfully run the tutorial and I make some problem change, where I fixed the sequence to 10 and just 3 feature. It almost same with the tutorial. I have successfully save the model, but I have problem when loading it.\r\n\r\n```\r\nimport torch.nn as nn\r\nfrom torch.autograd import Variable\r\nclass RNN(nn.Module):\r\n def __init__(self, input_size, hidden_size, output_size):\r\n super(RNN, self).__init__() \r\n self.input_size = input_size\r\n self.hidden_size = hidden_size\r\n self.output_size = output_size\r\n \r\n self.i2h = nn.Linear(input_size + hidden_size, hidden_size)\r\n self.i2o = nn.Linear(input_size + hidden_size, output_size)\r\n self.softmax = nn.LogSoftmax()\r\n \r\n def forward(self, input, hidden):\r\n combined = torch.cat((input, hidden), 1)\r\n hidden = self.i2h(combined)\r\n output = self.i2o(combined)\r\n output = self.softmax(output)\r\n return output, hidden\r\n\r\n def init_hidden(self):\r\n return Variable(torch.zeros(1, self.hidden_size))`\r\n```\r\nI saved the model using this code. I put it in the end of training.\r\n`torch.save(rnn.state_dict(),'./halo.pkl')`\r\nThe network is still same. Here is the code to load the model.\r\n```\r\ndef restore_net(filename):\r\n n_hidden = 128\r\n n_letters = 3\r\n n_categories = 2\r\n rnn = RNN(n_letters, n_hidden, n_categories)\r\n rnn.load_state_dict(filename)\r\n return rnn\r\n```\r\nHowever I got this error.\r\n![image](https://user-images.githubusercontent.com/2309538/28114649-4165a2f0-6734-11e7-9103-1c871e803531.png)\r\n\r\nAnyone can have a suggestion how should I save it?\r\n-Thank you-", "url": "https://github.com/pytorch/tutorials/issues/116", "state": "closed", "labels": [], "created_at": "2017-07-12T11:02:16Z", "updated_at": "2017-07-12T11:07:13Z", "comments": 1, "user": "herleeyandi" }, { "repo": "pytorch/examples", "number": 178, "title": "ImageNet Error", "body": "Hi,\r\n\r\nI am trying to train the models on ImageNet following [this](https://github.com/pytorch/examples/tree/master/imagenet#training). 
However, I got no luck.\r\n\r\nDoes anyone know how to fix the following issue?\r\n\r\n```shell\r\nkwang@cdc-177:~/PyTorch/examples/imagenet$ CUDA_VISIBLE_DEVICES=1 python main.py -a resnet18 /imagenet_dir\r\n=> creating model 'resnet18'\r\nTraceback (most recent call last):\r\n File \"main.py\", line 289, in <module>\r\n main()\r\n File \"main.py\", line 131, in main\r\n train(train_loader, model, criterion, optimizer, epoch)\r\n File \"main.py\", line 159, in train\r\n for i, (input, target) in enumerate(train_loader):\r\n File \"/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py\", line 201, in __next__\r\n return self._process_next_batch(batch)\r\n File \"/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py\", line 221, in _process_next_batch\r\n raise batch.exc_type(batch.exc_msg)\r\nAttributeError: Traceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py\", line 40, in _worker_loop\r\n samples = collate_fn([dataset[i] for i in batch_indices])\r\n File \"build/bdist.linux-x86_64/egg/torchvision/datasets/folder.py\", line 116, in __getitem__\r\n img = self.loader(path)\r\n File \"build/bdist.linux-x86_64/egg/torchvision/datasets/folder.py\", line 63, in default_loader\r\n return pil_loader(path)\r\n File \"build/bdist.linux-x86_64/egg/torchvision/datasets/folder.py\", line 45, in pil_loader\r\n with Image.open(f) as img:\r\n File \"/usr/lib/python2.7/dist-packages/PIL/Image.py\", line 528, in __getattr__\r\n raise AttributeError(name)\r\nAttributeError: __exit__\r\n```\r\n\r\n`PyTorch` is okay and I can run some other experiments with it.\r\n\r\nThanks!", "url": "https://github.com/pytorch/examples/issues/178", "state": "closed", "labels": [], "created_at": "2017-07-07T07:03:45Z", "updated_at": "2017-07-08T02:50:51Z", "comments": 1, "user": "wk910930" }, { "repo": "pytorch/examples", "number": 173, "title": "imagenet example did not transfer input to gpu?", "body": "In the imagenet training code, `input` is not explicitly converted to cuda in these [lines](https://github.com/pytorch/examples/blob/master/imagenet/main.py#L163-L165). \r\n\r\nI've noticed that the training loader has `pin_memory` flag as True. In fact, even if a tensor has called `pin_memory()`, it is still a `FloatTensor` instead of `cuda.FloatTensor`. If I understand the [documentation](http://pytorch.org/docs/master/notes/cuda.html) correctly, the benefit of using `pin_memory()` is that you can use `async=True` in the `cuda()` method, which would be faster due to asynchronous. 
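\r\n\r\nIn other words, I would have expected the training loop to move the input explicitly, roughly like this (my own sketch of what I expected to see, not the actual example code; `train_loader`, `model`, and `criterion` are the objects from the example):\r\n\r\n```python\r\nimport torch\r\nfrom torch.autograd import Variable\r\n\r\nfor i, (input, target) in enumerate(train_loader):\r\n    # with pin_memory=True in the DataLoader, these copies can be asynchronous\r\n    input = input.cuda(async=True)\r\n    target = target.cuda(async=True)\r\n    output = model(Variable(input))\r\n    loss = criterion(output, Variable(target))\r\n```\r\n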
\r\n\r\nIf I did not miss anything, this is a bug in the code, right?", "url": "https://github.com/pytorch/examples/issues/173", "state": "closed", "labels": [], "created_at": "2017-06-30T03:12:38Z", "updated_at": "2018-03-16T08:35:16Z", "comments": 2, "user": "iammarvelous" }, { "repo": "pytorch/tutorials", "number": 101, "title": "Regarding exercises in Character-Level RNN ", "body": "I was wondering where I can find the dataset for the exercises given in Classifying Names with Character-Level RNN.\r\nFor example:\r\nAny word -> language\r\nFirst name -> gender\r\nCharacter name -> writer\r\nPage title -> blog or subreddit\r\n\r\nTo complete this task, do I have to create my own dataset or is there any repo where I can download those datasets?", "url": "https://github.com/pytorch/tutorials/issues/101", "state": "closed", "labels": [], "created_at": "2017-06-26T21:36:40Z", "updated_at": "2018-01-22T04:55:21Z", "comments": 1, "user": "oya163" }, { "repo": "pytorch/examples", "number": 170, "title": "Potential speedup for DCGAN ", "body": "In the dcgan example, while training the discriminator, why is backward called twice ? First its called on the real images, then the fake images. \r\nInstead, shouldn't doing something like: \r\n`totalError = real_loss + fake_loss , \r\nand then calling totalError.backward() `\r\nsave one whole backprop ?\r\nDoes doing it the way i suggested change anything qualitatively ?", "url": "https://github.com/pytorch/examples/issues/170", "state": "closed", "labels": [], "created_at": "2017-06-16T05:47:31Z", "updated_at": "2017-10-04T15:02:47Z", "comments": 8, "user": "harveyslash" }, { "repo": "pytorch/tutorials", "number": 98, "title": "update beginner tutorial to most recent pytorch version?", "body": "This [beginner tutorial](http://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py) uses `y.grad_fn` where, from googling around it seems like it should now use `y.creator`. The image is updated, but the text/code isn't.\r\n\r\nRegardless, the tutorial should probably say what version of PyTorch it's for and how to check, right?\r\n\r\nI'm happy to make the modifications and do a pull request, but wasn't sure what kind of solution was desired.", "url": "https://github.com/pytorch/tutorials/issues/98", "state": "closed", "labels": [], "created_at": "2017-06-15T02:50:43Z", "updated_at": "2017-06-15T19:48:14Z", "comments": 3, "user": "erindb" }, { "repo": "pytorch/examples", "number": 168, "title": "Regarding dimensions of mean and variance ", "body": "Its a multivariate normal distribution in latent space and input space so mean(mu) and variance should be in multidimensional form(matrix) per distribution but your code is generating single value of mean and variance per distribution. So what is the math or implementation process behind it?", "url": "https://github.com/pytorch/examples/issues/168", "state": "closed", "labels": [], "created_at": "2017-06-09T10:31:23Z", "updated_at": "2017-10-01T22:51:56Z", "comments": 1, "user": "anindyasarkarIITH" }, { "repo": "pytorch/examples", "number": 166, "title": "why input data is not copied to CUDA memory during training (only target) ?", "body": "in ImageNet example why only target is copied to CUDA memory target.cuda(async=True) and the absence of input.cuda() in training phase? 
", "url": "https://github.com/pytorch/examples/issues/166", "state": "closed", "labels": [], "created_at": "2017-06-07T08:48:38Z", "updated_at": "2017-06-07T09:52:22Z", "comments": 1, "user": "chahrazaddo" }, { "repo": "pytorch/tutorials", "number": 94, "title": "blog tutorial and slides", "body": "Couldn't find you on twitter so raising this here.\r\n\r\nI wrote a beginner's first steps blog and a presentation for the pydata london monthly meetup:\r\n\r\n- [https://goo.gl/EmSfNk](https://goo.gl/EmSfNk)\r\n- [http://makeyourownneuralnetwork.blogspot.co.uk/2017/05/learning-mnist-with-gpu-acceleration.html](http://makeyourownneuralnetwork.blogspot.co.uk/2017/05/learning-mnist-with-gpu-acceleration.html)\r\n\r\nPerhaps these could be the basis for a very beginner-friendly gentle introduction to PyTorch and it's concepts?\r\n\r\nMyself I couldn't find beginner-friendly guides with a logical progression, The existing tutorials are not really for complete (but intelligent or interested) beginners.\r\n\r\nHow do I help?", "url": "https://github.com/pytorch/tutorials/issues/94", "state": "closed", "labels": [], "created_at": "2017-06-01T12:54:46Z", "updated_at": "2017-07-05T17:28:08Z", "comments": 1, "user": "makeyourownneuralnetwork" }, { "repo": "pytorch/examples", "number": 163, "title": "super_resolution model building question", "body": "class Net(nn.Module):\r\n def __init__(self, upscale_factor):\r\n super(Net, self).__init__()\r\n\r\n self.relu = nn.ReLU()\r\n self.conv1 = nn.Conv2d(1, 64, 5, 1, 2)\r\n self.conv2 = nn.Conv2d(64, 64, 3, 1, 1)\r\n self.conv3 = nn.Conv2d(64, 32, 3, 1, 1)\r\n self.conv4 = nn.Conv2d(32, upscale_factor ** 2, 3, 1, 1)\r\n self.pixel_shuffle = nn.PixelShuffle(upscale_factor)\r\n\r\nhow could u get the following information from the paper,\r\nin the self.conv1, there is padding = 2\r\nlayer num l in paper is 3 ,why do u add self.conv2,?\r\nself.conv4 's output_channel is upscale_factor**2\r\n", "url": "https://github.com/pytorch/examples/issues/163", "state": "closed", "labels": [ "question" ], "created_at": "2017-05-30T12:48:11Z", "updated_at": "2022-03-10T01:56:57Z", "comments": 1, "user": "pageedward" }, { "repo": "pytorch/examples", "number": 162, "title": "Request for examples on Recurrent Highway Networks (RHN)", "body": "Is it possible to use the existing torch.nn modules and implement RHNs? Would it make sense to have RHN as a separate module in torch.nn?\r\n\r\nFor reference, someone did raise this issue in pytorch/pytorch https://github.com/pytorch/pytorch/issues/516", "url": "https://github.com/pytorch/examples/issues/162", "state": "closed", "labels": [], "created_at": "2017-05-30T05:49:20Z", "updated_at": "2022-03-10T01:56:13Z", "comments": 2, "user": "sanyam5" }, { "repo": "pytorch/tutorials", "number": 89, "title": "is the grad value wrong in beginner_source/blitz/autograd_tutorial.py line 92?", "body": "in line 92: `z_i = 3(x_i+2)^2` and `z_i\\bigr\\rvert_{x_i=1} = 27`.\r\n\r\nI think `z_i\\bigr\\rvert_{x_i=1} = 6(x_i+2)\\rvert_{x_i=1} = 6*(1+2) = 18`, please correct me if I am wrong, otherwise i will submit a pull request.\r\n", "url": "https://github.com/pytorch/tutorials/issues/89", "state": "closed", "labels": [], "created_at": "2017-05-25T23:59:47Z", "updated_at": "2017-05-29T17:02:53Z", "comments": 2, "user": "ningzhou" }, { "repo": "pytorch/examples", "number": 158, "title": "Shapes in SNLI", "body": "Looking over the SNLI example, something seems off to me. I hope I'm just missing something. 
First, a batch is embedded and, from the docs, I understand that Embedding layers output the shape `(N, W, D)` where N is the batch size and W is the sequence length. This is passed to the Encoder where it extracts the batch_size with `batch_size = inputs.size()[1]`. Wouldn't that give you the W and not N? Also, the inputs are passed as-is to the LSTM, which expects the shape `(W, N, D)`, but no reshaping is ever done. It seems like the Encoder is assuming `(W, N, D)` data from the start but there is never any `view` done on the embed to change the order of the dimensions, right?", "url": "https://github.com/pytorch/examples/issues/158", "state": "closed", "labels": [ "question", "nlp" ], "created_at": "2017-05-07T02:32:11Z", "updated_at": "2022-03-10T03:19:09Z", "comments": 2, "user": "neverfox" }, { "repo": "pytorch/examples", "number": 157, "title": "two lines of code in mnist/main.py", "body": "There are two arguments called batch_size and test_batch_size:\r\n`parser.add_argument('--batch-size', type=int, default=64, metavar='N',\r\n help='input batch size for training (default: 64)')`\r\n`parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',\r\n help='input batch size for testing (default: 1000)')`\r\nbut batch_size is used here:\r\n`test_loader = torch.utils.data.DataLoader(\r\n datasets.MNIST('../data', train=False, transform=transforms.Compose([\r\n transforms.ToTensor(),\r\n transforms.Normalize((0.1307,), (0.3081,))\r\n ])),\r\n batch_size=args.batch_size, shuffle=True, **kwargs)`\r\n\r\nAlso, what does this line(line 105) do:\r\n`test_loss = test_loss`\r\n\r\nand it seems that `epoch` is not used in test().", "url": "https://github.com/pytorch/examples/issues/157", "state": "closed", "labels": [], "created_at": "2017-05-04T07:40:05Z", "updated_at": "2020-10-10T02:22:56Z", "comments": 0, "user": "iamabug" }, { "repo": "pytorch/tutorials", "number": 77, "title": "Slowdown in DQN RL Tutorial", "body": "After about 5 episodes on latest master build of Pytorch, the time to execute each step t in the main loop slows way down. I tried a pip install of Pytorch as well to test if it was just my version and same thing. I am on OSX with no cuda. Is slowdown normal? I don't see anything in the optimization step that should really slow this down over time. Didn't know if this could be gym related as well. \r\n\r\nIf this isn't normal I will try to dig in some more and see what is causing this for me.\r\n\r\nThanks", "url": "https://github.com/pytorch/tutorials/issues/77", "state": "closed", "labels": [], "created_at": "2017-04-26T21:58:41Z", "updated_at": "2018-01-22T04:54:10Z", "comments": 1, "user": "lbollar" }, { "repo": "pytorch/pytorch", "number": 1344, "title": "What the function is about element-wise product(Hadamard product) in pytorch?", "body": "", "url": "https://github.com/pytorch/pytorch/issues/1344", "state": "closed", "labels": [], "created_at": "2017-04-24T10:59:08Z", "updated_at": "2017-04-24T13:07:50Z", "user": "stevenhanjun" }, { "repo": "pytorch/examples", "number": 147, "title": "imagenet example training gets slower over time.", "body": "It seems that as I do training, the per batch time gets slower and slower. 
\r\n\r\nFor example, when I run `CUDA_VISIBLE_DEVICES=0 python main.py -a alexnet --lr 0.01 --workers 22 /ssd/cv_datasets/ILSVRC2015/Data/CLS-LOC`.\r\n\r\nInitially I get an average per batch time of about 0.25s\r\n\r\nAfter several batches, I get 0.5s.\r\n\r\nI `top` and find that most of memory (128GB) is occupied \r\n\r\nHow to fix this?", "url": "https://github.com/pytorch/examples/issues/147", "state": "closed", "labels": [], "created_at": "2017-04-20T19:27:35Z", "updated_at": "2019-05-03T09:09:49Z", "comments": 10, "user": "zym1010" }, { "repo": "pytorch/examples", "number": 144, "title": "why treating Alexnet/VGG differently in ImageNet example?", "body": "in <https://github.com/pytorch/examples/blob/master/imagenet/main.py#L68-L72>, it seems that special care has to be taken when wrapping the module with `DataParallel`. Why is this the case? Also, I don't understand why for AlexNet and VGG, `features` is wrapped, yet `classifier` is not.", "url": "https://github.com/pytorch/examples/issues/144", "state": "closed", "labels": [], "created_at": "2017-04-16T04:26:33Z", "updated_at": "2020-01-08T00:27:23Z", "comments": 6, "user": "zym1010" }, { "repo": "pytorch/examples", "number": 142, "title": "action.reinforce(reward)", "body": "What does \"action.reinforce(reward)\" mean? Does it means gradient descent?\r\n![image](https://cloud.githubusercontent.com/assets/12723964/25036722/01b61e70-2128-11e7-9c41-3f21fb5fe13b.png)\r\n", "url": "https://github.com/pytorch/examples/issues/142", "state": "closed", "labels": [], "created_at": "2017-04-14T07:35:47Z", "updated_at": "2017-04-14T11:54:32Z", "comments": 1, "user": "susht3" }, { "repo": "pytorch/examples", "number": 137, "title": "How To Correctly Kill MultiProcesses During Multi-GPU Training", "body": "During the training of using examples/imagenet/main.py, I used the following command:\r\n\r\n CUDA_VISIBLE_DEVICES=0,1,2,3 nohup python main.py [options] path/to/imagenetdir 1>a.log 2>a.err &\r\n\r\nThen it starts 5 processes in the system, 1 main process appears in nvidia-smi.\r\n\r\nMost of the Time (90% of the time) after I first kill the main process, GPU usage down to 0% so I can kill the other 4 to release GPU Mem to start a new training task. Sometimes (10% of the time), after I killed these 5 processes, the main process remained to be \"python [defunct]\" that cannot be killed even by sudo kill -s 9. The usage of GPU AND the GPU mem are not released.\r\n\r\nMulti-gpu training happened at where I use the following line in my code:\r\n\r\n model = torch.nn.DataParallel(model).cuda()\r\n\r\nPlease give some hint on \"how to correctly kill multi-gpu training pytorch process[es].\"\r\n\r\nThanks.", "url": "https://github.com/pytorch/examples/issues/137", "state": "closed", "labels": [], "created_at": "2017-04-10T07:36:38Z", "updated_at": "2022-03-09T21:27:41Z", "comments": 1, "user": "catalystfrank" }, { "repo": "pytorch/examples", "number": 126, "title": "ImageNet example is falling apart in multiple ways", "body": "I am experimenting with Soumith's ImageNet example, but it is crashing or deadlocking in three different ways. I have added a bunch of \"print\" statements to it to figure out where it is crashing, and here is the GIST of full script: (as you can see, there are almost no significant modifications to the original code.) All code is running on 2x NVidia Titan X 12 GB cards with 96 GB RAM. 
\r\n\r\nhttps://gist.github.com/FuriouslyCurious/81742b8126f07f919522a588147e6086\r\n\r\n## Issue 1: transforms.Scale(512) fails in THCTensorMathBlas.cu:241\r\n\r\nHow to reproduce: \r\n1. Images are being fed with transforms.Scale(512) or transforms.Scale(1024) \r\n2. Source images are 2048x2048.\r\n3. Workers >= 1\r\n4. Batchsize >= 2\r\n5. Script will crash on its own in few minutes\r\n\r\nOutput\r\n```\r\n python train.py -a resnet18 -j 1 -b 2 /home/FC/data/P/\r\n=> Parsing complete...\r\n=> creating model 'resnet18'\r\n=> Using CUDA DataParallel\r\n=> Starting training images loading...\r\n=> Starting validation images loading...\r\n=> Loss criterion and optimizer setup\r\n=> Starting training...\r\n=> Training Epoch 0\r\nTraceback (most recent call last):\r\n File \"train.py\", line 299, in <module>\r\n main()\r\n File \"train.py\", line 140, in main\r\n train(train_loader, model, criterion, optimizer, epoch)\r\n File \"train.py\", line 177, in train\r\n output = model(input_var)\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 202, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py\", line 92, in forward\r\n outputs = self.parallel_apply(replicas, scattered, gpu_dicts)\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py\", line 102, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs)\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py\", line 50, in parallel_apply\r\n raise output\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py\", line 30, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 202, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torchvision-0.1.6-py3.5.egg/torchvision/models/resnet.py\", line 150, in forward\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 202, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/linear.py\", line 54, in forward\r\n return self._backend.Linear()(input, self.weight, self.bias)\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/_functions/linear.py\", line 10, in forward\r\n output.addmm_(0, 1, input, weight.t())\r\nRuntimeError: size mismatch at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488757768560/work/torch/lib/THC/generic/THCTensorMathBlas.cu:241\r\n```\r\n\r\n\r\n\r\n## Issue 2: Multiple worker threads deadlock in index_queue.get() and waiter.acquire()\r\n\r\nHow to reproduce: \r\n1. Images are being fed with default crop: transforms.RandomSizedCrop(224) \r\n2. Source images are 2048x2048.\r\n3. Workers > 2\r\n4. Batchsize > 40\r\n5. When you see GPU clock speed fall to resting MHz on NVidia-smi, script has deadlocked in waiter.acquire() and index_queue.get(). 
Abort the script manually.\r\n\r\n```\r\npython train.py -a resnet18 /home/FC/data/P\r\n=> Parsing complete...\r\n=> creating model 'resnet18'\r\n=> Using CUDA DataParallel\r\n=> Starting training images loading...\r\n=> Starting validation images loading...\r\n=> Loss criterion and optimizer setup\r\n=> Starting training...\r\n=> Training Epoch 0\r\n^CProcess Process-4:\r\nProcess Process-3:\r\nTraceback (most recent call last):\r\nTraceback (most recent call last):\r\n File \"train.py\", line 299, in <module>\r\n main()\r\n File \"train.py\", line 140, in main\r\n train(train_loader, model, criterion, optimizer, epoch)\r\n File \"train.py\", line 168, in train\r\n for i, (input, target) in enumerate(train_loader):\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/utils/data/dataloader.py\", line 168, in __next__\r\n idx, batch = self.data_queue.get()\r\n File \"/conda3/envs/idp/lib/python3.5/queue.py\", line 164, in get\r\n self.not_empty.wait()\r\n File \"/conda3/envs/idp/lib/python3.5/threading.py\", line 293, in wait\r\n waiter.acquire()\r\nTraceback (most recent call last):\r\n File \"/conda3/envs/idp/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\r\n self.run()\r\n File \"/conda3/envs/idp/lib/python3.5/multiprocessing/process.py\", line 93, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/conda3/envs/idp/lib/python3.5/site-packages/torch/utils/data/dataloader.py\", line 26, in _worker_loop\r\n r = index_queue.get()\r\n File \"/conda3/envs/idp/lib/python3.5/multiprocessing/queues.py\", line 342, in get\r\n with self._rlock:\r", "url": "https://github.com/pytorch/examples/issues/126", "state": "closed", "labels": [], "created_at": "2017-03-28T01:07:36Z", "updated_at": "2017-03-28T01:08:39Z", "comments": 1, "user": "FuriouslyCurious" }, { "repo": "pytorch/examples", "number": 116, "title": "why is detach necessary ", "body": "Hi, I am wondering why is detach necessary in this line:\r\nhttps://github.com/pytorch/examples/blob/a60bd4e261afc091004ea3cf582d0ad3b2e01259/dcgan/main.py#L230\r\n\r\nI understand that we want to update the gradients of netD without changin the ones of netG. But if the optimizer is only using the parameters of netD, then only its weight will be updated. Am I missing something here?\r\nThanks in advance!\r\n", "url": "https://github.com/pytorch/examples/issues/116", "state": "closed", "labels": [], "created_at": "2017-03-20T22:12:36Z", "updated_at": "2022-04-16T07:20:21Z", "comments": 17, "user": "rogertrullo" }, { "repo": "pytorch/tutorials", "number": 47, "title": "Web page for Tutorials ", "body": "Hi,\r\n\r\nI've been working on beautifying/integrating all the tutorials on pytorch into one. see https://github.com/pytorch/pytorch/pull/778. These tutorials are based on [sphinx-gallery](http://sphinx-gallery.readthedocs.io) and tutorials are executed during build time. \r\n\r\nI've created a [separate repo](https://github.com/chsasank/pytorch-tutorials) for the tutorials and used gh-pages to host them: http://chsasank.github.io/pytorch-tutorials. I also added my own [transfer learning tutorial](https://chsasank.github.io/pytorch-tutorials/tutorials/transfer_learning_tutorial.html) \r\n\r\nAfter a discussion with @soumith, he suggested we should host these tutorials at tutorials.pytorch.org. 
He also requested a change:\r\n\r\n- [x] Categorize tutorials by level instead of source\r\n\r\nIf indeed this is to be front face of tutorials, we'll need to figure out \r\n- [x] how to modify the repo\r\n- [x] how to build the sources and host \r\n\r\n\r\nFor hosting, we shouldn't probably use github pages as it will mess up the git history with all the html files. \r\n\r\nSince these tutorials are executed at build, we might need a decently powered build environment. Most tutorials take may be 5 min on my macbook air. Except [seq2seq](https://chsasank.github.io/pytorch-tutorials/practical-pytorch/seq2seq-translation-tutorial.html) tutorial, which took 40 min on CPU/25 min on GPU. Note that a tutorial is re-excuted only if changes are made to the tutorial file.\r\n\r\n\r\nThanks,\r\nSasank.", "url": "https://github.com/pytorch/tutorials/issues/47", "state": "closed", "labels": [], "created_at": "2017-03-14T12:37:35Z", "updated_at": "2017-04-14T18:46:27Z", "comments": 13, "user": "chsasank" }, { "repo": "pytorch/tutorials", "number": 44, "title": "Where is Variable?", "body": "In `Reinforcement (Q-)Learning with PyTorch2`, the section `Training hyperparameters and utilities` claim the cell providing `Variable` which is \"a simple wrapper around torch.autograd\". But I can't found it in the cell. Then I encounter `NameError: name 'Variable' is not defined`, anyway I import Variable from `torch.autograd` instead. So where is Variable? Or how can I implement it by scratch?", "url": "https://github.com/pytorch/tutorials/issues/44", "state": "closed", "labels": [], "created_at": "2017-03-06T04:57:01Z", "updated_at": "2019-12-02T12:42:11Z", "user": "yiyuezhuo" }, { "repo": "pytorch/tutorials", "number": 41, "title": "Numerically unstable initialized values for uninitialized tensors?", "body": "I was trying to follow the tutorial when I noticed that if I just create an \"uninitialized matrix\", its values are not numerically stable. 
I guess since we will have to initialize the matrix later, it doesn't really matter, but I'm just wondering if this is intentional.\r\n\r\nI'm running pyTorch with anaconda python 3.6, CUDA v8 on Linux.\r\n\r\n```python\r\nfrom __future__ import print_function\r\nimport torch\r\n```\r\n\r\n\r\n```python\r\nx = torch.Tensor(5, 3)\r\n```\r\n\r\n\r\n```python\r\nx = torch.rand(5, 3)\r\n```\r\n\r\n\r\n```python\r\nx\r\n```\r\n\r\n\r\n\r\n\r\n \r\n -1.6775e+31 4.5895e-41 4.0929e-37\r\n 0.0000e+00 0.0000e+00 0.0000e+00\r\n 0.0000e+00 0.0000e+00 0.0000e+00\r\n 0.0000e+00 0.0000e+00 0.0000e+00\r\n 0.0000e+00 0.0000e+00 0.0000e+00\r\n [torch.FloatTensor of size 5x3]\r\n\r\n\r\n\r\n\r\n```python\r\ny = torch.Tensor(5, 3); y\r\n```\r\n\r\n\r\n\r\n\r\n \r\n -1.6775e+31 4.5895e-41 4.2770e-37\r\n 0.0000e+00 0.0000e+00 0.0000e+00\r\n 0.0000e+00 0.0000e+00 0.0000e+00\r\n 0.0000e+00 0.0000e+00 0.0000e+00\r\n 0.0000e+00 0.0000e+00 0.0000e+00\r\n [torch.FloatTensor of size 5x3]\r\n\r\n\r\n\r\n\r\n```python\r\nx * y\r\n```\r\n\r\n\r\n\r\n\r\n \r\n inf 0 0\r\n 0 0 0\r\n 0 0 0\r\n 0 0 0\r\n 0 0 0\r\n [torch.FloatTensor of size 5x3]\r\n", "url": "https://github.com/pytorch/tutorials/issues/41", "state": "closed", "labels": [], "created_at": "2017-02-27T03:17:09Z", "updated_at": "2017-02-27T03:38:54Z", "comments": 1, "user": "r-luo" }, { "repo": "pytorch/tutorials", "number": 26, "title": "Training on GPU in deep learning notebook - inputs/labels need cuda()", "body": "In working through the deep learning notebook, it's not obvious at first how to get the learning working once you put the net on the GPU.\r\n\r\nAfter some trial and error, this worked \r\n\r\n inputs, labels = Variable(inputs).cuda(), Variable(labels).cuda()\r\n\r\nI could make a PR with this addition if desired", "url": "https://github.com/pytorch/tutorials/issues/26", "state": "closed", "labels": [], "created_at": "2017-02-04T21:38:56Z", "updated_at": "2017-05-23T16:37:32Z", "comments": 3, "user": "gojira" }, { "repo": "pytorch/tutorials", "number": 15, "title": "There is any fine tune tutorials?", "body": "Fine tune is very easy in Torch and Caffe, but I can't find how do fine tune in pytorch. Is there any fine tune examples or tutorials? ", "url": "https://github.com/pytorch/tutorials/issues/15", "state": "closed", "labels": [], "created_at": "2017-01-22T09:06:53Z", "updated_at": "2017-10-31T07:24:56Z", "comments": 9, "user": "Teaonly" }, { "repo": "pytorch/tutorials", "number": 14, "title": "Potential improvement to 60 minute blitz for pasteability?", "body": "Hello! 
I'm very much a newbie to this:\r\n\r\nhttps://github.com/pytorch/tutorials/blob/master/Deep%20Learning%20with%20PyTorch.ipynb\r\n\r\nI followed this guide with Anaconda 3.5 and got to this point: `out = net(input)`\r\nI got a NotImplementedError from the original nn module that the class was supposed to override.\r\n\r\nTurns out I skipped the error messages I got in interactive python where the indentation was wrong (so forward function wasn't implemented in my `Net` class).\r\n\r\nIf we removed the spaces between the functions or used comments we could avoid the issue:\r\n```\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\n\r\nclass Net(nn.Module):\r\n def __init__(self):\r\n super(Net, self).__init__()\r\n self.conv1 = nn.Conv2d(1, 6, 5) # 1 input image channel, 6 output channels, 5x5 square convolution kernel\r\n self.conv2 = nn.Conv2d(6, 16, 5)\r\n self.fc1 = nn.Linear(16*5*5, 120) # an affine operation: y = Wx + b\r\n self.fc2 = nn.Linear(120, 84)\r\n self.fc3 = nn.Linear(84, 10)\r\n def forward(self, x):\r\n x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # Max pooling over a (2, 2) window\r\n x = F.max_pool2d(F.relu(self.conv2(x)), 2) # If the size is a square you can only specify a single number\r\n x = x.view(-1, self.num_flat_features(x))\r\n x = F.relu(self.fc1(x))\r\n x = F.relu(self.fc2(x))\r\n x = self.fc3(x)\r\n return x\r\n def num_flat_features(self, x):\r\n size = x.size()[1:] # all dimensions except the batch dimension\r\n num_features = 1\r\n for s in size:\r\n num_features *= s\r\n return num_features\r\n\r\nnet = Net()\r\nnet\r\n```", "url": "https://github.com/pytorch/tutorials/issues/14", "state": "closed", "labels": [], "created_at": "2017-01-22T02:29:52Z", "updated_at": "2017-01-22T03:05:36Z", "comments": 1, "user": "youanden" }, { "repo": "pytorch/tutorials", "number": 7, "title": "Feature Request: tutorial on loading datasets", "body": "A tutorial outlining how to make use of the `torch.utils.data.Dataset` and `torch.utils.data.DataLoader` on your own data (not just the `torchvision.datasets`) would be good. The documentation page is quite obscure, and it is not entirely clear how these can be made use of on your own data. \r\n\r\nAlso outlining what would be good practices for when your data is: \r\n\r\n- A numpy array\r\n- A folder full of image files\r\n\r\nAnd if pytorch has built in functions for creating queues of data, for when the data is too big to all fit in memory in one go (eg in the case of a folder full of image files). \r\n\r\n", "url": "https://github.com/pytorch/tutorials/issues/7", "state": "closed", "labels": [ "enhancement" ], "created_at": "2017-01-19T11:08:21Z", "updated_at": "2023-05-26T20:43:34Z", "comments": 8, "user": "ronrest" }, { "repo": "pytorch/tutorials", "number": 5, "title": "Initialize with t7 files?", "body": "If I trained a model with Torch and stored the weights using t7 format. Is it possible to use this as initialization in pytorch? Thank you.", "url": "https://github.com/pytorch/tutorials/issues/5", "state": "closed", "labels": [], "created_at": "2017-01-18T18:50:08Z", "updated_at": "2017-01-18T19:20:07Z", "comments": 2, "user": "Yuliang-Zou" } ]