Date: Wed Dec 30 09:02:48 2015 -0600
qnx: removed split, and valid() fail hashes with upper case
```
The john-jumbo 1.8.0-1 I installed with homebrew gives the following error when I try to test pbkdf2-hmac-sha512-opencl:
```
$ john --test --format=pbkdf2-hmac-sha512-opencl --dev=1
Device 1: ATI Radeon HD 6490M
Local worksize (LWS) 64, global worksize (GWS) 256
Benchmarking: pbkdf2-hmac-sha512-opencl, GRUB2 / OS X 10.8+, rounds=10000 [PBKDF2-SHA512 OpenCL]... FAILED (get_hash[1](0))
```
The master branch gives something different:
```
$ /github/JohnTheRipper/run/john --test --format=pbkdf2-hmac-sha512-opencl --dev=1
Device 1: ATI Radeon HD 6490M
Benchmarking: PBKDF2-HMAC-SHA512-opencl, GRUB2 / OS X 10.8+, rounds=1000 [PBKDF2-SHA512 OpenCL]... Options used: -I /Users/aktau/github/JohnTheRipper/run/kernels -cl-mad-enable -D__OS_X__ -D__GPU__ -DDEVICE_INFO=10 -DSIZEOF_SIZE_T=8 -DDEV_VER_MAJOR=1 -DDEV_VER_MINOR=2 -D_OPENCL_COMPILER -DHASH_LOOPS=250 -DPLAINTEXT_LENGTH=110 -DMAX_SALT_SIZE=107 $JOHN/kernels/pbkdf2_hmac_sha512_kernel.cl
Build log: Error returned by cvms_element_build_from_source
Error -11 building kernel $JOHN/kernels/pbkdf2_hmac_sha512_kernel.cl. DEVICE_INFO=10
OpenCL CL_BUILD_PROGRAM_FAILURE error in common-opencl.c:1013 - clBuildProgram failed.
```
I don't know whether this is intended or a regression (though both situations don't really work). But I'm reporting it here just in case.
If I can provide more information that would help debugging this, should that be desired, please tell me.
Before I forget, I compiled with:
```
$ cd src
$ OPENSSL_CFLAGS=-I$(brew --prefix)/opt/openssl/include OPENSSL_LIBS=""-L$(brew --prefix)/opt/openssl/lib -lssl -lcrypto -lz"" ./configure
$ make -s
```
",1, hmac opencl on osx el capitan no longer builds with master not even self test fail i just compiled jtr jumbo from this repo on osx this was the latest commit commit author jfoug date wed dec qnx removed split and valid fail hashes with upper case the john jumbo i installed with homebrew gives the following error when i try to test hmac opencl john test format hmac opencl dev device ati radeon hd local worksize lws global worksize gws benchmarking hmac opencl os x rounds failed get hash the master branch gives something different github johntheripper run john test format hmac opencl dev device ati radeon hd benchmarking hmac opencl os x rounds options used i users aktau github johntheripper run kernels cl mad enable d os x d gpu ddevice info dsizeof size t ddev ver major ddev ver minor d opencl compiler dhash loops dplaintext length dmax salt size john kernels hmac kernel cl build log error returned by cvms element build from source error building kernel john kernels hmac kernel cl device info opencl cl build program failure error in common opencl c clbuildprogram failed i don t know whether this is intended or a regression though both situations don t really work but i m reporting it here just in case if i can provide more information that would help debugging this should that be desired please tell me before i forget i compiled with cd src openssl cflags i brew prefix opt openssl include openssl libs l brew prefix opt openssl lib lssl lcrypto lz configure make s ,1
694,9381694928.0,IssuesEvent,2019-04-04 20:18:12,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,Unresolved symbol: amd_bitalign on Beignet 1.3 w/ Intel(R) HD Graphics Kabylake Desktop GT1.5,portability,"```
$ ./john --test=0 --format=md5crypt-opencl --devices=1
Device 1: Intel(R) HD Graphics Kabylake Desktop GT1.5
Testing: md5crypt-opencl, crypt(3) $1$ [MD5 OpenCL]... Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Options used: -I /home/fd/git/JtR/run/kernels -cl-mad-enable -D__GPU__ -DDEVICE_INFO=34 -DSIZEOF_SIZE_T=8 -DDEV_VER_MAJOR=1 -DDEV_VER_MINOR=3 -D_OPENCL_COMPILER -DPLAINTEXT_LENGTH=15 ./kernels/cryptmd5_kernel.cl
Build log: stringInput.cl:153:16: warning: implicit declaration of function 'amd_bitalign' is invalid in OpenCL
/home/fd/git/JtR/run/kernels/opencl_misc.h:153:33: note: expanded from macro 'BITALIGN_IMM'
/home/fd/git/JtR/run/kernels/opencl_misc.h:127:29: note: expanded from macro 'BITALIGN'
function 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlined
Error building kernel ./kernels/cryptmd5_kernel.cl. DEVICE_INFO=34
0: OpenCL CL_BUILD_PROGRAM_FAILURE (-11) error in opencl_common.c:1366 - clBuildProgram
```
This is on a Fedora 29 system.
```
$ ./john --list=build-info
Version: 1.8.0.16-jumbo-1-bleeding-d7d84b79d 2019-04-02 11:29:08 +0200
Build: linux-gnu 64-bit x86_64 AVX2 AC OMP
SIMD: AVX2, interleaving: MD4:3 MD5:3 SHA1:1 SHA256:1 SHA512:1
CPU tests: AVX2
$JOHN is ./
Format interface version: 14
Max. number of reported tunable costs: 4
Rec file version: REC4
Charset file version: CHR3
CHARSET_MIN: 1 (0x01)
CHARSET_MAX: 255 (0xff)
CHARSET_LENGTH: 24
SALT_HASH_SIZE: 1048576
SINGLE_IDX_MAX: 2147483648
SINGLE_BUF_MAX: 4294967295
Effective limit: Number of salts vs. SingleMaxBufferSize
Max. Markov mode level: 400
Max. Markov mode password length: 30
gcc version: 8.3.1
GNU libc version: 2.28 (loaded: 2.28)
OpenCL headers version: 2.2
Crypto library: OpenSSL
OpenSSL library version: 01010102f
OpenSSL 1.1.1b FIPS 26 Feb 2019
GMP library version: 6.1.2
File locking: fcntl()
fseek(): fseek
ftell(): ftell
fopen(): fopen
memmem(): System's
```
```
$ ./john --list=opencl-devices --devices=1
Platform #0 name: Intel Gen OCL Driver, version: OpenCL 2.0 beignet 1.3
Device #0 (1) name: Intel(R) HD Graphics Kabylake Desktop GT1.5
Device vendor: Intel
Device type: GPU (LE)
Device version: OpenCL 2.0 beignet 1.3
Driver version: 1.3
Native vector widths: char 8, short 8, int 4, long 2
Preferred vector width: char 16, short 8, int 4, long 2
Global Memory: 4 GB
Global Memory Cache: 8 KB
Local Memory: 64 KB (Local)
Constant Buffer size: 128 MB
Max memory alloc. size: 3 GB
Max clock (MHz): 1000
Profiling timer res.: 80 ns
Max Work Group Size: 512
Parallel compute cores: 24
Stream processors: 192 (24 x 8)
Speed index: 192000
```",True,"Unresolved symbol: amd_bitalign on Beignet 1.3 w/ Intel(R) HD Graphics Kabylake Desktop GT1.5 - ```
$ ./john --test=0 --format=md5crypt-opencl --devices=1
Device 1: Intel(R) HD Graphics Kabylake Desktop GT1.5
Testing: md5crypt-opencl, crypt(3) $1$ [MD5 OpenCL]... Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Unresolved symbol: amd_bitalign
Aborting...
Options used: -I /home/fd/git/JtR/run/kernels -cl-mad-enable -D__GPU__ -DDEVICE_INFO=34 -DSIZEOF_SIZE_T=8 -DDEV_VER_MAJOR=1 -DDEV_VER_MINOR=3 -D_OPENCL_COMPILER -DPLAINTEXT_LENGTH=15 ./kernels/cryptmd5_kernel.cl
Build log: stringInput.cl:153:16: warning: implicit declaration of function 'amd_bitalign' is invalid in OpenCL
/home/fd/git/JtR/run/kernels/opencl_misc.h:153:33: note: expanded from macro 'BITALIGN_IMM'
/home/fd/git/JtR/run/kernels/opencl_misc.h:127:29: note: expanded from macro 'BITALIGN'
function 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlinedfunction 'amd_bitalign' not found or cannot be inlined
Error building kernel ./kernels/cryptmd5_kernel.cl. DEVICE_INFO=34
0: OpenCL CL_BUILD_PROGRAM_FAILURE (-11) error in opencl_common.c:1366 - clBuildProgram
```
This is on a Fedora 29 system.
```
$ ./john --list=build-info
Version: 1.8.0.16-jumbo-1-bleeding-d7d84b79d 2019-04-02 11:29:08 +0200
Build: linux-gnu 64-bit x86_64 AVX2 AC OMP
SIMD: AVX2, interleaving: MD4:3 MD5:3 SHA1:1 SHA256:1 SHA512:1
CPU tests: AVX2
$JOHN is ./
Format interface version: 14
Max. number of reported tunable costs: 4
Rec file version: REC4
Charset file version: CHR3
CHARSET_MIN: 1 (0x01)
CHARSET_MAX: 255 (0xff)
CHARSET_LENGTH: 24
SALT_HASH_SIZE: 1048576
SINGLE_IDX_MAX: 2147483648
SINGLE_BUF_MAX: 4294967295
Effective limit: Number of salts vs. SingleMaxBufferSize
Max. Markov mode level: 400
Max. Markov mode password length: 30
gcc version: 8.3.1
GNU libc version: 2.28 (loaded: 2.28)
OpenCL headers version: 2.2
Crypto library: OpenSSL
OpenSSL library version: 01010102f
OpenSSL 1.1.1b FIPS 26 Feb 2019
GMP library version: 6.1.2
File locking: fcntl()
fseek(): fseek
ftell(): ftell
fopen(): fopen
memmem(): System's
```
```
$ ./john --list=opencl-devices --devices=1
Platform #0 name: Intel Gen OCL Driver, version: OpenCL 2.0 beignet 1.3
Device #0 (1) name: Intel(R) HD Graphics Kabylake Desktop GT1.5
Device vendor: Intel
Device type: GPU (LE)
Device version: OpenCL 2.0 beignet 1.3
Driver version: 1.3
Native vector widths: char 8, short 8, int 4, long 2
Preferred vector width: char 16, short 8, int 4, long 2
Global Memory: 4 GB
Global Memory Cache: 8 KB
Local Memory: 64 KB (Local)
Constant Buffer size: 128 MB
Max memory alloc. size: 3 GB
Max clock (MHz): 1000
Profiling timer res.: 80 ns
Max Work Group Size: 512
Parallel compute cores: 24
Stream processors: 192 (24 x 8)
Speed index: 192000
```",1,unresolved symbol amd bitalign on beignet w intel r hd graphics kabylake desktop john test format opencl devices device intel r hd graphics kabylake desktop testing opencl crypt unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting unresolved symbol amd bitalign aborting options used i home fd git jtr run kernels cl mad enable d gpu ddevice info dsizeof size t ddev ver major ddev ver minor d opencl compiler dplaintext length kernels kernel cl build log stringinput cl warning implicit declaration of function amd bitalign is invalid in opencl home fd git jtr run kernels opencl misc h note expanded from macro bitalign imm home fd git jtr run kernels opencl misc h note expanded from macro bitalign function amd bitalign not found or cannot be inlinedfunction amd bitalign not found or cannot be inlinedfunction amd bitalign not found or cannot be inlinedfunction amd bitalign not found or cannot be inlinedfunction amd bitalign not found or cannot be inlinedfunction amd bitalign not found or cannot be inlinedfunction amd bitalign not found or cannot be inlinedfunction amd bitalign not found or cannot be inlinedfunction amd bitalign not found or cannot be inlined error building kernel kernels kernel cl device info opencl cl build program failure error in opencl common c clbuildprogram this is on a fedora system john list build info version jumbo bleeding build linux gnu bit ac omp simd interleaving cpu tests john is format interface version max number of reported tunable costs rec file version charset file version charset min charset max charset length salt hash size single idx max single buf max effective limit number of salts vs singlemaxbuffersize max markov mode level max markov mode password length gcc version gnu libc version loaded opencl headers version crypto library openssl openssl library version openssl fips feb gmp library version file locking fcntl fseek fseek ftell ftell fopen fopen memmem system s john list opencl devices devices platform name intel gen ocl driver version opencl beignet device name intel r hd graphics kabylake desktop device vendor intel device type gpu le device version opencl beignet driver version native vector widths char short int long preferred vector width char short int long global memory gb global memory cache kb local memory kb local constant buffer size mb max memory alloc size gb max clock mhz profiling timer res ns max work group size parallel compute cores stream processors x speed index ,1
921,12065934463.0,IssuesEvent,2020-04-16 10:50:43,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,reopened,Repair support for no-byte-addressable OpenCL devices,bug portability,"Hello, I am posting this here and not at the mailing list because I think this can be a bug. I used the command suggested by @solardiz john 343.in --format=wpapsk-openc, I get the following (I am copying from my cmd window as much as allowed as there are lines of text before this which are not showing, probably because there are too many lines in total):
```
""kernels\opencl_sha2_ctx.h"", line 179: error: write to < 32 bits via pointer
not allowed unless cl_khr_byte_addressable_store is enabled
PUT_UINT32BE(ctx->state[5], output, 20);
^
```
Edit by @solardiz: many occurrences of the above dropped.
```
Error limit reached.
100 errors detected in the compilation of "".\OCL9868.tmp.cl"".
Compilation terminated.
Internal error: clc compiler invocation failed.
Error building kernel kernels/wpapsk_kernel.cl. DEVICE_INFO=4194826
0: OpenCL CL_BUILD_PROGRAM_FAILURE (-11) error in opencl_common.c:1386 - clBuild
Program
C:\john\run>
```",True,"Repair support for no-byte-addressable OpenCL devices - Hello, I am posting this here and not at the mailing list because I think this can be a bug. I used the command suggested by @solardiz john 343.in --format=wpapsk-openc, I get the following (I am copying from my cmd window as much as allowed as there are lines of text before this which are not showing, probably because there are too many lines in total):
```
""kernels\opencl_sha2_ctx.h"", line 179: error: write to < 32 bits via pointer
not allowed unless cl_khr_byte_addressable_store is enabled
PUT_UINT32BE(ctx->state[5], output, 20);
^
```
Edit by @solardiz: many occurrences of the above dropped.
```
Error limit reached.
100 errors detected in the compilation of "".\OCL9868.tmp.cl"".
Compilation terminated.
Internal error: clc compiler invocation failed.
Error building kernel kernels/wpapsk_kernel.cl. DEVICE_INFO=4194826
0: OpenCL CL_BUILD_PROGRAM_FAILURE (-11) error in opencl_common.c:1386 - clBuild
Program
C:\john\run>
```",1,repair support for no byte addressable opencl devices hello i am posting this here and not at the mailing list because i think this can be a bug i used the command suggested by solardiz john in format wpapsk openc i get the following i am copying from my cmd window as much as allowed as there are lines of text before this which are not showing probably because there are too many lines in total kernels opencl ctx h line error write to bits via pointer not allowed unless cl khr byte addressable store is enabled put ctx state output edit by solardiz many occurrences of the above dropped error limit reached errors detected in the compilation of tmp cl compilation terminated internal error clc compiler invocation failed error building kernel kernels wpapsk kernel cl device info opencl cl build program failure error in opencl common c clbuild program c john run ,1
96099,19899770916.0,IssuesEvent,2022-01-25 06:07:51,flutter/website,https://api.github.com/repos/flutter/website,closed,"[PAGE ISSUE]: 'Write your first Flutter app, part 1'",p2-medium e0-minutes codelab,"### Page URL
https://docs.flutter.dev/get-started/codelab/
### Page source
https://github.com/flutter/website/tree/main/src/get-started/codelab.md
### Describe the problem
The page says the following:
> The main() method uses arrow (=>) notation. Use arrow notation for one-line functions or methods.
Yet the referenced code does not appear to actually use arrow notation.
### Expected fix
_No response_
### Additional context
_No response_",1.0,"[PAGE ISSUE]: 'Write your first Flutter app, part 1' - ### Page URL
https://docs.flutter.dev/get-started/codelab/
### Page source
https://github.com/flutter/website/tree/main/src/get-started/codelab.md
### Describe the problem
The page says the following:
> The main() method uses arrow (=>) notation. Use arrow notation for one-line functions or methods.
Yet the referenced code does not appear to actually use arrow notation.
### Expected fix
_No response_
### Additional context
_No response_",0, write your first flutter app part page url page source describe the problem the page says the following the main method uses arrow notation use arrow notation for one line functions or methods yet the referenced code does not appear to actually use arrow notation expected fix no response additional context no response ,0
288052,31856946666.0,IssuesEvent,2023-09-15 08:10:30,nidhi7598/linux-4.19.72_CVE-2022-3564,https://api.github.com/repos/nidhi7598/linux-4.19.72_CVE-2022-3564,closed,CVE-2020-29370 (High) detected in linuxlinux-4.19.294 - autoclosed,Mend: dependency security vulnerability,"## CVE-2020-29370 - High Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.294
The Linux Kernel
Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux
Found in HEAD commit: 454c7dacf6fa9a6de86d4067f5a08f25cffa519b
Found in base branch: main
Vulnerable Source Files (2)
/mm/slub.c
/mm/slub.c
Vulnerability Details
An issue was discovered in kmem_cache_alloc_bulk in mm/slub.c in the Linux kernel before 5.5.11. The slowpath lacks the required TID increment, aka CID-fd4d9c7d0c71.
Publish Date: 2020-11-28
URL: CVE-2020-29370
CVSS 3 Score Details (7.0)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29370
Release Date: 2020-11-28
Fix Resolution: v5.6-rc7,v5.5.11,v5.4.28
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-29370 (High) detected in linuxlinux-4.19.294 - autoclosed - ## CVE-2020-29370 - High Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.294
The Linux Kernel
Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux
Found in HEAD commit: 454c7dacf6fa9a6de86d4067f5a08f25cffa519b
Found in base branch: main
Vulnerable Source Files (2)
/mm/slub.c
/mm/slub.c
Vulnerability Details
An issue was discovered in kmem_cache_alloc_bulk in mm/slub.c in the Linux kernel before 5.5.11. The slowpath lacks the required TID increment, aka CID-fd4d9c7d0c71.
Publish Date: 2020-11-28
URL: CVE-2020-29370
CVSS 3 Score Details (7.0)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29370
Release Date: 2020-11-28
Fix Resolution: v5.6-rc7,v5.5.11,v5.4.28
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files mm slub c mm slub c vulnerability details an issue was discovered in kmem cache alloc bulk in mm slub c in the linux kernel before the slowpath lacks the required tid increment aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
726074,24987140682.0,IssuesEvent,2022-11-02 15:50:31,bcgov/entity,https://api.github.com/repos/bcgov/entity,closed, Message Error when Registering a DBA / Oct 24 2022/ SP GP Business Registry,bug Priority1 ENTITY,"A product team ticket to resolved this Ops ticket - https://app.zenhub.com/workspaces/ops-60f8556e05d25b0011468870/issues/bcgov-registries/ops-support/1583
""I have had two citizens call regarding registering a DBA. They are getting a “no matches found” message when trying to put the corporation in. I’ve asked them to try the registration and business numbers as well as the name, nothing works.""",1.0," Message Error when Registering a DBA / Oct 24 2022/ SP GP Business Registry - A product team ticket to resolved this Ops ticket - https://app.zenhub.com/workspaces/ops-60f8556e05d25b0011468870/issues/bcgov-registries/ops-support/1583
""I have had two citizens call regarding registering a DBA. They are getting a “no matches found” message when trying to put the corporation in. I’ve asked them to try the registration and business numbers as well as the name, nothing works.""",0, message error when registering a dba oct sp gp business registry a product team ticket to resolved this ops ticket i have had two citizens call regarding registering a dba they are getting a “no matches found” message when trying to put the corporation in i’ve asked them to try the registration and business numbers as well as the name nothing works ,0
142178,19074163373.0,IssuesEvent,2021-11-27 13:06:07,atlsecsrv-net-atlsecsrv-com/code.visualstudio,https://api.github.com/repos/atlsecsrv-net-atlsecsrv-com/code.visualstudio,closed,"WS-2019-0063 (High) detected in js-yaml-3.7.0.tgz, js-yaml-3.12.1.tgz",security vulnerability,"## WS-2019-0063 - High Severity Vulnerability
Vulnerable Libraries - js-yaml-3.7.0.tgz, js-yaml-3.12.1.tgz
js-yaml-3.7.0.tgz
YAML 1.2 parser and serializer
Library home page: https://registry.npmjs.org/js-yaml/-/js-yaml-3.7.0.tgz
Path to dependency file: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/package.json
Path to vulnerable library: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/node_modules/js-yaml
Dependency Hierarchy:
- gulp-cssnano-2.1.3.tgz (Root Library)
- cssnano-3.10.0.tgz
- postcss-svgo-2.1.6.tgz
- svgo-0.7.2.tgz
- :x: **js-yaml-3.7.0.tgz** (Vulnerable Library)
js-yaml-3.12.1.tgz
YAML 1.2 parser and serializer
Library home page: https://registry.npmjs.org/js-yaml/-/js-yaml-3.12.1.tgz
Path to dependency file: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/package.json
Path to vulnerable library: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/node_modules/js-yaml
Dependency Hierarchy:
- gulp-eslint-5.0.0.tgz (Root Library)
- eslint-5.13.0.tgz
- :x: **js-yaml-3.12.1.tgz** (Vulnerable Library)
Found in HEAD commit: a1479f17f72992a58ef6c45317028a2b0f60a97a
Found in base branch: master
Vulnerability Details
Js-yaml prior to 3.13.1 are vulnerable to Code Injection. The load() function may execute arbitrary code injected through a malicious YAML file.
Publish Date: 2019-04-05
URL: WS-2019-0063
CVSS 3 Score Details (8.1)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Origin: https://www.npmjs.com/advisories/813
Release Date: 2019-04-05
Fix Resolution: js-yaml - 3.13.1
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2019-0063 (High) detected in js-yaml-3.7.0.tgz, js-yaml-3.12.1.tgz - ## WS-2019-0063 - High Severity Vulnerability
Vulnerable Libraries - js-yaml-3.7.0.tgz, js-yaml-3.12.1.tgz
js-yaml-3.7.0.tgz
YAML 1.2 parser and serializer
Library home page: https://registry.npmjs.org/js-yaml/-/js-yaml-3.7.0.tgz
Path to dependency file: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/package.json
Path to vulnerable library: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/node_modules/js-yaml
Dependency Hierarchy:
- gulp-cssnano-2.1.3.tgz (Root Library)
- cssnano-3.10.0.tgz
- postcss-svgo-2.1.6.tgz
- svgo-0.7.2.tgz
- :x: **js-yaml-3.7.0.tgz** (Vulnerable Library)
js-yaml-3.12.1.tgz
YAML 1.2 parser and serializer
Library home page: https://registry.npmjs.org/js-yaml/-/js-yaml-3.12.1.tgz
Path to dependency file: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/package.json
Path to vulnerable library: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/node_modules/js-yaml
Dependency Hierarchy:
- gulp-eslint-5.0.0.tgz (Root Library)
- eslint-5.13.0.tgz
- :x: **js-yaml-3.12.1.tgz** (Vulnerable Library)
Found in HEAD commit: a1479f17f72992a58ef6c45317028a2b0f60a97a
Found in base branch: master
Vulnerability Details
Js-yaml prior to 3.13.1 are vulnerable to Code Injection. The load() function may execute arbitrary code injected through a malicious YAML file.
Publish Date: 2019-04-05
URL: WS-2019-0063
CVSS 3 Score Details (8.1)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Origin: https://www.npmjs.com/advisories/813
Release Date: 2019-04-05
Fix Resolution: js-yaml - 3.13.1
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws high detected in js yaml tgz js yaml tgz ws high severity vulnerability vulnerable libraries js yaml tgz js yaml tgz js yaml tgz yaml parser and serializer library home page a href path to dependency file tmp ws scm atlsecsrv net a atlsecsrv com package json path to vulnerable library tmp ws scm atlsecsrv net a atlsecsrv com node modules js yaml dependency hierarchy gulp cssnano tgz root library cssnano tgz postcss svgo tgz svgo tgz x js yaml tgz vulnerable library js yaml tgz yaml parser and serializer library home page a href path to dependency file tmp ws scm atlsecsrv net a atlsecsrv com package json path to vulnerable library tmp ws scm atlsecsrv net a atlsecsrv com node modules js yaml dependency hierarchy gulp eslint tgz root library eslint tgz x js yaml tgz vulnerable library found in head commit a href found in base branch master vulnerability details js yaml prior to are vulnerable to code injection the load function may execute arbitrary code injected through a malicious yaml file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution js yaml step up your open source security game with whitesource ,0
12,2566324443.0,IssuesEvent,2015-02-08 12:06:09,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,Cygwin fails (deadlock) for -fork=x,bug non-trivial portability,"Currently, we disable (forcibly) within the configure script because it is well know to be broken.
However, I did some work (I will document here), and it was also documented here: #1043 and have actually gotten cygwin to work 'better' with -fork, but it is still busted. Now with some of the other debugging we have in the logger.c, it is becoming pretty appearent that with our current code, cygwin is failing with some form of race condition / deadlock within the flock() function, interaction between 2 processes. Possibly we can get this working in some other way (there are other things than flock), or possibly we can escalate this up to the cygwin folks, showing that there IS a problem on their end.",True,"Cygwin fails (deadlock) for -fork=x - Currently, we disable (forcibly) within the configure script because it is well know to be broken.
However, I did some work (I will document here), and it was also documented here: #1043 and have actually gotten cygwin to work 'better' with -fork, but it is still busted. Now with some of the other debugging we have in the logger.c, it is becoming pretty appearent that with our current code, cygwin is failing with some form of race condition / deadlock within the flock() function, interaction between 2 processes. Possibly we can get this working in some other way (there are other things than flock), or possibly we can escalate this up to the cygwin folks, showing that there IS a problem on their end.",1,cygwin fails deadlock for fork x currently we disable forcibly within the configure script because it is well know to be broken however i did some work i will document here and it was also documented here and have actually gotten cygwin to work better with fork but it is still busted now with some of the other debugging we have in the logger c it is becoming pretty appearent that with our current code cygwin is failing with some form of race condition deadlock within the flock function interaction between processes possibly we can get this working in some other way there are other things than flock or possibly we can escalate this up to the cygwin folks showing that there is a problem on their end ,1
121,3371108436.0,IssuesEvent,2015-11-23 17:41:53,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,OpenCL WPAPSK fails with R9 290X and Cat 15.9,portability,Apparently it works with HD7970 and Cat 15.7. We need to figure out whether the problem is with Cat 15.9 or specific to the 290X (a bit unlikely).,True,OpenCL WPAPSK fails with R9 290X and Cat 15.9 - Apparently it works with HD7970 and Cat 15.7. We need to figure out whether the problem is with Cat 15.9 or specific to the 290X (a bit unlikely).,1,opencl wpapsk fails with and cat apparently it works with and cat we need to figure out whether the problem is with cat or specific to the a bit unlikely ,1
20414,3354495079.0,IssuesEvent,2015-11-18 12:29:10,hazelcast/hazelcast,https://api.github.com/repos/hazelcast/hazelcast,opened,Data loss occurs when member restarts for merge,Team: Core Type: Defect,"Here is the scenario for the issue:
1. A cluster with 7 members (6 lite, 1 regular) sets up. Configured merge policy for maps is hz.ADD_NEW_ENTRY.
2. Split brain occurs with 4 lite members to 2 lite + 1 regular members.
3. After join, smaller cluster with three members join to larger one. All three members restart for merge.
4. Data loss occurs after merge.
The expected behaviour is to recover all data after merge operations.",1.0,"Data loss occurs when member restarts for merge - Here is the scenario for the issue:
1. A cluster with 7 members (6 lite, 1 regular) sets up. Configured merge policy for maps is hz.ADD_NEW_ENTRY.
2. Split brain occurs with 4 lite members to 2 lite + 1 regular members.
3. After join, smaller cluster with three members join to larger one. All three members restart for merge.
4. Data loss occurs after merge.
The expected behaviour is to recover all data after merge operations.",0,data loss occurs when member restarts for merge here is the scenario for the issue a cluster with members lite regular sets up configured merge policy for maps is hz add new entry split brain occurs with lite members to lite regular members after join smaller cluster with three members join to larger one all three members restart for merge data loss occurs after merge the expected behaviour is to recover all data after merge operations ,0
562294,16656277075.0,IssuesEvent,2021-06-05 15:31:44,panix-os/Panix,https://api.github.com/repos/panix-os/Panix,closed,Move Panix Build Container to Appropriate Repo,enhancement high-priority,Also means `.scuba.yml` will need to be updated to reference the new location and version.,1.0,Move Panix Build Container to Appropriate Repo - Also means `.scuba.yml` will need to be updated to reference the new location and version.,0,move panix build container to appropriate repo also means scuba yml will need to be updated to reference the new location and version ,0
714240,24555317990.0,IssuesEvent,2022-10-12 15:26:19,AY2223S1-CS2103T-W17-2/tp,https://api.github.com/repos/AY2223S1-CS2103T-W17-2/tp,closed,Delete income,type.story priority.HIGH,"As a student with a source of income, I can delete income so that I can remove any wrong income records",1.0,"Delete income - As a student with a source of income, I can delete income so that I can remove any wrong income records",0,delete income as a student with a source of income i can delete income so that i can remove any wrong income records,0
491763,14170740167.0,IssuesEvent,2020-11-12 14:54:04,googleapis/repo-automation-bots,https://api.github.com/repos/googleapis/repo-automation-bots,closed,MoG: not merging an approved PR,bot: merge on green priority: p2 type: bug,"https://github.com/GoogleCloudPlatform/golang-samples/pull/1821 is approved and all checks have passed. MoG reacted with :eyes:. But, the PR hasn't been merged.",1.0,"MoG: not merging an approved PR - https://github.com/GoogleCloudPlatform/golang-samples/pull/1821 is approved and all checks have passed. MoG reacted with :eyes:. But, the PR hasn't been merged.",0,mog not merging an approved pr is approved and all checks have passed mog reacted with eyes but the pr hasn t been merged ,0
257675,27563807527.0,IssuesEvent,2023-03-08 01:07:55,LynRodWS/alcor,https://api.github.com/repos/LynRodWS/alcor,opened,CVE-2020-36188 (High) detected in jackson-databind-2.9.9.jar,security vulnerability,"## CVE-2020-36188 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.9.9.jar
General data-binding functionality for Jackson: works on core streaming API
Library home page: http://github.com/FasterXML/jackson
Path to dependency file: /services/api_gateway/pom.xml
Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar
Dependency Hierarchy:
- spring-cloud-starter-netflix-hystrix-2.1.2.RELEASE.jar (Root Library)
- hystrix-serialization-1.5.18.jar
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
Found in base branch: master
Vulnerability Details
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.JNDIConnectionSource.
Publish Date: 2021-01-06
URL: CVE-2020-36188
CVSS 3 Score Details (8.1)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2021-01-06
Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.9.10.8
Direct dependency fix Resolution (org.springframework.cloud:spring-cloud-starter-netflix-hystrix): 2.1.3.RELEASE
***
:rescue_worker_helmet: Automatic Remediation is available for this issue",True,"CVE-2020-36188 (High) detected in jackson-databind-2.9.9.jar - ## CVE-2020-36188 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.9.9.jar
General data-binding functionality for Jackson: works on core streaming API
Library home page: http://github.com/FasterXML/jackson
Path to dependency file: /services/api_gateway/pom.xml
Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar
Dependency Hierarchy:
- spring-cloud-starter-netflix-hystrix-2.1.2.RELEASE.jar (Root Library)
- hystrix-serialization-1.5.18.jar
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
Found in base branch: master
Vulnerability Details
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.JNDIConnectionSource.
Publish Date: 2021-01-06
URL: CVE-2020-36188
CVSS 3 Score Details (8.1)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2021-01-06
Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.9.10.8
Direct dependency fix Resolution (org.springframework.cloud:spring-cloud-starter-netflix-hystrix): 2.1.3.RELEASE
***
:rescue_worker_helmet: Automatic Remediation is available for this issue",0,cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file services api gateway pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring cloud starter netflix hystrix release jar root library hystrix serialization jar x jackson databind jar vulnerable library found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com newrelic agent deps ch qos logback core db jndiconnectionsource publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind direct dependency fix resolution org springframework cloud spring cloud starter netflix hystrix release rescue worker helmet automatic remediation is available for this issue,0
742,9981035956.0,IssuesEvent,2019-07-10 06:14:55,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,Simplify or avoid OpenCL kernel copying,Maintenance/cleanup portability,"As discussed in https://github.com/magnumripper/JohnTheRipper/issues/4037#issuecomment-506002082 we currently copy OpenCL kernels from `src/opencl` to `run/kernels` using unnecessarily complicated `make` magic. We should do it simpler, or maybe not do it at all but instead maintain those kernels under `run/kernels` (or call it `run/opencl`?) right away. We already have some other source files reside under `run` in our repository - e.g. `john.conf`.",True,"Simplify or avoid OpenCL kernel copying - As discussed in https://github.com/magnumripper/JohnTheRipper/issues/4037#issuecomment-506002082 we currently copy OpenCL kernels from `src/opencl` to `run/kernels` using unnecessarily complicated `make` magic. We should do it simpler, or maybe not do it at all but instead maintain those kernels under `run/kernels` (or call it `run/opencl`?) right away. We already have some other source files reside under `run` in our repository - e.g. `john.conf`.",1,simplify or avoid opencl kernel copying as discussed in we currently copy opencl kernels from src opencl to run kernels using unnecessarily complicated make magic we should do it simpler or maybe not do it at all but instead maintain those kernels under run kernels or call it run opencl right away we already have some other source files reside under run in our repository e g john conf ,1
262970,19849452683.0,IssuesEvent,2022-01-21 10:36:13,timoast/signac,https://api.github.com/repos/timoast/signac,opened,CreateChromatinAssay,documentation,"Hey,
like in issue #937 I am still trying to create a chromatin assay. I checked that I am using now using the count matrix as input. My new row- and colnames are the following
`head(rownames(ATAC_subset_S1D1$X))
head(colnames(ATAC_subset_S1D1$X))`
`'TAGTTGTCACCCTCAC-1-s1d1''CTATGGCCATAACGGG-1-s1d1''CCGCACACAGGTTAAA-1-s1d1''TCATTTGGTAATGGAA-1-s1d1''ACCACATAGGTGTCCA-1-s1d1''TGGATTGGTTTGCGAA-1-s1d1'
'chr1-9776-10668''chr1-180726-181005''chr1-181117-181803''chr1-191133-192055''chr1-267562-268456''chr1-629497-630394'`
But I am still getting the same error when creating the chromatin assay
`ATAC_Seu<-CreateChromatinAssay(counts=ATAC_subset_S1D1$X)`
```
Error in .get_data_frame_col_as_numeric(df, granges_cols[[""end""]]): some values in the ""end"" column cannot be turned into numeric values
Traceback:
1. CreateChromatinAssay(counts = ATAC_subset_S1D1$X)
2. StringToGRanges(regions = rownames(x = data.use), sep = sep)
3. makeGRangesFromDataFrame(df = ranges.df, ...)
4. .get_data_frame_col_as_numeric(df, granges_cols[[""end""]])
5. stop(wmsg(""some values in the "", ""\"""", names(df)[[col]], ""\"" "",
. ""column cannot be turned into numeric values""))
```
I would really appreciate your help again.
",1.0,"CreateChromatinAssay - Hey,
like in issue #937 I am still trying to create a chromatin assay. I checked that I am using now using the count matrix as input. My new row- and colnames are the following
`head(rownames(ATAC_subset_S1D1$X))
head(colnames(ATAC_subset_S1D1$X))`
`'TAGTTGTCACCCTCAC-1-s1d1''CTATGGCCATAACGGG-1-s1d1''CCGCACACAGGTTAAA-1-s1d1''TCATTTGGTAATGGAA-1-s1d1''ACCACATAGGTGTCCA-1-s1d1''TGGATTGGTTTGCGAA-1-s1d1'
'chr1-9776-10668''chr1-180726-181005''chr1-181117-181803''chr1-191133-192055''chr1-267562-268456''chr1-629497-630394'`
But I am still getting the same error when creating the chromatin assay
`ATAC_Seu<-CreateChromatinAssay(counts=ATAC_subset_S1D1$X)`
```
Error in .get_data_frame_col_as_numeric(df, granges_cols[[""end""]]): some values in the ""end"" column cannot be turned into numeric values
Traceback:
1. CreateChromatinAssay(counts = ATAC_subset_S1D1$X)
2. StringToGRanges(regions = rownames(x = data.use), sep = sep)
3. makeGRangesFromDataFrame(df = ranges.df, ...)
4. .get_data_frame_col_as_numeric(df, granges_cols[[""end""]])
5. stop(wmsg(""some values in the "", ""\"""", names(df)[[col]], ""\"" "",
. ""column cannot be turned into numeric values""))
```
I would really appreciate your help again.
",0,createchromatinassay hey like in issue i am still trying to create a chromatin assay i checked that i am using now using the count matrix as input my new row and colnames are the following head rownames atac subset x head colnames atac subset x tagttgtcaccctcac ctatggccataacggg ccgcacacaggttaaa tcatttggtaatggaa accacataggtgtcca tggattggtttgcgaa but i am still getting the same error when creating the chromatin assay atac seu createchromatinassay counts atac subset x error in get data frame col as numeric df granges cols some values in the end column cannot be turned into numeric values traceback createchromatinassay counts atac subset x stringtogranges regions rownames x data use sep sep makegrangesfromdataframe df ranges df get data frame col as numeric df granges cols stop wmsg some values in the names df column cannot be turned into numeric values i would really appreciate your help again ,0
287901,21676489913.0,IssuesEvent,2022-05-08 19:55:22,bounswe/bounswe2022group9,https://api.github.com/repos/bounswe/bounswe2022group9,closed,Combining Milestone Report,Documentation Priority: High In Progress,"TODO:
- [x] Everyone shall put their parts on the Milestone Report",1.0,"Combining Milestone Report - TODO:
- [x] Everyone shall put their parts on the Milestone Report",0,combining milestone report todo everyone shall put their parts on the milestone report,0
19921,11348737176.0,IssuesEvent,2020-01-24 01:33:42,Azure/azure-cli,https://api.github.com/repos/Azure/azure-cli,reopened,"az commands can trigger 429 / ""too many requests"" failures and provides no recourse for recovery.",AKS Core Service Attention,"## Describe the bug
Running az commands can generate 429 ""too many requests"" exceptions from Azure (possibly related to `az aks`? or possibly all commands -- I've definitely seen this at random from Azure before). It seems this happens with long running commands after they have already executed and az is polling for a result from Azure.
Ideally, when this happens, az should just [exponentially backoff](https://en.wikipedia.org/wiki/Exponential_backoff) (i.e. increase the timeout and try again). (Sometimes in the 429 response, there is even a `Retry-After` header that tells you exactly how long to wait!)
IMO, the *REAL* issue is that, you get back a failure message, and the command aborts, with no results -- **even if the command was successful** (e.g. you can't even just try to rerun the command at that point). -- Basically, the command shouldn't throw a perma-error unless it has actually failed. If the command is still running and might possibly succeed but you just failed to poll for a result, you should do a backoff and retry.
**Command Name**
`az aks nodepool add --resource-group MyResourceGroup --cluster-name MyClusterName --os-type Windows --node-vm-size ""Standard_B2s"" --name window --node-count 2 --kubernetes-version 1.13.12 --min-count 2 --max-count 6 --enable-cluster-autoscaler`
**Errors:**
```
WARNING: The behavior of this command has been altered by the following extension: aks-preview
ERROR: Deployment failed. Correlation ID: de22582b-9a0c-462b-b15a-7fd3d85d07e2. VMSSAgentPoolReconciler retry failed: autorest/azure: Service returned an error. Status=429
Code=""OperationNotAllowed"" Message=""The server rejected the request because too many requests have been received for this subscription."" Details=[{""code"":""TooManyRequests"",""message"":""{\""operationGroup\"":\
""HighCostGetVMScaleSet30Min\"",\""startTime\"":\""2020-01-17T17:29:36.1768987+00:00\"",\""endTime\"":\""2020-01-17T17:44:36.1768987+00:00\"",\""allowedRequestCount\"":1329,\""measuredRequestCount\"":1419}"",""target"":""H
ighCostGetVMScaleSet30Min""}] InnerError={""internalErrorCode"":""TooManyRequestsReceived""}
```
## To Reproduce:
Steps to reproduce the behavior.
- Run a long running command that continually polls azure for a result while your subscription is under heavy load (possibly from other such commands running in parallel?), until an http response with a status of 429 (""Too many requests"") is returned by the Azure API that is being called.
## Expected Behavior
- Az.exe shouldn't fail when the initial command turns out to be successful -- because it leaves the user in an unrecoverable state (e.g. the initial command appears to have failed, there is no output results, and re-running the command also fails, because now the resource exists! -- so you not only don't handle the 429 yourself, but you prevent the user from handling it too!).
- Specifically, calls to Azure made by Az.exe which return a 429 status should have transient fault handling baked in -- as specified by MSDN, [best practices for cloud applications](https://docs.microsoft.com/en-us/azure/architecture/best-practices/transient-faults): `All applications that communicate with remote services and resources must be sensitive to transient faults.`
## Environment Summary
```
Windows-10-10.0.18362-SP0
Python 3.6.6
Shell: cmd.exe
azure-cli 2.0.80
Extensions:
aks-preview 0.4.27
application-insights 0.1.1
```
## Additional Context
",1.0,"az commands can trigger 429 / ""too many requests"" failures and provides no recourse for recovery. - ## Describe the bug
Running az commands can generate 429 ""too many requests"" exceptions from Azure (possibly related to `az aks`? or possibly all commands -- I've definitely seen this at random from Azure before). It seems this happens with long running commands after they have already executed and az is polling for a result from Azure.
Ideally, when this happens, az should just [exponentially backoff](https://en.wikipedia.org/wiki/Exponential_backoff) (i.e. increase the timeout and try again). (Sometimes in the 429 response, there is even a `Retry-After` header that tells you exactly how long to wait!)
IMO, the *REAL* issue is that, you get back a failure message, and the command aborts, with no results -- **even if the command was successful** (e.g. you can't even just try to rerun the command at that point). -- Basically, the command shouldn't throw a perma-error unless it has actually failed. If the command is still running and might possibly succeed but you just failed to poll for a result, you should do a backoff and retry.
**Command Name**
`az aks nodepool add --resource-group MyResourceGroup --cluster-name MyClusterName --os-type Windows --node-vm-size ""Standard_B2s"" --name window --node-count 2 --kubernetes-version 1.13.12 --min-count 2 --max-count 6 --enable-cluster-autoscaler`
**Errors:**
```
WARNING: The behavior of this command has been altered by the following extension: aks-preview
ERROR: Deployment failed. Correlation ID: de22582b-9a0c-462b-b15a-7fd3d85d07e2. VMSSAgentPoolReconciler retry failed: autorest/azure: Service returned an error. Status=429
Code=""OperationNotAllowed"" Message=""The server rejected the request because too many requests have been received for this subscription."" Details=[{""code"":""TooManyRequests"",""message"":""{\""operationGroup\"":\
""HighCostGetVMScaleSet30Min\"",\""startTime\"":\""2020-01-17T17:29:36.1768987+00:00\"",\""endTime\"":\""2020-01-17T17:44:36.1768987+00:00\"",\""allowedRequestCount\"":1329,\""measuredRequestCount\"":1419}"",""target"":""H
ighCostGetVMScaleSet30Min""}] InnerError={""internalErrorCode"":""TooManyRequestsReceived""}
```
## To Reproduce:
Steps to reproduce the behavior.
- Run a long running command that continually polls azure for a result while your subscription is under heavy load (possibly from other such commands running in parallel?), until an http response with a status of 429 (""Too many requests"") is returned by the Azure API that is being called.
## Expected Behavior
- Az.exe shouldn't fail when the initial command turns out to be successful -- because it leaves the user in an unrecoverable state (e.g. the initial command appears to have failed, there is no output results, and re-running the command also fails, because now the resource exists! -- so you not only don't handle the 429 yourself, but you prevent the user from handling it too!).
- Specifically, calls to Azure made by Az.exe which return a 429 status should have transient fault handling baked in -- as specified by MSDN, [best practices for cloud applications](https://docs.microsoft.com/en-us/azure/architecture/best-practices/transient-faults): `All applications that communicate with remote services and resources must be sensitive to transient faults.`
## Environment Summary
```
Windows-10-10.0.18362-SP0
Python 3.6.6
Shell: cmd.exe
azure-cli 2.0.80
Extensions:
aks-preview 0.4.27
application-insights 0.1.1
```
## Additional Context
",0,az commands can trigger too many requests failures and provides no recourse for recovery describe the bug running az commands can generate too many requests exceptions from azure possibly related to az aks or possibly all commands i ve definitely seen this at random from azure before it seems this happens with long running commands after they have already executed and az is polling for a result from azure ideally when this happens az should just i e increase the timeout and try again sometimes in the response there is even a retry after header that tells you exactly how long to wait imo the real issue is that you get back a failure message and the command aborts with no results even if the command was successful e g you can t even just try to rerun the command at that point basically the command shouldn t throw a perma error unless it has actually failed if the command is still running and might possibly succeed but you just failed to poll for a result you should do a backoff and retry command name az aks nodepool add resource group myresourcegroup cluster name myclustername os type windows node vm size standard name window node count kubernetes version min count max count enable cluster autoscaler errors warning the behavior of this command has been altered by the following extension aks preview error deployment failed correlation id vmssagentpoolreconciler retry failed autorest azure service returned an error status code operationnotallowed message the server rejected the request because too many requests have been received for this subscription details code toomanyrequests message operationgroup starttime endtime allowedrequestcount measuredrequestcount target h innererror internalerrorcode toomanyrequestsreceived to reproduce steps to reproduce the behavior run a long running command that continually polls azure for a result while your subscription is under heavy load possibly from other such commands running in parallel until an http response with a status of too many requests is returned by the azure api that is being called expected behavior az exe shouldn t fail when the initial command turns out to be successful because it leaves the user in an unrecoverable state e g the initial command appears to have failed there is no output results and re running the command also fails because now the resource exists so you not only don t handle the yourself but you prevent the user from handling it too specifically calls to azure made by az exe which return a status should have transient fault handling baked in as specified by msdn all applications that communicate with remote services and resources must be sensitive to transient faults environment summary windows python shell cmd exe azure cli extensions aks preview application insights additional context ,0
31083,4683188869.0,IssuesEvent,2016-10-09 17:18:04,fossasia/susi_android,https://api.github.com/repos/fossasia/susi_android,closed,Unit test case - MainActivity view visibility,tests,"Can I write test cases ?
I will use Espresso for writing unit test cases.
Will start from checking presence of UI views
then gradually add methods to test different UI and navigation and features of the app.
This way I will also learn writing test cases.",1.0,"Unit test case - MainActivity view visibility - Can I write test cases ?
I will use Espresso for writing unit test cases.
Will start from checking presence of UI views
then gradually add methods to test different UI and navigation and features of the app.
This way I will also learn writing test cases.",0,unit test case mainactivity view visibility can i write test cases i will use espresso for writing unit test cases will start from checking presence of ui views then gradually add methods to test different ui and navigation and features of the app this way i will also learn writing test cases ,0
311679,9537542658.0,IssuesEvent,2019-04-30 12:48:23,HGustavs/LenaSYS,https://api.github.com/repos/HGustavs/LenaSYS,closed,Email vulnerability allows an attacker to retrieve all emails associated with a given course,GruppB2019 activeGruppB2019 bug highPriority vulnerability,"The following code runs without checking if the user has a valid login:
https://github.com/HGustavs/LenaSYS/blob/0b3972bc464df6240a4088ec74e41fcd70bad63f/DuggaSys/resultedservice.php#L72-L104
This allows a malicious actor to retrieve all emails for a course (students and teachers alike) using only publicly known information, by simply sending a POST request with the correct parameters.
This may be a breach of GDPR, as email addresses may be used to identify individuals.
This is mitigated by the ability to block direct access to this page via webserver configuration, but that should not be relied upon.",1.0,"Email vulnerability allows an attacker to retrieve all emails associated with a given course - The following code runs without checking if the user has a valid login:
https://github.com/HGustavs/LenaSYS/blob/0b3972bc464df6240a4088ec74e41fcd70bad63f/DuggaSys/resultedservice.php#L72-L104
This allows a malicious actor to retrieve all emails for a course (students and teachers alike) using only publicly known information, by simply sending a POST request with the correct parameters.
This may be a breach of GDPR, as email addresses may be used to identify individuals.
This is mitigated by the ability to block direct access to this page via webserver configuration, but that should not be relied upon.",0,email vulnerability allows an attacker to retrieve all emails associated with a given course the following code runs without checking if the user has a valid login this allows a malicious actor to retrieve all emails for a course students and teachers alike using only publicly known information by simply sending a post request with the correct parameters this may be a breach of gdpr as email addresses may be used to identify individuals this is mitigated by the ability to block direct access to this page via webserver configuration but that should not be relied upon ,0
80401,10172133328.0,IssuesEvent,2019-08-08 09:57:22,links-lang/links,https://api.github.com/repos/links-lang/links,opened,User guide / language documentation,documentation,"We currently don't have any kind of User Guide/language documentation that would be maintained and up to date. I've been thinking we could start pretending that we have one. The idea is that we actually create a stub of a guide and every time we add a new feature to the language, change existing syntax, add new compiler setting, etc., we document it in the guide. We almost do this already: most of the features we implement get a fairly detailed description in PRs, tickets, or changelogs. There would be some extra effort needed to add this into the guide but it seems like we're already doing 80% of the documenting effort (which then sadly gets buried in old PRs that nobody reads). With time that should give us a fairly good documentation and perhaps then filling in the missing gaps won't be too hard.
Does that sound like a good idea? Would all the devs be willing to document new features they are adding?
As for the technical side I'm not sure what would be the best, but I'd be for some markup language. Maybe Sphinx? Seems fairly lightweight.
Related: #474",1.0,"User guide / language documentation - We currently don't have any kind of User Guide/language documentation that would be maintained and up to date. I've been thinking we could start pretending that we have one. The idea is that we actually create a stub of a guide and every time we add a new feature to the language, change existing syntax, add new compiler setting, etc., we document it in the guide. We almost do this already: most of the features we implement get a fairly detailed description in PRs, tickets, or changelogs. There would be some extra effort needed to add this into the guide but it seems like we're already doing 80% of the documenting effort (which then sadly gets buried in old PRs that nobody reads). With time that should give us a fairly good documentation and perhaps then filling in the missing gaps won't be too hard.
Does that sound like a good idea? Would all the devs be willing to document new features they are adding?
As for the technical side I'm not sure what would be the best, but I'd be for some markup language. Maybe Sphinx? Seems fairly lightweight.
Related: #474",0,user guide language documentation we currently don t have any kind of user guide language documentation that would be maintained and up to date i ve been thinking we could start pretending that we have one the idea is that we actually create a stub of a guide and every time we add a new feature to the language change existing syntax add new compiler setting etc we document it in the guide we almost do this already most of the features we implement get a fairly detailed description in prs tickets or changelogs there would be some extra effort needed to add this into the guide but it seems like we re already doing of the documenting effort which then sadly gets buried in old prs that nobody reads with time that should give us a fairly good documentation and perhaps then filling in the missing gaps won t be too hard does that sound like a good idea would all the devs be willing to document new features they are adding as for the technical side i m not sure what would be the best but i d be for some markup language maybe sphinx seems fairly lightweight related ,0
211,4417773164.0,IssuesEvent,2016-08-15 07:45:49,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,"memory.c:286:2: error: #error No suitable alligned alloc found, please report to john-dev mailing list (state your OS details).",portability,"Hello everyone, I am in compiling JohnTheRipper-bleeding-jumbo version of the program.
Error prompt: memory.c: In function 'mem_alloc_align_func':
memory.c: 286: 2: error: #error No suitable alligned alloc found, please report to john-dev mailing list (state your OS det
ails).
make [1]: *** [memory.o] Error 1
make [1]: Leaving directory `/ opt / Johnb / src '
make: *** [default] Error 2
Environment configuration is as follows:
Operating systems: AIX Version 5.2
make version: GNU Make 3.81
gcc version: gcc version 4.6.1 (GCC)",True,"memory.c:286:2: error: #error No suitable alligned alloc found, please report to john-dev mailing list (state your OS details). - Hello everyone, I am in compiling JohnTheRipper-bleeding-jumbo version of the program.
Error prompt: memory.c: In function 'mem_alloc_align_func':
memory.c: 286: 2: error: #error No suitable alligned alloc found, please report to john-dev mailing list (state your OS det
ails).
make [1]: *** [memory.o] Error 1
make [1]: Leaving directory `/ opt / Johnb / src '
make: *** [default] Error 2
Environment configuration is as follows:
Operating systems: AIX Version 5.2
make version: GNU Make 3.81
gcc version: gcc version 4.6.1 (GCC)",1,memory c error error no suitable alligned alloc found please report to john dev mailing list state your os details hello everyone i am in compiling johntheripper bleeding jumbo version of the program error prompt memory c in function mem alloc align func memory c error error no suitable alligned alloc found please report to john dev mailing list state your os det ails make error make leaving directory opt johnb src make error environment configuration is as follows operating systems aix version make version gnu make gcc version gcc version gcc ,1
163508,13920506105.0,IssuesEvent,2020-10-21 10:32:17,dry-python/returns,https://api.github.com/repos/dry-python/returns,closed,Interfaces documentation,documentation,"This issue is to track what interface we have to document yet, and to split the PRs as well into multiples instead of one with all interfaces:
- [x] Mappable
- [x] Bindable
- [ ] Applicative
- [ ] Container",1.0,"Interfaces documentation - This issue is to track what interface we have to document yet, and to split the PRs as well into multiples instead of one with all interfaces:
- [x] Mappable
- [x] Bindable
- [ ] Applicative
- [ ] Container",0,interfaces documentation this issue is to track what interface we have to document yet and to split the prs as well into multiples instead of one with all interfaces mappable bindable applicative container,0
51,2795450372.0,IssuesEvent,2015-05-11 22:05:46,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,opened,Possibly bad code in sunmd5,portability,"This might lead to same problems as seen in dynamic (eg. #1127):
```
#if !defined(MD5_SSE_PARA) || MD5_SSE_PARA==1
#define MAX_KEYS_PER_CRYPT 48
#elif MD5_SSE_PARA==2
#define MAX_KEYS_PER_CRYPT 64
#elif MD5_SSE_PARA==3 || MD5_SSE_PARA==4
#define MAX_KEYS_PER_CRYPT 96
#elif MD5_SSE_PARA==5
#define MAX_KEYS_PER_CRYPT 100
#endif
```
Will this work for a SIMD_COEF of 8, 16 and 32 too? Or even higher? Maybe it will (we haven't seen problems with AVX2 but not many interleaving factors are tested), but I'm pretty sure Jim assumed 4 and only counted with that.",True,"Possibly bad code in sunmd5 - This might lead to same problems as seen in dynamic (eg. #1127):
```
#if !defined(MD5_SSE_PARA) || MD5_SSE_PARA==1
#define MAX_KEYS_PER_CRYPT 48
#elif MD5_SSE_PARA==2
#define MAX_KEYS_PER_CRYPT 64
#elif MD5_SSE_PARA==3 || MD5_SSE_PARA==4
#define MAX_KEYS_PER_CRYPT 96
#elif MD5_SSE_PARA==5
#define MAX_KEYS_PER_CRYPT 100
#endif
```
Will this work for a SIMD_COEF of 8, 16 and 32 too? Or even higher? Maybe it will (we haven't seen problems with AVX2 but not many interleaving factors are tested), but I'm pretty sure Jim assumed 4 and only counted with that.",1,possibly bad code in this might lead to same problems as seen in dynamic eg if defined sse para sse para define max keys per crypt elif sse para define max keys per crypt elif sse para sse para define max keys per crypt elif sse para define max keys per crypt endif will this work for a simd coef of and too or even higher maybe it will we haven t seen problems with but not many interleaving factors are tested but i m pretty sure jim assumed and only counted with that ,1
111,3285440475.0,IssuesEvent,2015-10-28 20:37:28,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,sha512crypt-opencl fails with cmp_all(1),portability,"When I run `../run/john --format=sha512crypt-opencl --test -v=5` I get
```
initUnicode(UNICODE, ASCII/ASCII)
ASCII -> ASCII -> ASCII
Benchmarking: sha512crypt-opencl, crypt(3) $6$ (rounds=5000) [SHA512 OpenCL]... Device 0: GeForce 9500 GT
Options used: -I ../run/kernels -cl-mad-enable -DSM_MAJOR=1 -DSM_MINOR=1 -cl-nv-verbose -D__GPU__ -DDEVICE_INFO=18 -DDEV_VER_MAJOR=340 -DDEV_VER_MINOR=93 -D_OPENCL_COMPILER $JOHN/kernels/cryptsha512_kernel_GPU.cl
Calculating best GWS for LWS=32; max. 150ms single kernel invocation.
Raw speed figures including buffer transfers:
xfer: 29.344us, crypt: 119x21.271ms, xfer back: 43.808us, prep: 25.136ms, pp: 244.832us, final: 1.148ms, var: 21.266ms/21.257ms
gws: 512 200c/s 1000000 rounds/s 2.558s per crypt_all()!
xfer: 53.056us, crypt: 119x42.503ms, xfer back: 81.856us, prep: 48.948ms, pp: 287.136us, final: 2.267ms, var: 42.533ms/42.521ms
gws: 1024 200c/s 1000000 rounds/s 5.113s per crypt_all()
xfer: 101.632us, crypt: 119x84.975ms, xfer back: 161.216us, prep: 98.554ms, pp: 378.144us, final: 4.506ms, var: 84.968ms/84.973ms
gws: 2048 200c/s 1000000 rounds/s 10.219s per crypt_all()
xfer: 199.040us, crypt: 119x169.952ms (exceeds 150ms)
Calculating best LWS for GWS=512
Testing LWS=32 GWS=512 ... 106.315ms+
Testing LWS=64 GWS=512 ... 56.638ms+
Testing LWS=128 GWS=512 ... 56.542ms
Calculating best GWS for LWS=64; max. 300ms single kernel invocation.
Raw speed figures including buffer transfers:
xfer: 16.576us, crypt: 119x10.671ms, xfer back: 22.528us, prep: 12.409ms, pp: 239.136us, final: 585.504us, var: 10.674ms/10.675ms
gws: 256 199c/s 995000 rounds/s 1.283s per crypt_all()!
xfer: 29.312us, crypt: 119x11.315ms, xfer back: 42.816us, prep: 13.129ms, pp: 246.080us, final: 625.728us, var: 11.312ms/11.301ms
gws: 512 376c/s 1880000 rounds/s 1.360s per crypt_all()+
xfer: 52.576us, crypt: 119x22.597ms, xfer back: 81.888us, prep: 26.109ms, pp: 315.232us, final: 1.224ms, var: 22.598ms/22.612ms
gws: 1024 376c/s 1880000 rounds/s 2.718s per crypt_all()
xfer: 101.696us, crypt: 119x45.120ms, xfer back: 157.088us, prep: 52.124ms, pp: 382.240us, final: 2.430ms, var: 45.231ms/45.074ms
gws: 2048 377c/s 1885000 rounds/s 5.429s per crypt_all()
xfer: 199.072us, crypt: 119x90.231ms, xfer back: 309.568us, prep: 104.296ms, pp: 528.096us, final: 4.835ms, var: 90.216ms/90.159ms
gws: 4096 377c/s 1885000 rounds/s 10.848s per crypt_all()
xfer: 393.184us, crypt: 119x180.494ms, xfer back: 613.536us, prep: 208.275ms, pp: 809.504us, final: 9.644ms, var: 180.208ms/180.113ms
gws: 8192 377c/s 1885000 rounds/s 21.680s per crypt_all()
xfer: 780.192us, crypt: 119x360.470ms (exceeds 300ms)
Local worksize (LWS) 64, global worksize (GWS) 512
FAILED (cmp_all(1))
```
I am running on x86_64 Linux with a GeForce 9500 GT.
https://github.com/magnumripper/JohnTheRipper/issues/1794 seems similar, though I am not familiar enough with the code to know whether it is.
",True,"sha512crypt-opencl fails with cmp_all(1) - When I run `../run/john --format=sha512crypt-opencl --test -v=5` I get
```
initUnicode(UNICODE, ASCII/ASCII)
ASCII -> ASCII -> ASCII
Benchmarking: sha512crypt-opencl, crypt(3) $6$ (rounds=5000) [SHA512 OpenCL]... Device 0: GeForce 9500 GT
Options used: -I ../run/kernels -cl-mad-enable -DSM_MAJOR=1 -DSM_MINOR=1 -cl-nv-verbose -D__GPU__ -DDEVICE_INFO=18 -DDEV_VER_MAJOR=340 -DDEV_VER_MINOR=93 -D_OPENCL_COMPILER $JOHN/kernels/cryptsha512_kernel_GPU.cl
Calculating best GWS for LWS=32; max. 150ms single kernel invocation.
Raw speed figures including buffer transfers:
xfer: 29.344us, crypt: 119x21.271ms, xfer back: 43.808us, prep: 25.136ms, pp: 244.832us, final: 1.148ms, var: 21.266ms/21.257ms
gws: 512 200c/s 1000000 rounds/s 2.558s per crypt_all()!
xfer: 53.056us, crypt: 119x42.503ms, xfer back: 81.856us, prep: 48.948ms, pp: 287.136us, final: 2.267ms, var: 42.533ms/42.521ms
gws: 1024 200c/s 1000000 rounds/s 5.113s per crypt_all()
xfer: 101.632us, crypt: 119x84.975ms, xfer back: 161.216us, prep: 98.554ms, pp: 378.144us, final: 4.506ms, var: 84.968ms/84.973ms
gws: 2048 200c/s 1000000 rounds/s 10.219s per crypt_all()
xfer: 199.040us, crypt: 119x169.952ms (exceeds 150ms)
Calculating best LWS for GWS=512
Testing LWS=32 GWS=512 ... 106.315ms+
Testing LWS=64 GWS=512 ... 56.638ms+
Testing LWS=128 GWS=512 ... 56.542ms
Calculating best GWS for LWS=64; max. 300ms single kernel invocation.
Raw speed figures including buffer transfers:
xfer: 16.576us, crypt: 119x10.671ms, xfer back: 22.528us, prep: 12.409ms, pp: 239.136us, final: 585.504us, var: 10.674ms/10.675ms
gws: 256 199c/s 995000 rounds/s 1.283s per crypt_all()!
xfer: 29.312us, crypt: 119x11.315ms, xfer back: 42.816us, prep: 13.129ms, pp: 246.080us, final: 625.728us, var: 11.312ms/11.301ms
gws: 512 376c/s 1880000 rounds/s 1.360s per crypt_all()+
xfer: 52.576us, crypt: 119x22.597ms, xfer back: 81.888us, prep: 26.109ms, pp: 315.232us, final: 1.224ms, var: 22.598ms/22.612ms
gws: 1024 376c/s 1880000 rounds/s 2.718s per crypt_all()
xfer: 101.696us, crypt: 119x45.120ms, xfer back: 157.088us, prep: 52.124ms, pp: 382.240us, final: 2.430ms, var: 45.231ms/45.074ms
gws: 2048 377c/s 1885000 rounds/s 5.429s per crypt_all()
xfer: 199.072us, crypt: 119x90.231ms, xfer back: 309.568us, prep: 104.296ms, pp: 528.096us, final: 4.835ms, var: 90.216ms/90.159ms
gws: 4096 377c/s 1885000 rounds/s 10.848s per crypt_all()
xfer: 393.184us, crypt: 119x180.494ms, xfer back: 613.536us, prep: 208.275ms, pp: 809.504us, final: 9.644ms, var: 180.208ms/180.113ms
gws: 8192 377c/s 1885000 rounds/s 21.680s per crypt_all()
xfer: 780.192us, crypt: 119x360.470ms (exceeds 300ms)
Local worksize (LWS) 64, global worksize (GWS) 512
FAILED (cmp_all(1))
```
I am running on x86_64 Linux with a GeForce 9500 GT.
https://github.com/magnumripper/JohnTheRipper/issues/1794 seems similar, though I am not familiar enough with the code to know whether it is.
",1, opencl fails with cmp all when i run run john format opencl test v i get initunicode unicode ascii ascii ascii ascii ascii benchmarking opencl crypt rounds device geforce gt options used i run kernels cl mad enable dsm major dsm minor cl nv verbose d gpu ddevice info ddev ver major ddev ver minor d opencl compiler john kernels kernel gpu cl calculating best gws for lws max single kernel invocation raw speed figures including buffer transfers xfer crypt xfer back prep pp final var gws s rounds s per crypt all xfer crypt xfer back prep pp final var gws s rounds s per crypt all xfer crypt xfer back prep pp final var gws s rounds s per crypt all xfer crypt exceeds calculating best lws for gws testing lws gws testing lws gws testing lws gws calculating best gws for lws max single kernel invocation raw speed figures including buffer transfers xfer crypt xfer back prep pp final var gws s rounds s per crypt all xfer crypt xfer back prep pp final var gws s rounds s per crypt all xfer crypt xfer back prep pp final var gws s rounds s per crypt all xfer crypt xfer back prep pp final var gws s rounds s per crypt all xfer crypt xfer back prep pp final var gws s rounds s per crypt all xfer crypt xfer back prep pp final var gws s rounds s per crypt all xfer crypt exceeds local worksize lws global worksize gws failed cmp all i am running on linux with a geforce gt seems similar though i am not familiar enough with the code to know whether it is ,1
109,3258653670.0,IssuesEvent,2015-10-20 23:43:11,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,Claudio's SHA-512 formats fail on Bull AMD (old 13.4 driver),portability,"Not sure if we care about it at all (but Myrice's formats work fine with the same ror64 macro)
```
$ ../run/john -test -dev=1 -form:sha512crypt-opencl
Benchmarking: sha512crypt-opencl, crypt(3) $6$ (rounds=5000) [SHA512 OpenCL]... Device 1: Tahiti [AMD Radeon HD 7900 Series]
Building the kernel, this could take a while
Options used: -I ../run/kernels -cl-mad-enable -D__GPU__ -DDEVICE_INFO=138 -DDEV_VER_MAJOR=1124 -DDEV_VER_MINOR=2 -D_OPENCL_COMPILER $JOHN/kernels/cryptsha512_kernel_GCN.cl
Build log: ""/tmp/OCLwPgB6k.cl"", line 70: error: function ""amd_bitalign"" declared
implicitly
t1 = k[i] + w[i] + h + Sigma1(e) + Ch(e, f, g);
^
""/tmp/OCLwPgB6k.cl"", line 85: error: function ""amd_bitalign"" declared
implicitly
w[i & 15] = sigma1(w[(i - 2) & 15]) + sigma0(w[(i - 15) & 15]) + w[(i - 16) & 15] + w[(i - 7) & 15];
^
2 errors detected in the compilation of ""/tmp/OCLwPgB6k.cl"".
Frontend phase failed compilation.
Error -11 building kernel $JOHN/kernels/cryptsha512_kernel_GCN.cl. DEVICE_INFO=138
OpenCL CL_BUILD_PROGRAM_FAILURE error in common-opencl.c:1004 - clBuildProgram failed.
```
Similar output for raw-sha512 and xsha512",True,"Claudio's SHA-512 formats fail on Bull AMD (old 13.4 driver) - Not sure if we care about it at all (but Myrice's formats work fine with the same ror64 macro)
```
$ ../run/john -test -dev=1 -form:sha512crypt-opencl
Benchmarking: sha512crypt-opencl, crypt(3) $6$ (rounds=5000) [SHA512 OpenCL]... Device 1: Tahiti [AMD Radeon HD 7900 Series]
Building the kernel, this could take a while
Options used: -I ../run/kernels -cl-mad-enable -D__GPU__ -DDEVICE_INFO=138 -DDEV_VER_MAJOR=1124 -DDEV_VER_MINOR=2 -D_OPENCL_COMPILER $JOHN/kernels/cryptsha512_kernel_GCN.cl
Build log: ""/tmp/OCLwPgB6k.cl"", line 70: error: function ""amd_bitalign"" declared
implicitly
t1 = k[i] + w[i] + h + Sigma1(e) + Ch(e, f, g);
^
""/tmp/OCLwPgB6k.cl"", line 85: error: function ""amd_bitalign"" declared
implicitly
w[i & 15] = sigma1(w[(i - 2) & 15]) + sigma0(w[(i - 15) & 15]) + w[(i - 16) & 15] + w[(i - 7) & 15];
^
2 errors detected in the compilation of ""/tmp/OCLwPgB6k.cl"".
Frontend phase failed compilation.
Error -11 building kernel $JOHN/kernels/cryptsha512_kernel_GCN.cl. DEVICE_INFO=138
OpenCL CL_BUILD_PROGRAM_FAILURE error in common-opencl.c:1004 - clBuildProgram failed.
```
Similar output for raw-sha512 and xsha512",1,claudio s sha formats fail on bull amd old driver not sure if we care about it at all but myrice s formats work fine with the same macro run john test dev form opencl benchmarking opencl crypt rounds device tahiti building the kernel this could take a while options used i run kernels cl mad enable d gpu ddevice info ddev ver major ddev ver minor d opencl compiler john kernels kernel gcn cl build log tmp cl line error function amd bitalign declared implicitly k w h e ch e f g tmp cl line error function amd bitalign declared implicitly w w w w w errors detected in the compilation of tmp cl frontend phase failed compilation error building kernel john kernels kernel gcn cl device info opencl cl build program failure error in common opencl c clbuildprogram failed similar output for raw and ,1
132253,18266267517.0,IssuesEvent,2021-10-04 08:49:24,artsking/linux-3.0.35_CVE-2020-15436_withPatch,https://api.github.com/repos/artsking/linux-3.0.35_CVE-2020-15436_withPatch,closed,CVE-2015-5366 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed,security vulnerability,"## CVE-2015-5366 - Medium Severity Vulnerability
Vulnerable Library - linux-stable-rtv3.8.6
Julia Cartwright's fork of linux-stable-rt.git
Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git
Found in HEAD commit: 594a70cb9871ddd73cf61197bb1a2a1b1777a7ae
Found in base branch: master
Vulnerable Source Files (1)
/net/ipv4/udp.c
Vulnerability Details
The (1) udp_recvmsg and (2) udpv6_recvmsg functions in the Linux kernel before 4.0.6 provide inappropriate -EAGAIN return values, which allows remote attackers to cause a denial of service (EPOLLET epoll application read outage) via an incorrect checksum in a UDP packet, a different vulnerability than CVE-2015-5364.
Publish Date: 2015-08-31
URL: CVE-2015-5366
CVSS 3 Score Details (5.5)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Origin: https://www.linuxkernelcves.com/cves/CVE-2015-5366
Release Date: 2015-08-31
Fix Resolution: v4.1-rc7,v3.12.44,v3.14.45,v3.16.35,v3.18.17,v3.2.70
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2015-5366 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed - ## CVE-2015-5366 - Medium Severity Vulnerability
Vulnerable Library - linux-stable-rtv3.8.6
Julia Cartwright's fork of linux-stable-rt.git
Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git
Found in HEAD commit: 594a70cb9871ddd73cf61197bb1a2a1b1777a7ae
Found in base branch: master
Vulnerable Source Files (1)
/net/ipv4/udp.c
Vulnerability Details
The (1) udp_recvmsg and (2) udpv6_recvmsg functions in the Linux kernel before 4.0.6 provide inappropriate -EAGAIN return values, which allows remote attackers to cause a denial of service (EPOLLET epoll application read outage) via an incorrect checksum in a UDP packet, a different vulnerability than CVE-2015-5364.
Publish Date: 2015-08-31
URL: CVE-2015-5366
CVSS 3 Score Details (5.5)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Origin: https://www.linuxkernelcves.com/cves/CVE-2015-5366
Release Date: 2015-08-31
Fix Resolution: v4.1-rc7,v3.12.44,v3.14.45,v3.16.35,v3.18.17,v3.2.70
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files net udp c vulnerability details the udp recvmsg and recvmsg functions in the linux kernel before provide inappropriate eagain return values which allows remote attackers to cause a denial of service epollet epoll application read outage via an incorrect checksum in a udp packet a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
711,9629616304.0,IssuesEvent,2019-05-15 10:00:08,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,times() vs. clock() discrepancies,bug portability,"JtR core uses times(2). JtR jumbo will sometimes use clock(3) instead, for systems where times(2) is presumably unavailable. However, JtR jumbo ""forgets"" that while times(2) clocks are in units of `sysconf(_SC_CLK_TCK)`, clock(3) ones are in units of CLOCKS_PER_SEC. Hopefully, these just happen to be the same on systems where jumbo uses clock(3) now (MinGW, etc.) - but we need to carefully review the code and make sure we start using the proper units corresponding to the calls we make.",True,"times() vs. clock() discrepancies - JtR core uses times(2). JtR jumbo will sometimes use clock(3) instead, for systems where times(2) is presumably unavailable. However, JtR jumbo ""forgets"" that while times(2) clocks are in units of `sysconf(_SC_CLK_TCK)`, clock(3) ones are in units of CLOCKS_PER_SEC. Hopefully, these just happen to be the same on systems where jumbo uses clock(3) now (MinGW, etc.) - but we need to carefully review the code and make sure we start using the proper units corresponding to the calls we make.",1,times vs clock discrepancies jtr core uses times jtr jumbo will sometimes use clock instead for systems where times is presumably unavailable however jtr jumbo forgets that while times clocks are in units of sysconf sc clk tck clock ones are in units of clocks per sec hopefully these just happen to be the same on systems where jumbo uses clock now mingw etc but we need to carefully review the code and make sure we start using the proper units corresponding to the calls we make ,1
438915,30668851842.0,IssuesEvent,2023-07-25 20:32:34,jetstream-cloud/js2docs,https://api.github.com/repos/jetstream-cloud/js2docs,opened,[documentation] Add article for extending volume,documentation,"## Opportunity
This seems like a fairly common things users will want to do, and we get a good number of tickets asking how to do this. Here is an example of a ticket, including my reply:
https://access-ci.atlassian.net/browse/ATS-1987
## Resolution
We should add a page on the public docs for how to do this.",1.0,"[documentation] Add article for extending volume - ## Opportunity
This seems like a fairly common things users will want to do, and we get a good number of tickets asking how to do this. Here is an example of a ticket, including my reply:
https://access-ci.atlassian.net/browse/ATS-1987
## Resolution
We should add a page on the public docs for how to do this.",0, add article for extending volume opportunity this seems like a fairly common things users will want to do and we get a good number of tickets asking how to do this here is an example of a ticket including my reply resolution we should add a page on the public docs for how to do this ,0
8156,7255035084.0,IssuesEvent,2018-02-16 13:30:42,raiden-network/raiden,https://api.github.com/repos/raiden-network/raiden,closed,Adapt linux and macOS deployment infrastructure for py3,infrastructure sprint_candidate,"## Problem Definition
The deployment tools for linux and macOS haven't been tested under py3 and updated `ethereum` library.
## Solution
Test deployment tools
## Tasklist
- [x] Test linux deployment
- [x] Test macOS deployment
",1.0,"Adapt linux and macOS deployment infrastructure for py3 - ## Problem Definition
The deployment tools for linux and macOS haven't been tested under py3 and updated `ethereum` library.
## Solution
Test deployment tools
## Tasklist
- [x] Test linux deployment
- [x] Test macOS deployment
",0,adapt linux and macos deployment infrastructure for problem definition the deployment tools for linux and macos haven t been tested under and updated ethereum library solution test deployment tools tasklist test linux deployment test macos deployment ,0
286,5354222136.0,IssuesEvent,2017-02-20 09:16:35,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,bleeding-jumbo does not compile against OpenSSL 1.1.0,bug portability,"Continuing from #2279
bleeding-jumbo as of 4859f4d3aa22c1b8156493b6f96ce3555fa698fe does not compile against OpenSSL 1.1.0
See, for example, https://bugzilla.redhat.com/show_bug.cgi?id=1383995
# Using Debian Stretch, before accepting the upgrade to `libssl1.1`
```text
7:16:17[justin@3e155c35d42b D ~/john/src](bleeding-jumbo)% git rev-parse HEAD
4859f4d3aa22c1b8156493b6f96ce3555fa698fe
7:16:30[justin@3e155c35d42b D ~/john/src](bleeding-jumbo)% dpkg -l | grep libssl
ii libssl-dev:amd64 1.0.2j-1 amd64 Secure Sockets Layer toolkit - development files
ii libssl-doc 1.0.2j-1 all Secure Sockets Layer toolkit - development documentation
ii libssl1.0.2:amd64 1.0.2j-1 amd64 Secure Sockets Layer toolkit - shared libraries
7:16:43[justin@3e155c35d42b D ~/john/src](bleeding-jumbo)% ./configure &>/dev/null && echo ""succeeded""
succeeded
7:17:29[justin@3e155c35d42b D ~/john/src](bleeding-jumbo)% make clean && make -s
rm -f ../run/john ../run/unshadow ../run/unafs ../run/unique ../run/undrop ../run/rar2john ../run/zip2john ../run/genmkvpwd ../run/mkvcalcproba ../run/calc_stat ../run/tgtsnarf ../run/racf2john ../run/hccap2john ../run/raw2dyna ../run/keepass2john ../run/dmg2john ../run/putty2john ../run/uaf2john ../run/wpapcap2john ../run/gpg2john ../run/cprepair ../run/base64conv ../run/pfx2john ../run/SIPdump ../run/vncpcap2john
rm -f john-macosx-* *.o escrypt/*.o *.bak core
rm -f ../run/kernels/*
rm -f detect bench generic.h tmp.s
rm -f *~
cp /dev/null Makefile.dep
make[1]: Entering directory '/home/justin/john/src/aes'
/usr/bin/find . -name \*.a -exec /bin/rm -f {} \;
/usr/bin/find . -name \*.o -exec /bin/rm -f {} \;
make[1]: Leaving directory '/home/justin/john/src/aes'
make[1]: Entering directory '/home/justin/john/src/escrypt'
/bin/rm -f tests crypto_scrypt-best.o crypto_scrypt-common.o sha256.o tests.o crypto_scrypt-*.o
make[1]: Leaving directory '/home/justin/john/src/escrypt'
ar: creating aes.a
Make process completed.
```
# Using Debian Stretch, after accepting the upgrade to `libssl1.1`
```text
7:16:23[justin@878c46c24880 D ~/john/src](bleeding-jumbo)% git rev-parse HEAD
4859f4d3aa22c1b8156493b6f96ce3555fa698fe
7:16:24[justin@878c46c24880 D ~/john/src](bleeding-jumbo)% dpkg -l | grep libssl
ii libssl-dev:amd64 1.1.0c-2 amd64 Secure Sockets Layer toolkit - development files
ii libssl-doc 1.1.0c-2 all Secure Sockets Layer toolkit - development documentation
ii libssl1.0.2:amd64 1.0.2j-4 amd64 Secure Sockets Layer toolkit - shared libraries
ii libssl1.1:amd64 1.1.0c-2 amd64 Secure Sockets Layer toolkit - shared libraries
7:16:27[justin@878c46c24880 D ~/john/src](bleeding-jumbo)% ./configure &>/dev/null && echo ""succeeded""
succeeded
7:17:34[justin@878c46c24880 D ~/john/src](bleeding-jumbo)% make clean && make -s
rm -f ../run/john ../run/unshadow ../run/unafs ../run/unique ../run/undrop ../run/rar2john ../run/zip2john ../run/genmkvpwd ../run/mkvcalcproba ../run/calc_stat ../run/tgtsnarf ../run/racf2john ../run/hccap2john ../run/raw2dyna ../run/keepass2john ../run/dmg2john ../run/putty2john ../run/uaf2john ../run/wpapcap2john ../run/gpg2john ../run/cprepair ../run/base64conv ../run/pfx2john ../run/SIPdump ../run/vncpcap2john
rm -f john-macosx-* *.o escrypt/*.o *.bak core
rm -f ../run/kernels/*
rm -f detect bench generic.h tmp.s
rm -f *~
cp /dev/null Makefile.dep
make[1]: Entering directory '/home/justin/john/src/aes'
/usr/bin/find . -name \*.a -exec /bin/rm -f {} \;
/usr/bin/find . -name \*.o -exec /bin/rm -f {} \;
make[1]: Leaving directory '/home/justin/john/src/aes'
make[1]: Entering directory '/home/justin/john/src/escrypt'
/bin/rm -f tests crypto_scrypt-best.o crypto_scrypt-common.o sha256.o tests.o crypto_scrypt-*.o
make[1]: Leaving directory '/home/justin/john/src/escrypt'
encfs_common_plug.c: In function 'encfs_common_streamDecode':
encfs_common_plug.c:212:17: error: storage size of 'stream_dec' isn't known
EVP_CIPHER_CTX stream_dec;
^~~~~~~~~~
encfs_common_plug.c:212:17: warning: unused variable 'stream_dec' [-Wunused-variable]
Makefile:1507: recipe for target 'encfs_common_plug.o' failed
make[1]: *** [encfs_common_plug.o] Error 1
Makefile:176: recipe for target 'default' failed
make: *** [default] Error 2
```",True,"bleeding-jumbo does not compile against OpenSSL 1.1.0 - Continuing from #2279
bleeding-jumbo as of 4859f4d3aa22c1b8156493b6f96ce3555fa698fe does not compile against OpenSSL 1.1.0
See, for example, https://bugzilla.redhat.com/show_bug.cgi?id=1383995
# Using Debian Stretch, before accepting the upgrade to `libssl1.1`
```text
7:16:17[justin@3e155c35d42b D ~/john/src](bleeding-jumbo)% git rev-parse HEAD
4859f4d3aa22c1b8156493b6f96ce3555fa698fe
7:16:30[justin@3e155c35d42b D ~/john/src](bleeding-jumbo)% dpkg -l | grep libssl
ii libssl-dev:amd64 1.0.2j-1 amd64 Secure Sockets Layer toolkit - development files
ii libssl-doc 1.0.2j-1 all Secure Sockets Layer toolkit - development documentation
ii libssl1.0.2:amd64 1.0.2j-1 amd64 Secure Sockets Layer toolkit - shared libraries
7:16:43[justin@3e155c35d42b D ~/john/src](bleeding-jumbo)% ./configure &>/dev/null && echo ""succeeded""
succeeded
7:17:29[justin@3e155c35d42b D ~/john/src](bleeding-jumbo)% make clean && make -s
rm -f ../run/john ../run/unshadow ../run/unafs ../run/unique ../run/undrop ../run/rar2john ../run/zip2john ../run/genmkvpwd ../run/mkvcalcproba ../run/calc_stat ../run/tgtsnarf ../run/racf2john ../run/hccap2john ../run/raw2dyna ../run/keepass2john ../run/dmg2john ../run/putty2john ../run/uaf2john ../run/wpapcap2john ../run/gpg2john ../run/cprepair ../run/base64conv ../run/pfx2john ../run/SIPdump ../run/vncpcap2john
rm -f john-macosx-* *.o escrypt/*.o *.bak core
rm -f ../run/kernels/*
rm -f detect bench generic.h tmp.s
rm -f *~
cp /dev/null Makefile.dep
make[1]: Entering directory '/home/justin/john/src/aes'
/usr/bin/find . -name \*.a -exec /bin/rm -f {} \;
/usr/bin/find . -name \*.o -exec /bin/rm -f {} \;
make[1]: Leaving directory '/home/justin/john/src/aes'
make[1]: Entering directory '/home/justin/john/src/escrypt'
/bin/rm -f tests crypto_scrypt-best.o crypto_scrypt-common.o sha256.o tests.o crypto_scrypt-*.o
make[1]: Leaving directory '/home/justin/john/src/escrypt'
ar: creating aes.a
Make process completed.
```
# Using Debian Stretch, after accepting the upgrade to `libssl1.1`
```text
7:16:23[justin@878c46c24880 D ~/john/src](bleeding-jumbo)% git rev-parse HEAD
4859f4d3aa22c1b8156493b6f96ce3555fa698fe
7:16:24[justin@878c46c24880 D ~/john/src](bleeding-jumbo)% dpkg -l | grep libssl
ii libssl-dev:amd64 1.1.0c-2 amd64 Secure Sockets Layer toolkit - development files
ii libssl-doc 1.1.0c-2 all Secure Sockets Layer toolkit - development documentation
ii libssl1.0.2:amd64 1.0.2j-4 amd64 Secure Sockets Layer toolkit - shared libraries
ii libssl1.1:amd64 1.1.0c-2 amd64 Secure Sockets Layer toolkit - shared libraries
7:16:27[justin@878c46c24880 D ~/john/src](bleeding-jumbo)% ./configure &>/dev/null && echo ""succeeded""
succeeded
7:17:34[justin@878c46c24880 D ~/john/src](bleeding-jumbo)% make clean && make -s
rm -f ../run/john ../run/unshadow ../run/unafs ../run/unique ../run/undrop ../run/rar2john ../run/zip2john ../run/genmkvpwd ../run/mkvcalcproba ../run/calc_stat ../run/tgtsnarf ../run/racf2john ../run/hccap2john ../run/raw2dyna ../run/keepass2john ../run/dmg2john ../run/putty2john ../run/uaf2john ../run/wpapcap2john ../run/gpg2john ../run/cprepair ../run/base64conv ../run/pfx2john ../run/SIPdump ../run/vncpcap2john
rm -f john-macosx-* *.o escrypt/*.o *.bak core
rm -f ../run/kernels/*
rm -f detect bench generic.h tmp.s
rm -f *~
cp /dev/null Makefile.dep
make[1]: Entering directory '/home/justin/john/src/aes'
/usr/bin/find . -name \*.a -exec /bin/rm -f {} \;
/usr/bin/find . -name \*.o -exec /bin/rm -f {} \;
make[1]: Leaving directory '/home/justin/john/src/aes'
make[1]: Entering directory '/home/justin/john/src/escrypt'
/bin/rm -f tests crypto_scrypt-best.o crypto_scrypt-common.o sha256.o tests.o crypto_scrypt-*.o
make[1]: Leaving directory '/home/justin/john/src/escrypt'
encfs_common_plug.c: In function 'encfs_common_streamDecode':
encfs_common_plug.c:212:17: error: storage size of 'stream_dec' isn't known
EVP_CIPHER_CTX stream_dec;
^~~~~~~~~~
encfs_common_plug.c:212:17: warning: unused variable 'stream_dec' [-Wunused-variable]
Makefile:1507: recipe for target 'encfs_common_plug.o' failed
make[1]: *** [encfs_common_plug.o] Error 1
Makefile:176: recipe for target 'default' failed
make: *** [default] Error 2
```",1,bleeding jumbo does not compile against openssl continuing from bleeding jumbo as of does not compile against openssl see for example using debian stretch before accepting the upgrade to text bleeding jumbo git rev parse head bleeding jumbo dpkg l grep libssl ii libssl dev secure sockets layer toolkit development files ii libssl doc all secure sockets layer toolkit development documentation ii secure sockets layer toolkit shared libraries bleeding jumbo configure dev null echo succeeded succeeded bleeding jumbo make clean make s rm f run john run unshadow run unafs run unique run undrop run run run genmkvpwd run mkvcalcproba run calc stat run tgtsnarf run run run run run run run run run run cprepair run run run sipdump run rm f john macosx o escrypt o bak core rm f run kernels rm f detect bench generic h tmp s rm f cp dev null makefile dep make entering directory home justin john src aes usr bin find name a exec bin rm f usr bin find name o exec bin rm f make leaving directory home justin john src aes make entering directory home justin john src escrypt bin rm f tests crypto scrypt best o crypto scrypt common o o tests o crypto scrypt o make leaving directory home justin john src escrypt ar creating aes a make process completed using debian stretch after accepting the upgrade to text bleeding jumbo git rev parse head bleeding jumbo dpkg l grep libssl ii libssl dev secure sockets layer toolkit development files ii libssl doc all secure sockets layer toolkit development documentation ii secure sockets layer toolkit shared libraries ii secure sockets layer toolkit shared libraries bleeding jumbo configure dev null echo succeeded succeeded bleeding jumbo make clean make s rm f run john run unshadow run unafs run unique run undrop run run run genmkvpwd run mkvcalcproba run calc stat run tgtsnarf run run run run run run run run run run cprepair run run run sipdump run rm f john macosx o escrypt o bak core rm f run kernels rm f detect bench generic h tmp s rm f cp dev null makefile dep make entering directory home justin john src aes usr bin find name a exec bin rm f usr bin find name o exec bin rm f make leaving directory home justin john src aes make entering directory home justin john src escrypt bin rm f tests crypto scrypt best o crypto scrypt common o o tests o crypto scrypt o make leaving directory home justin john src escrypt encfs common plug c in function encfs common streamdecode encfs common plug c error storage size of stream dec isn t known evp cipher ctx stream dec encfs common plug c warning unused variable stream dec makefile recipe for target encfs common plug o failed make error makefile recipe for target default failed make error ,1
50,2791283026.0,IssuesEvent,2015-05-10 00:28:20,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,Fix autoconfig for LibreSSL,invalid / PEBCAK portability,"There are some flaws. This is how `configure --help` **says** it should be done, but it did not work for me (did not link):
```
./configure OPENSSL_LIBS=-L/usr/local/opt/libressl/lib OPENSSL_CFLAGS=-I/usr/local/opt/libressl/include
```
This is what actually worked (and really did use LibreSSL even though I have OpenSSL in system paths too):
```
./configure LDFLAGS=-L/usr/local/opt/libressl/lib CPPFLAGS=-I/usr/local/opt/libressl/include
```",True,"Fix autoconfig for LibreSSL - There are some flaws. This is how `configure --help` **says** it should be done, but it did not work for me (did not link):
```
./configure OPENSSL_LIBS=-L/usr/local/opt/libressl/lib OPENSSL_CFLAGS=-I/usr/local/opt/libressl/include
```
This is what actually worked (and really did use LibreSSL even though I have OpenSSL in system paths too):
```
./configure LDFLAGS=-L/usr/local/opt/libressl/lib CPPFLAGS=-I/usr/local/opt/libressl/include
```",1,fix autoconfig for libressl there are some flaws this is how configure help says it should be done but it did not work for me did not link configure openssl libs l usr local opt libressl lib openssl cflags i usr local opt libressl include this is what actually worked and really did use libressl even though i have openssl in system paths too configure ldflags l usr local opt libressl lib cppflags i usr local opt libressl include ,1
26,2666575453.0,IssuesEvent,2015-03-21 18:16:33,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,AVX asm mnemonic xgetbv not supported on OSX?,portability,"```
$ ./configure CC=""gcc -m32"" --host=i686-mac-darwin
```
For some reason, when compiling with -m32 on OSX, I get this
```
x86.S:1346:no such instruction: `xgetbv'
make[1]: *** [x86.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make: *** [default] Error 2
```
A workaround is using `-mno-avx` in CC together with --disable-native-tests.
https://gnuradio.org/redmine/issues/589 seems to describe the same problem. Not sure why this is an OS problem at all.",True,"AVX asm mnemonic xgetbv not supported on OSX? - ```
$ ./configure CC=""gcc -m32"" --host=i686-mac-darwin
```
For some reason, when compiling with -m32 on OSX, I get this
```
x86.S:1346:no such instruction: `xgetbv'
make[1]: *** [x86.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make: *** [default] Error 2
```
A workaround is using `-mno-avx` in CC together with --disable-native-tests.
https://gnuradio.org/redmine/issues/589 seems to describe the same problem. Not sure why this is an OS problem at all.",1,avx asm mnemonic xgetbv not supported on osx configure cc gcc host mac darwin for some reason when compiling with on osx i get this s no such instruction xgetbv make error make waiting for unfinished jobs make error a workaround is using mno avx in cc together with disable native tests seems to describe the same problem not sure why this is an os problem at all ,1
207,4345897394.0,IssuesEvent,2016-07-29 14:20:11,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,Create a Snappy Ubuntu package for JtR,enhancement portability,"* Snappy works on: Raspberry Pi 2, Intel NUC, Azure, Google Compute Engine cloud, Amazon Elastic Compute Cloud, Beagle, Duovero, ...
* Snappy packages are going to be supported on Ubuntu 16.04 LTS [1].
* It provides transactional updates with rigorous application isolation.
* It has an improved security model.
So, when we have this available, users can install JtR via Ubuntu Store.
[1] Ubuntu 16.04 LTS introduces the snappy Ubuntu Core experience to the desktop by allowing you to create, install and distribute snaps (snappy apps).",True,"Create a Snappy Ubuntu package for JtR - * Snappy works on: Raspberry Pi 2, Intel NUC, Azure, Google Compute Engine cloud, Amazon Elastic Compute Cloud, Beagle, Duovero, ...
* Snappy packages are going to be supported on Ubuntu 16.04 LTS [1].
* It provides transactional updates with rigorous application isolation.
* It has an improved security model.
So, when we have this available, users can install JtR via Ubuntu Store.
[1] Ubuntu 16.04 LTS introduces the snappy Ubuntu Core experience to the desktop by allowing you to create, install and distribute snaps (snappy apps).",1,create a snappy ubuntu package for jtr snappy works on raspberry pi intel nuc azure google compute engine cloud amazon elastic compute cloud beagle duovero snappy packages are going to be supported on ubuntu lts it provides transactional updates with rigorous application isolation it has an improved security model so when we have this available users can install jtr via ubuntu store ubuntu lts introduces the snappy ubuntu core experience to the desktop by allowing you to create install and distribute snaps snappy apps ,1
615,8307568630.0,IssuesEvent,2018-09-23 11:01:15,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,AVX512 tweaks,enhancement portability,"I could get my hands on some gear with Skylake Xeon (silver something).
```
$ gccmacros -march=native | grep AVX512
#define __AVX512F__ 1
#define __AVX512BW__ 1
#define __AVX512CD__ 1
#define __AVX512DQ__ 1
```
Any of BW, CD or DQ implies F, but only that. So to get it complete w/o ""native"", we need `-mavx512bw -mavx512cd -mavx512dq`. Not sure if that would make any difference at all right now but anyway we'll probably want to add checks for CD and DQ in autoconf sooner or later.
Edit for anyone stumbling in here:
```
$ alias gccmacros
alias gccmacros='gcc -dM -E -x c /dev/null'
```",True,"AVX512 tweaks - I could get my hands on some gear with Skylake Xeon (silver something).
```
$ gccmacros -march=native | grep AVX512
#define __AVX512F__ 1
#define __AVX512BW__ 1
#define __AVX512CD__ 1
#define __AVX512DQ__ 1
```
Any of BW, CD or DQ implies F, but only that. So to get it complete w/o ""native"", we need `-mavx512bw -mavx512cd -mavx512dq`. Not sure if that would make any difference at all right now but anyway we'll probably want to add checks for CD and DQ in autoconf sooner or later.
Edit for anyone stumbling in here:
```
$ alias gccmacros
alias gccmacros='gcc -dM -E -x c /dev/null'
```",1, tweaks i could get my hands on some gear with skylake xeon silver something gccmacros march native grep define define define define any of bw cd or dq implies f but only that so to get it complete w o native we need not sure if that would make any difference at all right now but anyway we ll probably want to add checks for cd and dq in autoconf sooner or later edit for anyone stumbling in here alias gccmacros alias gccmacros gcc dm e x c dev null ,1
662,8759620560.0,IssuesEvent,2018-12-15 18:09:17,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,OpenCL CL_INVALID_DEVICE (-33) error in opencl_common.c:452 - Error querying PLATFORM_NAME,portability,"**Environment Setup**
OS: Ubuntu 16.04.5
JohnTheRipper
https://github.com/magnumripper/JohnTheRipper/blob/bleeding-jumbo/doc/INSTALL-UBUNTU
***configure and make using this install guide.
**What was used:**
sudo apt-get install build-essential libssl-dev git zlib1g-dev
sudo apt-get install yasm libgmp-dev libpcap-dev pkg-config libbz2-dev
sudo apt-get install nvidia-opencl-dev
sudo apt-get install libopenmpi-dev openmpi-bin
./configure --enable-mpi
make -s clean && make -sj4
**Sample test file:**
./tezos2john.py 'put guide flat machine express cave hello connect stay local spike ski romance express brass' 'jbzbdybr.vpbdbxnn@tezos.example.org' 'tz1eTjPtwYjdcBMStwVdEcwY2YE3th1bXyMR' > tezos
**Running John**
tezos@tezos-Desktop:~/JohnTheRipper/run$ ./john --devices=gpu --fork=2 --format=tezos-opencl --session=tezos tezos
Using default input encoding: UTF-8
Loaded 1 password hash (tezos-opencl, Tezos Key [PBKDF2-SHA512 OpenCL])
Cost 1 (iteration count) is 2048 for all loaded hashes
Will run 4 OpenMP threads per process (8 total across 2 processes)
Node numbers 1-2 of 2 (fork)
**OpenCL CL_INVALID_DEVICE (-33) error in opencl_common.c:452 - Error querying PLATFORM_NAME**
Device 0@tezos-Desktop: GeForce GTX 1070 Ti
Press 'q' or Ctrl-C to abort, almost any other key for status
1 0g 0:00:00:04 3/3 0g/s 38678p/s 38678c/s 38678C/s GPU:61°C clmom..mia196
1 0g 0:00:00:09 3/3 0g/s 51592p/s 51592c/s 51592C/s GPU:61°C 196642..biliyou
1 0g 0:00:00:11 3/3 0g/s 54131p/s 54131c/s 54131C/s GPU:60°C 115la..0998am
Waiting for 1 child to terminate
Session aborted
**BUG/Issues:** It can't run across multiple GPUs
**System Information:**
tezos@tezos-Desktop:~/JohnTheRipper/run$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 21
Model: 2
Model name: AMD FX(tm)-8350 Eight-Core Processor
Stepping: 0
CPU MHz: 1411.029
CPU max MHz: 4000.0000
CPU min MHz: 1400.0000
BogoMIPS: 8037.07
Virtualization: AMD-V
L1d cache: 16K
L1i cache: 64K
L2 cache: 2048K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb cpb hw_pstate ssbd ibpb vmmcall bmi1 arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
tezos@tezos-Desktop:~/JohnTheRipper/run$ ./john --list=build-info
Version: 1.8.0.13-jumbo-1-bleeding-9c715c6 2018-11-04 18:00:16 +0530
Build: linux-gnu 64-bit x86_64 XOP AC MPI + OMP
SIMD: XOP, interleaving: MD4:2 MD5:2 SHA1:1 SHA256:2 SHA512:1
CPU tests: XOP
$JOHN is ./
Format interface version: 14
Max. number of reported tunable costs: 4
Rec file version: REC4
Charset file version: CHR3
CHARSET_MIN: 1 (0x01)
CHARSET_MAX: 255 (0xff)
CHARSET_LENGTH: 24
SALT_HASH_SIZE: 1048576
Max. Markov mode level: 400
Max. Markov mode password length: 30
gcc version: 5.4.0
GNU libc version: 2.23 (loaded: 2.23)
OpenCL headers version: 2.0
Crypto library: OpenSSL
OpenSSL library version: 01000207f
OpenSSL 1.0.2g 1 Mar 2016
GMP library version: 6.1.0
File locking: fcntl()
fseek(): fseek
ftell(): ftell
fopen(): fopen
memmem(): System's
tezos@tezos-Desktop:~/JohnTheRipper/run$ ./john --list=opencl-devices
Platform #0 name: NVIDIA CUDA, version: OpenCL 1.2 CUDA 10.0.185
Device #0 (0) name: GeForce GTX 1070 Ti
Device vendor: NVIDIA Corporation
Device type: GPU (LE)
Device version: OpenCL 1.2 CUDA
Driver version: 410.73 [recommended]
Native vector widths: char 1, short 1, int 1, long 1
Preferred vector width: char 1, short 1, int 1, long 1
Global Memory: 7.9 GB
Global Memory Cache: 304 KB
Local Memory: 48 KB (Local)
Constant Buffer size: 64 KB
Max memory alloc. size: 2 GB
Max clock (MHz): 1683
Profiling timer res.: 1000 ns
Max Work Group Size: 1024
Parallel compute cores: 19
CUDA cores: 2432 (19 x 128)
Speed index: 4093056
Warp size: 32
Max. GPRs/work-group: 65536
Compute capability: 6.1 (sm_61)
Kernel exec. timeout: yes
NVML id: 0
PCI device topology: 01:00.0
PCI lanes: 16/16
Fan speed: 22%
Temperature: 59°C
Utilization: 4%
Device #1 (1) name: GeForce GTX 1070 Ti
Device vendor: NVIDIA Corporation
Device type: GPU (LE)
Device version: OpenCL 1.2 CUDA
Driver version: 410.73 [recommended]
Native vector widths: char 1, short 1, int 1, long 1
Preferred vector width: char 1, short 1, int 1, long 1
Global Memory: 8 GB
Global Memory Cache: 304 KB
Local Memory: 48 KB (Local)
Constant Buffer size: 64 KB
Max memory alloc. size: 2 GB
Max clock (MHz): 1683
Profiling timer res.: 1000 ns
Max Work Group Size: 1024
Parallel compute cores: 19
CUDA cores: 2432 (19 x 128)
Speed index: 4093056
Warp size: 32
Max. GPRs/work-group: 65536
Compute capability: 6.1 (sm_61)
Kernel exec. timeout: no
NVML id: 1
PCI device topology: 05:00.0
PCI lanes: 4/16
Fan speed: 0%
Temperature: 34°C
Utilization: 9%",True,"OpenCL CL_INVALID_DEVICE (-33) error in opencl_common.c:452 - Error querying PLATFORM_NAME - **Environment Setup**
OS: Ubuntu 16.04.5
JohnTheRipper
https://github.com/magnumripper/JohnTheRipper/blob/bleeding-jumbo/doc/INSTALL-UBUNTU
***configure and make using this install guide.
**What was used:**
sudo apt-get install build-essential libssl-dev git zlib1g-dev
sudo apt-get install yasm libgmp-dev libpcap-dev pkg-config libbz2-dev
sudo apt-get install nvidia-opencl-dev
sudo apt-get install libopenmpi-dev openmpi-bin
./configure --enable-mpi
make -s clean && make -sj4
**Sample test file:**
./tezos2john.py 'put guide flat machine express cave hello connect stay local spike ski romance express brass' 'jbzbdybr.vpbdbxnn@tezos.example.org' 'tz1eTjPtwYjdcBMStwVdEcwY2YE3th1bXyMR' > tezos
**Running John**
tezos@tezos-Desktop:~/JohnTheRipper/run$ ./john --devices=gpu --fork=2 --format=tezos-opencl --session=tezos tezos
Using default input encoding: UTF-8
Loaded 1 password hash (tezos-opencl, Tezos Key [PBKDF2-SHA512 OpenCL])
Cost 1 (iteration count) is 2048 for all loaded hashes
Will run 4 OpenMP threads per process (8 total across 2 processes)
Node numbers 1-2 of 2 (fork)
**OpenCL CL_INVALID_DEVICE (-33) error in opencl_common.c:452 - Error querying PLATFORM_NAME**
Device 0@tezos-Desktop: GeForce GTX 1070 Ti
Press 'q' or Ctrl-C to abort, almost any other key for status
1 0g 0:00:00:04 3/3 0g/s 38678p/s 38678c/s 38678C/s GPU:61°C clmom..mia196
1 0g 0:00:00:09 3/3 0g/s 51592p/s 51592c/s 51592C/s GPU:61°C 196642..biliyou
1 0g 0:00:00:11 3/3 0g/s 54131p/s 54131c/s 54131C/s GPU:60°C 115la..0998am
Waiting for 1 child to terminate
Session aborted
**BUG/Issues:** It can't run across multiple GPUs
**System Information:**
tezos@tezos-Desktop:~/JohnTheRipper/run$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 21
Model: 2
Model name: AMD FX(tm)-8350 Eight-Core Processor
Stepping: 0
CPU MHz: 1411.029
CPU max MHz: 4000.0000
CPU min MHz: 1400.0000
BogoMIPS: 8037.07
Virtualization: AMD-V
L1d cache: 16K
L1i cache: 64K
L2 cache: 2048K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb cpb hw_pstate ssbd ibpb vmmcall bmi1 arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
tezos@tezos-Desktop:~/JohnTheRipper/run$ ./john --list=build-info
Version: 1.8.0.13-jumbo-1-bleeding-9c715c6 2018-11-04 18:00:16 +0530
Build: linux-gnu 64-bit x86_64 XOP AC MPI + OMP
SIMD: XOP, interleaving: MD4:2 MD5:2 SHA1:1 SHA256:2 SHA512:1
CPU tests: XOP
$JOHN is ./
Format interface version: 14
Max. number of reported tunable costs: 4
Rec file version: REC4
Charset file version: CHR3
CHARSET_MIN: 1 (0x01)
CHARSET_MAX: 255 (0xff)
CHARSET_LENGTH: 24
SALT_HASH_SIZE: 1048576
Max. Markov mode level: 400
Max. Markov mode password length: 30
gcc version: 5.4.0
GNU libc version: 2.23 (loaded: 2.23)
OpenCL headers version: 2.0
Crypto library: OpenSSL
OpenSSL library version: 01000207f
OpenSSL 1.0.2g 1 Mar 2016
GMP library version: 6.1.0
File locking: fcntl()
fseek(): fseek
ftell(): ftell
fopen(): fopen
memmem(): System's
tezos@tezos-Desktop:~/JohnTheRipper/run$ ./john --list=opencl-devices
Platform #0 name: NVIDIA CUDA, version: OpenCL 1.2 CUDA 10.0.185
Device #0 (0) name: GeForce GTX 1070 Ti
Device vendor: NVIDIA Corporation
Device type: GPU (LE)
Device version: OpenCL 1.2 CUDA
Driver version: 410.73 [recommended]
Native vector widths: char 1, short 1, int 1, long 1
Preferred vector width: char 1, short 1, int 1, long 1
Global Memory: 7.9 GB
Global Memory Cache: 304 KB
Local Memory: 48 KB (Local)
Constant Buffer size: 64 KB
Max memory alloc. size: 2 GB
Max clock (MHz): 1683
Profiling timer res.: 1000 ns
Max Work Group Size: 1024
Parallel compute cores: 19
CUDA cores: 2432 (19 x 128)
Speed index: 4093056
Warp size: 32
Max. GPRs/work-group: 65536
Compute capability: 6.1 (sm_61)
Kernel exec. timeout: yes
NVML id: 0
PCI device topology: 01:00.0
PCI lanes: 16/16
Fan speed: 22%
Temperature: 59°C
Utilization: 4%
Device #1 (1) name: GeForce GTX 1070 Ti
Device vendor: NVIDIA Corporation
Device type: GPU (LE)
Device version: OpenCL 1.2 CUDA
Driver version: 410.73 [recommended]
Native vector widths: char 1, short 1, int 1, long 1
Preferred vector width: char 1, short 1, int 1, long 1
Global Memory: 8 GB
Global Memory Cache: 304 KB
Local Memory: 48 KB (Local)
Constant Buffer size: 64 KB
Max memory alloc. size: 2 GB
Max clock (MHz): 1683
Profiling timer res.: 1000 ns
Max Work Group Size: 1024
Parallel compute cores: 19
CUDA cores: 2432 (19 x 128)
Speed index: 4093056
Warp size: 32
Max. GPRs/work-group: 65536
Compute capability: 6.1 (sm_61)
Kernel exec. timeout: no
NVML id: 1
PCI device topology: 05:00.0
PCI lanes: 4/16
Fan speed: 0%
Temperature: 34°C
Utilization: 9%",1,opencl cl invalid device error in opencl common c error querying platform name environment setup os ubuntu johntheripper configure and make using this install guide what was used sudo apt get install build essential libssl dev git dev sudo apt get install yasm libgmp dev libpcap dev pkg config dev sudo apt get install nvidia opencl dev sudo apt get install libopenmpi dev openmpi bin configure enable mpi make s clean make sample test file py put guide flat machine express cave hello connect stay local spike ski romance express brass jbzbdybr vpbdbxnn tezos example org tezos running john tezos tezos desktop johntheripper run john devices gpu fork format tezos opencl session tezos tezos using default input encoding utf loaded password hash tezos opencl tezos key cost iteration count is for all loaded hashes will run openmp threads per process total across processes node numbers of fork opencl cl invalid device error in opencl common c error querying platform name device tezos desktop geforce gtx ti press q or ctrl c to abort almost any other key for status s s s s gpu °c clmom s s s s gpu °c biliyou s s s s gpu °c waiting for child to terminate session aborted bug issues it can t run across multiple gpus system information tezos tezos desktop johntheripper run lscpu architecture cpu op mode s bit bit byte order little endian cpu s on line cpu s list thread s per core core s per socket socket s numa node s vendor id authenticamd cpu family model model name amd fx tm eight core processor stepping cpu mhz cpu max mhz cpu min mhz bogomips virtualization amd v cache cache cache cache numa cpu s flags fpu vme de pse tsc msr pae mce apic sep mtrr pge mca cmov pat clflush mmx fxsr sse ht syscall nx mmxext fxsr opt rdtscp lm constant tsc rep good nopl nonstop tsc cpuid extd apicid aperfmperf pni pclmulqdq monitor fma popcnt aes xsave avx lahf lm cmp legacy svm extapic legacy abm misalignsse osvw ibs xop skinit wdt lwp tce nodeid msr tbm topoext perfctr core perfctr nb cpb hw pstate ssbd ibpb vmmcall arat npt lbrv svm lock nrip save tsc scale vmcb clean flushbyasid decodeassists pausefilter pfthreshold tezos tezos desktop johntheripper run john list build info version jumbo bleeding build linux gnu bit xop ac mpi omp simd xop interleaving cpu tests xop john is format interface version max number of reported tunable costs rec file version charset file version charset min charset max charset length salt hash size max markov mode level max markov mode password length gcc version gnu libc version loaded opencl headers version crypto library openssl openssl library version openssl mar gmp library version file locking fcntl fseek fseek ftell ftell fopen fopen memmem system s tezos tezos desktop johntheripper run john list opencl devices platform name nvidia cuda version opencl cuda device name geforce gtx ti device vendor nvidia corporation device type gpu le device version opencl cuda driver version native vector widths char short int long preferred vector width char short int long global memory gb global memory cache kb local memory kb local constant buffer size kb max memory alloc size gb max clock mhz profiling timer res ns max work group size parallel compute cores cuda cores x speed index warp size max gprs work group compute capability sm kernel exec timeout yes nvml id pci device topology pci lanes fan speed temperature °c utilization device name geforce gtx ti device vendor nvidia corporation device type gpu le device version opencl cuda driver version native vector widths char short int long preferred vector width char short int long global memory gb global memory cache kb local memory kb local constant buffer size kb max memory alloc size gb max clock mhz profiling timer res ns max work group size parallel compute cores cuda cores x speed index warp size max gprs work group compute capability sm kernel exec timeout no nvml id pci device topology pci lanes fan speed temperature °c utilization ,1
88487,10572714114.0,IssuesEvent,2019-10-07 10:11:46,StefanNieuwenhuis/databindr,https://api.github.com/repos/StefanNieuwenhuis/databindr,closed,Add proper readme,documentation,"As user I want to know how to use this library, so I want to be informed through an elaborate readme.",1.0,"Add proper readme - As user I want to know how to use this library, so I want to be informed through an elaborate readme.",0,add proper readme as user i want to know how to use this library so i want to be informed through an elaborate readme ,0
626,8433670530.0,IssuesEvent,2018-10-17 08:02:43,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,opened,OpenCL formats failing on macOS with Intel HD Graphics,portability,"See also #3235, #3434
```
Device 1: Intel(R) HD Graphics 630
Testing: ansible-opencl, Ansible Vault [PBKDF2-SHA256 HMAC-SHA256 OpenCL]... FAILED (cmp_all(49))
Testing: axcrypt-opencl [SHA1 AES OpenCL]... FAILED (get_key(6))
Testing: dmg-opencl, Apple DMG [PBKDF2-SHA1 3DES/AES OpenCL]... FAILED (cmp_all(1))
Testing: EncFS-opencl [PBKDF2-SHA1 AES OpenCL]... FAILED (cmp_all(1))
Testing: krb5pa-sha1-opencl, Kerberos 5 AS-REQ Pre-Auth etype 17/18 [PBKDF2-SHA1 OpenCL]... FAILED (cmp_all(1))
Testing: krb5asrep-aes-opencl, Kerberos 5 AS-REP etype 17/18 [PBKDF2-SHA1 OpenCL]... Abort trap: 6
```",True,"OpenCL formats failing on macOS with Intel HD Graphics - See also #3235, #3434
```
Device 1: Intel(R) HD Graphics 630
Testing: ansible-opencl, Ansible Vault [PBKDF2-SHA256 HMAC-SHA256 OpenCL]... FAILED (cmp_all(49))
Testing: axcrypt-opencl [SHA1 AES OpenCL]... FAILED (get_key(6))
Testing: dmg-opencl, Apple DMG [PBKDF2-SHA1 3DES/AES OpenCL]... FAILED (cmp_all(1))
Testing: EncFS-opencl [PBKDF2-SHA1 AES OpenCL]... FAILED (cmp_all(1))
Testing: krb5pa-sha1-opencl, Kerberos 5 AS-REQ Pre-Auth etype 17/18 [PBKDF2-SHA1 OpenCL]... FAILED (cmp_all(1))
Testing: krb5asrep-aes-opencl, Kerberos 5 AS-REP etype 17/18 [PBKDF2-SHA1 OpenCL]... Abort trap: 6
```",1,opencl formats failing on macos with intel hd graphics see also device intel r hd graphics testing ansible opencl ansible vault failed cmp all testing axcrypt opencl failed get key testing dmg opencl apple dmg failed cmp all testing encfs opencl failed cmp all testing opencl kerberos as req pre auth etype failed cmp all testing aes opencl kerberos as rep etype abort trap ,1
173752,14436483356.0,IssuesEvent,2020-12-07 10:11:38,cemac/SWIFT-Testbed3,https://api.github.com/repos/cemac/SWIFT-Testbed3,opened,Documentation ,documentation,"Document, license and add DOI
1. READMEs for each section
2. Wiki
3. Userguides for tools
4. Readme overview",1.0,"Documentation - Document, license and add DOI
1. READMEs for each section
2. Wiki
3. Userguides for tools
4. Readme overview",0,documentation document license and add doi readmes for each section wiki userguides for tools readme overview,0
314,5775760768.0,IssuesEvent,2017-04-28 11:11:53,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,"Travis CI, build and test JtR jumbo on OS X / macOS",enhancement portability,"https://docs.travis-ci.com/user/osx-ci-environment/ is helpful. https://docs.travis-ci.com/user/multi-os/ is another useful link.
@claudioandre is this something which interests you?",True,"Travis CI, build and test JtR jumbo on OS X / macOS - https://docs.travis-ci.com/user/osx-ci-environment/ is helpful. https://docs.travis-ci.com/user/multi-os/ is another useful link.
@claudioandre is this something which interests you?",1,travis ci build and test jtr jumbo on os x macos is helpful is another useful link claudioandre is this something which interests you ,1
708,9602613278.0,IssuesEvent,2019-05-10 14:58:16,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,opened,Detect too old OpenCL headers (or lib),enhancement portability,"The configure script should ensure we have OpenCL >= 1.2 in headers **and** lib, and then some of our headers should enforce the actually sourced OpenCL header also is >= 1.2",True,"Detect too old OpenCL headers (or lib) - The configure script should ensure we have OpenCL >= 1.2 in headers **and** lib, and then some of our headers should enforce the actually sourced OpenCL header also is >= 1.2",1,detect too old opencl headers or lib the configure script should ensure we have opencl in headers and lib and then some of our headers should enforce the actually sourced opencl header also is ,1
10,2546580172.0,IssuesEvent,2015-01-30 01:26:06,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,opened,Add int128 support for 32-bit builds using 64-bit lo/hi structs,portability,This is just the same as math.h's 64-bit functions using 32-bit lo/hi structs. We should add it to mpz_int128.h with #ifdef's so it's used instead of int128 when needed. This could also be contributed to [upstream PRINCE](https://github.com/jsteube/princeprocessor).,True,Add int128 support for 32-bit builds using 64-bit lo/hi structs - This is just the same as math.h's 64-bit functions using 32-bit lo/hi structs. We should add it to mpz_int128.h with #ifdef's so it's used instead of int128 when needed. This could also be contributed to [upstream PRINCE](https://github.com/jsteube/princeprocessor).,1,add support for bit builds using bit lo hi structs this is just the same as math h s bit functions using bit lo hi structs we should add it to mpz h with ifdef s so it s used instead of when needed this could also be contributed to ,1
710,9607008651.0,IssuesEvent,2019-05-11 15:19:42,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,"Misreported ""virtual"" speeds on Linux/sparc64",portability,"In testing on Linux/sparc64, I notice that `--test` reports for ""c/s virtual"" are 500 times higher than expected (e.g., 500 times higher than ""c/s real"" when running one thread). I don't yet know what causes this - could be our bug, could be a Linux kernel bug, or maybe something else.",True,"Misreported ""virtual"" speeds on Linux/sparc64 - In testing on Linux/sparc64, I notice that `--test` reports for ""c/s virtual"" are 500 times higher than expected (e.g., 500 times higher than ""c/s real"" when running one thread). I don't yet know what causes this - could be our bug, could be a Linux kernel bug, or maybe something else.",1,misreported virtual speeds on linux in testing on linux i notice that test reports for c s virtual are times higher than expected e g times higher than c s real when running one thread i don t yet know what causes this could be our bug could be a linux kernel bug or maybe something else ,1
433048,30308663004.0,IssuesEvent,2023-07-10 11:17:20,hwchase17/langchain,https://api.github.com/repos/hwchase17/langchain,closed,DOC: Bug in loading Chroma from disk (vectorstores/integrations/chroma),auto:bug auto:documentation,"### Issue with current documentation:
https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/chroma.html#basic-example-including-saving-to-disk
## Environment
- macOS
- Python 3.10.9
- langchain 0.0.228
- chromadb 0.3.26
Use https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/state_of_the_union.txt
## Procedure
1. Run the following Python script
ref: https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/data_connection/vectorstores/integrations/chroma.ipynb
```diff
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
# load the document and split it into chunks
loader = TextLoader(""../../../state_of_the_union.txt"")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name=""all-MiniLM-L6-v2"")
# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)
# query it
query = ""What did the president say about Ketanji Brown Jackson""
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory=""./chroma_db"")
db2.persist()
-docs = db.similarity_search(query)
+docs = db2.similarity_search(query)
# load from disk
db3 = Chroma(persist_directory=""./chroma_db"")
-docs = db.similarity_search(query)
+docs = db3.similarity_search(query) # ValueError raised
print(docs[0].page_content)
```
## Expected behavior
`print(docs[0].page_content)` with db3
## Actual behavior
>ValueError: You must provide embeddings or a function to compute them
```
Traceback (most recent call last):
File ""/.../issue_report.py"", line 35, in
docs = db3.similarity_search(query)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 174, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 242, in similarity_search_with_score
results = self.__query_collection(
File ""/.../venv/lib/python3.10/site-packages/langchain/utils.py"", line 55, in wrapper
return func(*args, **kwargs)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 121, in __query_collection
return self._collection.query(
File ""/.../venv/lib/python3.10/site-packages/chromadb/api/models/Collection.py"", line 209, in query
raise ValueError(
ValueError: You must provide embeddings or a function to compute them
```
### Idea or request for content:
Fixed by specifying the `embedding_function` parameter.
```diff
-db3 = Chroma(persist_directory=""./chroma_db"")
+db3 = Chroma(persist_directory=""./chroma_db"", embedding_function=embedding_function)
docs = db3.similarity_search(query)
print(docs[0].page_content)
```
(Added) ref: https://github.com/hwchase17/langchain/blob/v0.0.228/langchain/vectorstores/chroma.py#L62",1.0,"DOC: Bug in loading Chroma from disk (vectorstores/integrations/chroma) - ### Issue with current documentation:
https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/chroma.html#basic-example-including-saving-to-disk
## Environment
- macOS
- Python 3.10.9
- langchain 0.0.228
- chromadb 0.3.26
Use https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/state_of_the_union.txt
## Procedure
1. Run the following Python script
ref: https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/data_connection/vectorstores/integrations/chroma.ipynb
```diff
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
# load the document and split it into chunks
loader = TextLoader(""../../../state_of_the_union.txt"")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name=""all-MiniLM-L6-v2"")
# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)
# query it
query = ""What did the president say about Ketanji Brown Jackson""
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory=""./chroma_db"")
db2.persist()
-docs = db.similarity_search(query)
+docs = db2.similarity_search(query)
# load from disk
db3 = Chroma(persist_directory=""./chroma_db"")
-docs = db.similarity_search(query)
+docs = db3.similarity_search(query) # ValueError raised
print(docs[0].page_content)
```
## Expected behavior
`print(docs[0].page_content)` with db3
## Actual behavior
>ValueError: You must provide embeddings or a function to compute them
```
Traceback (most recent call last):
File ""/.../issue_report.py"", line 35, in
docs = db3.similarity_search(query)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 174, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 242, in similarity_search_with_score
results = self.__query_collection(
File ""/.../venv/lib/python3.10/site-packages/langchain/utils.py"", line 55, in wrapper
return func(*args, **kwargs)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 121, in __query_collection
return self._collection.query(
File ""/.../venv/lib/python3.10/site-packages/chromadb/api/models/Collection.py"", line 209, in query
raise ValueError(
ValueError: You must provide embeddings or a function to compute them
```
### Idea or request for content:
Fixed by specifying the `embedding_function` parameter.
```diff
-db3 = Chroma(persist_directory=""./chroma_db"")
+db3 = Chroma(persist_directory=""./chroma_db"", embedding_function=embedding_function)
docs = db3.similarity_search(query)
print(docs[0].page_content)
```
(Added) ref: https://github.com/hwchase17/langchain/blob/v0.0.228/langchain/vectorstores/chroma.py#L62",0,doc bug in loading chroma from disk vectorstores integrations chroma issue with current documentation environment macos python langchain chromadb use procedure run the following python script ref diff import from langchain embeddings sentence transformer import sentencetransformerembeddings from langchain text splitter import charactertextsplitter from langchain vectorstores import chroma from langchain document loaders import textloader load the document and split it into chunks loader textloader state of the union txt documents loader load split it into chunks text splitter charactertextsplitter chunk size chunk overlap docs text splitter split documents documents create the open source embedding function embedding function sentencetransformerembeddings model name all minilm load it into chroma db chroma from documents docs embedding function query it query what did the president say about ketanji brown jackson docs db similarity search query print results print docs page content save to disk chroma from documents docs embedding function persist directory chroma db persist docs db similarity search query docs similarity search query load from disk chroma persist directory chroma db docs db similarity search query docs similarity search query valueerror raised print docs page content expected behavior print docs page content with actual behavior valueerror you must provide embeddings or a function to compute them traceback most recent call last file issue report py line in docs similarity search query file venv lib site packages langchain vectorstores chroma py line in similarity search docs and scores self similarity search with score query k filter filter file venv lib site packages langchain vectorstores chroma py line in similarity search with score results self query collection file venv lib site packages langchain utils py line in wrapper return func args kwargs file venv lib site packages langchain vectorstores chroma py line in query collection return self collection query file venv lib site packages chromadb api models collection py line in query raise valueerror valueerror you must provide embeddings or a function to compute them idea or request for content fixed by specifying the embedding function parameter diff chroma persist directory chroma db chroma persist directory chroma db embedding function embedding function docs similarity search query print docs page content added ref ,0
83898,3644692607.0,IssuesEvent,2016-02-15 11:06:01,MinetestForFun/server-minetestforfun-skyblock,https://api.github.com/repos/MinetestForFun/server-minetestforfun-skyblock,closed,Protector bug,Modding ➤ BugFix Priority: High,"This only applies to the normal protector logo, not the blocks or the 3x protectors, just the normal protector logos(protector:protect2).
Logos currently **cannot** be removed. When attempting to dig them an ""Unknown Object"" texture appears next to it and the logo remains.
I copied this error from the server log right after i found this bug:
2016-02-12 21:41:41: ERROR[ServerThread]: LuaEntity name ""protector:display"" not defined
:small_orange_diamond:",1.0,"Protector bug - This only applies to the normal protector logo, not the blocks or the 3x protectors, just the normal protector logos(protector:protect2).
Logos currently **cannot** be removed. When attempting to dig them an ""Unknown Object"" texture appears next to it and the logo remains.
I copied this error from the server log right after i found this bug:
2016-02-12 21:41:41: ERROR[ServerThread]: LuaEntity name ""protector:display"" not defined
:small_orange_diamond:",0,protector bug this only applies to the normal protector logo not the blocks or the protectors just the normal protector logos protector logos currently cannot be removed when attempting to dig them an unknown object texture appears next to it and the logo remains i copied this error from the server log right after i found this bug error luaentity name protector display not defined small orange diamond ,0
763274,26749934421.0,IssuesEvent,2023-01-30 18:44:50,GoogleCloudPlatform/alloydb-auth-proxy,https://api.github.com/repos/GoogleCloudPlatform/alloydb-auth-proxy,closed,cmd: TestPProfServer failed,type: bug priority: p2 flakybot: issue,"This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 7ccd62a356d3a2af21958163f7814a085a2eb365
buildURL: https://github.com/GoogleCloudPlatform/alloydb-auth-proxy/actions/runs/4045923608
status: failed
Test output
2023/01/30 16:29:08 SIGINT signal received. Shutting down...
2023/01/30 16:29:08 The proxy has encountered a terminal error: unable to start: [proj.region.clust.inst] Unable to mount socket:
root_test.go:936: failed to dial endpoint: Get ""http://localhost:9191/debug/pprof/"": dial tcp [::1]:9191: connectex: No connection could be made because the target machine actively refused it.
",1.0,"cmd: TestPProfServer failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 7ccd62a356d3a2af21958163f7814a085a2eb365
buildURL: https://github.com/GoogleCloudPlatform/alloydb-auth-proxy/actions/runs/4045923608
status: failed
Test output
2023/01/30 16:29:08 SIGINT signal received. Shutting down...
2023/01/30 16:29:08 The proxy has encountered a terminal error: unable to start: [proj.region.clust.inst] Unable to mount socket:
root_test.go:936: failed to dial endpoint: Get ""http://localhost:9191/debug/pprof/"": dial tcp [::1]:9191: connectex: No connection could be made because the target machine actively refused it.
",0,cmd testpprofserver failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output sigint signal received shutting down the proxy has encountered a terminal error unable to start unable to mount socket root test go failed to dial endpoint get dial tcp connectex no connection could be made because the target machine actively refused it ,0
49524,12369550692.0,IssuesEvent,2020-05-18 15:25:07,tensorflow/tensorflow,https://api.github.com/repos/tensorflow/tensorflow,opened,Tensorflow containers are missing from Docker Hub,type:build/install,"Your docker pages point at
https://hub.docker.com/r/tensorflow/tensorflow
Today that returns ""404"" Oops! Page not found.",1.0,"Tensorflow containers are missing from Docker Hub - Your docker pages point at
https://hub.docker.com/r/tensorflow/tensorflow
Today that returns ""404"" Oops! Page not found.",0,tensorflow containers are missing from docker hub your docker pages point at today that returns oops page not found ,0
58596,14292097415.0,IssuesEvent,2020-11-24 00:12:19,devtron-labs/devtron,https://api.github.com/repos/devtron-labs/devtron,opened,"CVE-2019-18658 (High) detected in k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e, k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e",security vulnerability,"## CVE-2019-18658 - High Severity Vulnerability
Vulnerable Libraries - k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e, k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
The Kubernetes Package Manager
Dependency Hierarchy:
- :x: **k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e** (Vulnerable Library)
k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
The Kubernetes Package Manager
Dependency Hierarchy:
- k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e (Root Library)
- :x: **k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e** (Vulnerable Library)
Found in HEAD commit: f7db3d4b83b1d3b0008f56e9a649b36ed2ae830d
Found in base branch: main
Vulnerability Details
In Helm 2.x before 2.15.2, commands that deal with loading a chart as a directory or packaging a chart provide an opportunity for a maliciously designed chart to include sensitive content such as /etc/passwd, or to execute a denial of service (DoS) via a special file such as /dev/urandom, via symlinks. No version of Tiller is known to be impacted. This is a client-only issue.
Publish Date: 2019-11-12
URL: CVE-2019-18658
CVSS 3 Score Details (9.8)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18658
Release Date: 2019-11-12
Fix Resolution: v2.15.2
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-18658 (High) detected in k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e, k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e - ## CVE-2019-18658 - High Severity Vulnerability
Vulnerable Libraries - k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e, k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
The Kubernetes Package Manager
Dependency Hierarchy:
- :x: **k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e** (Vulnerable Library)
k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
The Kubernetes Package Manager
Dependency Hierarchy:
- k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e (Root Library)
- :x: **k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e** (Vulnerable Library)
Found in HEAD commit: f7db3d4b83b1d3b0008f56e9a649b36ed2ae830d
Found in base branch: main
Vulnerability Details
In Helm 2.x before 2.15.2, commands that deal with loading a chart as a directory or packaging a chart provide an opportunity for a maliciously designed chart to include sensitive content such as /etc/passwd, or to execute a denial of service (DoS) via a special file such as /dev/urandom, via symlinks. No version of Tiller is known to be impacted. This is a client-only issue.
Publish Date: 2019-11-12
URL: CVE-2019-18658
CVSS 3 Score Details (9.8)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18658
Release Date: 2019-11-12
Fix Resolution: v2.15.2
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in io helm pkg chartutil io helm pkg sympath cve high severity vulnerability vulnerable libraries io helm pkg chartutil io helm pkg sympath io helm pkg chartutil the kubernetes package manager dependency hierarchy x io helm pkg chartutil vulnerable library io helm pkg sympath the kubernetes package manager dependency hierarchy io helm pkg chartutil root library x io helm pkg sympath vulnerable library found in head commit a href found in base branch main vulnerability details in helm x before commands that deal with loading a chart as a directory or packaging a chart provide an opportunity for a maliciously designed chart to include sensitive content such as etc passwd or to execute a denial of service dos via a special file such as dev urandom via symlinks no version of tiller is known to be impacted this is a client only issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
542,7656893084.0,IssuesEvent,2018-05-10 17:49:08,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,opened,Formats failing on macOS w/ Intel(R) HD Graphics 630,portability,"See also #3235
This one fails now (I think it passes with `--test=0` - this was a `--test` run)
```
Benchmarking: PBKDF2-HMAC-MD5-opencl [PBKDF2-MD5 OpenCL]... Abort trap: 6
```",True,"Formats failing on macOS w/ Intel(R) HD Graphics 630 - See also #3235
This one fails now (I think it passes with `--test=0` - this was a `--test` run)
```
Benchmarking: PBKDF2-HMAC-MD5-opencl [PBKDF2-MD5 OpenCL]... Abort trap: 6
```",1,formats failing on macos w intel r hd graphics see also this one fails now i think it passes with test this was a test run benchmarking hmac opencl abort trap ,1
250,4860119220.0,IssuesEvent,2016-11-13 23:45:26,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,clang build fails on well and bull,portability,"On super, clang isn't installed.
On well:
```
frank@well:~$ make -s distclean; ./configure CC=clang --disable-cuda && make -s clean && make -s && ../run/john --test=0 --format=opencl
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking whether to compile using MPI... no
checking for gcc... clang
checking whether the C compiler works... yes
[...]
clang: warning: argument unused during compilation: '-arch host'
configure: creating ./dynamic_big_crypt.c
checking for john.local.conf... exists
Configured for building John the Ripper jumbo:
Target CPU .................................. x86_64 AVX, 64-bit LE
AES-NI support .............................. run-time detection
Target OS ................................... linux-gnu
Cross compiling ............................. no
Legacy arch header .......................... x86-64.h
Optional libraries/features found:
Experimental code ........................... no
OpenMPI support (default disabled) .......... no
Fork support ................................ yes
OpenMP support .............................. no
OpenCL support .............................. yes
CUDA support ................................ no
Generic crypt(3) format ..................... yes
Rexgen (extra cracking mode) ................ no
GMP (PRINCE mode and faster SRP formats) .... yes
PCAP (vncpcap2john and SIPdump) ............. yes
Z (pkzip format, gpg2john) .................. yes
BZ2 (gpg2john extra decompression logic) .... yes
128-bit integer (faster PRINCE mode) ........ yes
Memory map (share/page large files) ......... yes
Development options (these may hurt performance when enabled):
Memdbg memory debugging settings ............ disabled
AddressSanitizer (""ASan"") ................... disabled
UndefinedBehaviorSanitizer (""UbSan"") ........ disabled
Install missing libraries to get any needed features that were omitted.
Configure finished. Now 'make clean && make -s' to compile.
clang: warning: argument unused during compilation: '-arch host'
[...]
clang: warning: argument unused during compilation: '-arch host'
In file included from dynamic_big_crypt.c:88:
./gost.h:80:10: warning: 'bswap_32' macro redefined
# define bswap_32(x) _JtR_Swap_32(x)
^
/usr/include/byteswap.h:33:9: note: previous definition is here
#define bswap_32(x) __bswap_32 (x)
^
In file included from dynamic_big_crypt.c:88:
./gost.h:102:10: warning: 'bswap_64' macro redefined
# define bswap_64(x) _JtR_Swap_64(x)
^
/usr/include/byteswap.h:37:10: note: previous definition is here
# define bswap_64(x) __bswap_64 (x)
^
2 warnings generated.
clang: warning: argument unused during compilation: '-arch host'
In file included from dynamic_compiler.c:149:
./gost.h:80:10: warning: 'bswap_32' macro redefined
# define bswap_32(x) _JtR_Swap_32(x)
^
/usr/include/byteswap.h:33:9: note: previous definition is here
#define bswap_32(x) __bswap_32 (x)
^
In file included from dynamic_compiler.c:149:
./gost.h:102:10: warning: 'bswap_64' macro redefined
# define bswap_64(x) _JtR_Swap_64(x)
^
/usr/include/byteswap.h:37:10: note: previous definition is here
# define bswap_64(x) __bswap_64 (x)
^
2 warnings generated.
clang: warning: argument unused during compilation: '-arch host'
[...]
clang: warning: argument unused during compilation: '-arch host'
fatal error: error in backend: Cannot select: 0x1c998e0: ch = store 0x1c987d0, 0x1c70080, 0x1c988d0, 0x1c1fb50 [ID=182] dbg:HDAA_fmt_plug.c:373:17
0x1c70080: x86mmx = llvm.x86.mmx.padd.d 0x1c224a0, 0x1c74b10, 0x1c6fb80 [ORD=488] [ID=180]
0x1c224a0: i64 = Constant<607> [ORD=376] [ID=35]
0x1c74b10: x86mmx = llvm.x86.mmx.padd.d 0x1c224a0, 0x1c74310, 0x1c241c0 [ORD=455] [ID=170]
0x1c224a0: i64 = Constant<607> [ORD=376] [ID=35]
0x1c74310: x86mmx = llvm.x86.mmx.punpckhbw 0x1c229a0, 0x1c8a240, 0x1c8a440 [ORD=445] [ID=168]
0x1c229a0: i64 = Constant<663> [ORD=368] [ID=34]
0x1c8a240: x86mmx = llvm.x86.mmx.pand 0x1c249c0, 0x1c8a140, 0x1c20550 [ORD=433] [ID=166]
0x1c249c0: i64 = Constant<615> [ORD=356] [ID=32]
0x1c8a140: x86mmx = llvm.x86.mmx.psrli.q 0x1c242c0, 0x1c49ad0, 0x1c243c0 [ORD=432] [ID=164]
0x1c242c0: i64 = Constant<653> [ORD=354] [ID=31]
0x1c49ad0: x86mmx,ch = load 0x1c89d40, 0x1c8a640, 0x1c1fb50 [ID=163]
0x1c8a640: i64 = or 0x1c268f0, 0x1c89a40 [ID=61] dbg:HDAA_fmt_plug.c:345:20
0x1c268f0: i64 = FrameIndex<1> [ORD=342] [ID=25]
0x1c89a40: i64 = Constant<8> [ORD=426] [ID=37]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c243c0: i32 = Constant<4> [ORD=354] [ID=30]
0x1c20550: x86mmx,ch = load 0x1c246c0, 0x1c2cf50, 0x1c1fb50 [ID=81]
0x1c2cf50: i64 = FrameIndex<5> [ID=46]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c8a440: x86mmx = llvm.x86.mmx.pand 0x1c249c0, 0x1c49ad0, 0x1c20550 [ORD=435] [ID=165]
0x1c249c0: i64 = Constant<615> [ORD=356] [ID=32]
0x1c49ad0: x86mmx,ch = load 0x1c89d40, 0x1c8a640, 0x1c1fb50 [ID=163]
0x1c8a640: i64 = or 0x1c268f0, 0x1c89a40 [ID=61] dbg:HDAA_fmt_plug.c:345:20
0x1c268f0: i64 = FrameIndex<1> [ORD=342] [ID=25]
0x1c89a40: i64 = Constant<8> [ORD=426] [ID=37]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c20550: x86mmx,ch = load 0x1c246c0, 0x1c2cf50, 0x1c1fb50 [ID=81]
0x1c2cf50: i64 = FrameIndex<5> [ID=46]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c241c0: x86mmx = bitcast 0x1c245c0 [ID=76]
0x1c245c0: i64,ch = load 0x1b8cb58, 0x1c49ed0, 0x1c1fb50 [ID=70]
0x1c49ed0: i64 = X86ISD::Wrapper 0x1c4a1d0 [ID=64]
0x1c4a1d0: i64 = TargetConstantPool<<4 x i32> > 0 [ID=49]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c6fb80: x86mmx = llvm.x86.mmx.padd.d 0x1c224a0, 0x1c6f680, 0x1c255e0 [ORD=482] [ID=178]
0x1c224a0: i64 = Constant<607> [ORD=376] [ID=35]
0x1c6f680: x86mmx = llvm.x86.mmx.pmull.w 0x1c2a630, 0x1c8b250, 0x1c26ff0 [ORD=475] [ID=176]
0x1c2a630: i64 = Constant<635> [ORD=402] [ID=36]
0x1c8b250: x86mmx = llvm.x86.mmx.pand 0x1c249c0, 0x1c8b150, 0x1c22aa0 [ORD=465] [ID=174]
0x1c249c0: i64 = Constant<615> [ORD=356] [ID=32]
0x1c8b150: x86mmx = llvm.x86.mmx.psrli.q 0x1c242c0, 0x1c74b10, 0x1c243c0 [ORD=464] [ID=172]
0x1c242c0: i64 = Constant<653> [ORD=354] [ID=31]
0x1c74b10: x86mmx = llvm.x86.mmx.padd.d 0x1c224a0, 0x1c74310, 0x1c241c0 [ORD=455] [ID=170]
0x1c224a0: i64 = Constant<607> [ORD=376] [ID=35]
0x1c74310: x86mmx = llvm.x86.mmx.punpckhbw 0x1c229a0, 0x1c8a240, 0x1c8a440 [ORD=445] [ID=168]
0x1c229a0: i64 = Constant<663> [ORD=368] [ID=34]
0x1c8a240: x86mmx = llvm.x86.mmx.pand 0x1c249c0, 0x1c8a140, 0x1c20550 [ORD=433] [ID=166]
0x1c249c0: i64 = Constant<615> [ORD=356] [ID=32]
0x1c8a140: x86mmx = llvm.x86.mmx.psrli.q 0x1c242c0, 0x1c49ad0, 0x1c243c0 [ORD=432] [ID=164]
0x1c20550: x86mmx,ch = load 0x1c246c0, 0x1c2cf50, 0x1c1fb50 [ID=81]
0x1c8a440: x86mmx = llvm.x86.mmx.pand 0x1c249c0, 0x1c49ad0, 0x1c20550 [ORD=435] [ID=165]
0x1c249c0: i64 = Constant<615> [ORD=356] [ID=32]
0x1c49ad0: x86mmx,ch = load 0x1c89d40, 0x1c8a640, 0x1c1fb50 [ID=163]
0x1c20550: x86mmx,ch = load 0x1c246c0, 0x1c2cf50, 0x1c1fb50 [ID=81]
0x1c241c0: x86mmx = bitcast 0x1c245c0 [ID=76]
0x1c245c0: i64,ch = load 0x1b8cb58, 0x1c49ed0, 0x1c1fb50 [ID=70]
0x1c49ed0: i64 = X86ISD::Wrapper 0x1c4a1d0 [ID=64]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c243c0: i32 = Constant<4> [ORD=354] [ID=30]
0x1c22aa0: x86mmx,ch = load 0x1c20450, 0x1c2dc60, 0x1c1fb50 [ID=82]
0x1c2dc60: i64 = FrameIndex<4> [ID=45]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c26ff0: x86mmx = bitcast 0x1c1ff50 [ID=78]
0x1c1ff50: i64,ch = load 0x1b8cb58, 0x1c21da0, 0x1c1fb50 [ID=72]
0x1c21da0: i64 = X86ISD::Wrapper 0x1c221a0 [ID=66]
0x1c221a0: i64 = TargetConstantPool<<4 x i32> > 0 [ID=51]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c255e0: x86mmx = bitcast 0x1c2da60 [ID=79]
0x1c2da60: i64,ch = load 0x1b8cb58, 0x1c20150, 0x1c1fb50 [ID=73]
0x1c20150: i64 = X86ISD::Wrapper 0x1c20350 [ID=67]
0x1c20350: i64 = TargetConstantPool<<4 x i32> > 0 [ID=52]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c988d0: i64 = add 0x1c25ce0, 0x1c2d760 [ORD=491] [ID=56] dbg:HDAA_fmt_plug.c:373:17
0x1c25ce0: i64 = FrameIndex<2> [ORD=347] [ID=27]
0x1c2d760: i64 = Constant<24> [ORD=318] [ID=2]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
make[1]: *** [HDAA_fmt_plug.o] Error 1
make: *** [default] Error 2
```
May be clang is among the
```
443 packages can be updated.
280 updates are security updates.
```
On bull:
```
909 packages can be updated.
677 updates are security updates.
[...]
frank@bull:~$ clang --version
Ubuntu clang version 3.0-6ubuntu3 (tags/RELEASE_30/final) (based on LLVM 3.0)
Target: x86_64-pc-linux-gnu
Thread model: posix
```
Trying to build on bull produces the same results as on well.
",True,"clang build fails on well and bull - On super, clang isn't installed.
On well:
```
frank@well:~$ make -s distclean; ./configure CC=clang --disable-cuda && make -s clean && make -s && ../run/john --test=0 --format=opencl
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking whether to compile using MPI... no
checking for gcc... clang
checking whether the C compiler works... yes
[...]
clang: warning: argument unused during compilation: '-arch host'
configure: creating ./dynamic_big_crypt.c
checking for john.local.conf... exists
Configured for building John the Ripper jumbo:
Target CPU .................................. x86_64 AVX, 64-bit LE
AES-NI support .............................. run-time detection
Target OS ................................... linux-gnu
Cross compiling ............................. no
Legacy arch header .......................... x86-64.h
Optional libraries/features found:
Experimental code ........................... no
OpenMPI support (default disabled) .......... no
Fork support ................................ yes
OpenMP support .............................. no
OpenCL support .............................. yes
CUDA support ................................ no
Generic crypt(3) format ..................... yes
Rexgen (extra cracking mode) ................ no
GMP (PRINCE mode and faster SRP formats) .... yes
PCAP (vncpcap2john and SIPdump) ............. yes
Z (pkzip format, gpg2john) .................. yes
BZ2 (gpg2john extra decompression logic) .... yes
128-bit integer (faster PRINCE mode) ........ yes
Memory map (share/page large files) ......... yes
Development options (these may hurt performance when enabled):
Memdbg memory debugging settings ............ disabled
AddressSanitizer (""ASan"") ................... disabled
UndefinedBehaviorSanitizer (""UbSan"") ........ disabled
Install missing libraries to get any needed features that were omitted.
Configure finished. Now 'make clean && make -s' to compile.
clang: warning: argument unused during compilation: '-arch host'
[...]
clang: warning: argument unused during compilation: '-arch host'
In file included from dynamic_big_crypt.c:88:
./gost.h:80:10: warning: 'bswap_32' macro redefined
# define bswap_32(x) _JtR_Swap_32(x)
^
/usr/include/byteswap.h:33:9: note: previous definition is here
#define bswap_32(x) __bswap_32 (x)
^
In file included from dynamic_big_crypt.c:88:
./gost.h:102:10: warning: 'bswap_64' macro redefined
# define bswap_64(x) _JtR_Swap_64(x)
^
/usr/include/byteswap.h:37:10: note: previous definition is here
# define bswap_64(x) __bswap_64 (x)
^
2 warnings generated.
clang: warning: argument unused during compilation: '-arch host'
In file included from dynamic_compiler.c:149:
./gost.h:80:10: warning: 'bswap_32' macro redefined
# define bswap_32(x) _JtR_Swap_32(x)
^
/usr/include/byteswap.h:33:9: note: previous definition is here
#define bswap_32(x) __bswap_32 (x)
^
In file included from dynamic_compiler.c:149:
./gost.h:102:10: warning: 'bswap_64' macro redefined
# define bswap_64(x) _JtR_Swap_64(x)
^
/usr/include/byteswap.h:37:10: note: previous definition is here
# define bswap_64(x) __bswap_64 (x)
^
2 warnings generated.
clang: warning: argument unused during compilation: '-arch host'
[...]
clang: warning: argument unused during compilation: '-arch host'
fatal error: error in backend: Cannot select: 0x1c998e0: ch = store 0x1c987d0, 0x1c70080, 0x1c988d0, 0x1c1fb50 [ID=182] dbg:HDAA_fmt_plug.c:373:17
0x1c70080: x86mmx = llvm.x86.mmx.padd.d 0x1c224a0, 0x1c74b10, 0x1c6fb80 [ORD=488] [ID=180]
0x1c224a0: i64 = Constant<607> [ORD=376] [ID=35]
0x1c74b10: x86mmx = llvm.x86.mmx.padd.d 0x1c224a0, 0x1c74310, 0x1c241c0 [ORD=455] [ID=170]
0x1c224a0: i64 = Constant<607> [ORD=376] [ID=35]
0x1c74310: x86mmx = llvm.x86.mmx.punpckhbw 0x1c229a0, 0x1c8a240, 0x1c8a440 [ORD=445] [ID=168]
0x1c229a0: i64 = Constant<663> [ORD=368] [ID=34]
0x1c8a240: x86mmx = llvm.x86.mmx.pand 0x1c249c0, 0x1c8a140, 0x1c20550 [ORD=433] [ID=166]
0x1c249c0: i64 = Constant<615> [ORD=356] [ID=32]
0x1c8a140: x86mmx = llvm.x86.mmx.psrli.q 0x1c242c0, 0x1c49ad0, 0x1c243c0 [ORD=432] [ID=164]
0x1c242c0: i64 = Constant<653> [ORD=354] [ID=31]
0x1c49ad0: x86mmx,ch = load 0x1c89d40, 0x1c8a640, 0x1c1fb50 [ID=163]
0x1c8a640: i64 = or 0x1c268f0, 0x1c89a40 [ID=61] dbg:HDAA_fmt_plug.c:345:20
0x1c268f0: i64 = FrameIndex<1> [ORD=342] [ID=25]
0x1c89a40: i64 = Constant<8> [ORD=426] [ID=37]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c243c0: i32 = Constant<4> [ORD=354] [ID=30]
0x1c20550: x86mmx,ch = load 0x1c246c0, 0x1c2cf50, 0x1c1fb50 [ID=81]
0x1c2cf50: i64 = FrameIndex<5> [ID=46]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c8a440: x86mmx = llvm.x86.mmx.pand 0x1c249c0, 0x1c49ad0, 0x1c20550 [ORD=435] [ID=165]
0x1c249c0: i64 = Constant<615> [ORD=356] [ID=32]
0x1c49ad0: x86mmx,ch = load 0x1c89d40, 0x1c8a640, 0x1c1fb50 [ID=163]
0x1c8a640: i64 = or 0x1c268f0, 0x1c89a40 [ID=61] dbg:HDAA_fmt_plug.c:345:20
0x1c268f0: i64 = FrameIndex<1> [ORD=342] [ID=25]
0x1c89a40: i64 = Constant<8> [ORD=426] [ID=37]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c20550: x86mmx,ch = load 0x1c246c0, 0x1c2cf50, 0x1c1fb50 [ID=81]
0x1c2cf50: i64 = FrameIndex<5> [ID=46]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c241c0: x86mmx = bitcast 0x1c245c0 [ID=76]
0x1c245c0: i64,ch = load 0x1b8cb58, 0x1c49ed0, 0x1c1fb50 [ID=70]
0x1c49ed0: i64 = X86ISD::Wrapper 0x1c4a1d0 [ID=64]
0x1c4a1d0: i64 = TargetConstantPool<<4 x i32> > 0 [ID=49]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c6fb80: x86mmx = llvm.x86.mmx.padd.d 0x1c224a0, 0x1c6f680, 0x1c255e0 [ORD=482] [ID=178]
0x1c224a0: i64 = Constant<607> [ORD=376] [ID=35]
0x1c6f680: x86mmx = llvm.x86.mmx.pmull.w 0x1c2a630, 0x1c8b250, 0x1c26ff0 [ORD=475] [ID=176]
0x1c2a630: i64 = Constant<635> [ORD=402] [ID=36]
0x1c8b250: x86mmx = llvm.x86.mmx.pand 0x1c249c0, 0x1c8b150, 0x1c22aa0 [ORD=465] [ID=174]
0x1c249c0: i64 = Constant<615> [ORD=356] [ID=32]
0x1c8b150: x86mmx = llvm.x86.mmx.psrli.q 0x1c242c0, 0x1c74b10, 0x1c243c0 [ORD=464] [ID=172]
0x1c242c0: i64 = Constant<653> [ORD=354] [ID=31]
0x1c74b10: x86mmx = llvm.x86.mmx.padd.d 0x1c224a0, 0x1c74310, 0x1c241c0 [ORD=455] [ID=170]
0x1c224a0: i64 = Constant<607> [ORD=376] [ID=35]
0x1c74310: x86mmx = llvm.x86.mmx.punpckhbw 0x1c229a0, 0x1c8a240, 0x1c8a440 [ORD=445] [ID=168]
0x1c229a0: i64 = Constant<663> [ORD=368] [ID=34]
0x1c8a240: x86mmx = llvm.x86.mmx.pand 0x1c249c0, 0x1c8a140, 0x1c20550 [ORD=433] [ID=166]
0x1c249c0: i64 = Constant<615> [ORD=356] [ID=32]
0x1c8a140: x86mmx = llvm.x86.mmx.psrli.q 0x1c242c0, 0x1c49ad0, 0x1c243c0 [ORD=432] [ID=164]
0x1c20550: x86mmx,ch = load 0x1c246c0, 0x1c2cf50, 0x1c1fb50 [ID=81]
0x1c8a440: x86mmx = llvm.x86.mmx.pand 0x1c249c0, 0x1c49ad0, 0x1c20550 [ORD=435] [ID=165]
0x1c249c0: i64 = Constant<615> [ORD=356] [ID=32]
0x1c49ad0: x86mmx,ch = load 0x1c89d40, 0x1c8a640, 0x1c1fb50 [ID=163]
0x1c20550: x86mmx,ch = load 0x1c246c0, 0x1c2cf50, 0x1c1fb50 [ID=81]
0x1c241c0: x86mmx = bitcast 0x1c245c0 [ID=76]
0x1c245c0: i64,ch = load 0x1b8cb58, 0x1c49ed0, 0x1c1fb50 [ID=70]
0x1c49ed0: i64 = X86ISD::Wrapper 0x1c4a1d0 [ID=64]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c243c0: i32 = Constant<4> [ORD=354] [ID=30]
0x1c22aa0: x86mmx,ch = load 0x1c20450, 0x1c2dc60, 0x1c1fb50 [ID=82]
0x1c2dc60: i64 = FrameIndex<4> [ID=45]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c26ff0: x86mmx = bitcast 0x1c1ff50 [ID=78]
0x1c1ff50: i64,ch = load 0x1b8cb58, 0x1c21da0, 0x1c1fb50 [ID=72]
0x1c21da0: i64 = X86ISD::Wrapper 0x1c221a0 [ID=66]
0x1c221a0: i64 = TargetConstantPool<<4 x i32> > 0 [ID=51]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c255e0: x86mmx = bitcast 0x1c2da60 [ID=79]
0x1c2da60: i64,ch = load 0x1b8cb58, 0x1c20150, 0x1c1fb50 [ID=73]
0x1c20150: i64 = X86ISD::Wrapper 0x1c20350 [ID=67]
0x1c20350: i64 = TargetConstantPool<<4 x i32> > 0 [ID=52]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
0x1c988d0: i64 = add 0x1c25ce0, 0x1c2d760 [ORD=491] [ID=56] dbg:HDAA_fmt_plug.c:373:17
0x1c25ce0: i64 = FrameIndex<2> [ORD=347] [ID=27]
0x1c2d760: i64 = Constant<24> [ORD=318] [ID=2]
0x1c1fb50: i64 = undef [ORD=319] [ID=4]
make[1]: *** [HDAA_fmt_plug.o] Error 1
make: *** [default] Error 2
```
May be clang is among the
```
443 packages can be updated.
280 updates are security updates.
```
On bull:
```
909 packages can be updated.
677 updates are security updates.
[...]
frank@bull:~$ clang --version
Ubuntu clang version 3.0-6ubuntu3 (tags/RELEASE_30/final) (based on LLVM 3.0)
Target: x86_64-pc-linux-gnu
Thread model: posix
```
Trying to build on bull produces the same results as on well.
",1,clang build fails on well and bull on super clang isn t installed on well frank well make s distclean configure cc clang disable cuda make s clean make s run john test format opencl checking build system type unknown linux gnu checking host system type unknown linux gnu checking whether to compile using mpi no checking for gcc clang checking whether the c compiler works yes clang warning argument unused during compilation arch host configure creating dynamic big crypt c checking for john local conf exists configured for building john the ripper jumbo target cpu avx bit le aes ni support run time detection target os linux gnu cross compiling no legacy arch header h optional libraries features found experimental code no openmpi support default disabled no fork support yes openmp support no opencl support yes cuda support no generic crypt format yes rexgen extra cracking mode no gmp prince mode and faster srp formats yes pcap and sipdump yes z pkzip format yes extra decompression logic yes bit integer faster prince mode yes memory map share page large files yes development options these may hurt performance when enabled memdbg memory debugging settings disabled addresssanitizer asan disabled undefinedbehaviorsanitizer ubsan disabled install missing libraries to get any needed features that were omitted configure finished now make clean make s to compile clang warning argument unused during compilation arch host clang warning argument unused during compilation arch host in file included from dynamic big crypt c gost h warning bswap macro redefined define bswap x jtr swap x usr include byteswap h note previous definition is here define bswap x bswap x in file included from dynamic big crypt c gost h warning bswap macro redefined define bswap x jtr swap x usr include byteswap h note previous definition is here define bswap x bswap x warnings generated clang warning argument unused during compilation arch host in file included from dynamic compiler c gost h warning bswap macro redefined define bswap x jtr swap x usr include byteswap h note previous definition is here define bswap x bswap x in file included from dynamic compiler c gost h warning bswap macro redefined define bswap x jtr swap x usr include byteswap h note previous definition is here define bswap x bswap x warnings generated clang warning argument unused during compilation arch host clang warning argument unused during compilation arch host fatal error error in backend cannot select ch store dbg hdaa fmt plug c llvm mmx padd d constant llvm mmx padd d constant llvm mmx punpckhbw constant llvm mmx pand constant llvm mmx psrli q constant ch load or dbg hdaa fmt plug c frameindex constant undef constant ch load frameindex undef llvm mmx pand constant ch load or dbg hdaa fmt plug c frameindex constant undef ch load frameindex undef bitcast ch load wrapper targetconstantpool undef llvm mmx padd d constant llvm mmx pmull w constant llvm mmx pand constant llvm mmx psrli q constant llvm mmx padd d constant llvm mmx punpckhbw constant llvm mmx pand constant llvm mmx psrli q ch load llvm mmx pand constant ch load ch load bitcast ch load wrapper undef constant ch load frameindex undef bitcast ch load wrapper targetconstantpool undef bitcast ch load wrapper targetconstantpool undef add dbg hdaa fmt plug c frameindex constant undef make error make error may be clang is among the packages can be updated updates are security updates on bull packages can be updated updates are security updates frank bull clang version ubuntu clang version tags release final based on llvm target pc linux gnu thread model posix trying to build on bull produces the same results as on well ,1
436017,12544135055.0,IssuesEvent,2020-06-05 16:43:04,oppia/oppia-android,https://api.github.com/repos/oppia/oppia-android,closed, HomeFragment - Tablet (Landscape) (Lowfi),Priority: Essential Status: Pending verification Type: Task Where: Starting flows Workstream: Lowfi UI,"Mocks: https://xd.adobe.com/view/d405de00-a871-4f0f-73a0-f8acef30349b-a234/screen/5434c52d-b32b-4666-8b28-cf03b3cbd4cd/L-Home-Screen
Implement low-fi UI for **HomeFragment** tablet landscape mode
**Target PR date**: 7 June 2020
**Target completion date**: 10 June 2020",1.0," HomeFragment - Tablet (Landscape) (Lowfi) - Mocks: https://xd.adobe.com/view/d405de00-a871-4f0f-73a0-f8acef30349b-a234/screen/5434c52d-b32b-4666-8b28-cf03b3cbd4cd/L-Home-Screen
Implement low-fi UI for **HomeFragment** tablet landscape mode
**Target PR date**: 7 June 2020
**Target completion date**: 10 June 2020",0, homefragment tablet landscape lowfi mocks implement low fi ui for homefragment tablet landscape mode target pr date june target completion date june ,0
38,2723179239.0,IssuesEvent,2015-04-14 10:38:15,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,compile error on OS X with gcc or icc,portability,"**with gcc:**
gcc-4.9 -DAC_BUILT -DARCH_LITTLE_ENDIAN=1 -march=native -mavx -c -fopenmp -DUNDERSCORES -DBSD -DALIGN_LOG x86-64.S
x86-64.S:1651:no such instruction: `xgetbv'
**with icc:**
icc -DAC_BUILT -march=native -mavx -c -g -O2 -I/usr/local/include -DARCH_LITTLE_ENDIAN=1 -fopenmp -D_THREAD_SAFE -pthread -I/usr/local/include listconf.c -o listconf.o
icc: command line warning #10120: overriding '-march=native' with '-mavx'
listconf.c(193): error: extra text after expected end of number
printf(""clang version: %s\n"", \__clang_version__);
^
The latter seems to be a bug in icc. See this [post](https://software.intel.com/en-us/forums/topic/549663).
Specs:
OS X 10.10.3
gcc 4.9.2
icc 15.0.2
",True,"compile error on OS X with gcc or icc - **with gcc:**
gcc-4.9 -DAC_BUILT -DARCH_LITTLE_ENDIAN=1 -march=native -mavx -c -fopenmp -DUNDERSCORES -DBSD -DALIGN_LOG x86-64.S
x86-64.S:1651:no such instruction: `xgetbv'
**with icc:**
icc -DAC_BUILT -march=native -mavx -c -g -O2 -I/usr/local/include -DARCH_LITTLE_ENDIAN=1 -fopenmp -D_THREAD_SAFE -pthread -I/usr/local/include listconf.c -o listconf.o
icc: command line warning #10120: overriding '-march=native' with '-mavx'
listconf.c(193): error: extra text after expected end of number
printf(""clang version: %s\n"", \__clang_version__);
^
The latter seems to be a bug in icc. See this [post](https://software.intel.com/en-us/forums/topic/549663).
Specs:
OS X 10.10.3
gcc 4.9.2
icc 15.0.2
",1,compile error on os x with gcc or icc with gcc gcc dac built darch little endian march native mavx c fopenmp dunderscores dbsd dalign log s s no such instruction xgetbv with icc icc dac built march native mavx c g i usr local include darch little endian fopenmp d thread safe pthread i usr local include listconf c o listconf o icc command line warning overriding march native with mavx listconf c error extra text after expected end of number printf clang version s n clang version the latter seems to be a bug in icc see this specs os x gcc icc ,1
83,3024227439.0,IssuesEvent,2015-08-02 11:38:10,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,LM-opencl should support OpenCL 1.1,portability,"Super currently doesn't build lm-opencl after `scl enable devtoolset-3 bash`. We should support OpenCL 1.1.
Having said that, I have added `HAVE_OPENCL_1_2` macro that actually tests the *lib* for presence. But please note that the *device* (as in run-time build of kernel) may **still** be only 1.1, even though your headers and driver platform are 1.2. At least this is the case for 1.1 vs. 1.0.",True,"LM-opencl should support OpenCL 1.1 - Super currently doesn't build lm-opencl after `scl enable devtoolset-3 bash`. We should support OpenCL 1.1.
Having said that, I have added `HAVE_OPENCL_1_2` macro that actually tests the *lib* for presence. But please note that the *device* (as in run-time build of kernel) may **still** be only 1.1, even though your headers and driver platform are 1.2. At least this is the case for 1.1 vs. 1.0.",1,lm opencl should support opencl super currently doesn t build lm opencl after scl enable devtoolset bash we should support opencl having said that i have added have opencl macro that actually tests the lib for presence but please note that the device as in run time build of kernel may still be only even though your headers and driver platform are at least this is the case for vs ,1
147307,13205642297.0,IssuesEvent,2020-08-14 18:23:50,DS4PS/cpp-526-sum-2020,https://api.github.com/repos/DS4PS/cpp-526-sum-2020,opened,Chapter 1 - Arithmetic in R and Function sum(),documentation final-dashboard,"I'm having issues with function `sum()` and `NA` values.
**Expectation:** I expected to get the sum of all values in variable `x`.
",1.0,"Chapter 1 - Arithmetic in R and Function sum() - I'm having issues with function `sum()` and `NA` values.
**Expectation:** I expected to get the sum of all values in variable `x`.
",0,chapter arithmetic in r and function sum i m having issues with function sum and na values expectation i expected to get the sum of all values in variable x ,0
27406,21698978381.0,IssuesEvent,2022-05-10 00:24:38,celeritas-project/celeritas,https://api.github.com/repos/celeritas-project/celeritas,closed,Prototype performance portability,infrastructure,Do an initial port of enough core Celeritas components to at least run some demo applications on non-CUDA hardware.,1.0,Prototype performance portability - Do an initial port of enough core Celeritas components to at least run some demo applications on non-CUDA hardware.,0,prototype performance portability do an initial port of enough core celeritas components to at least run some demo applications on non cuda hardware ,0
753768,26360830604.0,IssuesEvent,2023-01-11 13:16:13,Dessia-tech/dessia_common,https://api.github.com/repos/Dessia-tech/dessia_common,closed,Problem with references in case of custom eq,Priority: High Status: To be discussed,"Redefining eq of an object may create side effects in serialization.
Subobjects may not be equal strictly but can be pointed outside of the object, and won't be in the memo
solution: when an object is matched in the memo, keep exploring its subattributes in case they are used elsewhere",1.0,"Problem with references in case of custom eq - Redefining eq of an object may create side effects in serialization.
Subobjects may not be equal strictly but can be pointed outside of the object, and won't be in the memo
solution: when an object is matched in the memo, keep exploring its subattributes in case they are used elsewhere",0,problem with references in case of custom eq redefining eq of an object may create side effects in serialization subobjects may not be equal strictly but can be pointed outside of the object and won t be in the memo solution when an object is matched in the memo keep exploring its subattributes in case they are used elsewhere,0
201,4164691827.0,IssuesEvent,2016-06-19 00:14:53,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,opened,Pseudo intrinsics depending on compiler optimization,portability,"See #2146.
```
#if ARCH_BITS == 32
(...)
#undef _mm_insert_epi64
#define _mm_insert_epi64 my__mm_insert_epi64
static inline __m128i _mm_insert_epi64(__m128i a, uint64_t b, int c) {
c <<= 1;
a = _mm_insert_epi32(a, (unsigned int)b, c);
return _mm_insert_epi32(a, (unsigned int)(b >> 32), c + 1);
}
#endif
```
Without compiler optimizations, the above does not compile.
```
gost3411-2012-sse41_plug.c: In function ‘my__mm_insert_epi64’:
gost3411-2012-sse41_plug.c:25:6: error: selector must be an integer constant in the range 0..3
a = _mm_insert_epi32(a, (unsigned int)b, c);
^
gost3411-2012-sse41_plug.c:26:9: error: selector must be an integer constant in the range 0..3
return _mm_insert_epi32(a, (unsigned int)(b >> 32), c + 1);
^
```
Reason is the third parameter of _mm_insert_epi32 needs to be a constant so this code really relies on optimizations from the compiler. I suppose we could get rid of this problem by using a function-like macro.",True,"Pseudo intrinsics depending on compiler optimization - See #2146.
```
#if ARCH_BITS == 32
(...)
#undef _mm_insert_epi64
#define _mm_insert_epi64 my__mm_insert_epi64
static inline __m128i _mm_insert_epi64(__m128i a, uint64_t b, int c) {
c <<= 1;
a = _mm_insert_epi32(a, (unsigned int)b, c);
return _mm_insert_epi32(a, (unsigned int)(b >> 32), c + 1);
}
#endif
```
Without compiler optimizations, the above does not compile.
```
gost3411-2012-sse41_plug.c: In function ‘my__mm_insert_epi64’:
gost3411-2012-sse41_plug.c:25:6: error: selector must be an integer constant in the range 0..3
a = _mm_insert_epi32(a, (unsigned int)b, c);
^
gost3411-2012-sse41_plug.c:26:9: error: selector must be an integer constant in the range 0..3
return _mm_insert_epi32(a, (unsigned int)(b >> 32), c + 1);
^
```
Reason is the third parameter of _mm_insert_epi32 needs to be a constant so this code really relies on optimizations from the compiler. I suppose we could get rid of this problem by using a function-like macro.",1,pseudo intrinsics depending on compiler optimization see if arch bits undef mm insert define mm insert my mm insert static inline mm insert a t b int c c a mm insert a unsigned int b c return mm insert a unsigned int b c endif without compiler optimizations the above does not compile plug c in function ‘my mm insert ’ plug c error selector must be an integer constant in the range a mm insert a unsigned int b c plug c error selector must be an integer constant in the range return mm insert a unsigned int b c reason is the third parameter of mm insert needs to be a constant so this code really relies on optimizations from the compiler i suppose we could get rid of this problem by using a function like macro ,1
230536,17620507811.0,IssuesEvent,2021-08-18 14:47:58,Angelinaaaaaaa/Lentes,https://api.github.com/repos/Angelinaaaaaaa/Lentes,opened,Avaliação da Proposta de Trabalho,bug documentation enhancement,"EQUIPE
Ok.
PROBLEMA
Ok. Identificar necessidade de lentes de contato e recomendar um tipo de lente.
DATASET
Ok. É o dataset Lenses da UCI: https://archive.ics.uci.edu/ml/datasets/Lenses
TÉCNICA
PARCIALMENTE CORRETO. Problemas:
• Conforme enunciado a equipe já deveria descrever como o problema será modelado para aplicação da técnica. Ou seja, quais são as variáveis consideradas pela árvore de decisão? Quais são os valores possíveis de cada variável? Qual é a saída da árvore de decisão e possíveis valores? Como será encontrada a árvore de decisão adequada? Qual estratégia de validação cruzada pretende utilizar para determinar a melhor árvore de decisão? Qual métrica será utilizada para medir o desempenho destas árvores?
OBSERVAÇÕES
A entrega da proposta do trabalho foi realizada em 14/08 através do compartilhamento do projeto no Github, portanto com 2 semanas de atraso. Conforme enunciado, entregas em atraso estarão sujeitas a desconto na nota final. Neste caso, será aplicado um desconto de 20% na nota final (este desconto é menor do que a pontuação indicada no enunciado).
Quando for realizada a avaliação do trabalho completo, será verificado se a equipe corrigiu os problemas acima descritos. Se desejar, a equipe pode comparecer em alguma aula síncrona para esclarecimentos, ou então agendar horário extra-classe com o professor.
",1.0,"Avaliação da Proposta de Trabalho - EQUIPE
Ok.
PROBLEMA
Ok. Identificar necessidade de lentes de contato e recomendar um tipo de lente.
DATASET
Ok. É o dataset Lenses da UCI: https://archive.ics.uci.edu/ml/datasets/Lenses
TÉCNICA
PARCIALMENTE CORRETO. Problemas:
• Conforme enunciado a equipe já deveria descrever como o problema será modelado para aplicação da técnica. Ou seja, quais são as variáveis consideradas pela árvore de decisão? Quais são os valores possíveis de cada variável? Qual é a saída da árvore de decisão e possíveis valores? Como será encontrada a árvore de decisão adequada? Qual estratégia de validação cruzada pretende utilizar para determinar a melhor árvore de decisão? Qual métrica será utilizada para medir o desempenho destas árvores?
OBSERVAÇÕES
A entrega da proposta do trabalho foi realizada em 14/08 através do compartilhamento do projeto no Github, portanto com 2 semanas de atraso. Conforme enunciado, entregas em atraso estarão sujeitas a desconto na nota final. Neste caso, será aplicado um desconto de 20% na nota final (este desconto é menor do que a pontuação indicada no enunciado).
Quando for realizada a avaliação do trabalho completo, será verificado se a equipe corrigiu os problemas acima descritos. Se desejar, a equipe pode comparecer em alguma aula síncrona para esclarecimentos, ou então agendar horário extra-classe com o professor.
",0,avaliação da proposta de trabalho equipe ok problema ok identificar necessidade de lentes de contato e recomendar um tipo de lente dataset ok é o dataset lenses da uci técnica parcialmente correto problemas • conforme enunciado a equipe já deveria descrever como o problema será modelado para aplicação da técnica ou seja quais são as variáveis consideradas pela árvore de decisão quais são os valores possíveis de cada variável qual é a saída da árvore de decisão e possíveis valores como será encontrada a árvore de decisão adequada qual estratégia de validação cruzada pretende utilizar para determinar a melhor árvore de decisão qual métrica será utilizada para medir o desempenho destas árvores observações a entrega da proposta do trabalho foi realizada em através do compartilhamento do projeto no github portanto com semanas de atraso conforme enunciado entregas em atraso estarão sujeitas a desconto na nota final neste caso será aplicado um desconto de na nota final este desconto é menor do que a pontuação indicada no enunciado quando for realizada a avaliação do trabalho completo será verificado se a equipe corrigiu os problemas acima descritos se desejar a equipe pode comparecer em alguma aula síncrona para esclarecimentos ou então agendar horário extra classe com o professor ,0
33134,27251680184.0,IssuesEvent,2023-02-22 08:36:54,woocommerce/woocommerce,https://api.github.com/repos/woocommerce/woocommerce,closed,Add milestone to PRs based on paths,type: task tool: monorepo infrastructure,"Currently our automated workflow is to add milestones to PRs when it also contains the label `plugins: woocommerce`. However this can sometimes be missed if the PR does not contain this label and gets merged [for example](https://github.com/woocommerce/woocommerce/pull/34382).
The [workflow](https://github.com/woocommerce/woocommerce/blob/trunk/.github/workflows/pull-request-post-merge-processing.yml) had an initial acceptance criteria set [here](36-gh-woocommerce/platform-private). I believe it was so that other items in the monorepo such as tools does not get picked up as a release PR.
The better solution ""if possible"" perhaps is to exclude the paths that are not going into WooCommerce release. For example https://github.com/woocommerce/woocommerce/tree/trunk/tools
Acceptance criteria:
* Add milestone to PRs only if the PR contains work that goes into the WooCommerce release.",1.0,"Add milestone to PRs based on paths - Currently our automated workflow is to add milestones to PRs when it also contains the label `plugins: woocommerce`. However this can sometimes be missed if the PR does not contain this label and gets merged [for example](https://github.com/woocommerce/woocommerce/pull/34382).
The [workflow](https://github.com/woocommerce/woocommerce/blob/trunk/.github/workflows/pull-request-post-merge-processing.yml) had an initial acceptance criteria set [here](36-gh-woocommerce/platform-private). I believe it was so that other items in the monorepo such as tools does not get picked up as a release PR.
The better solution ""if possible"" perhaps is to exclude the paths that are not going into WooCommerce release. For example https://github.com/woocommerce/woocommerce/tree/trunk/tools
Acceptance criteria:
* Add milestone to PRs only if the PR contains work that goes into the WooCommerce release.",0,add milestone to prs based on paths currently our automated workflow is to add milestones to prs when it also contains the label plugins woocommerce however this can sometimes be missed if the pr does not contain this label and gets merged the had an initial acceptance criteria set gh woocommerce platform private i believe it was so that other items in the monorepo such as tools does not get picked up as a release pr the better solution if possible perhaps is to exclude the paths that are not going into woocommerce release for example acceptance criteria add milestone to prs only if the pr contains work that goes into the woocommerce release ,0
663,8759625215.0,IssuesEvent,2018-12-15 18:12:13,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,Formats failing on non-X86,bug portability,"**s390x**
Target CPU ................................. s390x, 64-bit BE
Build: linux-gnu 64-bit s390x AC OMP
```
Testing: adxcrypt [IBM/Toshiba 4690 - ADXCRYPT 32/64]... (4xOMP) FAILED (cmp_all(1048576))
Testing: enpass, Enpass Password Manager [PBKDF2-SHA1 32/64]... (4xOMP) FAILED (cmp_all(32))
Testing: monero, monero Wallet [Pseudo-AES / ChaCha / Various 64/64]... (4xOMP) FAILED (cmp_all(4))
Testing: STRIP, Password Manager [PBKDF2-SHA1 32/64]... (4xOMP) FAILED (cmp_all(256))
4 out of 399 tests have FAILED
FAILED: -test-full=0
```",True,"Formats failing on non-X86 - **s390x**
Target CPU ................................. s390x, 64-bit BE
Build: linux-gnu 64-bit s390x AC OMP
```
Testing: adxcrypt [IBM/Toshiba 4690 - ADXCRYPT 32/64]... (4xOMP) FAILED (cmp_all(1048576))
Testing: enpass, Enpass Password Manager [PBKDF2-SHA1 32/64]... (4xOMP) FAILED (cmp_all(32))
Testing: monero, monero Wallet [Pseudo-AES / ChaCha / Various 64/64]... (4xOMP) FAILED (cmp_all(4))
Testing: STRIP, Password Manager [PBKDF2-SHA1 32/64]... (4xOMP) FAILED (cmp_all(256))
4 out of 399 tests have FAILED
FAILED: -test-full=0
```",1,formats failing on non target cpu bit be build linux gnu bit ac omp testing adxcrypt failed cmp all testing enpass enpass password manager failed cmp all testing monero monero wallet failed cmp all testing strip password manager failed cmp all out of tests have failed failed test full ,1
697,9419825970.0,IssuesEvent,2019-04-10 23:26:23,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,Add int128 support for 32-bit builds using 64-bit lo/hi structs,portability,"This is trivial and very similar to math.h's 64-bit functions that use 32-bit lo/hi structs. We should add it to mpz_int128.h with #ifdef's so it's used instead of int128 when needed. This could also be contributed to [upstream PRINCE](https://github.com/jsteube/princeprocessor).
",True,"Add int128 support for 32-bit builds using 64-bit lo/hi structs - This is trivial and very similar to math.h's 64-bit functions that use 32-bit lo/hi structs. We should add it to mpz_int128.h with #ifdef's so it's used instead of int128 when needed. This could also be contributed to [upstream PRINCE](https://github.com/jsteube/princeprocessor).
",1,add support for bit builds using bit lo hi structs this is trivial and very similar to math h s bit functions that use bit lo hi structs we should add it to mpz h with ifdef s so it s used instead of when needed this could also be contributed to ,1
282179,8704290844.0,IssuesEvent,2018-12-05 18:59:52,AICrowd/AIcrowd,https://api.github.com/repos/AICrowd/AIcrowd,closed,Drafts challenges are publicly visible (no access check done),high priority,"_From @spMohanty on April 26, 2018 15:39_
https://www.crowdai.org/challenges/marlo-2018
_Copied from original issue: crowdAI/crowdai#724_",1.0,"Drafts challenges are publicly visible (no access check done) - _From @spMohanty on April 26, 2018 15:39_
https://www.crowdai.org/challenges/marlo-2018
_Copied from original issue: crowdAI/crowdai#724_",0,drafts challenges are publicly visible no access check done from spmohanty on april copied from original issue crowdai crowdai ,0
176821,13654490316.0,IssuesEvent,2020-09-27 17:39:40,prokuranepal/DMS_React,https://api.github.com/repos/prokuranepal/DMS_React,opened,Tests for Weather components,good first issue react tests," WeatherDetail, WeatherList need tests
WeatherDetail.js
WeatherList.js
The tests should perform at least
- Component testing for the components present and their numbers
- Simulation for events like Button Press
- Props testing
The project tests are based on jest and enzyme. Tests like test1 or test2 could serve as references.
",1.0,"Tests for Weather components - WeatherDetail, WeatherList need tests
WeatherDetail.js
WeatherList.js
The tests should perform at least
- Component testing for the components present and their numbers
- Simulation for events like Button Press
- Props testing
The project tests are based on jest and enzyme. Tests like test1 or test2 could serve as references.
",0,tests for weather components weatherdetail weatherlist need tests the tests should perform at least component testing for the components present and their numbers simulation for events like button press props testing the project tests are based on jest and enzyme tests like or could serve as references ,0
112875,9604757401.0,IssuesEvent,2019-05-10 21:01:15,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,"Failing test: UI Functional Tests.test/functional/apps/visualize/_input_control_vis·js - visualize app input control visualization chained controls ""after all"" hook",failed-test,"A test failed on a tracked branch
```
{ NoSuchSessionError: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
at promise.finally (node_modules/selenium-webdriver/lib/webdriver.js:726:38)
at Object.thenFinally [as finally] (node_modules/selenium-webdriver/lib/promise.js:124:12)
at process._tickCallback (internal/process/next_tick.js:68:7) name: 'NoSuchSessionError', remoteStacktrace: '' }
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/JOB=kibana-ciGroup10,node=immutable/70/)
",1.0,"Failing test: UI Functional Tests.test/functional/apps/visualize/_input_control_vis·js - visualize app input control visualization chained controls ""after all"" hook - A test failed on a tracked branch
```
{ NoSuchSessionError: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
at promise.finally (node_modules/selenium-webdriver/lib/webdriver.js:726:38)
at Object.thenFinally [as finally] (node_modules/selenium-webdriver/lib/promise.js:124:12)
at process._tickCallback (internal/process/next_tick.js:68:7) name: 'NoSuchSessionError', remoteStacktrace: '' }
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/JOB=kibana-ciGroup10,node=immutable/70/)
",0,failing test ui functional tests test functional apps visualize input control vis·js visualize app input control visualization chained controls after all hook a test failed on a tracked branch nosuchsessionerror this driver instance does not have a valid session id did you call webdriver quit and may no longer be used at promise finally node modules selenium webdriver lib webdriver js at object thenfinally node modules selenium webdriver lib promise js at process tickcallback internal process next tick js name nosuchsessionerror remotestacktrace first failure ,0
68808,13183854873.0,IssuesEvent,2020-08-12 18:17:34,robocorp/robotframework-lsp,https://api.github.com/repos/robocorp/robotframework-lsp,closed,Packages listed from the cloud should be sorted.,enhancement robocode,The last one selected for a given directory should be at the top and others should be sorted by the name.,1.0,Packages listed from the cloud should be sorted. - The last one selected for a given directory should be at the top and others should be sorted by the name.,0,packages listed from the cloud should be sorted the last one selected for a given directory should be at the top and others should be sorted by the name ,0
612,8257703405.0,IssuesEvent,2018-09-13 06:35:25,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,clang debug build problem on 32bit linux,notes/external issues portability,"With
```
make distclean; CC=clang ./configure && make debug
```
I get
```
clang -DAC_BUILT -march=native -mssse3 -c -g -O2 -I/usr/local/include -Wall -Wdeclaration-after-statement -fomit-frame-pointer -Wno-deprecated-declarations -Wno-format-extra-args -Qunused-arguments -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE -DHAVE_NSS -I/usr/include/nss3 -I/usr/include/nspr4 -pthread -O0 -DDEBUG -funroll-loops gost.c -o gost.o
gost.c:147:4: error: inline assembly requires more registers than available
""movl %%ebx, %13\n\t""
^
gost.c:147:4: error: inline assembly requires more registers than available
gost.c:147:4: error: inline assembly requires more registers than available
gost.c:147:4: error: inline assembly requires more registers than available
gost.c:147:4: error: inline assembly requires more registers than available
gost.c:147:4: error: inline assembly requires more registers than available
gost.c:147:4: error: inline assembly requires more registers than available
7 errors generated.
make[2]: *** [gost.o] Error 1
make[2]: Leaving directory `/home/fd/git/JtR/src'
make[1]: *** [default] Error 2
make[1]: Leaving directory `/home/fd/git/JtR/src'
make: *** [debug] Error 2
```
When I change the Makefile generated by ./configure to use -O1 instead of -O0 for debug targets, the build succeeds.
gost.c is the only source which requires -O1, all other object files can be built with -O0.
With gcc, the debug build succeeds with -O0.
I am not sure what would be the best option to resolve the issue.
Generally switching to -O1 for debug builds (like Makefile.legacy does)?
Just using -O1 for 32bit builds, or only for 32bit clang builds?
",True,"clang debug build problem on 32bit linux - With
```
make distclean; CC=clang ./configure && make debug
```
I get
```
clang -DAC_BUILT -march=native -mssse3 -c -g -O2 -I/usr/local/include -Wall -Wdeclaration-after-statement -fomit-frame-pointer -Wno-deprecated-declarations -Wno-format-extra-args -Qunused-arguments -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE -DHAVE_NSS -I/usr/include/nss3 -I/usr/include/nspr4 -pthread -O0 -DDEBUG -funroll-loops gost.c -o gost.o
gost.c:147:4: error: inline assembly requires more registers than available
""movl %%ebx, %13\n\t""
^
gost.c:147:4: error: inline assembly requires more registers than available
gost.c:147:4: error: inline assembly requires more registers than available
gost.c:147:4: error: inline assembly requires more registers than available
gost.c:147:4: error: inline assembly requires more registers than available
gost.c:147:4: error: inline assembly requires more registers than available
gost.c:147:4: error: inline assembly requires more registers than available
7 errors generated.
make[2]: *** [gost.o] Error 1
make[2]: Leaving directory `/home/fd/git/JtR/src'
make[1]: *** [default] Error 2
make[1]: Leaving directory `/home/fd/git/JtR/src'
make: *** [debug] Error 2
```
When I change the Makefile generated by ./configure to use -O1 instead of -O0 for debug targets, the build succeeds.
gost.c is the only source which requires -O1, all other object files can be built with -O0.
With gcc, the debug build succeeds with -O0.
I am not sure what would be the best option to resolve the issue.
Generally switching to -O1 for debug builds (like Makefile.legacy does)?
Just using -O1 for 32bit builds, or only for 32bit clang builds?
",1,clang debug build problem on linux with make distclean cc clang configure make debug i get clang dac built march native c g i usr local include wall wdeclaration after statement fomit frame pointer wno deprecated declarations wno format extra args qunused arguments d gnu source d file offset bits d source dhave nss i usr include i usr include pthread ddebug funroll loops gost c o gost o gost c error inline assembly requires more registers than available movl ebx n t gost c error inline assembly requires more registers than available gost c error inline assembly requires more registers than available gost c error inline assembly requires more registers than available gost c error inline assembly requires more registers than available gost c error inline assembly requires more registers than available gost c error inline assembly requires more registers than available errors generated make error make leaving directory home fd git jtr src make error make leaving directory home fd git jtr src make error when i change the makefile generated by configure to use instead of for debug targets the build succeeds gost c is the only source which requires all other object files can be built with with gcc the debug build succeeds with i am not sure what would be the best option to resolve the issue generally switching to for debug builds like makefile legacy does just using for builds or only for clang builds ,1
67,2966560234.0,IssuesEvent,2015-07-12 00:45:08,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,opened,"$pass, $salt vs. $p, $s",portability,"Hashcat and InsidePro use $pass and $salt while we use $p and $s. Would it be doable (with moderate effort) for us to support both? I think this mostly or solely applies to self-contained dynamic.
We have to unify them somewhere in the process so they'd still always end up in their short form in eg. pot files (`@dynamic` tags). Basically we could do some (early) string replacements in options.format, and be done with it.",True,"$pass, $salt vs. $p, $s - Hashcat and InsidePro use $pass and $salt while we use $p and $s. Would it be doable (with moderate effort) for us to support both? I think this mostly or solely applies to self-contained dynamic.
We have to unify them somewhere in the process so they'd still always end up in their short form in eg. pot files (`@dynamic` tags). Basically we could do some (early) string replacements in options.format, and be done with it.",1, pass salt vs p s hashcat and insidepro use pass and salt while we use p and s would it be doable with moderate effort for us to support both i think this mostly or solely applies to self contained dynamic we have to unify them somewhere in the process so they d still always end up in their short form in eg pot files dynamic tags basically we could do some early string replacements in options format and be done with it ,1
92,3105742715.0,IssuesEvent,2015-08-31 22:36:44,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,opened,OpenCL: Issues with Intel HD Graphics,portability,"OSX, HD Graphics 4000 (driver version 1.2(Jul 29 2015 02:40:37)):
```
$ ../run/john -test=0 -form:opencl
Device 1: HD Graphics 4000
Testing: agilekeychain-opencl, 1Password Agile Keychain [PBKDF2-SHA1 OpenCL AES]... (8xOMP) FAILED (cmp_all(1))
Testing: blockchain-opencl, blockchain My Wallet [PBKDF2-SHA1 OpenCL AES]... (8xOMP) FAILED (cmp_all(1))
Testing: dmg-opencl, Apple DMG [PBKDF2-SHA1 OpenCL 3DES/AES]... (8xOMP) FAILED (cmp_all(1))
Testing: keychain-opencl, Mac OS X Keychain [PBKDF2-SHA1 OpenCL 3DES]... (8xOMP) FAILED (cmp_all(1))
Testing: ODF-opencl [SHA1 OpenCL Blowfish]... (8xOMP) FAILED (cmp_all(1))
Testing: ODF-AES-opencl [SHA256 OpenCL AES]... (8xOMP) FAILED (cmp_all(1))
Testing: PBKDF2-HMAC-SHA512-opencl, GRUB2 / OS X 10.8+, rounds=10000 [PBKDF2-SHA512 OpenCL]... FAILED (cmp_all(1))
Testing: strip-opencl, STRIP Password Manager [PBKDF2-SHA1 OpenCL]... (8xOMP) FAILED (cmp_all(1))
Testing: sxc-opencl, StarOffice .sxc [PBKDF2-SHA1 OpenCL Blowfish]... (8xOMP) FAILED (cmp_all(1))
Testing: zip-opencl, ZIP [PBKDF2-SHA1 OpenCL AES]... (8xOMP) FAILED (cmp_all(1))
10 out of 53 tests have FAILED
```
All failures above are just two kernels; The SHA-1's are the ""unsplit pbkdf2-hmac-sha"" kernel.
Linux, HD Graphics 4600 (driver version 16.4.2.1.39163):
```
$ ../run/john -test=0 -form:opencl -dev=3 | grep -v PASS
Device 3: Intel(R) HD Graphics
Testing: PBKDF2-HMAC-MD4-opencl [PBKDF2-MD4 OpenCL]... FAILED (cmp_all(1))
Testing: PBKDF2-HMAC-MD5-opencl [PBKDF2-MD5 OpenCL]... FAILED (cmp_all(1))
Testing: bcrypt-opencl (""$2a$05"", 32 iterations) [Blowfish OpenCL]... FAILED (cmp_all(1))
Testing: PBKDF2-HMAC-SHA512-opencl, GRUB2 / OS X 10.8+, rounds=10000 [PBKDF2-SHA512 OpenCL]... FAILED (cmp_all(1))
4 out of 53 tests have FAILED
```
And curiously enough:
```
$ ../run/john -test=0 -form:pbkdf2-hmac-m*opencl -dev=3 -force-vector:2
Device 3: Intel(R) HD Graphics
Testing: PBKDF2-HMAC-MD4-opencl [PBKDF2-MD4 OpenCL 2x]... PASS
Testing: PBKDF2-HMAC-MD5-opencl [PBKDF2-MD5 OpenCL 2x]... PASS
All 2 formats passed self-tests!
```",True,"OpenCL: Issues with Intel HD Graphics - OSX, HD Graphics 4000 (driver version 1.2(Jul 29 2015 02:40:37)):
```
$ ../run/john -test=0 -form:opencl
Device 1: HD Graphics 4000
Testing: agilekeychain-opencl, 1Password Agile Keychain [PBKDF2-SHA1 OpenCL AES]... (8xOMP) FAILED (cmp_all(1))
Testing: blockchain-opencl, blockchain My Wallet [PBKDF2-SHA1 OpenCL AES]... (8xOMP) FAILED (cmp_all(1))
Testing: dmg-opencl, Apple DMG [PBKDF2-SHA1 OpenCL 3DES/AES]... (8xOMP) FAILED (cmp_all(1))
Testing: keychain-opencl, Mac OS X Keychain [PBKDF2-SHA1 OpenCL 3DES]... (8xOMP) FAILED (cmp_all(1))
Testing: ODF-opencl [SHA1 OpenCL Blowfish]... (8xOMP) FAILED (cmp_all(1))
Testing: ODF-AES-opencl [SHA256 OpenCL AES]... (8xOMP) FAILED (cmp_all(1))
Testing: PBKDF2-HMAC-SHA512-opencl, GRUB2 / OS X 10.8+, rounds=10000 [PBKDF2-SHA512 OpenCL]... FAILED (cmp_all(1))
Testing: strip-opencl, STRIP Password Manager [PBKDF2-SHA1 OpenCL]... (8xOMP) FAILED (cmp_all(1))
Testing: sxc-opencl, StarOffice .sxc [PBKDF2-SHA1 OpenCL Blowfish]... (8xOMP) FAILED (cmp_all(1))
Testing: zip-opencl, ZIP [PBKDF2-SHA1 OpenCL AES]... (8xOMP) FAILED (cmp_all(1))
10 out of 53 tests have FAILED
```
All failures above are just two kernels; The SHA-1's are the ""unsplit pbkdf2-hmac-sha"" kernel.
Linux, HD Graphics 4600 (driver version 16.4.2.1.39163):
```
$ ../run/john -test=0 -form:opencl -dev=3 | grep -v PASS
Device 3: Intel(R) HD Graphics
Testing: PBKDF2-HMAC-MD4-opencl [PBKDF2-MD4 OpenCL]... FAILED (cmp_all(1))
Testing: PBKDF2-HMAC-MD5-opencl [PBKDF2-MD5 OpenCL]... FAILED (cmp_all(1))
Testing: bcrypt-opencl (""$2a$05"", 32 iterations) [Blowfish OpenCL]... FAILED (cmp_all(1))
Testing: PBKDF2-HMAC-SHA512-opencl, GRUB2 / OS X 10.8+, rounds=10000 [PBKDF2-SHA512 OpenCL]... FAILED (cmp_all(1))
4 out of 53 tests have FAILED
```
And curiously enough:
```
$ ../run/john -test=0 -form:pbkdf2-hmac-m*opencl -dev=3 -force-vector:2
Device 3: Intel(R) HD Graphics
Testing: PBKDF2-HMAC-MD4-opencl [PBKDF2-MD4 OpenCL 2x]... PASS
Testing: PBKDF2-HMAC-MD5-opencl [PBKDF2-MD5 OpenCL 2x]... PASS
All 2 formats passed self-tests!
```",1,opencl issues with intel hd graphics osx hd graphics driver version jul run john test form opencl device hd graphics testing agilekeychain opencl agile keychain failed cmp all testing blockchain opencl blockchain my wallet failed cmp all testing dmg opencl apple dmg failed cmp all testing keychain opencl mac os x keychain failed cmp all testing odf opencl failed cmp all testing odf aes opencl failed cmp all testing hmac opencl os x rounds failed cmp all testing strip opencl strip password manager failed cmp all testing sxc opencl staroffice sxc failed cmp all testing zip opencl zip failed cmp all out of tests have failed all failures above are just two kernels the sha s are the unsplit hmac sha kernel linux hd graphics driver version run john test form opencl dev grep v pass device intel r hd graphics testing hmac opencl failed cmp all testing hmac opencl failed cmp all testing bcrypt opencl iterations failed cmp all testing hmac opencl os x rounds failed cmp all out of tests have failed and curiously enough run john test form hmac m opencl dev force vector device intel r hd graphics testing hmac opencl pass testing hmac opencl pass all formats passed self tests ,1
81,3009755824.0,IssuesEvent,2015-07-28 08:50:36,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,opened,LM-opencl should support OpenCL 1.1,portability,"Super currently doesn't build lm-opencl after `scl enable devtoolset-3 bash`. We should support OpenCL 1.1.
Having said that, I have added `HAVE_OPENCL_1_2` macro that actually tests the *lib* for presence. But please note that the *device* (as in run-time build of kernel) may **still** be only 1.1, even though your headers and driver platform are 1.2.",True,"LM-opencl should support OpenCL 1.1 - Super currently doesn't build lm-opencl after `scl enable devtoolset-3 bash`. We should support OpenCL 1.1.
Having said that, I have added `HAVE_OPENCL_1_2` macro that actually tests the *lib* for presence. But please note that the *device* (as in run-time build of kernel) may **still** be only 1.1, even though your headers and driver platform are 1.2.",1,lm opencl should support opencl super currently doesn t build lm opencl after scl enable devtoolset bash we should support opencl having said that i have added have opencl macro that actually tests the lib for presence but please note that the device as in run time build of kernel may still be only even though your headers and driver platform are ,1
778507,27318748458.0,IssuesEvent,2023-02-24 17:49:18,GoogleContainerTools/skaffold,https://api.github.com/repos/GoogleContainerTools/skaffold,closed,Not able to reference secrets path to home folder,kind/bug priority/p3,"
### Expected behavior
I want to reference a secret file in `build.artifacts.docker.secret.src` in the home directory. This works well if using the realpath but not ~.
### Actual behavior
Using the realpath /home/username/.npmrc works, but using ~/.npmrc doesn't work because skaffold is appending ~/.npmrc to the actual working directory making it search to the wrong place.
I also tried to reference the home directory using $HOME env variable, but it's not a templated field, so doesn't work
### Information
- Skaffold version: skaffold 2.1.0
- Operating system: MacOS Ventura 13
- Installed via: brew
- Contents of skaffold.yaml:
```yaml
build:
local:
useBuildkit: true
artifacts:
- image: image-name
context: .
docker:
secrets:
- id: npmrc
src: ~/.npmrc
```
",1.0,"Not able to reference secrets path to home folder -
### Expected behavior
I want to reference a secret file in `build.artifacts.docker.secret.src` in the home directory. This works well if using the realpath but not ~.
### Actual behavior
Using the realpath /home/username/.npmrc works, but using ~/.npmrc doesn't work because skaffold is appending ~/.npmrc to the actual working directory making it search to the wrong place.
I also tried to reference the home directory using $HOME env variable, but it's not a templated field, so doesn't work
### Information
- Skaffold version: skaffold 2.1.0
- Operating system: MacOS Ventura 13
- Installed via: brew
- Contents of skaffold.yaml:
```yaml
build:
local:
useBuildkit: true
artifacts:
- image: image-name
context: .
docker:
secrets:
- id: npmrc
src: ~/.npmrc
```
",0,not able to reference secrets path to home folder issues without logs and details are more complicated to fix please help us by filling the template below expected behavior i want to reference a secret file in build artifacts docker secret src in the home directory this works well if using the realpath but not actual behavior using the realpath home username npmrc works but using npmrc doesn t work because skaffold is appending npmrc to the actual working directory making it search to the wrong place i also tried to reference the home directory using home env variable but it s not a templated field so doesn t work information skaffold version skaffold operating system macos ventura installed via brew contents of skaffold yaml yaml build local usebuildkit true artifacts image image name context docker secrets id npmrc src npmrc ,0
53,2799322037.0,IssuesEvent,2015-05-12 23:41:10,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,AES-NI compilation broken for OSX 32-bit,portability,"Using gcc 4.9.2
```
make -s clean && ./configure CC=""gcc -m32"" --host=i686-apple-darwin && make -sj4
(...)
intel_aes.c: In function '__cpuid':
intel_aes.c:273:3: error: inconsistent operand constraints in an 'asm'
asm volatile(""cpuid"":""=a""(*where),""=b""(*(where+1)), ""=c""(*(where+2)),""=d""(*(where+3)):""a""(leaf));
^
make[3]: *** [aesni.o] Error 1
make[2]: *** [aesni] Error 2
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [aes/aes.a] Error 2
make[1]: *** Waiting for unfinished jobs....
make: *** [default] Error 2
```",True,"AES-NI compilation broken for OSX 32-bit - Using gcc 4.9.2
```
make -s clean && ./configure CC=""gcc -m32"" --host=i686-apple-darwin && make -sj4
(...)
intel_aes.c: In function '__cpuid':
intel_aes.c:273:3: error: inconsistent operand constraints in an 'asm'
asm volatile(""cpuid"":""=a""(*where),""=b""(*(where+1)), ""=c""(*(where+2)),""=d""(*(where+3)):""a""(leaf));
^
make[3]: *** [aesni.o] Error 1
make[2]: *** [aesni] Error 2
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [aes/aes.a] Error 2
make[1]: *** Waiting for unfinished jobs....
make: *** [default] Error 2
```",1,aes ni compilation broken for osx bit using gcc make s clean configure cc gcc host apple darwin make intel aes c in function cpuid intel aes c error inconsistent operand constraints in an asm asm volatile cpuid a where b where c where d where a leaf make error make error make waiting for unfinished jobs make error make waiting for unfinished jobs make error ,1
230502,25482670019.0,IssuesEvent,2022-11-26 01:10:26,ghuangsnl/spring-boot,https://api.github.com/repos/ghuangsnl/spring-boot,opened,CVE-2022-41946 (Medium) detected in postgresql-42.2.14.jar,security vulnerability,"## CVE-2022-41946 - Medium Severity Vulnerability
Vulnerable Library - postgresql-42.2.14.jar
PostgreSQL JDBC Driver Postgresql
Library home page: https://jdbc.postgresql.org
Path to vulnerable library: /spring-boot-tests/spring-boot-smoke-tests/spring-boot-smoke-test-data-r2dbc-liquibase/build.gradle
Dependency Hierarchy:
- :x: **postgresql-42.2.14.jar** (Vulnerable Library)
Found in HEAD commit: 275c27d9dd5c88d8db426ebfb734d89d3f8e7412
Vulnerability Details
pgjdbc is an open source postgresql JDBC Driver. In affected versions a prepared statement using either `PreparedStatement.setText(int, InputStream)` or `PreparedStatemet.setBytea(int, InputStream)` will create a temporary file if the InputStream is larger than 2k. This will create a temporary file which is readable by other users on Unix like systems, but not MacOS. On Unix like systems, the system's temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. Because certain JDK file system APIs were only added in JDK 1.7, this this fix is dependent upon the version of the JDK you are using. Java 1.7 and higher users: this vulnerability is fixed in 4.5.0. Java 1.6 and lower users: no patch is available. If you are unable to patch, or are stuck running on Java 1.6, specifying the java.io.tmpdir system environment variable to a directory that is exclusively owned by the executing user will mitigate this vulnerability.
Publish Date: 2022-11-23
URL: CVE-2022-41946
CVSS 3 Score Details (4.7)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Origin: https://github.com/pgjdbc/pgjdbc/security/advisories/GHSA-562r-vg33-8x8h
Release Date: 2022-11-23
Fix Resolution: 42.2.26.jre6
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-41946 (Medium) detected in postgresql-42.2.14.jar - ## CVE-2022-41946 - Medium Severity Vulnerability
Vulnerable Library - postgresql-42.2.14.jar
PostgreSQL JDBC Driver Postgresql
Library home page: https://jdbc.postgresql.org
Path to vulnerable library: /spring-boot-tests/spring-boot-smoke-tests/spring-boot-smoke-test-data-r2dbc-liquibase/build.gradle
Dependency Hierarchy:
- :x: **postgresql-42.2.14.jar** (Vulnerable Library)
Found in HEAD commit: 275c27d9dd5c88d8db426ebfb734d89d3f8e7412
Vulnerability Details
pgjdbc is an open source postgresql JDBC Driver. In affected versions a prepared statement using either `PreparedStatement.setText(int, InputStream)` or `PreparedStatemet.setBytea(int, InputStream)` will create a temporary file if the InputStream is larger than 2k. This will create a temporary file which is readable by other users on Unix like systems, but not MacOS. On Unix like systems, the system's temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. Because certain JDK file system APIs were only added in JDK 1.7, this this fix is dependent upon the version of the JDK you are using. Java 1.7 and higher users: this vulnerability is fixed in 4.5.0. Java 1.6 and lower users: no patch is available. If you are unable to patch, or are stuck running on Java 1.6, specifying the java.io.tmpdir system environment variable to a directory that is exclusively owned by the executing user will mitigate this vulnerability.
Publish Date: 2022-11-23
URL: CVE-2022-41946
CVSS 3 Score Details (4.7)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Origin: https://github.com/pgjdbc/pgjdbc/security/advisories/GHSA-562r-vg33-8x8h
Release Date: 2022-11-23
Fix Resolution: 42.2.26.jre6
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in postgresql jar cve medium severity vulnerability vulnerable library postgresql jar postgresql jdbc driver postgresql library home page a href path to vulnerable library spring boot tests spring boot smoke tests spring boot smoke test data liquibase build gradle dependency hierarchy x postgresql jar vulnerable library found in head commit a href vulnerability details pgjdbc is an open source postgresql jdbc driver in affected versions a prepared statement using either preparedstatement settext int inputstream or preparedstatemet setbytea int inputstream will create a temporary file if the inputstream is larger than this will create a temporary file which is readable by other users on unix like systems but not macos on unix like systems the system s temporary directory is shared between all users on that system because of this when files and directories are written into this directory they are by default readable by other users on that same system this vulnerability does not allow other users to overwrite the contents of these directories or files this is purely an information disclosure vulnerability because certain jdk file system apis were only added in jdk this this fix is dependent upon the version of the jdk you are using java and higher users this vulnerability is fixed in java and lower users no patch is available if you are unable to patch or are stuck running on java specifying the java io tmpdir system environment variable to a directory that is exclusively owned by the executing user will mitigate this vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
624,8433443246.0,IssuesEvent,2018-10-17 07:18:43,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,OpenCL fails on macOS Mojave using real gcc,portability,"The morons at Apple has deprecated OpenCL (this will eventually drive me away from macOS for sure) but it's still supposed to work under Mojave. At first try however, it just fails - autoconf ends up disabling it. At a quick glance, some header seem to be screwed up.
Hashcat does build OK though (not even any warnings), so I suspect this is more or less an autoconf problem.",True,"OpenCL fails on macOS Mojave using real gcc - The morons at Apple has deprecated OpenCL (this will eventually drive me away from macOS for sure) but it's still supposed to work under Mojave. At first try however, it just fails - autoconf ends up disabling it. At a quick glance, some header seem to be screwed up.
Hashcat does build OK though (not even any warnings), so I suspect this is more or less an autoconf problem.",1,opencl fails on macos mojave using real gcc the morons at apple has deprecated opencl this will eventually drive me away from macos for sure but it s still supposed to work under mojave at first try however it just fails autoconf ends up disabling it at a quick glance some header seem to be screwed up hashcat does build ok though not even any warnings so i suspect this is more or less an autoconf problem ,1
502612,14562866528.0,IssuesEvent,2020-12-17 01:04:52,codeRIT/hackathon-manager,https://api.github.com/repos/codeRIT/hackathon-manager,opened,Allow agreements to be fully customizable,2.1.2 high priority,"Currently we only allow URLs for agreements. To comply with the MLH Member Agreement, members are required to phrase agreements in a certain format. This format is currently not supported with the new agreement model.
Reference: https://docs.google.com/document/d/1K7HSIEO8tA7vbD0dtwvesMOhDAea8nDKJlu3f3kS8pE/edit",1.0,"Allow agreements to be fully customizable - Currently we only allow URLs for agreements. To comply with the MLH Member Agreement, members are required to phrase agreements in a certain format. This format is currently not supported with the new agreement model.
Reference: https://docs.google.com/document/d/1K7HSIEO8tA7vbD0dtwvesMOhDAea8nDKJlu3f3kS8pE/edit",0,allow agreements to be fully customizable currently we only allow urls for agreements to comply with the mlh member agreement members are required to phrase agreements in a certain format this format is currently not supported with the new agreement model reference ,0